May 15, 2023

Improving application responsiveness by utilizing the Grid

By Mats Eriksson, CEO, Arctos Labs

Imagine a future where computing is a commodity just like electricity or water, where the computing resources you need are just around the corner but also optimally linked to the cloud.

Also imagine a world where data is generated at an exponential pace, and we move the compute closer to where the data is created, instead of moving the data to where the compute resides. To start our digital future journey, we need to master how software components can be optimally distributed across a computing grid.

Isn’t it amazing how the internet has become indispensable in our lives? Just a couple of decades ago, only a few people even knew it existed.

Why is it that we are so dependent on the internet, the network of interconnected computers? The world has become increasingly dependent on the value that data brings, and information is created when computers process that data. So we could say we are dependent on data rather than on computers! This is manifested by the amazing explosion in the amount of data we produce and consume: data creation is estimated to grow by more than 50% from 2022 to 2024. For reference, reports can be found here: https://www.statista.com/statistics/871513/worldwide-data-created/ and https://www.idc.com/getdoc.jsp?containerId=US49018922.

We are utilizing the internet for all kinds of applications, ranging from the Internet of Things (IoT), connecting everything everywhere, to the consumption of video for entertainment and everyday enterprise business applications. All these applications are critically dependent on how fast and efficiently we transport data. This implies that when we “use” the internet, we are creating “rivers of data flows” that traverse the internet to reach their intended destinations.

The problem of data transport

Is this a problem? Yes, it is! Let us unpack this a bit.

Firstly, the rivers of data flow in non-optimal directions, sometimes taking very long detours. This makes it time-consuming for data to arrive where it needs to be processed; we call this latency. In addition, the sheer load on the networks causes queues, which in turn cause jitter. Jitter is the variation in latency and results from network congestion, timing drift, and route changes; it is often more harmful to applications than latency itself. Together, latency and jitter impact responsiveness for business applications, eventually risking customer experience and decreasing revenue streams.
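To make the distinction concrete, here is a small illustration (all latency numbers are hypothetical) of how jitter can be measured as the average variation between consecutive round-trip latency samples, a simplified version of the interarrival-jitter idea used in RTP:

```python
# Hypothetical round-trip latency samples (milliseconds) for one data flow.
latency_ms = [42.0, 45.5, 41.8, 60.2, 43.1, 44.7]

# Average latency: how long data typically takes to arrive.
mean_latency = sum(latency_ms) / len(latency_ms)

# Jitter (simplified): mean absolute difference between consecutive samples.
# A flow with steady 50 ms latency has zero jitter; a flow that swings
# between 40 ms and 60 ms has high jitter even at the same average.
jitter = sum(abs(b - a) for a, b in zip(latency_ms, latency_ms[1:])) / (len(latency_ms) - 1)

print(f"mean latency: {mean_latency:.1f} ms, jitter: {jitter:.2f} ms")
```

Note how the single 60.2 ms outlier barely moves the average but dominates the jitter figure, which is why real-time applications can suffer even when average latency looks healthy.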

So now, focusing on the real problem: applications are typically built from many microservices (or other forms of components), including SaaS. These components perform various functions that make up the application, such as data processing (for example, object recognition based on AI/ML capabilities), databases, or other functionality. Latency between data transmission and reception can cause problems in real-time applications such as video conferencing, online gaming, and financial transactions. High latency can result in delays, lag, and poor performance.

Application responsiveness depends on how application components are distributed

Application responsiveness largely depends on how these components are distributed across the network. Consequently, the industry has begun to explore the concept of edge computing: bringing computing resources closer to the location where the output is required, thereby enhancing the application's responsiveness.

But do we know how our applications are distributed, including their dependencies on SaaS, and how that affects overall application performance? Very often, we do not. The intrinsic structure of our applications is often complex, so it is challenging to obtain close visibility. The application may, for example, have an authorization procedure that happens to branch off to a SaaS provider far away.

So there is a tricky issue: which components should go to the edge, and which should stay in the cloud (or on-premises)? And, more importantly, which edge? And which cloud?

As the number of application components increases – which it does – and the number of candidate compute locations increases as well, the challenge of optimally placing components on compute locations grows out of hand.

This is referred to as the workload placement challenge. Even at small numbers this problem is hard, and we also need to weigh in the characteristics of how those compute locations are interconnected to understand the effect of a placement.
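To see why the challenge grows out of hand, consider that each component can, in principle, go to any location independently, so the number of possible placements is the number of locations raised to the number of components. A quick sketch (the counts below are just illustrative):

```python
def placement_count(components: int, locations: int) -> int:
    """Number of ways to assign each component independently to a location,
    ignoring any constraints: locations ** components."""
    return locations ** components

# Even modest applications and grids produce astronomical search spaces.
for n, m in [(3, 2), (10, 5), (20, 10)]:
    print(f"{n} components x {m} locations -> {placement_count(n, m):,} placements")
```

Three components on two locations give only 8 options, but twenty components across ten locations already yield 10^20 candidate placements, far beyond what any manual or exhaustive approach can evaluate.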

IT automation is necessary to deal with this complexity

Therefore, Enterprise IT admins need tools whereby they can:

- Use edge computing alongside their on-premises and cloud resources

- Understand and control the costs related to moving data around (an increasing portion of the entire IT budget)

- Understand & control the characteristics whereby the app components are interconnected to ensure the application performs acceptably

- Understand the security vulnerabilities involved in data transfers over the wire

- Understand and predict trends in how their applications behave to pro-actively scale resources or move components

- Understand and control the total costs of distributing parts of the application architecture

These tasks can be done manually at small numbers, but as the enterprise becomes increasingly dependent on service providers, a manual approach is no longer feasible.

So instead, IT admins need an intent-based mechanism whereby they can specify what they need and then submit that to orchestration capabilities, including advanced workload placement features that can fulfil application characteristic constraints while optimizing for the lowest possible cost. Such orchestration capabilities should be agnostic to, yet make use of, multiple service providers for compute and data transport, to ensure the best possible solution is made available to the enterprise.
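As a minimal sketch of what such an intent-based placement step might look like (all location names, costs, and latency bounds here are hypothetical, and real optimizers would use far more scalable search than the brute force shown): enumerate candidate placements, discard those that violate the stated latency intents, and pick the cheapest remaining one.

```python
from itertools import product

# Hypothetical compute locations: cost per component placed there, and
# network latency (ms) back to the data source.
locations = {
    "edge-a":  {"cost": 9.0, "latency_ms": 5},
    "edge-b":  {"cost": 7.0, "latency_ms": 12},
    "cloud-1": {"cost": 3.0, "latency_ms": 45},
}

components = ["ingest", "inference", "database"]

# The "intent": a maximum acceptable latency per component.
max_latency_ms = {"ingest": 10, "inference": 20, "database": 60}

def placement_cost(placement):
    """Total cost of placing each component at its assigned location."""
    return sum(locations[loc]["cost"] for loc in placement)

def satisfies_intent(placement):
    """True if every component's location meets its latency bound."""
    return all(
        locations[loc]["latency_ms"] <= max_latency_ms[comp]
        for comp, loc in zip(components, placement)
    )

# Brute-force search: workable for a handful of components,
# infeasible at grid scale (see the placement-count growth above).
feasible = [
    p for p in product(locations, repeat=len(components))
    if satisfies_intent(p)
]
best = min(feasible, key=placement_cost)
print(dict(zip(components, best)), placement_cost(best))
```

In this toy setup the latency intent pins the ingest component to the nearest edge site, while the tolerant database lands in the cheap cloud, which is exactly the edge-versus-cloud trade-off the article describes.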

The Open Grid Alliance (OGA) has member companies that together address these challenges with emerging technologies such as orchestration, edge cloud placement optimization, and service quality prediction. A strong collaboration has formed around the development and adoption of grid technologies, empowering organizations to discover innovative solutions.

About Arctos Labs:

Arctos Labs is a high-tech start-up that develops a solution for edge-to-cloud workload placement optimization. It is intended to be integrated into orchestration and IT automation solutions thereby providing an intent-based mechanism to help enterprises utilize a larger variety of service providers and minimize their IT costs. The solution can deal with constraints and costs related not only to compute but also to data transport.

Link to website: https://www.arctoslabs.com/

Come see us at Edge Computing Expo, booth #302 to learn more (link to event: https://edgecomputing-expo.com/northamerica/).


 

 
