The Highs and Lows of Data Latency

What is Latency?

Typically, when we hear the word “latency” it relates to video streaming, music downloads, or mobile phone connections. And though latency issues in those contexts can be frustrating or inconvenient, in the realm of edge computing and data transfer, latency can make or break a company. Latency is defined as the length of time it takes for an end user to retrieve data from its source. Note that latency should not be confused with bandwidth.

Latency measures the time it takes for data to reach the end user, whereas bandwidth measures how much data can travel over a connection in a given time. Latency comes in multiple forms, and different business needs call for different ones.
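To make the distinction concrete, here is a minimal Python sketch that separates the two measurements: time to first byte as a rough proxy for latency, and bytes per second as a rough proxy for bandwidth. The URL is a placeholder; substitute any endpoint you want to measure.

```python
import time
import urllib.request

URL = "https://example.com/"  # placeholder endpoint; substitute your own

start = time.perf_counter()
with urllib.request.urlopen(URL) as resp:
    first_byte = resp.read(1)              # block until the first byte arrives
    ttfb = time.perf_counter() - start     # latency proxy: time to first byte
    body = first_byte + resp.read()        # drain the rest of the response
    total = time.perf_counter() - start

throughput = len(body) / total             # bandwidth proxy: bytes per second
print(f"Latency (time to first byte): {ttfb * 1000:.1f} ms")
print(f"Throughput: {throughput / 1024:.1f} KiB/s")
```

A connection could be high-bandwidth and still high-latency (a large pipe that takes a long time to start flowing), which is exactly the distinction the two printed numbers capture.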

The 3 Types of Data Latency

  1. Some-time data is not updated regularly. Generally, this data is entered in the database once and changes little, if at all.

    Example: Vendor and customer contact information. This type of data is typically stored only once, and the success of the business does not depend on how quickly that data is updated.

  2. Near-time data is information that is updated at set intervals. Unlike real-time data, near-time data is recorded “as timely as needed” rather than continuously. Near-time data is more cost-effective and easier to manage than real-time data.

    Example: A monthly sales report or a daily cash report. This information is recorded and sent at set intervals, and it does not have to be retrieved or presented in real time.

  3. Real-time data is what we associate with edge computing. It is data that becomes available in the database the moment the business activity occurs, with zero or very little latency. Real-time data is the costliest and most challenging type to achieve. However, it offers immediate ROI when the right devices and processes are in place. The sketch after this list contrasts near-time and real-time recording.
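To illustrate the trade-off between the last two types, here is a minimal Python sketch contrasting a near-time recorder, which batches events and writes them at a set interval, with a real-time recorder, which writes each event immediately. `send_to_database` is a hypothetical placeholder for whatever write path your platform actually uses.

```python
import time
from collections import deque

def send_to_database(batch: list) -> None:
    """Hypothetical placeholder for the actual database write path."""
    print(f"wrote {len(batch)} event(s)")

class NearTimeRecorder:
    """Buffers events and writes them at a set interval (near-time)."""
    def __init__(self, interval_s: float):
        self.interval_s = interval_s
        self.buffer: deque = deque()
        self.last_flush = time.monotonic()

    def record(self, event: dict) -> None:
        self.buffer.append(event)
        if time.monotonic() - self.last_flush >= self.interval_s:
            self.flush()

    def flush(self) -> None:
        send_to_database(list(self.buffer))  # one write per interval
        self.buffer.clear()
        self.last_flush = time.monotonic()

class RealTimeRecorder:
    """Writes each event the moment it occurs (real-time)."""
    def record(self, event: dict) -> None:
        send_to_database([event])  # one write per event: costlier, lower latency

near = NearTimeRecorder(interval_s=60.0)  # write at most once per minute
real = RealTimeRecorder()
near.record({"sale": 100})                # buffered until the next flush
real.record({"sale": 100})                # written immediately
```

The batching approach trades freshness for cost: one write per interval instead of one per event, which is why near-time data is cheaper and easier to manage.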


The Layers That Impact Latency

Successful latency management depends on a reliable infrastructure, which consists of three layers:

  • Edge – the source where data, intelligence, and/or computing power is collected
  • Gateway – where data travels and is held until it is centralized in either the cloud or an edge platform
  • Datacenter – the physical building or rooms where the cloud and edge computing platforms are housed

The functionality of these three layers is critical to application performance and end user experience.
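As a rough illustration, the following Python sketch models the three layers as stages in a pipeline; the class and method names are illustrative, not part of any particular platform's API.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    """Edge layer: the source where data is collected (e.g., a smart sensor)."""
    def read_sensor(self) -> float:
        return 72.4  # stand-in for a real sensor reading

@dataclass
class Gateway:
    """Gateway layer: holds data in transit until it is centralized."""
    buffer: list = field(default_factory=list)

    def forward(self, reading: float) -> None:
        self.buffer.append(reading)

@dataclass
class Datacenter:
    """Datacenter layer: where the cloud or edge platform physically lives."""
    store: list = field(default_factory=list)

    def centralize(self, readings: list) -> None:
        self.store.extend(readings)
        readings.clear()

# Data flows edge -> gateway -> datacenter; each hop adds latency,
# which is why performance depends on all three layers functioning well.
edge, gateway, datacenter = Edge(), Gateway(), Datacenter()
gateway.forward(edge.read_sensor())
datacenter.centralize(gateway.buffer)
print(datacenter.store)  # [72.4]
```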

Data Latency, The Cloud, and Edge Computing

In a typical cloud environment, data processing occurs in a centralized data storage location. As a result, latency within a cloud environment is less predictable and more challenging to measure. Cloud services are more prone to latency issues because shifting applications to the cloud does not remove the underlying issue of distance between those services and their users. Factors contributing to latency include the number of ground-to-satellite communication hops or the number of router hops between the source and the destination. Additionally, if virtual machines (VMs) are on separate networks, this can also introduce delays in service delivery.
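As a back-of-the-envelope illustration of how hop counts drive latency, here is a small Python sketch; the per-hop delay constants are assumptions chosen for illustration, not measured values.

```python
# Per-hop delays below are assumptions for illustration, not measured values.
ROUTER_HOP_MS = 1.5        # assumed processing/queuing delay per router hop
SATELLITE_HOP_MS = 250.0   # assumed delay per ground-to-satellite hop (GEO)

def estimated_latency_ms(router_hops: int, satellite_hops: int = 0) -> float:
    """Crude additive model: latency grows with every hop on the path."""
    return router_hops * ROUTER_HOP_MS + satellite_hops * SATELLITE_HOP_MS

print(estimated_latency_ms(router_hops=18))                    # distant cloud region: 27.0 ms
print(estimated_latency_ms(router_hops=3))                     # nearby edge node: 4.5 ms
print(estimated_latency_ms(router_hops=12, satellite_hops=1))  # satellite link: 268.0 ms
```

Even with generous assumptions, the model shows why moving processing closer to the user (fewer hops) cuts latency in a way that simply adding bandwidth cannot.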

Enter Edge Computing

Edge computing can alleviate latency issues within the cloud because low data latency is the foundation of edge computing. Edge computing takes place near the physical location where the data is generated and uses Industrial Internet of Things (IIoT) devices, such as smart sensors, to collect and analyze data. Those devices can then make decisions in real time. Real-time edge analytics can help organizations find correlations, hidden patterns, and other valuable information. Because the data becomes available as soon as the business activity occurs, it is incredibly beneficial to mission-critical processes.
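As a simple illustration of a real-time decision at the edge, the Python sketch below has a hypothetical smart vibration sensor act on its own reading locally, with no network round trip in the critical path. The threshold and readings are made up for the example.

```python
import random
import time

VIBRATION_LIMIT = 5.0  # hypothetical threshold for a smart vibration sensor

def read_vibration() -> float:
    """Stand-in for an IIoT smart-sensor reading."""
    return random.uniform(0.0, 6.0)

def decide_at_edge() -> None:
    # The decision is made next to the sensor, so no network round trip
    # sits between the business activity and the action taken on it.
    reading = read_vibration()
    if reading > VIBRATION_LIMIT:
        print(f"{reading:.2f} exceeds limit: stop the machine immediately")
    else:
        print(f"{reading:.2f} within limit: batch a summary for the cloud later")

for _ in range(3):
    decide_at_edge()
    time.sleep(0.1)
```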

To see how the 3-Layer IIoT architecture supports real-time control and data collection for specific applications, check out our earlier post, Understanding Edge Architecture Through the IIoT Lens.

