Resolving Latency Issues That Affect Carrier Networks and Quality of Experience

By: Scott St. John

The speed with which data travels from one side of the network to a subscriber’s end point can mean the difference between a positive and a negative Quality of Experience (QoE) — and, subsequently, between loyalty and churn.

Delivering the required transport performance — and making the customer happy with a solid QoE — has become more challenging with the insatiable demand for data and the expanding number of data-hungry digital services.

Transport performance can suffer for many reasons: excess packet loss, latency (delay), and jitter (delay variation). Of these, latency appears to have the biggest impact on customers.

Latency can be thought of simply as the time it takes to send a unit of data between two points in a network. It is not fixed: it varies over time as a natural consequence of utilization across the entire network — not only a subscriber’s own services.
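
As a rough illustration of "time between two points," the time to complete a TCP handshake can serve as a proxy for round-trip latency. The sketch below is an assumption-laden probe, not a carrier-grade measurement; it spins up a hypothetical local listener so it runs anywhere:

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Time a TCP connect (three-way handshake) as a rough round-trip proxy."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # handshake completed; no payload needed for this probe
    return (time.perf_counter() - start) * 1000.0

# Demo against a local listener so the sketch works offline.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

rtt_ms = tcp_connect_latency_ms("127.0.0.1", port)
print(f"connect latency: {rtt_ms:.3f} ms")
server.close()
```

In practice a single probe is noisy; repeated probes over time are needed to see the variation the article describes.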

By its nature, latency is highly asymmetrical — just as traffic on a busy urban highway is congested in one direction during the morning commute and in the opposite direction during the afternoon commute.
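
To make the asymmetry concrete, consider hypothetical one-way timestamps captured at each end of a link (the numbers are invented, and this assumes both clocks are synchronized):

```python
# Hypothetical capture times (seconds) for a probe A -> B and its reply B -> A.
t_sent_fwd, t_rcvd_fwd = 10.000, 10.042   # A -> B leg
t_sent_rev, t_rcvd_rev = 10.050, 10.068   # B -> A leg

fwd_ms = (t_rcvd_fwd - t_sent_fwd) * 1000  # forward one-way delay
rev_ms = (t_rcvd_rev - t_sent_rev) * 1000  # reverse one-way delay
rtt_ms = fwd_ms + rev_ms

print(f"forward: {fwd_ms:.0f} ms, reverse: {rev_ms:.0f} ms")
print(f"RTT/2 would report {rtt_ms / 2:.0f} ms for both directions")
```

The naive RTT/2 estimate splits the delay evenly and hides the asymmetry, which is why one-way measurement (and hence clock synchronization) matters.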

Increasingly, the QoE of Internet-based applications is sensitive to latency. Packet loss is relatively rare and easy to measure and manage: either the packet shows up, or it does not. But latency is trickier, and you can’t tell a network, “No delays, please.” Without proper attention to the problem, QoE can suffer greatly due to latency.

Enterprise customers become frustrated with their cloud apps, background tasks time out or freeze, calls and sessions are dropped, and potential customers with ever-shorter attention spans click away when a website or service appears to be unresponsive.

According to Gartner, “High latency tends to have greater impact than bandwidth on the end-user experience in interactive applications, such as web browsing.” Amazon once reported that a 100-millisecond delay in serving web pages decreased online sales by one percent. Similarly, Google has said that slow response time reduced the number of searches and, therefore, reduced the ability to serve ads. This is a big deal, especially when some studies say that as much as 80 percent of network traffic is affected by latency issues. Carriers can no longer rely on the most common method of understanding delays: Global Positioning System (GPS) clock synchronization (see “GPS-Based Clock Synchronization,” below).

Network Latency and Asymmetry

There are four main causes of latency across end-to-end carrier networks:

  • Transport — The longer the links, the more delay experienced by packets as they traverse the links. Network topology can affect the transport, as well as physical distances between nodes. It also takes time for TCP to establish connections. Transport-related delay cannot be managed under most circumstances.
  • Congestion — As more traffic traverses links or routers, there can be bandwidth contention in the transport layer or resource congestion in forwarding devices. Congestion can arise when an unexpected amount of traffic inundates a node, when CPU or memory utilization runs high, or when packet congestion spills over from problems elsewhere on the network — any of which can force substantial routing changes.
  • Processing — Although many network forwarding devices are billed as “wire-speed,” in reality, many are plagued by processing delays when determining routes and how to best forward packets. This is particularly true in highly virtualized networks, such as those managed by SDN/NFV, because of the intense processing required on busy devices.
  • Routing changes — In an IP-based network, not all packets take the same path, so when some traffic traverses longer routes, the result is added latency, excess jitter, and out-of-order delivery. In extreme cases, packets may be declared lost and retransmitted — even if the originals arrive later — which further exacerbates jitter and latency.
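
The interaction of routing changes, jitter, and reordering described above can be sketched from per-packet data. The delay samples and sequence numbers below are invented, and the smoothed jitter estimator follows the RFC 3550 form, J += (|D| − J) / 16:

```python
# Hypothetical one-way delays (ms) and sequence numbers in arrival order;
# packet 4 took a different path and arrived before packet 3.
delays_ms = [40.0, 41.0, 55.0, 39.0, 42.0]
seq_nums = [1, 2, 4, 3, 5]

# RFC 3550-style smoothed jitter: exponentially weighted delay variation.
jitter = 0.0
for prev, cur in zip(delays_ms, delays_ms[1:]):
    jitter += (abs(cur - prev) - jitter) / 16.0

# Count arrivals whose sequence number goes backwards (reordering).
reordered = sum(1 for a, b in zip(seq_nums, seq_nums[1:]) if b < a)

print(f"jitter estimate: {jitter:.2f} ms, reordered arrivals: {reordered}")
```

The 1/16 gain keeps the estimate stable against single spikes, which is why a burst of path changes shows up gradually in the jitter figure rather than as one jump.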

