
A New Network Edge Platform for CSPs



models. With the rapid evolution of networking technology and data demand, service providers must find ways not only to deploy services but also to monetize them and support new business models in order to remain profitable. This can be difficult, as it requires access to network characteristics (such as bandwidth, response times, cybersecurity options, or multiple levels of customer service) and application capabilities (time or geographic awareness, access to messaging, and more). A good rule of thumb is that if it can be measured or monitored, it can be monetized. All in all, recent market and technological evolution has confronted communications service providers with major challenges, each presenting significant barriers to success.

In response to these needs, a bewildering variety of new open technology solutions has been developed and is coming into play from all parts of the networking ecosystem. With all these options, a key question becomes where to start. What mix of hardware, software functionality, and control mechanisms needs to be provided? And because CSPs have significant investments in their current network infrastructure, can solutions be built that integrate with, and extend the in-use lifetime of, existing paid-for network assets?

The advantages of the network edge

For many CSPs, the network edge provides an advantageous point of leverage for service delivery: it is where services such as bandwidth management, Quality of Service (QoS) management, and secure access control are deployed in the network. Its strategic position as the interface between CSPs and their customers, together with its proximity to those customers, makes it the ideal place to deliver latency-sensitive services, quickly deploy new services, monitor and optimize customer experiences, and explore new revenue opportunities.

From a hardware perspective, multi-access edge computing (MEC) platforms provide the flexible compute, storage and, critically, the networking capacity needed to deploy these functionalities as close as possible to service users, and to do so using economical, open, standards-based white box hardware. To leverage virtual network functions (VNFs), virtual machines (VMs), and containers, the platform should run software that supports these standards and integrates with service providers’ network orchestration software via standard application programming interfaces (APIs). A final consideration is the ability to elastically implement key network services such as load balancing, filtering, and packet brokering in software, along with the monitoring and control needed to change the behavior of these network capabilities in real time.
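
To make the last point concrete, the sketch below shows one way a software load balancer's behavior could be changed in real time from monitoring data. It is purely illustrative: the class and method names (EdgeLoadBalancer, report_latency, pick_backend) are invented for this example and do not correspond to any particular platform's API.

```python
import random

class EdgeLoadBalancer:
    """Toy software load balancer whose weights track observed latency."""

    def __init__(self, backends):
        # Start with equal weights for every backend.
        self.weights = {b: 1.0 for b in backends}
        self.latency_ms = {b: 0.0 for b in backends}

    def report_latency(self, backend, latency_ms, alpha=0.3):
        # Exponentially weighted moving average of monitored latency.
        prev = self.latency_ms[backend]
        self.latency_ms[backend] = alpha * latency_ms + (1 - alpha) * prev
        self._rebalance()

    def _rebalance(self):
        # Weight each backend inversely to its smoothed latency, so a
        # congested backend receives proportionally less new traffic.
        for b, lat in self.latency_ms.items():
            self.weights[b] = 1.0 / (1.0 + lat)

    def pick_backend(self):
        backends = list(self.weights)
        return random.choices(
            backends, weights=[self.weights[b] for b in backends]
        )[0]

lb = EdgeLoadBalancer(["edge-east", "edge-west"])
for _ in range(5):
    lb.report_latency("edge-east", 80.0)   # east reports heavy load
    lb.report_latency("edge-west", 10.0)   # west is lightly loaded
print(lb.weights["edge-west"] > lb.weights["edge-east"])  # True
```

In a real deployment the latency reports would arrive from the platform's monitoring plane and the weight changes would be pushed to the data plane through the orchestration APIs mentioned above; the feedback loop itself is the point of the sketch.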

Optimization from anywhere in the network

From a software perspective, service providers need a way to make services addressable from anywhere in the network, making it possible to deploy or move service compute and storage to where capacity is available (or closer to actual demand where low latency is required). For example, a North American telecoms network operator might have a demand surge to servers located on the East Coast in the early morning as local workers start logging in, whereas West-Coast-based servers in the same network are underutilized because most West Coast users are still in bed!
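
A placement decision like the coast-to-coast example above can be sketched in a few lines. The site names, utilization figures, and threshold here are invented for illustration; they are not drawn from any real operator's data.

```python
SITES = {
    # site: (current utilization 0..1, one-way latency to the user in ms)
    "nyc-edge": (0.95, 8),    # East Coast, morning demand surge
    "sfo-edge": (0.20, 70),   # West Coast, mostly idle
}

def choose_site(sites, max_util=0.85):
    """Prefer the lowest-latency site that still has headroom.

    If every site is over the utilization threshold, fall back to the
    least-utilized one rather than refusing service.
    """
    usable = [s for s, (util, _) in sites.items() if util < max_util]
    if usable:
        return min(usable, key=lambda s: sites[s][1])
    return min(sites, key=lambda s: sites[s][0])

print(choose_site(SITES))  # sfo-edge: nyc-edge is over the 0.85 threshold
```

The interesting part is what happens after the choice: redirecting traffic to the selected site transparently is exactly the problem Segment Routing addresses, as discussed next.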

This is where Segment Routing (SR) standards such as SRv6 come into play. SR makes it possible to redirect traffic transparently to users and, with a proxy function, even transparently to the service itself. For SR-unaware services, an SR service proxy would be needed to allow immediate deployment and to protect investments in the current service infrastructure. When implementing SR using SRv6, the benefits
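
The role of an SR service proxy for an SR-unaware service can be sketched conceptually: push a segment list onto the packet to steer it, then strip the SR header before the packet reaches the service. This is a data-model illustration only, with invented IPv6 addresses; a real SRv6 implementation lives in the data plane (for example, Linux "seg6" routes), not in application code like this.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    dst: str
    payload: bytes
    segments: list = field(default_factory=list)  # remaining SRv6 segment list

def sr_encap(pkt, segment_list):
    # Push a segment list; the packet is steered to the first segment.
    pkt.segments = list(segment_list)
    pkt.dst = pkt.segments[0]
    return pkt

def sr_proxy_decap(pkt, service_addr):
    # The proxy removes the SR header so the SR-unaware service sees a
    # plain packet addressed to itself.
    pkt.segments = []
    pkt.dst = service_addr
    return pkt

p = Packet(dst="2001:db8::svc", payload=b"request")
p = sr_encap(p, ["2001:db8:e::1", "2001:db8:w::1"])  # steer via two segments
print(p.dst)                                          # 2001:db8:e::1
p = sr_proxy_decap(p, "2001:db8::svc")
print(p.dst, p.segments)                              # 2001:db8::svc []
```

Because the proxy restores the original destination before delivery, the service itself needs no SR awareness, which is what protects existing service infrastructure investments.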


