How NFV Can Enable 'Pay-as-you-Protect' Network Protection

By: Nicolas St. Pierre

The Challenge

The scale of malicious attacks against communications service providers (CSPs) is increasing at a frightening pace. According to a report from Arbor Networks, in the first half of 2016, the peak DDoS attack size reached 579Gbps, up 73 percent from 2015. There were 46 attacks over 200Gbps monitored in the first half of 2016, versus 16 during all of 2015. Meanwhile, despite massive growth in attack size at the top end, 80 percent of all attacks are still less than 1Gbps.

Tier-1 CSPs understand first-hand that legacy network protection solutions, such as those for DDoS attacks, create enormous inefficiencies: they require proprietary hardware that is scaled for the rare peaks (579Gbps) and that otherwise either sits idle or is massively overprovisioned as it battles much smaller attacks (1Gbps). Even then, a CSP cannot be sure that the scale will be sufficient when the next attack peak comes.
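A rough back-of-the-envelope calculation makes the inefficiency concrete. The sketch below uses only the illustrative figures cited above (a platform provisioned for the 579Gbps peak, handling a typical sub-1Gbps attack); it is not operator data:

```python
# Utilization of a scrubbing platform provisioned for the record peak
# but mitigating a typical attack. Figures are the illustrative numbers
# cited in the article, not measurements from any operator.

peak_capacity_gbps = 579   # provisioned for the 1H 2016 record peak
typical_attack_gbps = 1    # 80 percent of attacks fall below this size

utilization_pct = typical_attack_gbps / peak_capacity_gbps * 100
print(f"Capacity used during a typical attack: {utilization_pct:.2f}%")
```

In other words, during a typical attack the platform uses well under one percent of the capacity a CSP has paid for.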

The costs associated with such large-scale attack mitigation (or “scrubbing”) platforms — including those related to hardware, network resources, IT personnel, and routing infrastructure — are an incredible burden for any network operator, particularly Tier-1 CSPs, given their size.

The answer to this growing problem may lie in virtual network functions (VNFs) that automatically scale up only when needed, to exactly the size required, and are then torn down once the attack subsides — true elastic scaling. Resources are paid for only when used and only to the extent used. While this concept may look promising on a whiteboard, I was challenged by an Asian tier-1 CSP to design a proof-of-concept that would demonstrate the viability of such an approach. So I did.
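The elastic-scaling idea can be sketched as a simple control loop: measure the attack traffic, spin up just enough scrubbing VNF instances to absorb it, and tear them down when the attack ends. The function names and the per-instance capacity below are hypothetical, for illustration only; a real deployment would drive these decisions through its MANO layer:

```python
import math

# Hypothetical throughput of a single scrubbing VNF instance.
INSTANCE_CAPACITY_GBPS = 10

def instances_needed(attack_gbps: float) -> int:
    """Number of scrubbing VNF instances required to absorb the attack."""
    if attack_gbps <= 0:
        return 0  # no attack: scale to zero, pay for nothing
    return math.ceil(attack_gbps / INSTANCE_CAPACITY_GBPS)

def reconcile(current_instances: int, attack_gbps: float) -> int:
    """Scaling delta: positive means spin up, negative means tear down."""
    return instances_needed(attack_gbps) - current_instances

# A 46Gbps attack arrives while nothing is deployed:
print(reconcile(0, 46))   # 5 instances to launch
# The attack subsides:
print(reconcile(5, 0))    # -5: tear everything down
```

The key property is scale-to-zero: between attacks the desired instance count is zero, so the operator pays only for the capacity actually consumed during mitigation.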

The Evolution of Network Functions Virtualization

Network Functions Virtualization (NFV) is likely the highest-impact technology shift in telecommunications this decade, for both CSPs and traditional vendors. Certainly, NFV — and the closely related adoption of software-defined networking (SDN) — represents the largest strategy shift in telecom since the migration from analog services to digital services.

From the early "stewardship" by the European Telecommunications Standards Institute (ETSI), through commitment and buy-in from large CSPs that signaled the future would be software, to the large-scale availability and general proliferation of software-based VNFs, network operators have transformed the landscape from the central office to the datacenter.

This enormous shift has driven manufacturers and vendors to begin to move away from proprietary, sometimes-cumbersome architectures towards embracing large-scale, community-driven open source initiatives such as OpenStack.

But this shift brings with it uncertainty for all parties: monolithic, proprietary architecture-based products tend to have much tighter vertical quality assurance processes around components, design, and certifications (e.g., Apple's successful consumer products). Opening up the network function to commercial off-the-shelf (COTS) components distributes the performance, service level agreement (SLA), and reliability guarantees of closed systems, along with the related support and service contracts, across many third-party components.

The result is that hardware infrastructure, host environments, management and orchestration (MANO), element management systems (EMS), and hardware dependencies now must coexist across a complex and sometimes dynamic environment provided by a long list of vendors and organizations. 

These new architectures pose several challenges: the multitude of combinations of hardware components, supplied by different vendors, deployed on varied software stacks (both open and closed) or on proprietary forks of these software stacks, makes striking a balance difficult. For example, outright packet processing performance and maximum compatibility are two goals that often run at odds with each other. Performance can be maximized on a targeted subset of hardware components, which limits interoperability and openness; compatibility can be maximized at the expense of optimized performance.

It was with this backdrop that I gathered together a group of world-class technology partners in a lab in Texas to see if an elastically scaling DDoS solution could be built to meet the needs of a tier-1 CSP.

