By: Ray LaChance
Data consumption and production have grown enormously in recent years, and despite the overwhelming amount of information that already traverses today's network architectures, this upswing will only continue to gain momentum. Furthermore, the nature of consumption has changed, with next-generation applications requiring even lower latency, faster speeds, higher bandwidth and enhanced accessibility that reaches further toward the edge and the users who reside there.
In 2018 alone, the world created 33 zettabytes of data. To put that number in perspective, it is equivalent to 660 billion standard Blu-ray discs or the estimated storage capacity of 33 million human brains. Rapid digitalization will push the total to 175 zettabytes by 2025, an increase of more than fivefold.
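As a quick back-of-envelope check, those comparisons are internally consistent. This is a minimal sketch; the 50 GB dual-layer disc capacity and the roughly one-petabyte estimate of the human brain's storage capacity are assumptions not stated in the article, though they are the values that make the stated figures work out.

```python
# Back-of-envelope check of the data-volume comparisons above.
# Assumptions (not from the article): 50 GB per dual-layer Blu-ray disc,
# ~1 PB as a rough estimate of the human brain's storage capacity.

ZB = 10**21  # zettabyte, in bytes
GB = 10**9   # gigabyte, in bytes
PB = 10**15  # petabyte, in bytes

data_2018 = 33 * ZB
data_2025 = 175 * ZB

blu_rays = data_2018 / (50 * GB)   # discs needed to hold 2018's data
brains = data_2018 / PB            # "brains" needed to hold 2018's data

print(f"Blu-ray discs: {blu_rays:,.0f}")               # 660,000,000,000 (660 billion)
print(f"Human brains:  {brains:,.0f}")                  # 33,000,000 (33 million)
print(f"Growth factor: {data_2025 / data_2018:.1f}x")   # ~5.3x, i.e. more than fivefold
```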
Underlying this movement of data is a physical infrastructure of fiber-optic cables, interconnection points, wireless devices and siting locations, uniquely architected to allow carriers and mobile network operators to fulfill their promises of seamless connectivity. Network architectures have continued to evolve with the explosive growth in data, but massive investment in upgrading them across the country is necessary if the United States is to maintain a best-in-class communications infrastructure.
The host of new mobile and wireless applications that 5G promises is necessitating a new network standard: exciting Internet of Things use cases, Artificial Intelligence-based efficiencies, smart city capabilities, the autonomous vehicle ecosystem and more. More recently, practical and critical use cases have driven a further explosion in data transmission, including remote work, e-learning, telemedicine and public safety efforts. These applications aren't just influencing network infrastructure through the volume of data they generate, store, process and transfer; they're also pushing compute functions closer to the data's point of origin.
Performance, speed and capacity are paramount if these futuristic opportunities are to be realized. While networks have gradually been extending these capabilities closer to the edge and focusing on densification, the pace of that evolution is too slow, and many existing network architectures simply were not built to this contemporary standard. The tension between density and accessibility must also be reconciled, and the sheer scale of deployment required to deliver that density creates challenges of its own.
The expectations on networks and the challenges they need to solve have fundamentally changed. As a result, the underlying infrastructure that supports these expectations must be reinvented.
Traditional fiber networks were built to support enterprise applications, with high-capacity, ultra-redundant backhaul cables meant to interconnect relatively sparse endpoints throughout the network. Today, enterprise applications ride along wireless networks that require a dense (and, in some markets, ultra-dense) deployment of small cell radios, and these wireless networks require a completely different fiber architecture than legacy networks, which were built to solve a different problem. In the D-RAN (Distributed Radio Access Network) era, when coverage was the main priority, high-powered radios deployed across distributed macro sites, coupled with densely deployed small cells, have provided the necessary coverage to support mobile users.