
AI and Other Evolving Applications Demand
New Approaches to Network Design and Testing



Any of these issues can significantly delay or undermine the ROI that operators anticipate from billion-dollar AI investments. It’s also important to keep in mind that, as groundbreaking as today’s high-profile LLMs are, these models are still in their infancy. If you think of AI models as human beings, they’re still babies, just learning how to walk, talk, and feed themselves. They will grow up very quickly as they’re trained on more data, and as they do, the demands on the network will only grow.

Going from GPT-2 to GPT-3, for example, represented a thousandfold increase in the amount of data provided to the model. Moreover, there is typically no endpoint at which the model is “trained” and demands on the network recede. In most cases, training is continuous, which means that demands for new levels of capacity, efficiency, and speed in the network are only just beginning.

Reimagining Testing

With the AI genie out of the bottle, hyperscalers are already rethinking how they architect their networks so they can handle requirements that simply didn’t exist before. For telecom service providers, it’s likely just a matter of time until demand for AI—along with other emerging applications in cloud, streaming, IoT, and 6G—forces them to revisit their own architectures. In the long term, no one wants the network to be the bottleneck holding back new AI-driven analytics, automation, and generative capabilities. Even in the short term, when organizations invest hundreds of millions of dollars acquiring GPUs, creating and training models, and building dedicated clusters to provide state-of-the-art processing capabilities, they need to make sure they’re extracting the best possible performance from those investments.

Legacy network architectures simply can’t meet the needs of these new applications—particularly AI. And as networks evolve, testing must, too. Indeed, as hyperscalers—and soon, telcos—reimagine their architectures for AI and other emerging applications, testing becomes even more important. These new network investments are decidedly not plug-and-play. Operators must understand, and be able to test against, critical KPIs covering network congestion, latency, throughput, timing, and other parameters for mission-critical applications—both before and after they deploy.
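As a deliberately simplified illustration, the Python sketch below shows what one such pre- or post-deployment KPI check might look like, using only the standard library: it times TCP connections to a service endpoint and compares the 95th-percentile latency against a budget. The endpoint, sample count, and latency budget are illustrative placeholders, not figures from any operator’s test plan or vendor’s tool.

    # Minimal illustration of a pre/post-deployment KPI check: time TCP connects
    # to an endpoint and compare the 95th percentile against a latency budget.
    # The endpoint and thresholds below are placeholders for illustration only.
    import socket
    import statistics
    import time

    TARGET_HOST = "example.com"   # placeholder endpoint under test
    TARGET_PORT = 443
    SAMPLES = 20
    LATENCY_P95_BUDGET_MS = 50.0  # illustrative KPI threshold

    def connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> float:
        """Time a single TCP handshake to the endpoint, in milliseconds."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass
        return (time.perf_counter() - start) * 1000.0

    def p95(values: list[float]) -> float:
        """95th-percentile helper using the standard library."""
        return statistics.quantiles(values, n=100)[94]

    if __name__ == "__main__":
        samples = [connect_latency_ms(TARGET_HOST, TARGET_PORT) for _ in range(SAMPLES)]
        observed_p95 = p95(samples)
        verdict = "PASS" if observed_p95 <= LATENCY_P95_BUDGET_MS else "FAIL"
        print(f"p95 connect latency: {observed_p95:.1f} ms "
              f"(budget {LATENCY_P95_BUDGET_MS} ms) -> {verdict}")

A real test plan would, of course, probe far more than connection latency, but the pattern is the same: measure, compare against an explicit KPI budget, and record a pass/fail verdict that can be tracked before and after every deployment.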

For any organization investing in future AI/ML capabilities—telecoms, hyperscalers, and enterprises alike—the basic testing strategies used for decades simply won’t work in tomorrow’s network architectures. More than ever, they’ll need testing that is:

  • Scalable, with the ability to accommodate rapidly growing infrastructures, changing traffic patterns, and massive amounts of network and telemetry data
  • Automated, so that operators can develop, refine, and execute test cases quickly and repeatedly across complex multivendor environments (a simplified example of such a test case is sketched after this list)
  • Continuous, allowing consistent testing and validation across both lab and live environments in a world where the network constantly changes and testing never stops
  • Proactive, with the ability to test against the latest applications, protocols, standards, and industry specifications, maintained by testing partners that are deeply embedded in industry groups and standards bodies
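To illustrate what “automated” and “continuous” can mean in practice, here is a hedged Python sketch in which test cases are declared as data, so they can be refined and re-run unchanged against both lab and live environments. Every name, threshold, and the measure() stub are hypothetical; a real pipeline would query actual probes or telemetry and would typically be triggered by CI, configuration changes, or network events rather than a simple loop.

    # Illustrative sketch of automated, repeatable test execution across
    # environments. All names, thresholds, and the measure() stub are
    # hypothetical placeholders, not any vendor's API.
    import random
    import time

    TEST_CASES = [
        {"name": "east-west latency", "metric": "latency_ms", "max": 5.0},
        {"name": "cluster throughput", "metric": "throughput_gbps", "min": 380.0},
    ]
    ENVIRONMENTS = ["lab", "live"]

    def measure(environment: str, metric: str) -> float:
        """Stand-in for a real probe or telemetry query; environment is
        ignored by this stub, which just returns a fake reading."""
        baseline = {"latency_ms": 4.0, "throughput_gbps": 400.0}[metric]
        return baseline * random.uniform(0.9, 1.1)

    def run_once() -> None:
        """Execute every declared test case against every environment."""
        for env in ENVIRONMENTS:
            for case in TEST_CASES:
                value = measure(env, case["metric"])
                ok = case.get("min", 0.0) <= value <= case.get("max", float("inf"))
                print(f"[{env}] {case['name']}: {value:.1f} -> {'PASS' if ok else 'FAIL'}")

    if __name__ == "__main__":
        # "Continuous" here simply means re-running on a schedule.
        for _ in range(3):
            run_once()
            time.sleep(1)

The design point is that the test definitions live apart from the execution machinery: the same cases can be scaled out, scheduled, and versioned as the network and its traffic patterns change.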

Looking Ahead

Today, hyperscalers are leading the way in designing new network architectures to meet the requirements of AI and other emerging applications. But service providers are watching closely, seeking to understand where they’ll fit into this new paradigm, how these applications will affect their own networks and operations, and where they can best play in the rapidly growing AI ecosystem. We don’t yet have clear answers. But whether bringing AI/ML applications to new customers, tapping into AI-driven automation and assurance in their own networks, or (most likely) all of the above, operators will need to revisit basic assumptions around network design and testing.

Of course, AI is just one of multiple disruptions coming to telecom service providers. Network cloudification, new cloud and streaming applications, radio network densification, the looming explosion of IoT traffic that will come with 6G—these and other evolving trends will all bring huge changes, and each comes with its own unique requirements. But we can already see some common threads.

Service providers should expect far more heterogeneous traffic and more extreme timing requirements as they support new cloud and IoT applications, as well as AI/ML. They should prepare for a world where managing network congestion, delivering the needed throughput and latency, and assuring consistent performance for AI and other mission-critical applications will require more precision than ever before. By laying the groundwork now for continuous, automated, and scalable network testing, they can put themselves in position to capitalize on these changes, instead of getting overwhelmed by them.


