By: Steve Douglas
In the initial surge of investment following the first commercial large language models (LLMs), it seemed like every enterprise, in every industry, was racing to apply generative AI (GenAI) to their business. Almost all of them used public cloud. This was a sound decision; public cloud providers were among the first to build out the massive, specialized computing infrastructure needed for AI model training and inferencing. Since then, however, much has changed in the AI landscape.
Due to multiple factors (especially concerns about AI data privacy, governance, and costs), many organizations are now rethinking a public cloud-centric approach. Indeed, cloud repatriation is a growing trend we’re tracking in 2025, as we expect enterprises to shift more of their AI investment into private and hybrid deployments.
This shift will have implications throughout the AI value chain, but no one is paying closer attention than telecoms. As businesses seek to apply AI intelligence to more of their operations (especially at edge locations), and to exert tighter control over AI governance and costs, telcos see a significant opportunity. Many industry leaders believe that telecom is perfectly situated to address the evolving AI requirements of enterprise and government customers. And they’re preparing a major push into AI network infrastructure as a service (NIaaS) offerings to meet those requirements.
How is ongoing AI evolution creating new opportunities for telecom? And what can telcos bring to the table, both through their own networks and in partnership with public cloud providers, to make AI NIaaS offerings so attractive to customers? Let’s take a closer look.
Organizations have a variety of concerns driving them to rethink public cloud AI approaches, but we can summarize them as the “Three Big Cs”: Control, Capabilities, and Costs.
Control concerns start with the new regulations emerging around AI, as governments and regulators implement stringent data sovereignty laws mandating that sensitive data remain within national boundaries. Organizations seeking to apply GenAI in sectors like government, healthcare, and finance must also comply with strict data security and privacy requirements. Shifting to sovereign and on-premises AI systems can enable such customers to exert tighter control over their data, while offering better auditability.
In response to both regulatory mandates and ongoing cybersecurity threats, organizations also want greater Control over how AI applications and data are secured. Enterprises that routinely handle sensitive data, as well as government and defense agencies managing classified information, increasingly view private AI as a means to minimize exposure to cyberattacks. For any business investing in AI, shifting to on-premises deployments allows for tighter control over proprietary algorithms, intellectual property, and customer data, safeguarding against leaks or misuse.
From a Capabilities perspective, private and sovereign AI models allow for greater customization to meet specific organizational needs, such as applying unique workflows, specialized hardware optimizations, or domain-specific models. Additionally, more enterprises are bumping up against technical limitations of public cloud-centric deployments, especially at distributed edge locations. For a growing number of AI use cases that involve real-time inferencing and decision-making, having to route traffic back through a geographically centralized public cloud data center introduces too much latency, effectively breaking the application.
Finally, many enterprise leaders have expressed concerns about high Costs for public cloud resources as they scale up AI footprints. That’s on top of longstanding unease with cloud lock-in and the difficulty of moving workloads from one cloud to another, problems that many enterprises want to avoid as AI becomes a more important element of business strategy.
Given all these factors, it shouldn’t be surprising that more organizations planning AI investments are exploring alternatives to public cloud. Most businesses don’t plan to abandon public cloud entirely. But by pursuing a hybrid AI model, they believe they’ll be able to address the needs of each application workload optimally, while keeping costs and governance under tighter control. This is where telecoms see a growing opportunity: by acting as a cloud-agnostic AI partner, they can help customers combine the best of both public and private cloud worlds.
Telcos offer a number of geographic and technical attributes that make them well suited to support hybrid, sovereign, and private AI deployments. They hold significant assets in 5G, fiber networks, data centers, and other investments that can be monetized to optimize AI networking services, especially at distributed edge and branch locations. And they have proven expertise in providing multitenant, scalable, and secure networking and hosting solutions. Indeed, most organizations expanding their AI footprints already work with a telco partner to connect branches and data centers, and often to provide additional services like software-defined wide-area networking (SD-WAN) and Secure Access Service Edge (SASE) as well. Telcos in multiple