
AI Traffic Transforming Networks


Edge AI generates fundamentally different traffic patterns than Data Center AI. Instead of being hub and spoke, point to multi-point networks, Edge tends to be multi-point to multi-point. This is particularly true of Edge intelligent agents communicating with each other and with people.

The recent Amazon outage is a good example of what can happen when people or organizations depend on data center network-accessible AI. Having an Edge implementation, either as a standalone or as a backup, can overcome these outage problems.

Recently Gartner extolled Edge’s advantages. “Edge computing … is evolving from a buzzword to a necessity. By processing data closer to its source, edge reduces bandwidth needs and enables instant insights … critical for IoT-heavy industries facing 5G proliferation … enhancing resilience against outages. For example, autonomous vehicles rely on edge for split-second decisions …” Latency can be critical, especially for intelligent agents; just the round-trip network time to and from the data center may be problematic.

Working with vendor-provided data center AI carries inherent privacy and IP (Intellectual Property) exposures. For some applications, these exposures can be quite important: the prospect that their data could be used in training LLMs, or surface in the context windows of other users, may be too great a concern. The data may not reach others in exactly the form the data center received it, but if it is used in training, it becomes part of the reasoning data the GenAI system draws on, resulting in what is termed ‘IP leakage’.

Some organizations may address this concern by implementing their own private data center. However, this still carries a data exposure risk on the network that accesses the data center. In addition, Edge systems may be more manageable and cost-effective, with more predictable expense profiles. For these reasons, and sometimes simply for convenience, users may prefer running GenAI locally on edge systems.

This multi-point to multi-point pattern extends beyond intelligent agents. Local chat AI with RAG (Retrieval-Augmented Generation) will produce similar multi-point to multi-point traffic. There may also be a tendency for groups to share a specialized AI processor, such as a Mac Studio Ultra. For office-based work groups, this will primarily produce LAN traffic. But for remote workers, it will produce traffic that appears multi-point.

A significant rise in Edge AI will substantially change traffic patterns. Some see Edge eventually displacing Data Center AI. Others suggest that demand for AI services will grow so much that Data Center AI will still be very active, and that only the proportion of traffic from each will change. Either way, there will be very significant changes in the traffic pattern.

The San Francisco Conundrum

When the pandemic hit the San Francisco Bay Area, practically no one traveled to the financial district. The hub and spoke transit network had no traffic, and no user revenue was coming in. System managers felt that with government assistance and reserves on hand, they could weather the storm until the pandemic waned and things returned to ‘normal’. But when the pandemic waned, much of the work stayed at the Edge: work had evolved into a hybrid remote / office pattern. There was now some traffic and user revenue going to the financial district, but not enough to support the system. Users also wanted more multi-point to multi-point services that the hub and spoke system was not configured for. As this is written, the public transportation system operators are struggling with how to respond to the new traffic pattern.

The risk is that WAN network operators, in responding to the current and expected rise in demand for hub and spoke networks, will find themselves in the same position as San Francisco public transportation providers when Edge AI grows substantially.

The Need for Hybrid Network Architectures

Rather than get caught in the San Francisco conundrum, it seems prudent to design AI networks as hybrid networks from the beginning. Don’t wait until the traffic patterns change and end up scrambling to respond in a crisis. This can be thought of as a portfolio management approach to lowering risk: build for existing traffic based on an understanding of likely near-term growth patterns, but also provide infrastructure for the expansion of multi-point to multi-point networks.

While building and operating in this hybrid mode, it is important to continually study the development of AI to inform projections of near-term traffic growth. Ongoing study of AI technology and adoption needs to be built into traffic models. But projections will always be imperfect, so it is important to design networks that can grow and morph as AI traffic patterns evolve.
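The portfolio-style planning described above can be made concrete with a simple projection of the traffic mix. The sketch below is purely illustrative: the starting volumes, growth rates, and the idea of compounding each traffic class independently are all assumptions for the example, not measurements or a recommended model — a real plan would be driven by observed traffic data and revisited as projections drift.

```python
# Hypothetical sketch: projecting the hub-and-spoke vs. multi-point traffic
# mix over a planning horizon. All figures below are illustrative
# assumptions, not measurements.

def project_mix(hub_tbps, mesh_tbps, hub_growth, mesh_growth, years):
    """Compound each traffic class independently; return per-year
    (year, hub volume, multi-point volume, multi-point share)."""
    projections = []
    for year in range(1, years + 1):
        hub_tbps *= 1 + hub_growth    # hub-and-spoke (data center) traffic
        mesh_tbps *= 1 + mesh_growth  # multi-point (Edge) traffic
        share = mesh_tbps / (hub_tbps + mesh_tbps)
        projections.append(
            (year, round(hub_tbps, 1), round(mesh_tbps, 1), round(share, 2))
        )
    return projections

# Assumed starting point: 10 Tbps hub-and-spoke, 2 Tbps multi-point,
# growing 10% and 40% per year respectively (purely hypothetical).
for year, hub, mesh, share in project_mix(10.0, 2.0, 0.10, 0.40, 5):
    print(f"year {year}: hub {hub} Tbps, multi-point {mesh} Tbps, "
          f"multi-point share {share:.0%}")
```

Even with modest assumed numbers, the multi-point share roughly doubles over the horizon — the kind of shift that motivates provisioning for multi-point growth before it becomes a crisis.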

Conclusion

Today, our networks are being transformed by evolving AI traffic patterns. This transformation is not a single event. Rather, to be successful, network leaders have to anticipate and plan for a series of changes in traffic patterns triggered by developments in GenAI and supporting hardware.



