
An AI-Driven Future for Network Operations


Multivendor networks add a problematic layer of complexity. In a three-vendor backhaul network, for example, an operator faces three different OSS, each with its own way to address equipment, manage alarms, and retrieve, store, and access performance-indicator data. Reconciling that data across the network forces operators to invest significant CAPEX.

With the implementation of SDN, we will face a new paradigm. The industry is putting great effort into standardizing the domain controller’s southbound interface (SBI) and northbound interface (NBI). This harmonizes how the controller interacts with network elements (via the SBI) and how network data is exported and presented to higher-order systems (via the NBI), allowing easy queries and management of the whole network, or portions of it, as needed.

This opens the market for software applications that, through the NBI, can interact with the whole network regardless of the equipment or controller provider. For operators, this new paradigm delivers data analysis and services in a fraction of the time and at a fraction of the cost, with accuracy and reliability previously unattainable.
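As a rough illustration, the sketch below shows how such an application might query a controller’s NBI. It assumes a RESTCONF-style endpoint exposing the vendor-neutral IETF network model (RFC 8345); the controller URL, credentials, and path are placeholders, not the API of any specific product.

```python
# Minimal sketch of an application querying an SDN controller's standardized NBI.
# The controller address and credentials are hypothetical; the RESTCONF path
# follows the IETF network model (RFC 8345).
import requests

CONTROLLER = "https://sdn-controller.example.net"        # hypothetical controller
TOPOLOGY_PATH = "/restconf/data/ietf-network:networks"   # IETF network model

def fetch_topology(session: requests.Session) -> dict:
    """Retrieve the network topology exposed by the controller's NBI."""
    response = session.get(
        CONTROLLER + TOPOLOGY_PATH,
        headers={"Accept": "application/yang-data+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    with requests.Session() as s:
        s.auth = ("operator", "secret")  # placeholder credentials
        topology = fetch_topology(s)
        # The same query works regardless of which vendor's equipment sits
        # behind the controller, because the NBI data model is standardized.
        for network in topology.get("ietf-network:networks", {}).get("network", []):
            print(network.get("network-id"))
```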

Revolutionizing resolution through AI and ML

Thanks to developments in big data analytics and the availability of scalable computational power, it has become possible to manage, process, and extract information from the billions of records available in each network. 

This is revolutionary. Previously, due to the complexity of the task, the KPI database was analyzed only after an impairment occurred. If the impairment caused loss of traffic, the interruption was traced back to propagation issues or other network-related problems, and the investigation aimed to understand the root cause and put corrective actions in place. This was a reactive process, taking place after the event, under pressure and with hard time constraints, and it often pushed operators toward broad, non-cost-effective corrections.

The introduction of AI into the process is transformative. Using AI, the big data collected from the network can be analyzed proactively to identify and highlight network issues and present them in an aggregated format, so that the operations team can plan maintenance in advance, based on priority.

The AI algorithms identify multiple network issues, including those with minimal effects that have not yet triggered any network alarm. This insight is brought to the attention of the expert network engineer, who can proactively assess the identified network elements and the affected portion of the network. As a result, the issue can be corrected before it escalates, without time pressure, leading to a targeted and cost-effective response.
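As one possible illustration of this kind of proactive analysis, the sketch below trains an unsupervised anomaly detector on historical KPI records and ranks links for the operations team before any alarm fires. The use of scikit-learn’s IsolationForest, the CSV export, and the KPI column names are illustrative assumptions, not a description of any particular product.

```python
# Hedged sketch: flag links whose recent KPI behaviour deviates from history,
# before alarm thresholds are crossed. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

FEATURES = ["rx_level_dbm", "snr_db", "errored_seconds", "link_utilization"]

# Historical KPI records, one row per link per measurement interval.
kpis = pd.read_csv("kpi_history.csv")   # hypothetical export obtained via the NBI

# Train an unsupervised detector on the historical feature distribution.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(kpis[FEATURES])

# Score the most recent interval; lower scores indicate stronger anomalies.
latest = kpis[kpis["interval"] == kpis["interval"].max()].copy()
latest["anomaly_score"] = model.decision_function(latest[FEATURES])

# Present an aggregated, prioritized watch list to the operations team.
watch_list = latest.sort_values("anomaly_score").head(10)
print(watch_list[["link_id", "anomaly_score"]])
```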

Moreover, the AI algorithm can be retrained periodically to learn how the network has changed or been expanded, improving its ability to recognize new network scenarios and increasing its added value. Furthermore, with the introduction of reinforcement learning, the algorithm can autonomously discover and learn new ways to solve upcoming network issues, keeping itself continuously up to date.
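A periodic retraining step could be as simple as refitting the detector on a rolling window of recent records, as in the sketch below; the window length, schedule, and column names are again illustrative assumptions rather than prescriptions.

```python
# Hedged sketch of periodic retraining: refit the detector on a rolling window
# of recent KPI records so it tracks network growth and new traffic patterns.
from datetime import timedelta

import pandas as pd
from sklearn.ensemble import IsolationForest

FEATURES = ["rx_level_dbm", "snr_db", "errored_seconds", "link_utilization"]

def retrain(kpis: pd.DataFrame, window_days: int = 90) -> IsolationForest:
    """Refit the detector on the last `window_days` of KPI records."""
    kpis = kpis.copy()
    kpis["timestamp"] = pd.to_datetime(kpis["timestamp"])
    cutoff = kpis["timestamp"].max() - timedelta(days=window_days)
    recent = kpis[kpis["timestamp"] >= cutoff]
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(recent[FEATURES])
    return model
```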

A deeper dive into the AI/ML process

There are substantial benefits to using an AI application with machine learning algorithms. Unlocking them depends on how the artificial neural network is designed and trained, and even more on the quality of the big data on which the application is trained: missing data, disorganized formatting, and inconsistent readings will yield incoherent results.
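The sketch below illustrates one way this preparation step might look in practice: KPI exports from different vendors are mapped to a common schema and resampled to a common granularity before any model sees them. The file names, column mappings, and 15-minute granularity are hypothetical.

```python
# Hedged sketch of data preparation for a multivendor installed base:
# normalize vendor-specific KPI exports into one schema and granularity.
import pandas as pd

# Map each vendor's export into a common schema (mappings are illustrative).
VENDOR_COLUMN_MAP = {
    "vendor_a.csv": {"RSL(dBm)": "rx_level_dbm", "ES": "errored_seconds"},
    "vendor_b.csv": {"rx_power": "rx_level_dbm", "err_sec": "errored_seconds"},
}

frames = []
for path, column_map in VENDOR_COLUMN_MAP.items():
    df = pd.read_csv(path).rename(columns=column_map)
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    # Resample each link to a common 15-minute granularity and drop unusable rows.
    df = (df.groupby(["link_id", pd.Grouper(key="timestamp", freq="15min")])
            [["rx_level_dbm", "errored_seconds"]]
            .mean()
            .dropna()
            .reset_index())
    frames.append(df)

# One coherent KPI table, ready for training and analysis.
kpis = pd.concat(frames, ignore_index=True)
```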

Consequently, data mining and data preparation are fundamental steps. Today, in a multivendor installed-base environment, organizing data is an onerous function, as


