Yet, as with every technology advance, SDN and NFV present a huge management adaptation problem of their own. Fortunately, SDN and NFV are themselves part of the solution to this problem. Because everything will be software, we can use software to sense what is going on and manage it directly. But because humans may struggle to truly understand what is going on in such a large and complex system, AI must be trained to help.
Today, without AI, we are aided by sophisticated software products that package data for human consumption: graphs, the “trails of network blood,” and postcard suggestions of actions. The final decision on what to do is left to humans, who then take, or at least authorize, the required action. Human beings set the rules and policies at a fine-grained level; and although “learning” does take place, it tends to follow linear trajectories pre-designed by the programmers.
The maturity already reached by today’s neural networks will change this. The AI community is interested in creating machines that can do certain useful things far better than humans, and that is especially the case when it comes to managing the big impersonal networks that collaborate to carry all the data in the world.
Human data input, or wetware input, is entirely sensory. Our seven senses (sight, smell, taste, hearing, touch, the vestibular sense and proprioception) have evolved to create a sophisticated way of interacting with the external world. This is not much different in mice. Often fallible and incomplete, our minds are still rather impressive.
Machines do not yet have access to such an array of sensitive and subtle sensory inputs. Nor do they have the kind of cognitive integration processes that allow humans to build mental models of the world they interact with. But scientists are trying to move them there. Dr. Paul Werbos, winner of the neural network Hebb Award, believes existing mathematical models of how brains work are pretty good at explaining, and on the way to creating, an intelligence that can learn from its mistakes and adapt its behavior, what he calls “mouse-level thinking,” although at different scales. Anything more, and we’re just not there yet. But we do not need artificial consciousness to manage our complex networks.
Machines have this benefit: they can “observe” patterns in data directly, without having to go through a process of converting that data to a form humans can understand. There is no need for graphs and dynamically updated maps, no need for alarm bells and sirens, no need even to smell the burning plastic. If we provide that sensory input in the form of numbers, machines can deduce what is going on by analyzing the data. More specifically, they don’t really need to know what is going on in the world outside the data, because to them, the data is the world.
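To make the point concrete, here is a minimal sketch, in Python, of a machine “observing” one such number directly. The class name, window size and thresholds are illustrative assumptions, not a description of any real product: the detector simply learns the recent pattern of a numeric telemetry signal and flags readings that depart from it, with no graph or alarm bell in between.

# Minimal sketch (illustrative only): a watcher that treats the data as the world,
# learning the recent pattern of one numeric signal and flagging departures from it.
from collections import deque
from statistics import mean, stdev

class TelemetryWatcher:
    """Learns the recent pattern of a numeric signal and flags anomalous readings."""

    def __init__(self, window: int = 60, threshold_sigmas: float = 3.0):
        self.samples = deque(maxlen=window)   # rolling window of raw measurements
        self.threshold = threshold_sigmas     # how far from normal counts as "strange"

    def observe(self, value: float) -> bool:
        """Return True if this reading departs from the learned pattern."""
        anomalous = False
        if len(self.samples) >= 10:           # wait for enough history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        return anomalous

# Example: feed in link-utilization percentages; no human-readable rendering needed.
watcher = TelemetryWatcher()
readings = [41.0, 42.5, 40.8, 43.1, 41.7, 42.0, 40.9, 42.2, 41.5, 42.8, 97.3]
for t, r in enumerate(readings):
    if watcher.observe(r):
        print(f"t={t}: utilization {r}% departs from the learned pattern")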
And if the world of things, networks and customers is converted by sensors and measurements into data, we should be able to create a machine, with existing methods, models and technology, that can “understand” what is going on in a complex service network and send “orders” to make things happen. We have such systems now. It’s how Google understands what people like and can push advertisements at them, to mention one enormously successful example.
By creating machines that can observe patterns in data and how one pattern follows another, and that can act on the environment (that is, send instructions to the network and do things for customers), we surely go a long way toward addressing the scale and complexity challenges of large-scale, software-driven service networks.
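A closed loop of that kind can be sketched in a few lines. The interfaces below (NetworkAPI, scale_out, reroute) are hypothetical placeholders rather than any vendor’s actual API, and the fixed thresholds stand in for rules a learning system would infer from the data itself; the sketch only illustrates the sense-decide-act shape of the argument.

# Minimal sense-decide-act sketch (hypothetical interfaces, illustrative thresholds).
from dataclasses import dataclass

@dataclass
class Observation:
    service: str
    latency_ms: float        # sensed from network telemetry
    complaint_rate: float    # sensed from customer behavior (e.g., tickets per minute)

class NetworkAPI:
    """Stand-in for the programmable (SDN/NFV) layer that accepts 'orders'."""
    def scale_out(self, service: str) -> None:
        print(f"order: add a virtual instance for {service}")
    def reroute(self, service: str) -> None:
        print(f"order: shift {service} traffic to a less loaded path")

def decide_and_act(obs: Observation, net: NetworkAPI) -> None:
    # Decision rules shown as fixed thresholds for clarity; in the vision described
    # here, a learning system would refine such rules from the data itself.
    if obs.latency_ms > 150:
        net.reroute(obs.service)
    if obs.complaint_rate > 5:
        net.scale_out(obs.service)

# One turn of the loop: sense -> decide -> act, with no human in the middle.
decide_and_act(Observation("video-cdn", latency_ms=180.0, complaint_rate=7.2), NetworkAPI())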
We will, in effect, have a machine (in reality a massive interconnected network of machines, agents, sensors and data-collection devices) that is able to “sense” network behavior and performance, to “sense” customer experiences based on customer behavior, and (this is the key breakthrough) to “sense” what we, the humans, expect of it.