Just as with other AI systems that learn and evolve, bad guys will add more than "evolutionary fitness" as a criterion for mobile agent success. They will provide their malware with a "mission". Self-replicating, evolving malware will be rewarded and will reproduce, not just for the sake of surviving, but also for fulfilling that mission. In other words, if the world of malware becomes autonomous, self-improving and self-replicating, don't expect a happy outcome. Imagine the Great Plague; now imagine the Great Plague with a hard-coded mission statement.
This analysis is not intended for sheer shock. In counter cyber-espionage practice, we need to apply the same thinking to machines built for evil as we do to machines built for good. Otherwise, we will be savagely surprised.
The future solution probably lies in combining AI pattern recognition and automated response with the flexibility and speed enabled by NFV and SDN. The game strategy will be early identification of an attack pattern, isolation of the attacking traffic's fingerprint, and, where possible, the use of NFV to switch the traffic to cleaner sites in the cloud. If traffic cleaning is not possible, the affected routes must be suspended. Virtual test software, such as EXFO's NFV integration, placed under AI control will ensure that dynamically created routes stay within expected norms.
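A minimal sketch of that decision logic might look as follows. It is illustrative only: the `nfv_orchestrator` object and its `switch_traffic`, `route_within_norms`, and `suspend_route` methods are hypothetical stand-ins for whatever controller API a real NFV/SDN deployment would expose.

```python
from dataclasses import dataclass


@dataclass
class FlowFingerprint:
    """Isolated fingerprint of a suspect traffic flow."""
    flow_id: str
    signature: str   # pattern identifying the attacking traffic
    route_id: str    # route currently carrying the flow


def respond_to_attack(fingerprint: FlowFingerprint,
                      clean_sites: list,
                      nfv_orchestrator) -> str:
    """Apply the early-identification strategy: reroute to a cleaner
    cloud site when one is available, otherwise suspend the route.

    `nfv_orchestrator` is a placeholder for a real NFV/SDN controller.
    """
    if clean_sites:
        target = clean_sites[0]
        nfv_orchestrator.switch_traffic(fingerprint.route_id, target)
        # Verify the new dynamic route behaves within expected norms,
        # as virtual test software under AI control would do.
        if not nfv_orchestrator.route_within_norms(target):
            nfv_orchestrator.suspend_route(target)
            return "rerouted_then_suspended"
        return "rerouted"
    # No cleaner site available: traffic cleaning is not possible,
    # so the affected route must be suspended.
    nfv_orchestrator.suspend_route(fingerprint.route_id)
    return "suspended"
```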
NFV technology will allow for deep insertion of software probes into the network infrastructure. Real-time traffic will be searched for known good and bad patterns. Abnormal traffic will be cloned and routed to cloud-based data stores for deep analysis by AI. AI will allow for rapid identification of complex, multi-phase attacks and the application of real-time, multi-phase responses. Many of the responses will involve real-time reprogramming of network interfaces and control points.
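As a rough illustration of what such a probe might do, the sketch below classifies payloads against known-good and known-bad patterns and clones anything that matches neither to a cloud data store for deeper AI analysis. The pattern sets and the `cloud_store` object are assumptions, not real signature feeds or storage APIs.

```python
import re
from typing import Iterable

# Hypothetical pattern sets; in practice these would be curated signature feeds.
KNOWN_GOOD = [re.compile(rb"^HTTP/1\.[01] 200")]
KNOWN_BAD = [re.compile(rb"\x90{32,}")]   # e.g. an unusually long NOP sled


def classify(payload: bytes) -> str:
    """Label a payload as good, bad, or abnormal (matching neither set)."""
    if any(p.search(payload) for p in KNOWN_BAD):
        return "bad"
    if any(p.search(payload) for p in KNOWN_GOOD):
        return "good"
    return "abnormal"


def probe(traffic: Iterable[bytes], cloud_store) -> None:
    """Software probe inserted via NFV: inspect real-time traffic and clone
    anything not known-good to a cloud-based store for deep AI analysis.

    `cloud_store` is a placeholder exposing a single `append(record)` method.
    """
    for payload in traffic:
        verdict = classify(payload)
        if verdict != "good":
            cloud_store.append({"verdict": verdict, "sample": payload.hex()})
```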
Post-event, AI will be trained with the cloned and stored traffic. Many gaming scenarios will be modeled to determine the optimal responses. These will be loaded into short-term, device-resident, policy-based responses and distributed via intelligent agents. In the best of worlds, these solutions would be freely distributed. Governments could act to make solution reporting a legal requirement and then distribute the solutions freely. This would avoid sticky situations, such as Google discovering a vulnerability and publishing it before Microsoft can find a fix. However, in our problematic real world, markets will likely control who gets the solutions, who pays for them, and who is excluded from them.
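The post-event loop could be sketched roughly as follows: replay the stored attack traffic against candidate responses, keep the one that scores best, and push it into device-resident policy stores. The `simulate` callback and `device.update_policy` method are hypothetical placeholders for the gaming model and the agent distribution channel.

```python
import random
from collections import defaultdict


def game_out_responses(stored_traffic, candidate_responses, simulate, rounds=100):
    """Model many gaming scenarios against cloned, stored attack traffic
    and return the response with the best cumulative score.

    `simulate(response, sample)` is a placeholder for the modeling step and
    is assumed to return a score in [0, 1].
    """
    scores = defaultdict(float)
    for response in candidate_responses:
        for _ in range(rounds):
            sample = random.choice(stored_traffic)
            scores[response] += simulate(response, sample)
    return max(scores, key=scores.get)


def distribute(best_response, devices):
    """Load the optimal response into device-resident, policy-based stores
    carried by intelligent agents. `device.update_policy` is hypothetical."""
    for device in devices:
        device.update_policy(best_response)
```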
Unfortunately, but realistically, this leads us to the multiplayer cyber-arms race scenario described above. It would be prudent to plan for it.