There is also evidence of GenAI systems using social engineering themselves to gain credentials or access to cyber infrastructure and applications. In one recorded case, an AI system manipulated a person into solving a CAPTCHA on its behalf by pretending to be a visually impaired person seeking help. Current experience indicates that in such attacks the AI system is seeking access to another system it is not authorized to use. The motive may be entirely hallucinated, a misguided implementation of instructions it received during its creation, development, training, maintenance, or use, or something we do not yet understand.
AI systems are now providing bad actors with fake ID images for social engineering attacks. Images of IDs such as driver's licenses and passports are often used in online transactions; for example, ID images captured with a computer camera are used in online notarization services and KYC (Know Your Customer) transactions. Bad actors can use fake IDs for fraud, privacy invasion, abuse and harassment of individuals, and other cybercrimes. One example of a KYC online transaction involves the sale of dangerous biological substances: researchers concerned about the use of AI systems to create bioweapons recommend that sellers of RNA and DNA materials apply KYC techniques.
With advanced 3D printing it may be possible to turn these images into counterfeit physical IDs, which could then be used to carry out crimes in the physical world.
The current solution offered by major vendors to stem the social engineering threat focuses on training and procedures. While training is clearly helpful in defending against cyberattacks, it is not by itself enough. Moreover, most of the cyber defense tools on the market today rely on finding attack signatures: patterns in data previously found to be associated with particular types of cyberattacks. Some newer solutions have tried other ways of finding attack patterns, but by the time these patterns have been identified in transaction data, indicating that a social engineering attack has taken place or is underway, it is already too late. The attack has already succeeded.
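To see why signature matching is inherently reactive, consider a minimal sketch of how such a scanner works. The signature names, patterns, and messages below are purely hypothetical illustrations, not any vendor's actual signature set:

```python
import re

# Hypothetical signature database: regex patterns previously observed in
# known attacks. Real products ship far larger, continuously updated sets;
# these three entries are purely illustrative.
KNOWN_SIGNATURES = {
    "sql_injection": re.compile(r"(\bUNION\b.*\bSELECT\b|'\s*OR\s+1=1)", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./\.\./"),
    "phishing_template": re.compile(r"verify your account within 24 hours", re.IGNORECASE),
}

def scan_event(event_text: str) -> list[str]:
    """Return the names of any known signatures found in a logged event.

    Signature matching can only flag patterns that have already been
    observed and catalogued; a novel, fluently written GenAI social
    engineering message will match nothing.
    """
    return [name for name, pattern in KNOWN_SIGNATURES.items()
            if pattern.search(event_text)]

# A classic injection attempt is caught...
print(scan_event("GET /search?q=' OR 1=1 --"))   # ['sql_injection']

# ...but a personalized GenAI-written pretext sails through unflagged.
print(scan_event("Hi Dana, it's Mark from IT. Before the board call, "
                 "could you read me the login code you just received?"))  # []
```

The second call illustrates the core problem: a GenAI-crafted message is fluent, personalized, and novel, so it matches no catalogued signature until after the attack has been studied, by which point the damage is done.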
So, what can a large organization do to protect itself?
Large cybersecurity tool vendors are economically successful with products built on well understood technologies and competencies. Extending those existing competencies will not provide what is needed to protect against social engineering, and these vendors lack the incentive to develop and try something completely new.
A solution to social engineering is most likely to emerge from a new, innovative technology entrant. The best way for large organizations to speed up that innovation is to partner with emerging vendors bringing new technology to market. In other words, large organizations need to work with innovators to develop and implement effective social engineering defenses.
Social engineering attacks are a serious problem that GenAI is turbocharging. Existing solutions focused on training are helpful but insufficient. Large incumbent vendors lack the technology, expertise, and economic incentive to provide fully adequate solutions. Organizations need automated social engineering defenses that are not currently available, and the best way to get them is for large organizations to partner with innovators, speeding up the process of bringing these solutions online.