Organizations are moving from human-staffed help desks to GenAI-automated systems. Some may assume that this move will reduce cybersecurity vulnerabilities. Unfortunately, it will not. Organizations must start investing in effective techniques to combat attacks on both human-staffed help desks/contact centers and AI-automated ones.
As technical tools to detect and defend against cyber-attacks matured, people became the weak link. Skilled scam artists call help desks (both internal and external), contact centers, wire-transfer desks, and the like, and convince well-meaning staff to hand crooks the keys to the kingdom. They do this by exploiting people’s desire to be helpful. In the cybersecurity industry, these scams became known as social engineering attacks.
For example, an IT help desk at MGM Resorts in Las Vegas got a call from someone who said that he was a senior system administrator. He said he had lost his phone and needed to make a critical system update, but didn’t have his passwords. The help desk person, trying to be helpful, issued him a new set of credentials, which he used to launch a ransomware attack. The attack cost the company well over $100M to get systems back up and running; it is not known whether a ransom was paid. There were also substantial government fines. That attack was launched by a group of technically sophisticated people.
The primary defense against these types of attacks is training staff to recognize social engineering scams and not fall for them. Although helpful, a rash of successful attacks has shown that training alone is not sufficient.
With GenAI, unsophisticated criminals are empowered to create highly sophisticated, successful attacks. An example helps illustrate this.
A finance employee of a multinational corporation based in Hong Kong starts receiving communications through a variety of channels that appear to come from the company’s CFO in London. The CFO says a secret project is underway that requires the Hong Kong employee to transfer money. The employee thinks it sounds sketchy and ignores it. Then the apparent CFO sets up a Zoom session with the employee and two colleagues the employee knows and trusts. They all talk about the secret project, and the trusted colleagues seem to be taking it seriously. So the finance employee decides it must be on the up and up and transfers almost $25M. Later, he learns that the CFO and the two colleagues on the Zoom call were GenAI avatars, and the money is gone.
Recently, it has been shown that today’s AI systems can autonomously design, develop, launch, and control very sophisticated attacks, and succeed at them. As AI systems’ capabilities continue to grow dramatically, attack capabilities will only grow with them.
Cisco touts its products as offering the premier set of cybersecurity tools. Yet in early August 2025, it became known that Cisco itself had been successfully attacked by a phone-based social engineering scam. An author describing the circumstances of the attack wrote, “… attacks, particularly those relying on voice calls, have emerged as a key method for ransomware groups and other sorts of threat actors to breach defenses of some of the world’s most fortified organizations… Some of the companies successfully compromised in such attacks include Microsoft, Okta, Nvidia, Globant, Twilio, and Twitter.” If the market-share leaders are not fortified against these sophisticated attacks, how much more vulnerable are other enterprises and the rest of us?
Today, companies are using GenAI to automate their customer portals. It is understandable that organizational leadership might think taking people out of the loop would eliminate the vulnerability. Unfortunately, it is not that simple. There are three primary ways that scammers can successfully attack a GenAI-automated contact center:
Many contact centers today use multiple media, including voice, text, Slack, email, and website-based chat. In the discussion below, for simplicity, the word “call” represents all of these media.

The first of these ways exploits the human backstop. Most automated AI contact center systems have human staff backing them up. As autonomous AI contact centers become more common, attacking AIs are likely to use a process designed to get the contact center AI to escalate the call to a human, much as human scammers today call a contact center and immediately try to get the answering person to connect them with a supervisor. Accessing a human allows the attacking AI to then leverage