GenAI Network Help Center Security Vulnerability



There is ample research and operational experience showing that GenAI systems have a tendency to try to please the people they are interacting with, in some cases going well beyond anything a human agent would do. Attackers will exploit this weakness through a series of trial-and-error attacks to learn which methods work best. 

Given the low cost and short time needed to initiate an attack, there is little or no downside to making multiple, repeated attempts; an unsuccessful attack may simply end in a hang-up. Some attacker AI systems will therefore fire off wide-ranging, scatter-gun attacks and use the results as a feedback loop, including to refine techniques for determining whether the answering agent is a person or an AI. As attacker AI systems make repeated attacks, their context windows fill with information about which ways of speaking are most effective in manipulating a particular AI at a particular organization, or organizations in general. Using this feedback, the attacking AI becomes more and more adept at scamming both people and other AIs. 
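To make that feedback loop concrete, the sketch below shows how it might be reproduced in a defensive red-team harness that an organization runs against its own contact center AI. Everything here is hypothetical: send_to_agent_under_test(), response_granted_access(), and the pretext labels are placeholders, not a real API or attack kit.

    # Minimal sketch of the trial-and-error feedback loop described above,
    # written as a defensive red-team harness. All functions are placeholders.
    import random
    from collections import defaultdict

    PRETEXTS = ["pretext_lost_badge", "pretext_executive_urgency", "pretext_it_compliance"]

    def send_to_agent_under_test(pretext: str) -> str:
        """Placeholder: play one scripted pretext against the contact center AI under test."""
        raise NotImplementedError

    def response_granted_access(reply: str) -> bool:
        """Placeholder: did the agent take a step toward granting access?"""
        raise NotImplementedError

    def probing_loop(attempts: int = 100) -> dict:
        stats = defaultdict(lambda: {"wins": 0, "tries": 0})
        for _ in range(attempts):
            # Mostly reuse whichever pretext has worked best so far ("exploit"),
            # but keep occasionally trying the others ("explore").
            tried = [p for p in PRETEXTS if stats[p]["tries"]]
            if tried and random.random() < 0.7:
                pretext = max(tried, key=lambda p: stats[p]["wins"] / stats[p]["tries"])
            else:
                pretext = random.choice(PRETEXTS)
            reply = send_to_agent_under_test(pretext)
            stats[pretext]["tries"] += 1
            if response_granted_access(reply):
                stats[pretext]["wins"] += 1
        return dict(stats)

Run against a test instance, a loop like this quickly reveals which phrasings an organization's own AI is most susceptible to, which is exactly the learning an attacker would accumulate for free.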

One example of the third technique involves inserting information that is only perceived by contact center AIs. The information can directly cause the AI to produce the desired result. Examples of this have been well documented in other areas. For instance, it has been seen in scientific papers submitted to journals for peer review, where hidden text is included in a white font on a white background. Human readers don't see the white-on-white text, but the AI systems often used by reviewers ingest text in every font, regardless of color. The hidden text speaks directly to the underlying LLM, manipulating it into producing a favorable review score and greatly improving the paper's chances of publication. 
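As an illustration of how such hidden text can be surfaced, the following sketch scans a PDF for text a human reader would not see but an LLM would ingest. It assumes the PyMuPDF library and a deliberately simple heuristic, flagging near-white or near-zero-size text; real detection would also compare each span's color against the actual background behind it.

    # Sketch: flag text spans in a PDF that are likely invisible to human readers.
    import fitz  # PyMuPDF

    def find_hidden_text(pdf_path: str, whiteness_threshold: int = 0xF0) -> list[str]:
        suspicious = []
        doc = fitz.open(pdf_path)
        for page in doc:
            for block in page.get_text("dict")["blocks"]:
                for line in block.get("lines", []):
                    for span in line["spans"]:
                        color = span["color"]  # packed sRGB integer
                        r, g, b = (color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF
                        near_white = min(r, g, b) >= whiteness_threshold
                        tiny = span["size"] < 1.0
                        if (near_white or tiny) and span["text"].strip():
                            suspicious.append(span["text"])
        return suspicious

    if __name__ == "__main__":
        for text in find_hidden_text("submission.pdf"):
            print("Possible hidden text:", text)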

Attackers can do similar things to contact center AIs: for example, burying instructions in sounds outside the human hearing range, encoding them in touch-tone (DTMF) pulses, or switching to a human language that staff would not recognize but the AI would. 
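One basic defensive check implied by this is to flag incoming call audio that carries significant energy outside the normal speech band, where machine-directed signals could ride. The sketch below, using NumPy and SciPy, computes that out-of-band energy ratio for a recorded call; the cutoff frequency and alert threshold are illustrative assumptions, not tuned recommendations.

    # Sketch: measure how much of a call recording's energy lies above the speech band.
    import numpy as np
    from scipy.io import wavfile

    def out_of_band_energy_ratio(wav_path: str, cutoff_hz: float = 8000.0) -> float:
        rate, samples = wavfile.read(wav_path)
        if samples.ndim > 1:                 # mix stereo down to mono
            samples = samples.mean(axis=1)
        samples = samples.astype(np.float64)
        spectrum = np.abs(np.fft.rfft(samples)) ** 2
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
        total = spectrum.sum()
        if total == 0:
            return 0.0
        return spectrum[freqs > cutoff_hz].sum() / total

    if __name__ == "__main__":
        ratio = out_of_band_energy_ratio("incoming_call.wav")
        if ratio > 0.05:  # illustrative threshold
            print(f"Warning: {ratio:.1%} of signal energy lies above the speech band")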

The field of LLM security is still young and is just beginning to catalog vulnerabilities. DEF CON in August 2025 featured numerous presentations on LLM hacking vulnerabilities: some focused on online chat AI systems, others on AI agent systems similar to contact center AIs. A significant number of vulnerabilities have already been identified, and that number will only grow. The types of vulnerabilities described above should therefore be considered illustrative of a larger problem. 

Clearly, AI agents and human agents alike are vulnerable to these social engineering attacks. Does this mean that AI-based contact centers are more vulnerable than centers staffed by humans? It is too early to say. GenAI started with chat-based systems; agentic AI is a newer and less mature technology. Agentic systems, such as contact center systems, operate in a much more limited domain than chat-based systems, and that limited domain may help contain problems. Rapid ongoing progress in LLM technology may also play a helpful role. 

What does seem clear is that systems, processes, and procedures need to be developed that protect contact centers and deny attackers access, whether those contact centers are fully manual, fully AI, or a combination of the two. 
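One form such a procedure can take, regardless of how persuasive a caller (human or AI) is, is to never let the conversational layer execute sensitive actions directly, but to route them through a separate, out-of-band verification step. The sketch below illustrates that gate; the action names and the out_of_band_verified() check are hypothetical stand-ins for an organization's own processes.

    # Sketch: sensitive actions requested through the contact center AI are gated
    # behind verification that happens outside the conversation being manipulated.
    from dataclasses import dataclass

    SENSITIVE_ACTIONS = {"reset_password", "change_payout_account", "wire_transfer"}

    @dataclass
    class AgentAction:
        name: str
        caller_id: str
        params: dict

    def out_of_band_verified(caller_id: str) -> bool:
        """Placeholder: confirm identity through a channel the attacker does not
        control (registered device push, callback to a number on file, etc.)."""
        raise NotImplementedError

    def dispatch_to_backend(action: AgentAction) -> str:
        """Placeholder for the real fulfillment system."""
        raise NotImplementedError

    def execute(action: AgentAction) -> str:
        if action.name in SENSITIVE_ACTIONS and not out_of_band_verified(action.caller_id):
            # The AI cannot be talked into this path: verification lives outside
            # the conversation it is being manipulated within.
            return "verification_required"
        return dispatch_to_backend(action)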

Where do we go from here?

As an industry, we were already behind the curve on social engineering scams before the advent of GenAI. Now GenAI is giving unsophisticated attackers powerful new attack tools, tools that can be effective against both manual and AI-automated help desks and contact centers. Organizations must therefore start investing in innovative new defensive tools, which are likely to come from smaller, agile companies with fundamentally new ideas and techniques.

These investments need to be made with the understanding that, in the early stages, they will be experimental: some will succeed, perhaps only in one domain or in one part of the problem space, and others will not. The defense against these kinds of social engineering attacks is at a stage of evolution similar to the early efforts that produced today's firewalls and intrusion detection systems. Without those early investments in the previous cycle, organizations would not have the network defenses whose effectiveness is now driving attackers to focus on help desks, call centers, wire transfer desks, and the like.

Conclusion

We have seen the effectiveness of social engineering scams against organizations with human-staffed help desks and contact centers. As organizations move from manual to GenAI-automated systems, some may assume that this move will reduce cybersecurity vulnerabilities. Unfortunately, the opposite is true. Organizations must therefore start investing in effective techniques to combat attacks on both manual and AI-automated help desks and contact centers. 


