
AI in Cybersecurity: The Good News and the Bad News

By: Jacob Ukelson, D.Sc.

The impact of AI on our way of life is accelerating, and to no one’s surprise, AI has become prominent in the opposing realms of cybercrime and cybersecurity. Since the beginning of the digital age, the dynamic between these two sides has not changed: each has used the capabilities of emerging technologies to achieve its goals. That contest is now playing out with AI, in both cyber offense and defense. Those responsible for the security of their organizations, from technologists to executives, should stay informed about how their adversaries are using AI and how AI can counter those threats.

Tools of the Trade: Machine Learning and Machine Reasoning

To begin, we should clarify some of the terms used when discussing AI. Without going into too much detail, think of AI as an umbrella term encompassing different computer-based technologies that replicate human problem-solving or decision-making. Machine learning and machine reasoning are two types of AI technology that are used to solve different problems.

Machine learning (ML) applies statistical analysis and pattern recognition to large data sets to uncover patterns of behavior. There are several subsets or types of machine learning, differentiated by the kinds of data they use (structured or unstructured), the size of the data sets they can work with, and the types of services they provide. Applications of ML are varied, ranging from fraud detection and customer-retention analysis to self-driving cars.
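To make the fraud-detection example concrete, here is a minimal sketch, mine rather than any specific product's, of anomaly-based fraud detection using scikit-learn's IsolationForest. The transaction features and data are invented for illustration.

# Minimal sketch: ML-based fraud detection with an Isolation Forest.
# All data is synthetic; the feature choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: [amount_usd, hour_of_day, merchant_risk]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.5, size=1000),  # typical amounts
    rng.integers(8, 22, size=1000),                 # daytime activity
    rng.uniform(0.0, 0.3, size=1000),               # low-risk merchants
])

# A few anomalous transactions: large amounts, odd hours, risky merchants
anomalies = np.array([
    [9500.0, 3, 0.9],
    [7200.0, 2, 0.8],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# predict() returns +1 for inliers, -1 for outliers
print(model.predict(anomalies))   # expected: [-1 -1]
print(model.predict(normal[:3]))  # expected: mostly [1 1 1]

The model learns the statistical shape of normal transactions and flags points that fall outside it; real systems would use far richer features and labeled feedback.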

Generative AI (ChatGPT, for example) is learning-based AI capable of creating original text, images, audio, and data. Cyber attackers are using generative AI in several ways, described later in this article.

While machine learning rests on the statistical identification of hidden patterns within large amounts of data through correlation, machine reasoning works from explicit facts and relationships and draws conclusions from them. For example, a reasoning system can differentiate the meaning of the words “put on” in the sentences “I put on my clothes” and “I put on a show.” Personal assistants such as Siri and Alexa use machine reasoning to generate answers to the questions we ask, including questions they have never encountered before.
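As a toy illustration of reasoning over facts and relationships (my own example, with invented facts and a single rule), a minimal forward-chaining engine derives new conclusions until nothing more follows:

# Toy forward-chaining inference: derive new facts from rules until fixpoint.
# The facts and the single rule are invented for illustration.
facts = {("siri", "is_a", "assistant"),
         ("assistant", "can", "answer_questions")}

# Rule: if X is_a Y and Y can Z, then X can Z (a simple inheritance rule)
def apply_rules(facts):
    derived = set(facts)
    for (x, r1, y) in facts:
        for (y2, r2, z) in facts:
            if r1 == "is_a" and r2 == "can" and y == y2:
                derived.add((x, "can", z))
    return derived

# Iterate until no new facts appear (a fixpoint)
while True:
    new_facts = apply_rules(facts)
    if new_facts == facts:
        break
    facts = new_facts

print(("siri", "can", "answer_questions") in facts)  # True

Real reasoning systems use far richer knowledge representations, but the pattern is the same: start from stated facts, apply rules, and conclude things that were never stored explicitly.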

How Attackers Use AI

Cybercrime is big business. One recent estimate puts the global annual revenue of cybercrime at $1.5 trillion. The total cost of cybercrime is even greater, about $6 trillion by some estimates.

Like any business, cybercrime enterprises strive to grow revenue and reduce costs. Their KPIs (key performance indicators) are cost per attack, success rate, and revenue per attack. Learning-based AI is proving highly effective at driving the success of cybercrime as a business, and it is being used to great effect in the following ways:

  1. Generative AI enables attackers to produce more convincing phishing emails quickly and cheaply. These AI systems are well adapted to crafting convincing emails that appear to come from a legitimate source, and they learn and improve over time, growing their input data sets and adjusting based on the effectiveness of past attempts. (The sketch after this list shows how the same statistical pattern recognition can be turned around to flag such messages.)

  2. Generative AI is also used to conduct finely targeted spear-phishing attacks, often emails or voicemails built around highly specific information and circumstances pertaining to key individuals at an organization. Many of these are business email compromise (BEC) attacks, in which the attacker impersonates a trusted party (a company executive, a vendor, etc.) to trick the victim into authorizing a wire transfer, revealing key information, or sending sensitive data.

  3. Attackers are using AI to generate self-learning malware that adapts its course of action to the situation and to the particular systems of its victims, allowing it to evade detection and adjust to the environment and defenses of its targets.

  4. AI tools such as chatbots and voice-cloning systems are used to conduct so-called deepfake attacks, in which the voice of a trusted party is mimicked to convince the victim to perform some action. For example, in 2020, a manager at a Hong Kong bank received a call in the voice of a director he knew well, asking him to authorize a $35M wire transfer. The request was backed up by what appeared to be legitimate emails, and the transfer was carried out. Deepfakes can mimic voices and images and can be used to interact with victims conversationally.

  5. A sophisticated, sustained cyberattack known as an advanced persistent threat (APT) occurs when an intruder enters a network undetected and remains there for an extended period to steal sensitive data. APTs frequently use AI to avoid detection and to target specific organizations or individuals.
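Picking up the forward reference from item 1: here is a minimal, defender-side sketch (my own, assuming a small labeled corpus) of a statistical text classifier that learns to flag phishing-style messages. The tiny training set is invented and far too small for real use.

# Minimal sketch: a statistical phishing-text classifier (defender side).
# The tiny labeled corpus below is invented and far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account now or it will be suspended",
    "Wire transfer required today, reply with approval immediately",
    "Your invoice is attached, click here to confirm payment details",
    "Team lunch moved to noon on Friday, see you there",
    "Attached are the meeting notes from yesterday's review",
    "Quarterly report draft ready for your comments",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-style, 0 = benign

# TF-IDF features + logistic regression: classic statistical pattern recognition
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

test = ["Please approve this urgent wire transfer immediately"]
print(clf.predict(test))        # expected: [1]
print(clf.predict_proba(test))  # class probabilities

In practice, defenders train such models on millions of labeled messages and combine them with sender-reputation and behavioral signals; the point here is only the mechanism: the same statistical learning that helps attackers scale also helps defenders recognize their output.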

Cybercrime statistics bear out the impact AI is having on the ability of criminal enterprises to conduct attacks more cheaply and more effectively than ever before. For example, phishing attacks have grown at a 150% annual rate since 2019 (see Figure 1), facilitated by the use of AI-driven automation to generate attacks at scale.


