
The Ethical Use of Generative AI

By: Mark Cummings, Ph.D., Zoya Slavina, William Yeack, CSE

Generative AI promises profound benefits for society. Unfortunately, it also comes with some nasty side effects, including serious cybersecurity threats. An approach that identifies these bad side effects and then develops ways to mitigate them may make the ethical use of Generative AI possible across a wide range of application areas. In the cybersecurity space, there is potential to use Generative AI systems ethically to test defensive systems and train staff. Given that potential, professional ethics in this field require us to do so.

Experts in Generative AI warn against using it to test mitigation tools, out of concern that doing so will make the Generative AI system better and more likely to produce these nasty side effects. In the cybersecurity space, this creates a dilemma: how can we test and train effectively without using Generative AI? Below, we discuss the dilemma and its resolution in the context of cybersecurity, as well as how the approach taken for cybersecurity can be generalized to other application areas.

Bad actors using Generative AI are fundamentally changing the cybersecurity attack space. These bad actors will do everything possible to improve Generative AI systems for creating attacks, both by subverting attempted controls on public systems and by creating private systems stripped of all controls.

Rogue states where cybercrime accounts for a significant portion of GDP will build fit-for-purpose Generative AI attack systems. In fact, a Generative AI SaaS (Software as a Service) offering specifically structured to create cybersecurity attacks recently appeared on the Dark Web. This is forcing the creation of a new generation of defense tools: ones that are dynamic and adaptive.

The question is: how do we test and improve our defenses without increasing the strength and ease with which Generative AI systems can create attacks? There is a large body of published material documenting the capability of Generative AI systems to do bad things. In each case, the authors recommend against using Generative AI systems to test for these bad behaviors, because doing so trains the systems to do bad things better and to develop easier ways to circumvent ethical controls.

In cybersecurity recommendations concerning Generative AI, authors start with not using Generative AI for “Red Teaming.” A Red Team is a group of people and tools that cybersecurity professionals use to test defensive systems. The concept is that the Red Team finds vulnerabilities that a “Blue Team” of defenders can then address. Based on these findings, the Blue Team can improve defensive tools, processes, procedures, and training. The argument is that in performing Red Team functions, a Generative AI system will learn how to create new types of attacks that may be more damaging, faster, and so on. The logic here seems unassailable. Carrying that logic further, employing a Generative AI system as part, or all, of a Blue Team would lead to the same result: a better attacking Generative AI system. Without being able to use Generative AI systems to test defenses and to train staff to respond to Generative AI-created attacks, however, defenders will be at a great disadvantage. The sketch below illustrates the feedback loop at the heart of this concern.
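To make the concern concrete, consider the following minimal Python sketch. It is purely illustrative: RedTeamModel, generate_attack, and record_outcome are hypothetical names, not a real library API, and the "defense" is a trivial placeholder. The sketch contrasts an inference-only Red Team model with frozen weights against one that collects its own successful attacks as training data, which is exactly the self-improvement loop the experts warn about.

# Hypothetical illustration only. RedTeamModel, generate_attack, and
# record_outcome are invented names, not a real library API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class RedTeamModel:
    """Stand-in for a generative model used to draft simulated attacks."""
    frozen: bool = True  # inference-only when True
    training_examples: List[str] = field(default_factory=list)

    def generate_attack(self, target_profile: str) -> str:
        # A real system would query a local, isolated model here.
        return f"simulated phishing lure tailored to: {target_profile}"

    def record_outcome(self, attack: str, succeeded: bool) -> None:
        if self.frozen:
            # Frozen posture: outcomes inform the Blue Team's fixes
            # but are never fed back into the model's weights.
            return
        if succeeded:
            # Problematic posture: each successful attack becomes new
            # training data, steadily strengthening the attacker.
            self.training_examples.append(attack)


def defense_blocks(attack: str) -> bool:
    """Placeholder for the defensive stack under test."""
    return "finance" not in attack


def red_team_exercise(model: RedTeamModel, targets: List[str]) -> List[str]:
    """Run one round of simulated attacks; return findings for the Blue Team."""
    findings = []
    for target in targets:
        attack = model.generate_attack(target)
        succeeded = not defense_blocks(attack)
        model.record_outcome(attack, succeeded)
        if succeeded:
            findings.append(f"gap found while targeting: {target}")
    return findings


if __name__ == "__main__":
    model = RedTeamModel(frozen=True)  # no learning loop
    print(red_team_exercise(model, ["finance department", "engineering"]))

In the frozen configuration, Red Team findings flow only to the defenders; in the unfrozen configuration, the same findings also compound the model's attacking capability.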

The challenge, then, is to create a way to use Generative AI while avoiding this ethical conundrum. What follows is the beginning of an approach to meeting this challenge.

An Ethical Approach to Using Generative AI

Here we present the basic principles of an approach to using Generative AI while avoiding the ethical conundrum in the cybersecurity defense space. Then, we discuss some of the implementation issues.


