The Ethical Use of Generative AI


Such a system cannot be run on a public cloud, and not even on a private cloud. Some estimate that the hardware for the largest models can approach the billion-dollar level, so the hardware and operating expense of such a model is also quite large.

For large government organizations and very large enterprises, this may not be a problem. For the small, innovative software companies from which the needed dynamic/adaptive (S2-D2) defensive tools are likely to originate, however, it is out of reach.

There is an alternative: open source. There are open-source Generative AI models and open-source Internet crawls to train them on, and a range of such tools is available. Some will run on a single desktop computer. At the other extreme is the recently announced Meta Llama 2.
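As a concrete illustration of how accessible the open-source route can be, the sketch below loads a small open-weight model and generates text locally. It is a minimal sketch, not a recommendation: it assumes the Hugging Face transformers library is installed, and the model name is illustrative (Llama 2 weights, for instance, require accepting Meta's license first).

```python
# Minimal sketch: running a small open-source generative model locally.
# Assumes the Hugging Face "transformers" library; the model name is
# illustrative -- any small open-weight causal language model will do.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # illustrative choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

prompt = "List common phishing red flags for a staff training exercise:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Even the seven-billion-parameter end of this spectrum runs on a single well-equipped workstation, which is precisely the economic point: the entry cost is a desktop, not a data center.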

The open question is how large a Generative AI system will need to be. A smaller system will be more economical and easier to control, but will it have enough capability to be a good stand-in for the systems bad actors and rogue states use to launch cyberattacks?

A new approach is emerging that may help with this problem: the Mixture of Experts (MoE). An MoE combines multiple smaller expert models, each trained on its own tasks and subject areas, behind a gating mechanism that routes each request to the most relevant expert(s). With an MoE, it may be possible to have a small Generative AI system that is just as capable at producing attacks as a larger, more general system. Of course, the attackers will likely use MoEs too, so tracking the cost/capability trade-off over time will be critical.
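To make the routing idea concrete, the sketch below shows the gating logic at the heart of a Mixture of Experts in miniature. Everything in it is illustrative: real systems use a learned gating network over actual expert models, not the keyword scoring and stub experts shown here.

```python
# Illustrative Mixture-of-Experts routing. The stub "experts" and the
# keyword-based gate are placeholders; production MoE systems learn a
# gating network that scores experts for each input.
from typing import Callable, Dict

# Hypothetical small expert models, each specialized for one subject area.
EXPERTS: Dict[str, Callable[[str], str]] = {
    "phishing": lambda p: f"[phishing expert] handles: {p}",
    "malware":  lambda p: f"[malware expert] handles: {p}",
    "network":  lambda p: f"[network expert] handles: {p}",
}

def gate(prompt: str) -> str:
    """Score each expert against the prompt and return the best match.
    Here the score is a naive keyword count; real gates are learned."""
    scores = {name: prompt.lower().count(name) for name in EXPERTS}
    return max(scores, key=scores.get)

def respond(prompt: str) -> str:
    # Route the request to the single highest-scoring expert.
    return EXPERTS[gate(prompt)](prompt)

print(respond("Draft a phishing email scenario for a training exercise"))
```

The appeal for defenders is that each expert can stay small and cheap to run while the ensemble covers the breadth of a much larger general-purpose model.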

This outline of the trade-offs shows that it is possible to meet both principles while using Generative AI technology to test defenses and train staff. Clearly, there is still work to do for interested organizations to create their own environments that meet these principles. And different organizations may take different paths in so doing.

Cybersecurity Defense Conclusion

The bottom line is that it is indeed possible to use Generative AI systems in an ethical fashion to test defensive systems and train staff. And, as stated above, professional ethics in this field demand that we do so.

Other Generative AI Application Areas

The discussion above only considers the cybersecurity issues around defending other systems from attacks created by a Generative AI system; it does not consider protecting Generative AI systems themselves from attack. That is another area that needs to be addressed.

Other application areas have problems with harmful side effects of Generative AI such as misinformation, lying, libel, manipulation, and so on. A similar approach can be applied to those side effects. For each, the side effect(s) must be well defined. Then, principles for overcoming those side effects must be established. In some cases, the principles stated here may be relevant; in others, new principles will need to be developed.

In any case, developing such principles may not be easy. For example, a high school student in an Oak Ridge, Tennessee, community meeting proposed: “AI should be used to assist a doctor but should never be used instead of a doctor.” This may seem reasonable on its face, but there are already completely automated surgical systems in use.

Once the principles are well defined, various implementation approaches can be considered. In some cases, the approaches discussed here can be a foundation; for others, new approaches will be needed, and those approaches may require new technology. What seems clear is that there is no one-size-fits-all solution. Identifying the harmful side effects and working to mitigate them, however, may make the ethical use of Generative AI possible across a wide range of application areas.


