Smart AI Ops for IT



Apple has announced a set of AI enhancements to its OS and its suite of office programs, running on both Macs and iPhones. They are less intrusive than Microsoft Copilot, and Apple promises strong security and privacy controls. How well they will work has yet to be determined. Elon Musk has said he will not allow any Apple computers with AI in the OS into any of his companies, and others have voiced similar concerns.

Central-site GenAI systems have, for some time, been seen as capable of generating very effective cyber-attacks, ranging from introducing malicious code to social engineering attacks in the form of fake video calls from the CFO. In one reported case, a social engineering attack cost a company $25 million after what appeared to be the CFO, on a video call, gave wire transfer instructions. Operators of central-site AI systems have attempted to put controls in place to limit such malicious activity, but their guardrails have had limited effectiveness. Moreover, as GenAI moves out of central sites onto edge systems, the ability to use it to create attacks will increase.

IT Actions to Minimize Risk

In general, IP leakage can be a serious concern depending on the job function and the type of data being handled. This means that a one-size-fits-all set of endpoint configurations doesn't work. IT has to engage senior management to bring together a cross section of functional units in the organization to identify the types of data that must be protected from IP leakage and the functions that work with that data.
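To make that mapping usable, it can help to capture it in a machine-readable form that endpoint-management tooling can consume. The sketch below is a minimal illustration only: the function names, data classes, and risk tiers are hypothetical, and a real map would come out of the cross-functional review described above.

```python
# Minimal sketch of a function/data risk map (all entries are hypothetical).
# A real map would be produced by the cross-functional review and consumed
# by endpoint-management tooling to pick a configuration per user.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    HIGH = "high"          # no endpoint AI, public AI blocked
    MODERATE = "moderate"  # Copilot off, public AI restricted
    LOW = "low"            # default configuration


@dataclass(frozen=True)
class FunctionProfile:
    function: str        # job function
    data_classes: tuple  # sensitive data the function handles
    tier: RiskTier


# Illustrative entries only -- not a recommendation for any real org.
RISK_MAP = [
    FunctionProfile("legal", ("contracts", "litigation"), RiskTier.HIGH),
    FunctionProfile("r_and_d", ("source_code", "designs"), RiskTier.HIGH),
    FunctionProfile("finance", ("forecasts",), RiskTier.MODERATE),
    FunctionProfile("marketing", ("public_materials",), RiskTier.LOW),
]


def tier_for(function: str) -> RiskTier:
    """Look up the risk tier for a job function, defaulting to LOW."""
    for profile in RISK_MAP:
        if profile.function == function:
            return profile.tier
    return RiskTier.LOW
```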

Once there is a map of functions and data to be protected, senior management must explain to the whole organization what steps are being taken to maximize the benefits of AI while limiting the risks. It is important that all members of the organization understand the situation and support the initiative senior management is leading.

With that in place, specific steps can be taken to limit IP leakage. The Gemini warning is a good general caution: any online AI system is likely to use any input data for training, and other companies may not be as careful about protecting raw user information. So, guarding against IP leakage can be important. Staff in certain functions, or those handling certain data, may have to be barred from accessing public online AI systems. Staff cooperation can be reinforced by IT working with security operations to block access to external online AI systems from the endpoints those staff members use.
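One common way to enforce such a block is a denylist pushed to the web proxy or DNS filter for the affected endpoints. The sketch below is illustrative only: the domain list is a sample to be verified and extended, and the plain-text output is a hypothetical stand-in for whatever format the organization's filtering product actually ingests.

```python
# Sketch: generate a per-tier denylist of public AI services for a
# web proxy / DNS filter. Domains and output format are illustrative;
# adapt to whatever your filtering product actually consumes.

# Sample public AI endpoints (verify and extend before real use).
PUBLIC_AI_DOMAINS = [
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
]

# Hypothetical mapping from risk tier to blocking behavior.
TIER_POLICY = {
    "high": PUBLIC_AI_DOMAINS,      # block all public AI services
    "moderate": PUBLIC_AI_DOMAINS,  # block, pending approved alternatives
    "low": [],                      # no AI-specific blocks
}


def denylist_for(tier: str) -> str:
    """Render a plain-text denylist, one domain per line."""
    return "\n".join(TIER_POLICY.get(tier, []))


if __name__ == "__main__":
    print(denylist_for("high"))
```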

Some may think that internal, private online company AI systems don't share this risk. Not necessarily so. If the internal system is built on a vendor-provided platform that takes user input back for the vendor to train with, the same risk applies. In that case, IT must treat these systems as if they were external public ones.

Another risk with internal AI systems involves those used to help organization members find documents. In these cases, all of the organization's documents are fed into the system, which makes it an attractive attack target. IT needs to work closely with security to add extra layers of protection, monitoring, and fast response to attacks.
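As one illustration of what the monitoring layer might watch for, a simple guard can flag accounts whose retrieval volume from the document system suddenly spikes, a common signature of bulk exfiltration. Everything in the sketch below is an assumption: the threshold, the time window, and the query-log format are hypothetical, not a specification of any real system.

```python
# Sketch: flag users whose document-retrieval volume spikes, a common
# exfiltration signature. Threshold, window, and log format are all
# illustrative assumptions.

from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=10)  # hypothetical sliding window
THRESHOLD = 200                 # hypothetical per-window retrieval limit


def flag_bulk_retrievers(events):
    """events: iterable of (timestamp: datetime, user: str) retrieval
    records, assumed sorted by timestamp. Returns users to alert on."""
    recent = defaultdict(list)  # user -> timestamps inside the window
    flagged = set()
    for ts, user in events:
        bucket = recent[user]
        bucket.append(ts)
        # Drop records that have aged out of the window.
        while bucket and ts - bucket[0] > WINDOW:
            bucket.pop(0)
        if len(bucket) > THRESHOLD:
            flagged.add(user)
    return flagged
```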

Instead of private online AI systems, IT may deploy LLMs (Large Language Models) on personal computers. If this is done with encapsulation procedures that erase and then reinstall the LLMs after use, the IP leakage problem may be well contained.
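One way to approximate such encapsulation is to run the local model in a throwaway container and remove the model artifacts when the session ends. The sketch below assumes Docker as the isolation mechanism and a hypothetical local-LLM image name; the article does not prescribe a specific tool, so this is just one possible shape.

```python
# Sketch of an "erase after use" encapsulation for a local LLM, using
# Docker as the isolation mechanism. The image name is hypothetical;
# no specific tool is prescribed here.

import subprocess

LLM_IMAGE = "example.internal/local-llm:latest"  # hypothetical image


def run_ephemeral_session():
    """Run the model in a container deleted on exit (--rm), with no
    network, then remove the image so no model state persists locally."""
    subprocess.run(
        ["docker", "run", "--rm", "--network=none", "-it", LLM_IMAGE],
        check=True,
    )
    # Erase the cached image; it is re-pulled at the next session.
    subprocess.run(["docker", "rmi", LLM_IMAGE], check=True)


if __name__ == "__main__":
    run_ephemeral_session()
```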

Endpoint AI systems such as Microsoft's and Apple's are in their infancy. It may well be that the IP leakage and cyber security vulnerabilities we see today will be minimized or eliminated as the technology matures. But for now and the foreseeable future, IT needs to develop and deploy a portfolio of endpoint configurations, with no AI deployed at all for very sensitive functions or for people handling very sensitive data. For high-risk functions or data, this means turning off and de-installing Microsoft Copilot and not allowing the Apple OS containing AI to be installed. For moderate-risk functions and data, it means turning off Copilot, but not necessarily de-installing it.
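For the Windows side of that portfolio, the sketch below shows one way to apply the per-tier Copilot setting, using the registry-backed group policy value (TurnOffWindowsCopilot) that Microsoft has documented for disabling Windows Copilot. Treat the exact key and value as something to verify against current Microsoft documentation, since the Copilot policy surface is still evolving.

```python
# Sketch: apply the per-tier Copilot setting on a Windows endpoint via
# the TurnOffWindowsCopilot group-policy registry value. Verify the key
# against current Microsoft documentation before relying on it; the
# Copilot policy surface is still changing. Windows-only (uses winreg).

import winreg

POLICY_KEY = r"Software\Policies\Microsoft\Windows\WindowsCopilot"


def set_copilot_disabled(disabled: bool) -> None:
    """Set TurnOffWindowsCopilot (1 = Copilot off) for the current user."""
    key = winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, POLICY_KEY)
    try:
        winreg.SetValueEx(
            key, "TurnOffWindowsCopilot", 0, winreg.REG_DWORD,
            1 if disabled else 0,
        )
    finally:
        winreg.CloseKey(key)


# High- and moderate-risk tiers both turn Copilot off; only high-risk
# endpoints additionally remove it and block OS-level AI installs.
if __name__ == "__main__":
    set_copilot_disabled(True)
```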


