
Smart AI Ops for IT

By: Mark Cummings, Ph.D.

The advent of PCs with built-in AI, added to the existing online AIs, is creating new challenges for IT Ops. The recent announcements of Copilot from Microsoft and Gemini from Google, AI embedded in macOS, plus the ability to run GenAI locally on PCs, promise productivity increases but open the door to new vulnerabilities in cybersecurity, privacy, and the protection of proprietary information assets. In smart organizations, IT Ops and senior management need to partner to develop solutions that maximize the benefits of AI while protecting the organization.

In most organizations, IT is responsible for configuring staff PCs. How phones are handled varies: some companies provide phones for some or all staff, configured by their IT people; others use a BYOD (bring your own device) policy with configuration requirements; still others don't go beyond simple BYOD. Senior management typically only gets involved at the level of setting objectives for cost, cybersecurity, and ease of use. IT has sole responsibility for turning those objectives into standard configurations, and for the processes that deploy them. AI is going to require management and IT to do more.
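To make that concrete: one way IT can turn a "no embedded AI on this class of machine" objective into a standard configuration is through Windows policy settings. The following is a minimal sketch, not a deployment script, that uses Python's standard winreg module to set Microsoft's documented TurnOffWindowsCopilot policy value for the current user; in a real fleet the same setting would be pushed through Group Policy or an MDM tool rather than run per machine.

# Sketch: disable the embedded Windows Copilot assistant for the current user
# by setting the documented "TurnOffWindowsCopilot" policy value. In a real
# fleet this would be delivered by Group Policy or MDM, not a local script.
import winreg

POLICY_KEY = r"Software\Policies\Microsoft\Windows\WindowsCopilot"

def disable_windows_copilot() -> None:
    # Create (or open) the policy key under HKCU and set the DWORD flag.
    with winreg.CreateKeyEx(
        winreg.HKEY_CURRENT_USER, POLICY_KEY, 0, winreg.KEY_SET_VALUE
    ) as key:
        winreg.SetValueEx(key, "TurnOffWindowsCopilot", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    disable_windows_copilot()
    print("Windows Copilot policy set; takes effect after sign-out or reboot.")

The same pattern, a small set of policy values owned by IT and applied uniformly, extends to the other embedded AI features discussed below.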

At a high level, the use of AI, while creating productivity benefits, also creates risks in cybersecurity, IP leakage, and privacy. This means management will have to identify the specific job functions and data sets where the risks are too high for integration with AI, and IT Ops will have to develop AI protection configurations, processes, and procedures for those functions and data sets.
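What might such a policy look like once encoded? One simple pattern is an explicit allowlist, maintained by IT Ops, that maps job functions to the maximum data sensitivity they may expose to AI tools. The sketch below is purely illustrative: the function names, sensitivity labels, and policy table are hypothetical, and in practice this logic would live inside a DLP or endpoint-management system rather than a standalone script.

# Illustrative sketch (hypothetical labels and policy table): gate AI use
# by job function and data-set classification before any prompt leaves
# the organization. Anything not listed is denied by default.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3   # e.g. an unreleased advertising campaign

# Management's decision, encoded: which functions may use GenAI, and up
# to what data sensitivity.
AI_POLICY = {
    "marketing": Sensitivity.INTERNAL,
    "engineering": Sensitivity.INTERNAL,
    "legal": Sensitivity.PUBLIC,   # legal may use AI only on public data
}

def ai_use_allowed(job_function: str, data: Sensitivity) -> bool:
    """Return True only if this function may send this data level to an AI."""
    ceiling = AI_POLICY.get(job_function)
    return ceiling is not None and data.value <= ceiling.value

assert ai_use_allowed("marketing", Sensitivity.INTERNAL)
assert not ai_use_allowed("legal", Sensitivity.CONFIDENTIAL)
assert not ai_use_allowed("finance", Sensitivity.PUBLIC)  # unlisted: denied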

AI Risks

Google has issued a specific warning. It says that Gemini will maintain the confidentiality of information put into it by end users, but it also warns that all such information will be used for AI training. That training activity can become a conduit for intellectual property (IP) leakage. So, Google says, don't put anything in that you would be concerned about leaking.

For example, specific proprietary information about an upcoming advertising campaign that company X is working on will be kept confidential. But because that information is fed into training, a competitor who asks how to compete effectively with company X may get back a plan that never reveals the campaign's specifics, yet effectively responds to it.

Google is clear about using the data. Others are not. Still others promise not to use the data, but one wonders: can they be believed? Finally, we learned in the dot-com bust that companies in bankruptcy do sell user data, even when they are contractually bound not to. And not all of today's GenAI companies will be around tomorrow. So it pays to be cautious about data leakage, no matter what.

Microsoft Copilot, through the Recall feature, captures every keystroke and screen image from the last 30 days on a PC where it is active. Microsoft says that the resulting data set is encrypted. It also says that the resulting AI system can find anything and answer questions like: what was that memo from senior management about security precautions that I saw a couple of weeks ago? Or, what was the competitor's website I looked at three weeks ago that said it had product Y? This kind of support is very attractive.

Unfortunately, there has been a stream of published material on how that data can be accessed by malicious actors. Microsoft has responded by double-encrypting the data. The problem is that several published techniques allow an attacker to gain access to the Copilot app itself and exfiltrate data that way. There are also reports of GenAI systems being used to create cyberattacks that are both novel and very effective. So once sensitive data is in Copilot, it may be exposed.
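Given that exposure, the conservative move for machines assigned to high-risk functions is to verify that capture is not happening in the first place. Below is a minimal audit sketch, assuming a Windows fleet and Microsoft's documented DisableAIDataAnalysis policy value, which governs whether screen snapshots are saved; a production check would run through the endpoint-management agent rather than ad hoc.

# Sketch: audit whether snapshot capture (Copilot/Recall) is disabled on
# this machine by reading the documented "DisableAIDataAnalysis" policy value.
import winreg

POLICY_KEY = r"Software\Policies\Microsoft\Windows\WindowsAI"

def snapshot_capture_disabled() -> bool:
    """True if the policy value exists and is set to 1 (capture off)."""
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, POLICY_KEY) as key:
            value, value_type = winreg.QueryValueEx(key, "DisableAIDataAnalysis")
            return value_type == winreg.REG_DWORD and value == 1
    except FileNotFoundError:
        # Key or value absent: capture has not been disabled by policy.
        return False

if __name__ == "__main__":
    status = "disabled" if snapshot_capture_disabled() else "ACTIVE"
    print(f"Snapshot capture on this machine: {status}")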


