Cambridge Analytica and AI - The Unignorable Lesson for CIOs

Balancing data privacy and innovation is a tightrope walk. Amid all the debate about data privacy and controls, there is renewed vigor to restore public faith in the technology. What we need is AI that is 'responsible enough' to tackle this crisis. Responsible AI means aligning an enterprise’s AI pursuits with its core values and ethical principles so that they benefit customers, employees, the business, and society. In the long term, this could create a ripple effect and rebuild trust.

AI solutions come closer to being responsible than AI products do. AI solution vendors design bespoke systems for their clients and, in most cases, deploy them on a customer’s private cloud or on-premise infrastructure. Responsible AI vendors ensure that not a single byte of data leaves the security of the client's firewalls.

There are other advantages. Because AI solutions are bespoke, they fit an enterprise’s needs better than any AI product ever could, and because they are trained for a specific purpose, they tend to be more accurate. Still, these considerations are dwarfed by what Cambridge Analytica taught all of us: entrusting data to third parties outside a firewall’s protection is always dangerous.

In particular, a few AI solution companies already recognize the roadblocks inherent in this direction, both in terms of data security risk and in terms of PR and competitive differentiation. They understand the need to protect sensitive customer data at all costs, even if it means slowing things down on the AI front. Examples include giants like Microsoft, upstarts like Coseer and, to an extent, even IBM Watson.

A Virtuous Cycle of Virtue

Trust, from enterprises and from their customers alike, is going to be important for AI vendors of both solutions and products. It is not just about compliance or ethical risks; this trust also matters for the accuracy of the solution itself.

AI systems don’t only learn from their initial training data; they keep learning as more and more users interact with them. Systems that win their users’ trust and provide a better experience will see more adoption, as their target users prefer them over alternatives. This higher traffic, in turn, trains the AI system further, allowing it to provide an even better user experience.

In other words, maintaining users’ trust through responsible privacy practices benefits not only enterprise CIOs, but also the business of AI solution providers.

The Journey Forward

The journey forward is indeed murky. Yet tech revolutions have always happened nevertheless, changing the lives of entire generations. In the long term, enterprises need to stay vigilant, and consumers need to be ready to ask the hard questions and pay attention to what happens with, and to, their data.

As this wonderful technology evolves, responsible AI needs to partner with responsible CIOs, those who understand the importance of data security and the nuance between AI products and AI solutions. Governments and leaders in both the public and private sectors have a role to play as well: they must be held accountable for long-term as well as short-term thinking. The future is ours to take!
