
The Case for Big Tech's Embrace
of AI Regulation

By: Joshua Grossman

As others have stated, we have passed the event horizon for artificial intelligence. That is, the gravitational pull of AI deployments and technology has passed the point at which we can avoid its influence and "opt out" of its impact on our world. From the perspective of some AI operators, hardware and software providers, and members of the AI ecosystem, this can seem like a good and proper thing.

However, for many outside the world of AI, especially among the younger people I speak with, there seems to be at best ambivalence toward the changes AI is creating in the world, and in many cases a deep antipathy. For example, consider "Cassie" (Cassandra), a representative subject with whom I've discussed AI several times. Cassie's view is that AI is a tool of oppression built for our capitalist overlords; she sees a world where the technology brings about a complete surveillance state, unbridled authoritarian power, and energy crises at best, and a world that looks a lot like Terminator 3: Rise of the Machines at worst. Even those at the frontier of creating the best AI models admit that the possibilities range from a utopian "machines of loving grace" world to a dystopian authoritarian infrastructure for panopticon-like control by malign actors.

Unfortunately, Cassie is not alone in her dark view of AI; recent polling comes to the same conclusion. Among voters 18-34, the net approval rating for AI is minus 44, per a recent NBC News poll.

If that weren't reason enough for AI stakeholders to take the initiative in advocating for enlightened regulation, then the operational inefficiency in the way business is currently transacted should alarm them enough to prompt a re-evaluation of strategy and tactics with regard to creating a rules-based order for the use of AI.

The recent conflict between the US government, OpenAI, and Anthropic over the use of AI is a case in point. It is instructive as a lesson in the sort of business practices and sales motion the industry almost certainly wants to avoid.

Very briefly summarizing the issues: in contracting with Anthropic, the Pentagon determined that the guardrails and safeguards Anthropic insisted upon (specifically, no use for domestic surveillance and no use of AI in autonomous weapons) were red lines for government acceptance of the contract terms. OpenAI, on the other hand, was more than willing to adapt its requirements to suit the needs of the government. (OpenAI argues it secured the same restrictions, but through modified contract language.)

Anthropic's position resulted in its listing as a supply chain risk with the US government, which amounts to a blackball against any future contracts. For its part, OpenAI is now the target of boycotts and a plethora of bad press from industry insiders and the public at large. This sort of race to the bottom between private companies and the government is just the thing our industry should try to avoid. A much better approach would be to encourage lawmakers to create a set of standard federal guidelines and laws regulating AI, as opposed to case-by-case negotiations that push the ethical and moral frontiers of which AI use cases are allowed. Moreover, Anthropic is challenging the government's designation in court with a lawsuit claiming the company suffered "public castigation" and a violation of its free speech and due process rights. Not a great outcome for the government or the companies.

So what does this mean for practitioners and those who see the benefits of AI in everything from medical breakthroughs to ecological wins and the preservation of endangered species? It means we need to think seriously about common-sense regulation that answers the valid concerns of those worried about the future impacts of AI, balanced with the need to keep moving forward in a fraught geopolitical world. There are a number of existing frameworks for AI regulation, with the European Union's being the most advanced thus far. Let's have a look at a couple of different regulatory schemes that could be adopted to create real guardrails for safe and responsible AI.

The European Union system uses a four-tiered approach that is similar in some ways to the GDPR data privacy framework the EU has adopted. That framework, in turn, has similarities to the US privacy enforcement regime as reflected in state privacy laws and federal rules like HIPAA. One benefit of these types of approaches is that they have already been deployed with relative success in both the EU and the US. In addition, there is a large cadre of privacy professionals familiar with this sort of approach who could presumably adapt a similar system for AI.

The EU AI system is based on four different tiers of AI usage, ranging from minimal-risk uses that are essentially unregulated to unacceptable-risk uses that are prohibited outright.
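For readers who think in code, the tiered structure can be sketched as a simple classification scheme. This is an illustrative sketch only: the tier names follow the EU AI Act's published risk categories, but the example use cases and the mapping below are simplified assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from least to most restricted."""
    MINIMAL = "minimal"            # e.g. spam filters; essentially unregulated
    LIMITED = "limited"            # e.g. chatbots; transparency obligations
    HIGH = "high"                  # e.g. hiring tools; conformity assessments required
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring; prohibited outright

# Simplified, hypothetical mapping of use cases to tiers (for illustration only).
EXAMPLE_CLASSIFICATION = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "resume_screening": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def is_permitted(use_case: str) -> bool:
    """A use case is permitted (with tier-appropriate obligations) unless
    it falls into the unacceptable tier."""
    return EXAMPLE_CLASSIFICATION[use_case] is not RiskTier.UNACCEPTABLE
```

The appeal of a scheme like this is that obligations attach to the tier, not to the individual contract, which is precisely the alternative to case-by-case negotiation argued for above.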

