
The Case for Big Tech's Embrace
of AI Regulation




Here's a brief summary of the Act's categories. The Act establishes a four-tier hierarchy of risk that determines the level of regulatory intervention required for any given system:

Unacceptable Risk 

  • Definition: AI practices considered a clear threat to the safety, livelihoods, and rights of people. 
  • Regulatory Posture: Prohibited. These systems are banned from the EU market entirely. 
  • Examples: Social scoring by governments and manipulative AI that circumvents a person's free will. 

High Risk 

  • Definition: AI systems that have a significant negative impact on people's safety or fundamental rights, specifically those used in critical infrastructures or essential private/public services. 
  • Regulatory Posture: Strictly Regulated. Must comply with mandatory requirements and undergo a conformity assessment before market entry. 
  • Examples: AI used in recruitment (CV sorting), credit scoring, and biometric identification. 

Limited Risk

  • Definition: AI systems with a specific risk of manipulation or deceit. 
  • Regulatory Posture: Transparency-only. Users must be made aware they are interacting with an AI system. 
  • Examples: Chatbots, deepfakes, and AI-generated text or images.

Minimal Risk

  • Definition: AI applications that present negligible risk to citizens' rights or safety.
  • Regulatory Posture: No obligations. These systems may be used freely, though voluntary codes of conduct are encouraged. 
  • Examples: AI-enabled video games and spam filters.

Moreover, the EU AI Act prohibits specific AI practices deemed to be in contravention of Union values and fundamental rights. The rationale for these bans is the protection of human dignity and the prevention of mass surveillance and discriminatory social engineering. However, particularly in the US AI ecosystem, there are many influential voices opposed to federal AI regulation.

The counterargument to a regulation-positive stance by industry players is usually that regulation would somehow cripple the US as it competes for AI superiority with the other main AI player, China. This argument overlooks the fact that national regulatory legislation could include carve-outs for areas like defense-related use cases, and thereby not hamstring development. As for AI in domestic surveillance, that use case is so antithetical to the Constitution and our guaranteed rights that sacrificing the ability to surveil the domestic population of the US is a price worth paying to ensure a society in which we want to live.

US AI stakeholders would do well to implement a similar scheme backed by federal legislation. Alternatively, they could take the approach that evolved over time with privacy law, where state frameworks like California's became a blueprint for other states. Given AI's scope, however, a national system would be a far better alternative to a patchwork of state-based regulations, especially with regard to AI's uses in national defense. This could be accomplished through federal legislation along the lines of the approach the crypto industry is currently pursuing: push for national laws and guidelines rather than state-level enforcement. In sum: create a national AI governance schema.

Ultimately, a regulation-forward approach by the AI business community will produce a better outcome for the residents of the US, a better outcome for the companies involved, and, most importantly, could be a huge step toward establishing guardrails for the US AI industry that protect future generations. Not embracing regulation will invite a backlash over destructive uses of AI like those prohibited by the EU schema, and at worst could give rise to catastrophic consequences for humanity.

With a little luck, by adopting a “regulation forward” approach in our industry, perhaps we can act as the masters of our own fate. I believe regulation is coming whether the industry embraces it or not; better to be guiding that legislation than not. And importantly, maybe we change “Cassie’s” opinion and do some good while doing well.

