The European Union sets itself up as AI police

Nick Chase
December 12, 2023
4 mins
Key Takeaway Summary

EU's AI Act: A game-changer in global AI regulation, introducing binding rules, transparency in AI usage, and significant fines for...

While October's Executive Order on Artificial Intelligence was largely a voluntary and fact-finding mission, the European Union's parliament is set to become the world's AI police, putting forth its own regulations following serious and contentious negotiations.

The EU's new AI Act is a significant legislative step aimed at regulating the use of artificial intelligence, focusing on mitigating risks to fundamental rights in areas like healthcare, education, and public services. Here are the key points from the MIT Technology Review's analysis:

  • Binding Rules on Transparency and Ethics: The AI Act introduces legally binding rules for tech companies, requiring them to inform users when they are interacting with AI systems such as chatbots, biometric categorization, or emotion recognition. It also mandates labeling deepfakes and AI-generated content, and designing systems so that AI-generated media can be detected.
  • Wiggle Room for AI Companies: The Act covers foundation models, requiring better documentation, compliance with EU copyright law, and disclosure of the data used to train them. However, stricter rules apply only to the most powerful AI models, and it is largely up to companies to assess which compliance requirements apply to them. (The rules may change as time goes on and the technology is better understood.)
  • EU as AI Police: A new European AI Office will enforce the Act, with fines ranging from 1.5% to 7% of a firm's global sales turnover for noncompliance. The EU aims to set a global standard in AI regulation, similar to what happened with GDPR.
  • National Security Exemptions: The Act bans certain AI uses, such as untargeted scraping of facial images and emotion recognition in workplaces or schools. However, it exempts AI systems developed exclusively for military and defense uses. Police use of biometric systems in public places is allowed only with court approval and only for specific crimes such as terrorism and human trafficking, and predictive policing is banned unless it is used with "clear human assessment and objective facts."
  • Implementation Timeline: The final wording of the bill is still pending, with technical adjustments and approvals required from European countries and the EU Parliament. Once the Act is in force, tech companies will have two years to comply with the rules, though bans on certain AI uses apply after six months and foundation model developers have one year to come into compliance.

In addition, citizens will have the ability to sue to find out how an algorithm made a decision, so look for an uptick in Explainable AI in the months to come.
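
For a sense of what "how an algorithm made a decision" might mean in practice, here is a minimal Python sketch of one common explainability technique, permutation importance, assuming scikit-learn is available. The feature names and the model are purely hypothetical illustrations and are not drawn from the Act itself.

# A minimal sketch of one explainability technique (permutation importance),
# assuming scikit-learn; the feature names below are hypothetical and are
# not specified by the AI Act.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a decision system's training data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "age", "credit_history", "region"]  # hypothetical

model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature degrade the model's score?
# Larger drops mean the feature mattered more to the decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")

Attribution scores like these are only one starting point; techniques such as SHAP or counterfactual explanations go further, but the goal is the same: making an automated decision auditable.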

AI/ML Practice Director / Senior Director of Product Management
Nick is a developer, educator, and technology specialist with deep experience in Cloud Native Computing as well as AI and Machine Learning. Prior to joining CloudGeometry, Nick built pioneering Internet, cloud, and metaverse applications, and has helped numerous clients adopt Machine Learning applications and workflows. In his previous role at Mirantis as Director of Technical Marketing, Nick focused on educating companies on the best way to use technologies to their advantage. Nick is the former CTO of an advertising agency's Internet arm and the co-founder of a metaverse startup.