
The European Union’s pioneering adoption of the AI Act has set a precedent for the Western world in the absence of U.S. legislation.
Europe is now a global standard-setter in AI.
The president of the European Parliament, Roberta Metsola, described the act as trailblazing, saying it would enable innovation while safeguarding fundamental rights. Central to this new law is the imperative to mitigate issues of bias, privacy, and other risks associated with the swift advancement of technology.
The European Parliament has ratified the groundbreaking AI Act, a comprehensive framework for AI governance, almost three years after it was first proposed. Most of the rules will take effect 24 months after the act enters into force, while bans on certain applications will apply after just six months.
The AI Act establishes clear responsibilities for AI systems, calibrated to the potential risks and impact they carry. This heralds a new chapter of compliance and implementation: organizations, especially those handling privacy and fundamental rights, must now turn their attention to these obligations.
The new law meticulously organizes AI applications into clearly defined risk categories: minimal, limited, high, and unacceptable. Each category is linked to specific obligations that reflect the potential risks to users or the application’s area of use. The AI Act strictly prohibits tools categorized as “unacceptable risk.”
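For readers who prefer a concrete model, here is a minimal sketch of how a compliance team might encode the four tiers. The tier names come from the act itself; the example applications and the `is_allowed` helper are purely illustrative assumptions, not text from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters: essentially unregulated
    LIMITED = "limited"            # transparency duties, such as disclosing AI use
    HIGH = "high"                  # strict requirements (law enforcement, healthcare)
    UNACCEPTABLE = "unacceptable"  # prohibited outright

def is_allowed(tier: RiskTier) -> bool:
    """Only the 'unacceptable' tier is banned; every other tier may
    operate subject to its obligations."""
    return tier is not RiskTier.UNACCEPTABLE

print(is_allowed(RiskTier.HIGH))          # True
print(is_allowed(RiskTier.UNACCEPTABLE))  # False
```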

Which applications are considered to pose an unacceptable risk and are, consequently, prohibited? The banned category includes systems that compile facial recognition databases by scraping images from surveillance cameras or the Internet, as well as automated systems that recognize emotions from facial expressions or body language in workplaces and educational institutions.
The AI Act generally forbids law enforcement from using biometric identification systems, though exceptions exist for specific scenarios with prior approval, such as locating a missing person or averting a terrorist threat.
Rigorous standards apply to high-risk applications, such as those used in law enforcement and healthcare. These systems must avoid discriminatory practices and comply with privacy legislation, and their developers are responsible for making them transparent, secure, and user-friendly.
The EU’s regulations also stipulate that developers must notify users when they engage with AI-generated content, even for low-risk AI systems like spam filters.
The legislation also requires EU countries to set up national supervisory authorities. These bodies will oversee regulatory sandboxes: experimental zones where small and medium-sized businesses can trial AI systems before bringing them to market.
The act also sets rules regarding generative AI and manipulated media: deepfakes and any other AI-generated images, videos, and audio will have to be clearly labeled.
AI models will also have to comply with copyright laws. “Rights holders may choose to reserve their rights in their works or other material to prevent the extraction of text and data unless it is for scientific research purposes,” the AI Act reads.
Providers of general-purpose AI models must seek permission from rights holders for text and data mining if the right to opt-out is explicitly reserved.
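The act does not prescribe a technical mechanism for that opt-out, but one plausible sketch, assuming a crawler that treats robots.txt as a machine-readable reservation of rights, looks like this. The `ExampleTDMBot` user agent and the `may_mine` helper are hypothetical.

```python
import urllib.robotparser
from urllib.parse import urlsplit

def may_mine(url: str, user_agent: str = "ExampleTDMBot") -> bool:
    """Return False if the site's robots.txt disallows this crawler;
    one possible (not mandated) way to honor a TDM opt-out."""
    parts = urlsplit(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # network call; assumes the robots.txt file is reachable
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    print(may_mine("https://example.com/article"))
```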
However, AI models designed exclusively for research, development, and prototyping are exempt. The most powerful generative AI models, those trained with computing power exceeding 10²⁵ FLOPs, are viewed as posing systemic risks. This threshold might evolve, but currently, models like OpenAI’s GPT-4 and Google’s Gemini are included in this category.
Such model providers must assess and mitigate risks, report serious incidents, reveal their systems’ power consumption, ensure cybersecurity standards are met, and perform advanced testing and evaluation of their models.
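To give the 10²⁵ FLOP threshold some intuition, here is a back-of-the-envelope check using the widely cited approximation that training a dense transformer costs roughly 6 × parameters × training tokens in FLOPs. The model sizes below are invented for illustration; real training budgets vary.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # current AI Act threshold; may evolve

def training_compute_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

# Hypothetical models, not real disclosed figures.
for name, params, tokens in [
    ("70B-parameter model, 2T tokens", 70e9, 2e12),   # ~8.4e23: below threshold
    ("1T-parameter model, 2T tokens", 1e12, 2e12),    # ~1.2e25: above threshold
]:
    flops = training_compute_flops(params, tokens)
    print(f"{name}: ~{flops:.1e} FLOPs, systemic risk: "
          f"{flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```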
The AI Act imposes severe fines for non-compliance. Companies that breach the regulations may face fines of up to 35 million euros or 7% of their global annual turnover, whichever is higher.
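In code, that headline penalty rule is a simple maximum; the turnover figures below are invented examples.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine: EUR 35 million or 7% of global
    annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

print(f"{max_fine_eur(100_000_000):,.0f}")    # EUR 100M turnover -> 35,000,000
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # EUR 2B turnover -> 140,000,000
```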
The scope of the AI Act covers all AI models in operation within the EU, necessitating compliance from AI providers based in the U.S. for their European operations. Despite earlier reservations, Sam Altman, CEO of OpenAI, has assured that OpenAI remains committed to the European market.
Each EU member state is responsible for forming its own AI regulatory authority to enforce the AI Act. The European Commission plans to establish an AI Office to evaluate AI models and monitor risks; providers of systemically risky models will draw up codes of conduct together with the Office.
The adoption of the AI Act is a landmark moment in the governance of AI, sparking a diverse array of reactions. It has the potential to bolster security, uphold rights, and drive innovation across Europe, yet its full impact is still unfolding. Ensuring its success and keeping pace with the rapid development of AI will require continuous oversight and periodic updates. What’s your take on this regulatory move? Will it streamline AI usage, or could it add layers of complexity? We’re eager to hear your thoughts in the comments section below.