The European Union (EU) has begun enforcing the AI Act, its law on artificial intelligence: companies that do not comply with the new rules risk fines of up to €35 million.
The European Union has begun implementing the AI Act, its law on artificial intelligence, which introduces severe restrictions on the use of this technology.
The AI Act came into force in August 2024, and Sunday, February 2, 2025, is the deadline to comply with its bans on certain AI systems and its requirements to ensure sufficient AI literacy among staff.
Companies that fail to comply face fines of up to €35 million or 7% of their annual global turnover, whichever is higher. That is steeper than the GDPR, Europe’s digital privacy law, which carries fines of up to €20 million or 4% of annual global turnover.
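To make the comparison concrete, here is a minimal sketch of how the two ceilings compare for a large company. The turnover figure is hypothetical, and the sketch assumes the cap is the higher of the fixed amount and the turnover percentage, which is the usual reading of both regimes.

```python
# Hypothetical illustration: compare the maximum fine ceilings under the
# AI Act and the GDPR for a company with a given annual global turnover.
# Assumes the cap is the greater of the fixed amount and the turnover
# percentage; the turnover figure below is an example, not from the article.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Return the maximum possible fine: the greater of the fixed cap
    and the given percentage of annual global turnover."""
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

turnover = 2_000_000_000  # hypothetical: €2 billion annual global turnover

ai_act_cap = fine_ceiling(turnover, 35_000_000, 0.07)  # up to €35M or 7%
gdpr_cap = fine_ceiling(turnover, 20_000_000, 0.04)    # up to €20M or 4%

print(f"AI Act ceiling: €{ai_act_cap:,.0f}")  # €140,000,000
print(f"GDPR ceiling:  €{gdpr_cap:,.0f}")     # €80,000,000
```

For companies of this size, the percentage-based cap dominates, which is why the AI Act's penalties are described as stricter than the GDPR's.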
The remaining steps of the AI Act
The AI Act bans the use of AI systems that pose an “unacceptable risk” to citizens, such as social scoring systems, real-time facial recognition, and “manipulative” AI tools. This, however, is just the first step in a long journey.
In December, the EU AI Office published the second draft of its code of practice for general-purpose AI (GPAI) models, which includes exemptions for providers of certain open-source AI models and strict requirements for developers of “systemic” GPAI models.
By May 2, 2025, developers are due to have the finalized code of practice: a set of rules that spells out what compliance means, including the benchmarks they must meet, key performance indicators, specific transparency requirements, and more.
Three months later, in August 2025, “general-purpose AI” systems such as chatbots will have to comply with copyright law and meet transparency requirements, such as sharing summaries of the data used to train them.
Finally, by August 2026, the AI Act’s rules will apply generally to companies operating in the EU. Developers of certain “high-risk” AI systems will have up to 36 months (until August 2027) to comply with rules on matters such as risk assessment and human oversight.
Tasos Stampelos, head of EU public policy and government relations at Mozilla, stressed that the AI Act is “much needed” even if it is “not perfect.” Compliance will depend on how standards, guidelines, and secondary legislation define the way the AI Act is enforced.
The AI Act has raised concerns among some tech executives and investors, who fear it could stifle innovation. Others, however, believe that clear EU rules on AI could give Europe a leadership edge in building trustworthy AI models.