The EU’s effort to soften its AI Act has hit another snag.
Lawmakers spent about 12 hours in negotiations on Tuesday and failed to reach an agreement. The sticking point: whether AI used in products already covered by existing safety rules — machinery, toys, medical devices — should be exempted from the AI Act entirely. Parliament wanted the exemption. Member states, represented by Cyprus in the rotating presidency, didn’t.
The talks have been pushed back to May.
Here’s why this matters. The EU has been trying to push back the AI Act’s hardest deadlines. The technical standards that companies need in order to demonstrate compliance aren’t ready: the standards body won’t deliver the full set before December 2026 at the earliest. Both the Council and Parliament had already agreed in principle to delay the high-risk obligations, to December 2027 and August 2028 respectively.
But they couldn’t agree on the exemption, and now they’re running out of time.
If no deal is reached before August 2, the original strict rules apply. Full stop. That means high-risk AI systems face obligations as originally drafted — even if the harmonised standards aren’t ready, even if national enforcement authorities aren’t set up, even if companies haven’t had time to properly prepare.
Enforcement will be spotty, probably. But the obligations exist. Businesses ignore that at their peril.
One analyst from Forrester put it bluntly: “It is obvious that if the authorities responsible for enforcing the rules are not in place, there won’t be enforcement, despite the deadlines. Patchy readiness across member states does not reduce the risk for businesses.”
CIOs should treat August 2 as the hard deadline regardless. If it gets delayed, consider it a bonus. If not, you’ve already got a compliance problem.
The deeper problem is that the AI Act was written in a particular context, an earlier era of AI development, and the EU is now trying to retrofit it for a world of agentic AI systems, foundation models, and rapidly evolving capabilities. The regulatory framework wasn’t designed for this. The standards aren’t ready for this. And the political will to actually enforce it is, at best, uneven across 27 member states.
This isn’t just a technical problem. It’s a governance problem. Europe wants to be a thoughtful regulator of AI, but the speed of AI development is making that ambition increasingly difficult to sustain.
The takeaway for businesses using AI in high-risk applications: don’t wait for the political drama to resolve. Get your compliance house in order now. The safe bet is that the strict rules will apply sooner or later.