The EU positioned itself as the global leader in AI regulation with the AI Act. Now, barely two years later, it’s already hitting the pause button — while also taking a hardline stance on one very specific type of AI abuse.

The European Parliament voted recently to delay compliance deadlines for developers of high-risk AI systems until December 2027. That’s a full 18 months past the original August 2026 deadline. Companies working on AI systems covered by sector-specific rules (toys, medical devices) get until August 2028. And rules requiring AI-generated content to be watermarked? Pushed to November 2026.

Why the Delays?

Let’s be honest: the EU bit off more than it could chew. The original timeline was ambitious to the point of fantasy. Member states were supposed to establish regulatory sandboxes, the Commission was supposed to publish detailed guidance, and companies were supposed to somehow comply with rules that didn’t exist yet.

The EU missed its own deadlines to publish key guidance. Parts of the law have already been changed. And now Parliament is voting to delay the whole thing — except it can’t actually do that unilaterally. Parliament must negotiate the final text with the Council of the European Union (the ministers of all 27 member states). So we’re looking at months more of uncertainty for businesses operating in Europe.

This is exactly the kind of regulatory uncertainty that makes companies hesitate to invest. You can’t build compliance infrastructure around rules that might not exist in their current form by the time they take effect.

Enter the Nudify Apps

But here’s the interesting part: while pushing back compliance deadlines, Parliament also backed a proposal to ban “nudify” apps — AI tools that create non-consensual intimate images. The decision follows widespread outrage over Grok’s flood of sexualized deepfakes on X earlier this year.

The ban is notable because it’s specific, targeted, and responds to a clear harm. Unlike the broader AI Act, which tries to regulate everything from foundation models to high-risk systems, this is a narrow prohibition on a specific type of abuse. The proposal notes it “would not apply to AI systems with effective safety measures preventing users from creating such images” — which is actually a reasonable approach: ban the abuse, reward the safety features.

The Bigger Picture

What we’re seeing is the EU trying to have it both ways: delay the burden on industry while appearing to take hard action on harm. It’s politically convenient — “we’re protecting people from deepfakes” plays well, while “we’re giving companies more time to comply” keeps the business lobby happy.

But it also reveals the fundamental tension in AI regulation. The technology moves faster than legislation. The EU tried to be comprehensive and future-proof, and now it’s discovering that comprehensive means complicated, and complicated means delays.

The question now is whether the Council goes along with these delays, and whether the nudify ban actually becomes law. Either way, businesses operating in Europe need to plan for a regulatory landscape that’s anything but stable.