European lawmakers have approved something of a split verdict on the EU’s flagship AI Act. They’ve voted to delay compliance deadlines for high-risk AI systems — pushing the date businesses need to have their acts together from August 2026 to December 2027, with some sectors getting until August 2028. But in the same breath, they’ve backed an outright ban on “nudify” apps — the garbage tools that strip clothes off photos of real people. You can’t say the EU isn’t sending mixed signals.

The delays are a tacit admission that the original timeline was always fantasy. The EU missed its own deadlines for publishing essential guidance documents. The complex machinery of the Act — defining what “high-risk” actually means in practice, establishing conformity assessment procedures, figuring out who audits what — has been clanking along far slower than Brussels imagined. Giving companies an extra 18 months to figure out compliance isn’t generosity; it’s reality finally setting in.

But here’s the thing about those delays: even December 2027 looks optimistic. Parliament can’t just wave a wand and change European law unilaterally. The revised deadlines now have to be negotiated with the Council of the EU — ministers from the 27 member states. That’s a body that’s historically moved at the speed of continental drift, and that’s being generous. We’re probably looking at 2028 or beyond before the high-risk provisions are genuinely operational. The AI Act keeps becoming the AI Act That Will Eventually Happen.

What makes the nudify ban genuinely interesting is the direct line to the Grok scandal. Earlier this year, X’s AI assistant was churning out sexualized deepfakes at industrial scale — images of women, and men, digitally undressed without consent, spreading across the platform. There was widespread outrage. There were investigations. And now, suddenly, there’s a specific legislative response. This is the EU doing what the EU does: a concrete harm emerges, and the response is a targeted ban, carved out with just enough nuance that AI systems “with effective safety measures preventing users from creating such images” are exempt. It’s not perfect — the details are thin — but it’s faster than the Act’s general provisions have moved on anything.

The broader picture is a law struggling to be both a comprehensive regulatory framework and an adaptive, responsive one. Comprehensive frameworks need time, certainty, predictability. Responsive regulation needs speed and flexibility. These things are in tension. The EU is trying to thread that needle and not entirely succeeding.

What’s worth watching: the negotiation between Parliament and Council over the next few months. The Council has its own ideas, its own member state pressures, its own desire to protect national industries. The final text of these changes could look very different from what Parliament just approved. The delays could get longer. The nudify ban could get watered down or strengthened. Or they could deadlock entirely and everything reverts to the original August 2026 deadline, which almost nobody is actually ready for.

Either way, businesses operating in Europe are staring at another period of regulatory uncertainty. They can’t plan properly because the rules keep shifting. They can’t wait for clarity because the August deadline is theoretically still on the table. The AI Act was supposed to bring certainty. Instead it’s brought a different kind of chaos.