The EU’s AI Act was supposed to be the gold standard—the first comprehensive AI law from a major regulator, and a headache for every AI company wanting to do business in Europe. Now it’s been pushed back another 16 months.

On Thursday, the European Parliament voted to delay compliance deadlines for high-risk AI systems (biometrics, critical infrastructure, law enforcement tools) from August 2026 to December 2027. The reasoning? The European Commission missed its own February deadline to publish the guidelines explaining what companies are actually supposed to do.

Let that sink in: The Commission wrote the law but couldn’t be bothered to explain it.

Arba Kokalari, co-rapporteur on the Internal Market and Consumer Protection Committee, put it bluntly: “Companies now need clarity on whether they are high risk or not. If Europe wants to be competitive, we must increase investment and make it easier to use AI, not punish companies who introduce innovative AI features in safe products.”

This is the third significant delay to the AI Act. The original timelines were already optimistic; now high-risk compliance slips to December 2027, with enforcement realistically not biting until 2028. The practical effect is that companies building AI in Europe get more runway to figure out compliance, but also more uncertainty about what compliance actually looks like.

The new rule on bias detection: MEPs also approved allowing AI providers to process biometric data where strictly necessary for bias detection, something the original law was unclear on. This is important: it means companies can actually check whether their systems are discriminatory without fear of violating privacy rules. The catch? “Strictly necessary” is doing a lot of work in that phrasing.

The nudifier ban: In related news, Parliament also backed a ban on “nudifier” apps, those wretched tools that use AI to digitally undress people in photos. This came after X’s Grok chatbot generated explicit images of women, including minors, sparking investigations and outrage. The ban outlaws the output, but providers can still develop the underlying capabilities (they just have to watermark what those capabilities produce).

For UK businesses watching from across the Channel: this is your competitor’s regulatory chaos. While the EU struggles to execute its own ambitious framework, the UK’s approach was supposed to be faster and lighter, and the new UK government is still figuring out what that means in practice. If you’re building AI anywhere near Europe, you need to pay attention to both tracks: the EU’s rules are clearer (eventually), the UK’s are murkier.