Enforcement of the EU AI Act's high-risk rules was supposed to begin in August 2026. Now it's December 2027. That's sixteen extra months of breathing room for companies that still don't have clear guidelines on what "high-risk" actually means, because the European Commission missed its February deadline to publish them.

This delay comes wrapped in a "digital simplification" package that passed the European Parliament by a large majority, though not without controversy. The argument for it is competitiveness, and MEP Arba Kokalari put it plainly: "Companies now need clarity on whether they are high risk or not. If Europe wants to be competitive, we must increase investment and make it easier to use AI, not punish companies who introduce innovative AI features in safe products."

That’s a significant shift in tone from the Parliament. The EU positioned itself as the global leader in AI safety, but the reality of implementation is hitting hard. Companies need to know what rules apply to them before they can comply—and right now, they don’t.

But here's what's interesting: alongside the delay, MEPs also backed something more aggressive, a ban on "nudifier" apps, the AI tools that digitally undress people in photos without consent. This came after the Grok scandal earlier this year, when X's chatbot was generating these images at scale, including images of minors. The Commission opened an investigation, and Parliament responded with an explicit prohibition.

So the picture is mixed. On one hand, enforcement gets pushed back. On the other, prohibitions on the most politically charged AI harms get fast-tracked into law. The delays are about giving industry breathing room. The bans are about responding to public outrage.

The broader question is whether Europe can actually thread this needle: regulating seriously enough to claim moral authority on AI safety, while not strangling innovation in the process. The next eighteen months will be the test. Companies will use this time to build compliance frameworks. Regulators will use it to finally publish those guidelines. Each side is watching the other closely.

If the guidelines never come, the delay won’t matter—companies still won’t know what compliance looks like. And if the bans don’t work, the entire regulatory framework will face even more scrutiny. This is the messy middle of AI governance. Not grand statements, but the actual, difficult work of making rules that function in the real world.