The EU AI Act was already the toughest AI regulation on the planet. Now it’s expanding its reach — and the latest target is something that didn’t even exist when the law was drafted: autonomous AI agents.

A new study has made the case crystal clear: AI agents do not require a new legal category to be regulated. They’re already covered. And the implications for companies building agentic systems across Europe are significant.

Why this matters

Traditional AI systems wait for a prompt and produce an answer. Agentic AI systems do something fundamentally different — they take a high-level objective, break it into subtasks, select the right tools, execute a plan, and adapt as they go. They operate with a high degree of autonomy, often with no human intervention between steps.
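To make that loop concrete, here is a deliberately minimal sketch of the plan–execute–adapt pattern described above. Everything in it (the `plan` function, the `TOOLS` registry) is hypothetical; a real agent would use a model to decompose the objective and re-plan after each step:

```python
# Minimal sketch of an agentic plan-execute loop.
# All names here are illustrative, not any real framework's API.

def plan(objective):
    # A real agent would have an LLM decompose the objective;
    # here the subtasks are canned for illustration.
    return [("search", objective), ("summarise", objective)]

TOOLS = {
    "search": lambda q: f"results for {q}",
    "summarise": lambda q: f"summary of {q}",
}

def run_agent(objective):
    results = []
    for tool_name, arg in plan(objective):
        tool = TOOLS[tool_name]      # select the right tool for the subtask
        results.append(tool(arg))    # execute it
        # a real agent would inspect the result here and re-plan (adapt)
    return results
```

The point of the sketch is the shape, not the contents: the decision about which tool runs, and with what arguments, is made by the system itself — which is precisely the step regulators want visibility into.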

That autonomy is exactly what makes regulators nervous.

According to a recent EY survey, 78% of business leaders admit that AI adoption is already outpacing their organisation’s ability to manage the risks it creates. When you add agents into the mix — systems that can make decisions and take actions without a human in the loop — the governance gap becomes a governance chasm.

The DPD example everyone forgets

Remember when DPD’s AI customer service chatbot went rogue in early 2024? The company had to disable it after a routine system update stripped away its guardrails, and it started swearing at customers and calling DPD “the worst delivery firm in the world.”

That was a single chatbot with limited agency. Now imagine the same failure mode, but the agent can actually book flights, process refunds, or execute trades on your behalf.

What this means for you

If you’re building or deploying AI agents in Europe, the EU AI Act is already your problem:

  • Transparency obligations: You’ll need to disclose when AI agents are making decisions that affect people
  • Record-keeping: Every agent action that leads to a consequential decision needs to be logged
  • Human oversight: The “adult in the room” isn’t optional anymore — it may be legally required
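The record-keeping obligation in particular is something engineering teams can start on today. As an illustration (not legal guidance), it might translate into an append-only audit trail for agent actions — every name in this sketch is hypothetical:

```python
# Hypothetical sketch of an append-only audit log for agent actions,
# illustrating the kind of record-keeping the Act may require.
import json
import time


class AgentAuditLog:
    """Append-only log: records are serialised at write time and never mutated."""

    def __init__(self):
        self._records = []

    def record(self, agent_id, action, details):
        entry = {
            "ts": time.time(),       # when the action happened
            "agent": agent_id,       # which agent acted
            "action": action,        # what it did
            "details": details,      # inputs/outputs behind the decision
        }
        # Store the serialised form so later code can't silently edit history.
        self._records.append(json.dumps(entry))
        return entry

    def export(self):
        return list(self._records)
```

In practice you would likely want tamper-evident storage (hash chaining, write-once media) and retention policies on top of this, but the core idea — log the consequential action at the moment the agent takes it, not after the fact — is the same.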

The study suggests the EU is leaning toward interpreting existing AI Act provisions to cover agents, rather than waiting for new legislation. That means the enforcement clock is already ticking.

The bigger picture

This is part of a broader shift. The EU isn’t just regulating AI — it’s building an infrastructure for AI accountability that other jurisdictions will either adopt or react to. If you build for Europe, you’re building for the future.

Whether that’s a feature or a bug depends on whether you’re ready.