Here’s a fun finding from a new academic paper: AI agents — the autonomous systems that can plan tasks, call tools, make decisions, and take actions without constant human oversight — are already fully covered by the EU AI Act. No new legal category needed. The existing framework already reaches them, because the AI Act was designed to be technology-neutral, not specific to whatever flavour of AI happens to be trendy this year.
This is a problem, because nobody seems to have told the companies building these agents.
The research maps how autonomous AI systems intersect with European regulation, and the conclusion isn’t comforting. AI agents are designed to operate with autonomy, adaptability, and the ability to influence real or virtual environments — which means they meet the EU’s definition of an AI system under the Act. Not because anyone’s targeting agents specifically, but because that’s how the law was written. Technology-neutral means exactly that: if your system has autonomy and decision-making capability, you’re in.
This matters because the compliance burden doesn’t stop at the AI Act. If your agent processes personal data, add GDPR to the list. Operates within digital platforms? That’s the Digital Services Act. Interacts with connected products? Cyber Resilience Act. And that’s before you get into sector-specific rules, NIS2, the Data Act, and the rest of the regulatory stack that applies depending on what your agent actually does.
The same system can be lightly regulated in one application and heavily regulated in another. A summarisation tool faces minimal obligations — deploy that same system in recruitment or healthcare screening, and you’re looking at strict compliance requirements: risk management, human oversight, conformity assessments, the full menu.
What makes this messier still: most AI agents are built on foundation models from third parties. That creates a dual regulatory structure where the upstream model provider has obligations under general-purpose AI rules, and the downstream developer is accountable for how that model gets deployed. The study points out that even when using someone else’s model, you’re on the hook for managing risks, ensuring transparency, and integrating safeguards based on known limitations.
The compliance landscape is dense, overlapping, and — critically — most providers aren’t ready. The August 2026 enforcement deadline for high-risk AI systems is barely a year away. If you’re building or deploying AI agents in Europe and haven’t started mapping your obligations, you’re already behind.