Here’s a number worth sitting with: 78%.
That’s the share of technology leaders who admit their organisation’s AI adoption is already running ahead of its ability to manage the risks involved. This comes from a recent EY survey, and it’s one of those findings that sounds almost unbelievable until you realise it probably undersells the problem.
The context is agentic AI — the new generation of systems that don’t just answer questions but act on them. They take a high-level objective, break it into subtasks, use tools, execute plans, and adapt as they go. In theory, this is enormously powerful. In practice, it’s also enormously difficult to control, audit, and stop when something goes wrong.
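To make that loop concrete, here’s a deliberately minimal sketch of the pattern in Python. Everything in it is hypothetical: the `plan` function is a stub standing in for the LLM that would actually decide the next action, and the `TOOLS` registry holds toy functions where a real deployment would wire in search, email, or CRM writes.

```python
# Minimal sketch of an agentic loop: objective -> plan -> tool call -> adapt.
# Illustrative only; `plan` stands in for a real LLM planning call.

from dataclasses import dataclass

@dataclass
class Step:
    tool: str
    args: dict

# Toy tool registry: real systems expose search, email, CRM writes, etc.
TOOLS = {
    "search": lambda q: f"results for {q!r}",
    "summarise": lambda text: text[:40] + "...",
}

def plan(objective: str, history: list) -> Step | None:
    """Stub planner. In a real agent this is an LLM choosing the next
    action based on the objective and everything observed so far."""
    if not history:
        return Step("search", {"q": objective})
    if len(history) == 1:
        return Step("summarise", {"text": history[-1]})
    return None  # planner decides the objective is met

def run_agent(objective: str, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):      # hard cap: one of the few easy controls
        step = plan(objective, history)
        if step is None:
            break
        result = TOOLS[step.tool](**step.args)
        history.append(result)      # the agent adapts to what it observes
    return history

print(run_agent("competitor pricing for Q3"))
```

Notice how thin the control surface is: a step cap, and whatever the tool functions refuse to do. That thinness is exactly why these loops are hard to audit and hard to stop.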
What the EY data tells us is that enterprises are charging ahead anyway. AI spending has hit $37 billion in the past year. Agentic systems are automating workflows that previously required entire teams — sales outreach, compliance checks, pipeline research. The efficiency gains are real. The governance gaps are also real, and they’re being papered over with optimism.
The core problem is that agentic AI is not a one-time implementation. It’s an ongoing operation. You can’t launch it, declare victory, and move on. These systems need continuous monitoring. They need process adaptation as the business changes around them. They need dedicated teams managing them. Most organisations have none of this in place. They treated the rollout like any other software project, something you ship and walk away from, and that’s where things quietly break down.
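To pin down what “continuous monitoring” means at the level of a running agent, here’s an equally hypothetical sketch: a wrapper that audit-logs every tool call and halts the agent on two basic policy checks. The blocked-tool list and rate limit are invented examples, and real governance needs far more than this, but it shows the kind of runtime machinery that has to exist, and be staffed, for the lifetime of the system.

```python
# Illustrative guardrail wrapper: log every tool call and halt on policy
# violations. The policy itself (blocked tools, rate limit) is hypothetical.

import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

BLOCKED_TOOLS = {"send_email"}   # example policy: no outbound actions
MAX_CALLS_PER_MINUTE = 30

_call_times: list[float] = []

class PolicyViolation(Exception):
    pass

def guarded_call(tool_name, tool_fn, **args):
    """Wrap a tool call with an audit log and two basic runtime checks."""
    if tool_name in BLOCKED_TOOLS:
        raise PolicyViolation(f"tool {tool_name!r} is blocked by policy")
    now = time.monotonic()
    _call_times[:] = [t for t in _call_times if now - t < 60]
    if len(_call_times) >= MAX_CALLS_PER_MINUTE:
        raise PolicyViolation("rate limit exceeded; halting agent")
    _call_times.append(now)
    logging.info("tool=%s args=%s", tool_name, args)   # audit trail
    return tool_fn(**args)

# Usage: in the earlier sketch, replace the bare TOOLS[step.tool](**step.args)
# call with guarded_call(step.tool, TOOLS[step.tool], **step.args).
```

Even this toy version implies an operational commitment: someone has to read those logs, tune those limits, and respond when the agent halts. That ongoing operation is precisely what most organisations skipped.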
Open-source models compound the issue. When anyone can modify the model’s code and weights, ensuring data privacy and auditability becomes genuinely hard. You’re not just trusting your vendor anymore. You’re trusting every contributor to every open-source project your system depends on. For high-stakes decisions, that’s a non-trivial problem.
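No sketch solves a supply-chain trust problem, but one basic control illustrates the flavour of what auditability requires: pinning the exact model artifact you deploy by cryptographic digest, so a silently swapped file fails loudly. The file path and digest below are placeholders.

```python
# One small piece of the auditability puzzle: verify deployed model files
# against SHA-256 digests recorded at review time. Paths and digests are
# placeholders, not real values.

import hashlib
from pathlib import Path

PINNED = {
    "models/open-model.safetensors": "0123abcd...replace-with-real-digest",
}

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_artifacts(pins: dict[str, str]) -> None:
    """Refuse to proceed if any pinned artifact has changed on disk."""
    for rel_path, expected in pins.items():
        actual = sha256_of(Path(rel_path))
        if actual != expected:
            raise RuntimeError(f"{rel_path}: digest mismatch, refusing to load")

# Call verify_artifacts(PINNED) before loading any weights.
```

This catches tampering with the artifact you reviewed; it says nothing about whether what you reviewed was trustworthy in the first place, which is the harder half of the problem.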
Europe is particularly exposed here. The EU AI Act was designed for a world of relatively static AI systems: models that produce outputs, not systems that take actions. Agentic AI stretches the existing framework in ways that aren’t fully resolved. The regulation classifies “high-risk” AI systems, but an agentic system that loops through multiple tools and makes decisions dynamically doesn’t fit neatly into those categories. Whether the AI Act as written can actually govern agentic AI in practice is an open question.
What’s striking is the gap between the sophistication of the AI systems being deployed and the bluntness of the governance frameworks meant to constrain them. We’re building enormously capable autonomous systems and stapling them onto governance structures that were designed for spreadsheet software.
The EY finding is a useful reality check. AI is moving fast. The organisations deploying it are moving faster. The governance, oversight, and human-AI hybrid systems needed to keep everything under control are not keeping pace. And that gap is where the real risk lives — not in the hypothetical scenarios, but in the mundane, everyday reality of enterprises running autonomous systems they don’t fully understand.