The EU is getting serious about regulating AI platforms as infrastructure. According to reports from Germany’s Handelsblatt, the European Commission is examining whether ChatGPT should be designated a “very large online search engine” under the Digital Services Act (DSA), a classification that would bring significantly stricter obligations.

What this means

If ChatGPT is designated a very large online search engine (VLOSE) under the DSA, a status that carries essentially the same obligations as a very large online platform (VLOP), OpenAI would face a suite of new requirements: transparency around its algorithmic systems, mandatory systemic risk assessments, independent audits, data access for vetted researchers, and potentially new obligations around content moderation and advertising. Essentially, the same rules that already apply to Google Search and Microsoft Bing, the two search engines designated so far, would apply to ChatGPT.

This is notable because it treats AI models not just as products but as platforms: digital infrastructure that millions of people depend on for information and decisions. That’s a meaningful shift in how regulators view AI.

The broader context

This isn’t happening in isolation. The EU AI Act entered into force in August 2024 and is rolling out in phases, with most of its obligations applying from August 2026. A DSA designation would add another layer of oversight on top of that. For OpenAI, operating in Europe is becoming increasingly complex from a regulatory standpoint.

The company has published average monthly active user figures for the EU, a disclosure the DSA requires and one of the inputs regulators use to decide whether a service crosses the designation threshold. But the question of whether ChatGPT functions as a search engine, and whether it should be regulated like one, remains unresolved.

What happens next

The Commission’s examination is ongoing, and OpenAI has declined to comment. If the designation goes through, it would set a precedent for how other AI assistants and language models are treated in Europe, and it could influence how other jurisdictions approach AI regulation.

For now, it’s another signal that the EU intends to treat AI platforms as utilities: essential services that warrant oversight, not just consumer products left to the market.