The White House has laid out its vision for AI regulation, and it’s about as subtle as a sledgehammer: Congress should preempt every state AI law in America, letting the federal government — and the industry — decide what’s acceptable.
The legislative blueprint outlines half a dozen principles, among them protecting children, keeping electricity costs from surging, respecting intellectual property, preventing censorship, and educating Americans on using the technology. On the surface, that’s a reasonable-sounding list. But dig deeper and it’s essentially a wish list that favours industry while attacking the very notion of state-level accountability.
The Preemption Power Play
The core of the framework is simple: states like California, Colorado, Texas, and Utah have already passed laws setting rules for AI. The White House wants those wiped out.
White House AI czar David Sacks called it a response to “a growing patchwork of 50 different state regulatory regimes that threaten to stifle innovation and jeopardize America’s lead in the AI race.”
But here’s the tension: this is coming from an administration that traditionally champions states’ rights against federal overreach. When it’s about immigration, environmental regulations, or social issues, the argument is always “leave it to the states.” But when states try to protect their citizens from AI harms? That’s different, apparently.
California Governor Gavin Newsom’s office called it exactly what it is: “Yet again, Donald Trump is trying to gut laws in California that keep our residents safe and protect consumers — a core state responsibility.”
The Political Math
The framework is clever in a cynical way. It appeals to bipartisan concerns — everyone worries about children being harmed by AI chatbots, everyone’s concerned about electricity costs for data centres. Republican Senator Marsha Blackburn, who was instrumental in thwarting Trump’s earlier attempt to deter state AI regulations, called the framework a “roadmap” and welcomed the administration to the discussion.
But it’s already being panned by Democrats. Representative Josh Gottheimer said it “fails to address key issues, including strong accountability for AI companies, under the guise of protecting children, communities, and creators.”
The reality is that getting anything through Congress in a midterm election year is a heavy lift. The White House needs bipartisan support, but the proposal is so industry-friendly that Democrats have little incentive to play along. And some Republicans aren’t sold either — they’re getting pressure from AI safety advocates who think the framework doesn’t do enough to address catastrophic risks.
What’s Actually Left
Buried in the fine print, the White House does say it doesn’t think Congress should preempt all state regulatory powers — states can still enforce general laws against AI developers “to protect children, prevent fraud, and protect consumers.” States can still decide where to place data centres. They can still set rules for how they procure AI tools for law enforcement or education.
But the rest is pretty much a free pass for the industry: states shouldn’t regulate AI development, shouldn’t penalize developers for how third parties use their products, and shouldn’t “unduly burden Americans’ use of AI for activity that would be lawful if performed without AI.”
Translation: if a company builds a harmful AI tool and someone misuses it, that’s not the company’s problem. And if a state tries to stop that from happening, the federal government will step in.
The politics of this are going to be messy. States aren’t going to give up their regulatory powers quietly. Industry groups will push hard. And the administration is trying to thread a needle between “we’re protecting you” and “we’re not actually going to do anything meaningful.”
We’ll see if Congress plays along.