The Trump administration released its AI policy framework on March 20th. The core idea: get the federal government out of the way, let industry innovate, and let states do… nothing. The guidance urges Congress to preempt state-level AI regulations deemed “burdensome to industry.”
But there’s a catch — and it’s a constitutional one.
The Preemption Play
The framework explicitly calls for federal preemption of state AI laws. This is a big deal. States have been the wild west of AI regulation — California, Florida, Georgia, Arizona, each cooking up their own rules. Some focus on children, some on disclosures, some on outright bans. The cumulative effect is a patchwork that’s hard for any company to navigate.
The White House says that’s the problem. Kill the state laws, let the federal government set the rules, and innovation flows freely.
But here’s what’s interesting: free speech experts — not typically allies of either party — are warning that this goes too far.
The Free Speech Problem
The Electronic Frontier Foundation (EFF) called the framework “disastrous” in a statement. Their specific beefs:
- Federal preemption of state protections — taking away states’ ability to protect their residents
- Age verification requirements — forcing users to prove they’re adults (usually by uploading government ID) to access AI platforms
- A new federal “publicity right” — essentially giving the government control over how people’s likenesses are used in AI-generated content
The Foundation for Individual Rights and Expression (FIRE) — yes, that’s really their name — dubbed age verification the “papers, please” approach. They’ve fought similar requirements in other contexts, warning that it creates troves of sensitive personal data that the government cannot reliably protect.
Their argument: age verification sounds reasonable until you realize it means adults have to upload IDs to use AI chatbots. Many won’t. That chill on access is a First Amendment problem.
The Child Safety Tension
This is where it gets genuinely hard. Some parents pushing for AI regulation have tragic stories. A Florida parent described to the Senate how their 14-year-old died by suicide after engaging with an AI chatbot. State legislatures are responding with proposals — Florida proposed the strictest in the country, banning minors from creating accounts without parental consent.
The Trump framework doesn’t reject child protection. It explicitly supports “privacy-protective, age-assurance requirements” for platforms likely to be accessed by minors. But it also wants federal preemption — so states like California, which just enacted age verification laws, would be overridden.
The framework tries to thread this needle: protect kids, but don’t burden adults. In practice, that’s like threading a needle during an earthquake.
What Happens Now
The guidance is just that — guidance. It recommends, it doesn’t mandate. Congress would need to pass actual legislation to preempt state laws. That could take years, if it happens at all.
But the direction is clear. The administration sees state-level AI regulation as a problem to be solved, not a legitimate policy experiment. That puts them at odds with both free speech advocates (who worry about federal overreach) and child safety advocates (who worry about losing state-level protections).
Welcome to American AI policy: where everyone agrees there’s a problem, nobody agrees on the solution, and the Constitution keeps showing up uninvited.