The Trump administration published a national AI legislative framework this week. It does three things: centralises AI authority in Washington by preempting state laws, shields AI developers from liability for how their models are used, and shifts responsibility for child safety onto parents.

The details are worth examining because the framework reveals what “pro-innovation” actually means in practice.

Preemption of State Law

The framework explicitly targets state AI regulations. “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race,” the White House statement reads. The proposal would draw a hard line: states could still regulate their own government use of AI, enforce generally applicable laws such as fraud and child-protection statutes, and set zoning rules. They could not regulate AI development or deployment.

This is a significant shift. States like California, Colorado, and Illinois have been developing their own AI regulatory frameworks — often more protective of consumers than federal alternatives. The Trump framework would wipe those out.

The administration has been building toward this for months. An executive order in December directed the Commerce Department to compile a list of “onerous” state AI laws and flag states for potential loss of federal funds. That list hasn’t been published yet, but the framework makes clear where this ends.

Liability: Developer Immunity

The framework explicitly seeks to prevent states from “penalising AI developers for a third party’s unlawful conduct involving their models.” This is a broad liability shield. If a developer builds a model and someone uses it to do harm, the developer is not responsible.

The closest analogy is Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content. Section 230 is controversial precisely because it allows platforms to profit from content they have no obligation to monitor or restrict. The AI equivalent would allow developers to profit from capabilities they have no obligation to constrain.

Child Safety and Parental Responsibility

The framework says Congress should require AI companies to implement features that “reduce the risks of sexual exploitation and harm to minors.” But it doesn’t specify what that means, doesn’t create enforceable standards, and places the primary responsibility for child safety on parents.

This is the lightest possible touch. “Should” rather than “must.” Recommendations rather than requirements. Parents bearing the burden for harms that AI systems could be designed to prevent.

What This Leaves Out

The framework’s seven objectives prioritise innovation and the scaling of AI. What’s absent: any liability framework for AI-caused harms, any independent oversight mechanism, any enforcement structure for the harms it acknowledges.

The honest assessment: this is a framework for AI companies, not for people who might be harmed by AI. Preemption protects companies from a patchwork of state rules. Liability shields protect them from accountability. Minimal standards protect them from real compliance costs.

“Winning the AI race” is the stated goal. Whether that’s worth the cost to state autonomy, consumer protection, and child safety is a values question the framework doesn’t engage with.

Sources: TechCrunch, White House framework