Remember when Colorado passed the most comprehensive AI law in America and everyone called it a big deal? Well, they’re rewriting it already.

On March 17, 2026, Colorado’s AI Policy Work Group - with explicit support from Governor Jared Polis - proposed an entirely new framework to replace the Colorado AI Act that was supposed to go into effect later this year. The new “Concerning the Use of Automated Decision-Making Technology in Consequential Decisions” framework (Proposed ADMT Framework) isn’t just tweaking the old law. It’s fundamentally reconceptualizing what AI regulation looks like in America.

What changed

The Colorado AI Act, inspired by the EU AI Act, classified certain AI systems as “high risk” and imposed obligations accordingly - algorithmic discrimination reporting, risk management policies, impact assessments. The new framework discards much of that.

Instead, it borrows language from data privacy laws. The focus shifts to “consequential decisions” - things like employment, housing, insurance, credit, and essential government services. Developers and deployers now need to provide transparency documentation, give consumers notice when AI is used in decisions affecting them, maintain records for three years, and provide adverse outcome notices within 30 days if a decision materially affected someone.

The key shift

“Borrowing from data privacy law” is actually a huge deal. It means the framework cares less about what the AI system is and more about what it does. An AI system that screens job applications triggers obligations. An AI system that generates marketing copy doesn’t - regardless of the technology inside.

There’s also a new threshold: the old law required AI to be a “substantial factor” in a decision. The new framework says the AI must “materially influence” the outcome - meaning its output has to be a non-trivial factor that actually affects the result. That’s a meaningfully higher bar.

Why Colorado changed course

The law was originally set to take effect in February 2026, then postponed to June. The delay wasn’t just procedural - it reflected real concerns about burden and feasibility. Risk management policies and algorithmic discrimination reporting proved difficult to implement in practice. The new framework is lighter, more familiar to businesses already handling data privacy compliance, and focused on consumer rights rather than AI governance best practices.

The bigger picture

This matters because Colorado was positioned as the US state with the most aggressive AI legislation. Now it’s essentially saying: lighter touch works better. That will likely influence - or give cover to - other states considering AI bills. The EU goes heavy. Colorado just went lighter.

Whether that’s the right direction is debatable. But this rewrite signals that America’s AI regulatory path isn’t going to mirror Europe’s anytime soon.