Colorado just proposed a replacement for its own AI law. And if the federal government is paying attention, it should.

Backstory: In 2024 Colorado passed the most comprehensive AI law in the United States — the Colorado AI Act, covering developers and deployers of “high-risk” AI systems, similar in structure to the EU AI Act. It was originally set to take effect in February 2026, but the date slipped to June 30, 2026 after industry objected to the timeline. Now the state’s AI Policy Work Group, with strong support from Governor Jared Polis, has gone a step further and proposed a completely different framework to replace it entirely.

The new proposal is called the “Concerning the Use of Automated Decision Making Technology in Consequential Decisions” framework — the ADMT Framework for short — and it’s a meaningful shift in approach.

The Colorado AI Act was structured around risk management: algorithmic discrimination reporting, risk management policies, AI impact assessments. The proposed replacement ditches most of that in favour of something closer to data privacy law: transparency, recordkeeping, and consumer rights. If an automated system is making a decision that materially affects someone — a loan denial, a housing application, a hiring decision — there need to be records. Consumers need to be able to request explanations. The burden shifts from proving you’ve managed risk to proving the process was transparent.

It’s a clever move, and it reflects a growing pragmatic streak in US AI governance. The EU approach — detailed risk classifications, technical standards, compliance requirements baked into the development process — has proved difficult to implement even under the best circumstances. Colorado’s new framework is essentially asking a simpler question: did the decision-maker tell the affected person what was being decided and why? That’s easier to audit and easier to enforce.

The timing matters. The White House recently urged Congress to preempt state AI laws, arguing that a patchwork of differing state rules burdens industry. If Congress doesn’t act, states will keep going their own way. Colorado just demonstrated that you can have a substantive AI law that doesn’t require rebuilding your entire compliance programme — you can assemble it from existing building blocks like data privacy frameworks.

What’s notable is who supports this: Governor Polis, a Democrat in a purple state, has consistently taken a pro-innovation stance on technology. The fact that he’s backing a substantive AI rewrite — not a weakening, but a restructuring — suggests that the choice isn’t between “strong AI regulation” and “lax AI regulation.” It’s about what kind of framework actually works.

If this passes and holds up in practice, expect other states to look at the Colorado ADMT Framework as a template. The EU spent years building its risk-based approach. Colorado just bet that transparency and recordkeeping might get you most of the way there, faster.