Colorado had the toughest AI law in the United States. The Colorado AI Act — formally, "Concerning Consumer Protections in Interactions with Artificial Intelligence Systems" — was signed in May 2024, covered "high-risk" AI systems, required algorithmic discrimination reporting, mandated risk management policies, and demanded AI impact assessments. It was basically a mini-EU AI Act, and it was set to take effect in February 2026.

Now Colorado wants to replace it entirely.

On March 17th, the Colorado AI Policy Work Group — with strong support from Governor Jared Polis — proposed a new framework that fundamentally rewrites the approach. Instead of AI governance obligations, the new proposal focuses on transparency, recordkeeping, and consumer rights. It’s a meaningful shift: away from the EU-style “we’ll regulate AI systems directly” model, toward the “we’ll regulate automated decision-making” model you see in comprehensive data privacy laws.

Under the proposed ADMT (automated decision-making technology) framework, the obligations change depending on whether you're a developer or a deployer. Developers need to provide technical documentation, disclose intended uses and known risks, and flag material updates. Deployers need to tell consumers when AI is being used in consequential decisions, provide notice within 30 days if an adverse decision was made, and maintain compliance records for three years. That's notably lighter than the original law, which required actual algorithmic discrimination reporting and formal risk management policies.

The key change is the threshold for when these obligations kick in. The original Colorado AI Act used a "substantial factor" standard — if your AI system assisted in making, or was capable of altering the outcome of, a consequential decision, you were covered. The new framework raises that to "materially influence," meaning the AI output has to be a non-trivial factor that actually affects the outcome of the decision. That's a meaningfully higher bar, and it reflects a different philosophy: don't regulate AI as a technology, regulate it when it actually impacts people.

It's an interesting evolution. Colorado started with the most aggressive state-level AI law in America, got feedback that it was unworkable, and instead of backing down entirely, it's rewriting the law with a focus on consumer transparency rather than corporate governance. The new framework goes into effect January 1st, 2027, giving companies until the end of 2026 to adjust.

Whether this approach works — whether transparency obligations are enough to protect consumers without choking innovation — is the question other states will be watching. Colorado’s experiment is about to become a test case.