Colorado made headlines in 2024 when it passed the most comprehensive AI law in the United States — the Colorado AI Act, which mirrored the EU AI Act’s risk-based approach. Now the state’s trying to walk it back.

The Colorado AI Policy Work Group, with strong support from Governor Jared Polis, has proposed a new framework to replace the original law. And the shift in philosophy is striking: instead of treating AI developers as bearers of systemic risk, they want to treat them more like data processors with disclosure obligations.

What they’re actually proposing

The new framework — “Concerning the Use of Automated Decision Making Technology in Consequential Decisions” (Proposed ADMT Framework) — keeps the focus on consequential decisions (housing, insurance, government benefits, employment) but changes the underlying logic.

Key changes:

  • Higher bar for “high risk”: Under the original AI Act, a system qualified if it merely assisted in making a consequential decision and was capable of altering the outcome. Under the new framework, the AI output has to be a “non-de minimis” factor that actually affects the outcome. That’s a meaningfully higher threshold.

  • Shift from risk management to transparency: Out go the requirements for algorithmic discrimination reporting, AI impact assessments, and formal risk management programs (such as those aligned with the NIST AI RMF). In come requirements for technical documentation, pre-use notices to consumers, and record-keeping.

  • Consumer rights focus: If an automated system contributes to an adverse consequential decision, consumers must receive a notice within 30 days explaining the decision and how to request human review. That’s the logic of data privacy law, not product safety law.

Why they’re making this change

The original Colorado AI Act was slated to take effect on February 1, 2026, and was later postponed to June 30, 2026. But industry pushback was fierce, and the new proposal reflects a fundamental disagreement about the right regulatory model.

Supporters of the shift argue that the original approach imposed compliance costs that would hamstring innovation. The new framework aims for guardrails that actually protect people without kneecapping the industry.

Critics will say it’s a regression: that transparency isn’t the same as safety, and that requiring notices doesn’t prevent algorithmic discrimination; it just documents it.

What happens next

If passed, the Proposed ADMT Framework goes into effect January 1, 2027. That gives covered businesses until the end of 2026 to adjust.

But here’s why this matters beyond Colorado: if a state known for progressive tech policy decides the EU model doesn’t work, other states will take note. Colorado could become the template for a more industry-friendly approach to AI regulation — or the cautionary tale about regulatory capture.

Watch this space.