Remember when Colorado passed the most aggressive AI law in America? The one that required AI companies to report “foreseeable risks of algorithmic discrimination,” conduct impact assessments, and implement formal risk management policies? The law that had every AI lawyer in the country dust off their NIST RMF playbooks?

Governor Jared Polis killed it. Well, not killed—rewrote.

On March 17th, the Colorado AI Policy Work Group unanimously voted to replace the Colorado AI Act with a fundamentally different framework: Concerning the Use of Automated Decision Making Technology in Consequential Decisions (the Proposed ADMT Framework). Effective date: January 1, 2027.

The shift in one sentence: Instead of requiring AI companies to manage risk and prove their systems aren’t discriminatory, the new framework requires them to be transparent and tells consumers they have rights.

Here’s what changed:

From “substantial factor” to “materially influence”: The old law applied obligations whenever AI was a “substantial factor” in consequential decisions (housing, employment, lending, etc.). The new framework uses a higher bar—AI must “materially influence” the outcome, meaning it’s a non-trivial factor that actually affects the decision, not just background noise.

Dropped obligations: The new framework eliminates:

  • Reporting algorithmic discrimination to the Colorado Attorney General
  • AI impact assessments
  • Formal risk management policies (NIST AI RMF, ISO 42001)

New obligations: Instead, the framework focuses on:

  • Transparency: Developers must provide technical documentation to deployers; deployers must post public notices when AI is used in consequential decisions
  • Adverse outcome notices: If an AI-driven decision adversely affects a consumer, the deployer has 30 days to notify the consumer with an explanation and instructions for requesting human review
  • Recordkeeping: Three years of compliance records

This is a massive shift from “prove your AI is safe” to “tell people when AI is involved and let them object.” It mirrors automated decision-making rules in data privacy laws rather than EU-style AI governance.

Why the rewrite? Costs. Compliance costs under the original law were projected to be enormous for companies. Colorado wanted to be a leader in AI regulation; instead, it became a cautionary tale about overreach. Now Polis is repositioning Colorado as business-friendly while maintaining consumer protections—a different philosophy entirely.

For other states watching: this could become a template. The Colorado framework is lighter, cheaper, and less aggressive than the original. If you’re a state legislator considering AI bills, you’ll likely see arguments for the “Colorado approach” versus the “Colorado approach we rejected.”