This is a big deal.
Colorado had the most comprehensive AI law in the United States: the Colorado AI Act, originally set to take effect in February 2026 and later postponed to June. It mirrored the EU AI Act in approach: risk classifications, developer obligations, impact assessments, and algorithmic discrimination reporting.
Now it’s being replaced entirely.
The Proposed ADMT Framework
On March 17, 2026, the Colorado AI Policy Work Group — operating with strong support from Governor Jared Polis — proposed a completely new framework called the Automated Decision-Making Technology (ADMT) Framework.
The shift is fundamental.
| Colorado AI Act | Proposed ADMT Framework |
|---|---|
| AI governance requirements | Privacy-law style requirements |
| Risk management policies | Transparency and disclosure |
| Algorithmic discrimination reporting | Consumer rights approach |
| AI impact assessments | Recordkeeping |
Instead of requiring developers to implement NIST frameworks, conduct risk assessments, and report discrimination — the new framework focuses on notice, disclosure, and recordkeeping.
What Actually Changes?
For developers: Instead of building risk management systems and conducting impact assessments, you provide technical documentation to deployers about intended uses, training data, limitations, and risks.
For deployers: You post a notice telling consumers “we use automated decision-making” — and if there’s an adverse outcome, you have 30 days to explain why and provide a human review process.
This is closer to how privacy laws work: tell people you’re collecting data, explain what happens if something goes wrong, keep records.
It’s dramatically simpler.
The “Materially Influence” Standard
Under the Colorado AI Act, obligations were triggered if AI was a “substantial factor” in a decision — a low bar.
The new framework raises it to “materially influence” — meaning the AI output has to be a meaningful factor in the outcome, not just incidental or clerical use.
This narrows dramatically which systems are covered.
Why the Rewrite?
The original Colorado AI Act was praised but also criticized as complex and potentially burdensome. It required specialized AI governance expertise that most companies don’t have in-house, and its compliance timelines were aggressive.
The new framework trades complexity for:
- Clarity: Standard privacy-law terminology everyone already understands
- Feasibility: Companies can actually do this without hiring AI governance teams
- Consumer focus: Shift from “prove you’re safe” to “tell people what’s happening”
The National Picture
Colorado isn’t alone in reconsidering AI regulation. But it’s the first state with a serious law to effectively walk it back.
The White House recently urged Congress to take a “light touch” on AI regulation. Colorado’s move — from one of the most aggressive states to one of the lightest — fits that mood.
What happens here matters: it could become the template for other states that want to look “pro-innovation” without abandoning regulation entirely.
What’s Next?
If passed, the Proposed ADMT Framework takes effect January 1, 2027 — giving companies until the end of 2026 to adapt.
Governor Polis’s support is meaningful. This isn’t a compromise. It’s a strategic choice about what AI regulation should look like.
And given the direction of federal policy, it might be the direction others follow.