Colorado has been the battleground for America’s most ambitious AI legislation. The Colorado AI Act was the first comprehensive state-level AI law in the US, and it looked more like the EU AI Act than anything American: risk classifications, developer obligations, algorithmic discrimination reporting, mandatory impact assessments.

Now, with bipartisan support and the backing of Governor Jared Polis, the state’s AI Policy Work Group has proposed replacing it entirely with something fundamentally different.

The new approach: ADMT, not AI

The proposed framework is called the “Automated Decision Making Technology” (ADMT) Framework, and the shift in naming is revealing. Instead of regulating “AI systems” broadly — the approach that caught fire after ChatGPT — it regulates specific use cases: technology that “materially influences” consequential decisions.

That “materially influences” standard is a deliberate tightening of the current “substantial factor” test. Under the existing Colorado AI Act, if your AI simply assisted a decision and was “capable of altering” it, you were in scope. Under the new framework, the AI output has to actually affect the outcome — constrain, rank, score, recommend, or meaningfully alter how the decision is made — not just be present in the room.

This is a narrower, more practical approach. And it’s a clear signal that California’s strict liability model isn’t the only path forward.

What changes in practice

The ADMT Framework swaps AI-governance requirements for transparency and record-keeping obligations:

For developers: Provide technical documentation to deployers — intended uses, harmful uses, training data, limitations, risks, instructions. Notify deployers of material updates.

For deployers: Pre-use notice to consumers that ADMT is being used. Adverse outcome notices within 30 calendar days if a decision was negative — describing the decision, the technology’s role, and how to request human review. Maintain compliance records for three years.

Gone are the requirements to: report algorithmic discrimination to the Attorney General, conduct AI impact assessments, or implement formal risk management policies (NIST AI RMF or ISO/IEC 42001).

In short: tell people you’re using it, keep records, and explain decisions when asked. You no longer have to prove your algorithm is unbiased.

Why this matters

The Colorado AI Act was originally set to take effect in February 2026, then pushed to June 30, 2026. The new framework, if passed, would go live January 1, 2027 — giving companies roughly a year to pivot from one compliance regime to another.

But the bigger question is whether other states adopt this approach. California is pursuing enforcement. New York has pending legislation. Texas has an executive order. Colorado’s ADMT Framework — if it works — offers a compromise: consumer protection without the compliance overhead that’s scared off companies.

Governor Polis has consistently positioned Colorado as pro-innovation while pro-consumer. This framework is the most concrete expression of that philosophy yet.

The practical implication for enterprises: you might not need a full AI governance program to operate in Colorado. You need transparency, record-keeping, and a human review process. That’s substantially cheaper than what the EU AI Act requires.

Whether states see that as a floor or a ceiling will define the next phase of American AI regulation.