Colorado just made a bold move. On March 17th, the state’s AI Policy Work Group—unanimously—proposed a complete replacement for the Colorado AI Act. The original law, passed in 2024, was the most comprehensive AI legislation in the US. Now they’re walking away from it, not because it failed, but because they’re trying something different.

Here’s the shift. The Colorado AI Act followed a familiar pattern: identify high-risk AI systems, then impose risk management obligations—impact assessments, algorithmic discrimination reporting, formal risk management policies (think NIST AI RMF or ISO compliance). The new proposal, called the “Automated Decision-Making Technology” (ADMT) Framework, drops most of that. Instead, it mirrors what you’d see in comprehensive data privacy laws: transparency, recordkeeping, consumer rights.

Under the new framework, developers still have to provide technical documentation to deployers—intended uses, training data, limitations, risks. But deployers get new obligations: tell consumers upfront that AI is being used in consequential decisions, provide notice within 30 days when an automated decision goes against someone, and maintain compliance records for three years. The focus shifts from “prove you’re managing risk” to “tell people what’s happening and let them challenge it.”

There’s also a meaningful threshold change. Under the old AI Act, an AI system could trigger obligations if it was a “substantial factor” in a decision—it only needed to assist in the decision and be capable of altering the outcome. Under the new framework, it has to “materially influence” the decision—it has to actually change the result, not just be involved. That’s a higher bar, and it’s deliberate.

Governor Jared Polis pushed for this. The original AI Act was set to take effect in February 2026 but was postponed to June 30th—and in that window, the Work Group went back to the drawing board. The result looks a lot more like European data protection law than American AI governance.

This matters because the US has no federal AI legislation. States have been filling the gap—California has passed its own AI laws, and others are watching. If Colorado’s new approach works, it could become the template: less about forcing companies to adopt specific risk management frameworks, more about ensuring transparency and giving consumers recourse. That’s easier to implement and arguably more politically durable.

The effective date would be January 1st, 2027—giving companies until the end of 2026 to adapt. Whether the legislature passes it as written is another question, but the direction is clear: the US state most serious about AI regulation is pivoting toward transparency over bureaucracy.