
Colorado’s AI Act: Charting a Pragmatic Path Forward

  • laytonhurt
  • Aug 22

In a groundbreaking move, Colorado became the first state to pass comprehensive AI regulation with SB 24-205. One year later, lawmakers are revisiting the law amid sharp criticism from both consumer advocates and industry groups. Here’s how the legislation has evolved—and why a hybrid path forward may be the most effective solution.


 

1. Background: The Original Bill (SB 24-205, 2024)

 

Colorado’s SB 24-205 required developers and deployers of “high-risk” AI systems to:

  • Maintain a risk management program aligned with frameworks such as NIST AI RMF or ISO/IEC 42001.

  • Conduct annual impact assessments evaluating algorithmic discrimination risks.

  • Provide consumer disclosures when AI was used for consequential decisions.

In return, entities that complied with these governance obligations could assert a safe harbor defense.

 

While the bill aimed to balance innovation with consumer protection, it immediately drew criticism from both sides.

 

2. Criticisms of the Original Bill

 

Consumer Advocates
  • Terms like “unlawful discrimination” and “significant factor” were too vague, creating uncertainty for enforcement.

  • The safe harbor looked like a “get-out-of-liability-free card” for companies, allowing formal compliance without accountability for discriminatory outcomes.

 

Businesses & Industry Groups

Safe harbor was welcomed, but businesses argued key terms were undefined or ambiguous, creating compliance risk:

  • “Unlawful discrimination” lacked clear metrics.

  • “Consequential decision” swept too broadly (finance, housing, healthcare, employment).

  • “Significant factor” was subjective and hard to operationalize.


Businesses also worried about the cost of these assessments, especially given the lack of clear precedent or regulatory guidance.


3. The New Bills (2025 Special Session)


Four new bills were introduced to replace or amend the 2024 Act:

  • 25B-0017 (Sunshine Act → SB 4): Repeal and replace with a disclosure-centric model; requires deployers to list top 20 characteristics influencing consequential decisions; imposes joint liability on developers and deployers.

  • 25B-0013 (CCPA/CADA Approach → HB 1008): Repeal and replace with enforcement via Colorado’s Consumer Protection Act and Anti-Discrimination Act; requires disclosures in certain high-impact sectors; applies public-contract compliance requirements.

  • 25-0004: Amend SB 24-205, narrow consequential decisions to employment and public safety; delay effective date; add small-entity exemptions.

  • 25B-0012: Repeal SB 24-205 entirely; clarify that existing anti-discrimination laws apply regardless of whether decisions are made by AI.


Status: Only SB 4 and HB 1008 have advanced; the others have stalled.

 

4. Evolution from Inception to Current State

 

  • From Governance to Disclosure → Original focus on iterative governance, risk management, and safe harbor has been abandoned. Current bills center on transparency and statutory enforcement through existing consumer laws.

  • Shift in Accountability → SB 4 imposes joint liability on developers and deployers; HB 1008 leans heavily on CCPA/CADA enforcement, shifting disputes to the Attorney General and civil rights agencies.

  • Procedural Leverage → SB 4 has Senate leadership backing, giving it the inside track procedurally; HB 1008 enjoys bipartisan appeal and business support.


5. Remaining Concerns


  • Technical Feasibility: SB 4’s “top 20 personal characteristics” disclosure is both arbitrary and possibly technically infeasible for many deployers.

  • Timing: The bill dramatically changes the compliance structure without granting adequate transition time, making timely implementation impractical.

  • Delay of Protections: Proposals to delay effective dates until 2027 push consumer protections further out.

  • Accountability Misaligned: SB 4’s shift of liability to developers fails to recognize that deployers control context and use, and thus pose the greatest misuse risk.

  • No Affirmative Defense: Neither bill provides defenses for good actors making demonstrable efforts to mitigate harm.

  • Attorney General Bottleneck: Centralizing rulemaking and enforcement with the AG delays clarity and risks politicization, while removing legislative accountability.

 

6. A Proposed Hybrid Approach: Modified Safe Harbor + Clearer Terms

 

Safe Harbor 2.0 Key Features:
  • Governance Alignment: Companies may claim a safe harbor if they adopt a risk management program aligned to NIST AI RMF, ISO/IEC 42001, or an equivalent recognized standard.

  • Impact Monitoring: Must perform annual assessments and ongoing monitoring for disparate impact, using measures appropriate to the reasonably foreseeable risks.

  • Incremental Progress: Companies must demonstrate reasonable, good-faith progress toward reducing identified disparate impacts over time.

  • Documentation & Transparency: All monitoring and remediation steps must be documented and made available for regulatory or judicial review.

  • Revocation Clause: Safe harbor may be denied or revoked if a company ignores risks, misrepresents compliance, or fails to make incremental progress.

  • Clearer Definitions: Define “consequential decision,” “significant factor,” and “unlawful discrimination” in plain terms aligned with existing civil-rights case law and consumer protection norms.
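To make the impact-monitoring and incremental-progress features concrete, here is a minimal, hypothetical sketch of one widely used disparate-impact measure: the adverse impact ratio, with the 0.8 threshold from the EEOC’s “four-fifths” rule of thumb. The group names and outcome data are purely illustrative, and neither bill mandates this particular metric.

```python
# Hypothetical sketch: the adverse impact ratio, one common measure of
# disparate impact. Each group's selection rate is compared against the
# best-off group; ratios below 0.8 (the EEOC "four-fifths" rule of thumb)
# flag the group for further review. All data here is illustrative.

def selection_rate(outcomes):
    """Fraction of favorable outcomes (e.g., approvals) in a group."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratios(groups):
    """Map each group to its selection rate relative to the best-off group."""
    rates = {name: selection_rate(o) for name, o in groups.items()}
    best = max(rates.values())
    return {name: rate / best for name, rate in rates.items()}

if __name__ == "__main__":
    # Illustrative outcomes: 1 = favorable decision, 0 = unfavorable.
    groups = {
        "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% favorable
        "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% favorable
    }
    for name, ratio in adverse_impact_ratios(groups).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{name}: ratio={ratio:.2f} ({flag})")
```

A deployer claiming the safe harbor would run measures like this on an ongoing basis, document the results, and show period-over-period improvement where ratios fall below the threshold.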


Model Statutory Language (Hybrid Safe Harbor Amendment)

Section X. Affirmative Defense for Responsible AI Governance.


  1. A developer or deployer of a high-risk artificial intelligence system shall be presumed to have exercised reasonable care and shall have an affirmative defense to claims of unlawful discrimination if the entity demonstrates that it has:

    1. Implemented and maintained an artificial intelligence risk management program reasonably aligned with a nationally or internationally recognized standard, including but not limited to the NIST AI Risk Management Framework or ISO/IEC 42001;

    2. Conducted periodic impact assessments, not less than annually and upon material modification of the system, evaluating risks of algorithmic discrimination reasonably foreseeable in the context of use;

    3. Monitored the system on an ongoing basis for disparate impacts, using measures appropriate to the risks reasonably foreseeable in context;

    4. Demonstrated reasonable, good-faith progress over time in reducing identified disparate impacts, as documented in its assessments; and

    5. Maintained contemporaneous documentation of risk management, monitoring, and mitigation activities, available to the Attorney General upon request.

  2. The presumption and defense in subsection (1) shall not apply where an entity:

    1. Knowingly misrepresents or omits material facts in its documentation;

    2. Fails to take reasonable steps to address identified risks; or

    3. Acts with reckless disregard for the foreseeable discriminatory impact of the system.


This hybrid model preserves incentives for strong governance, ensures consumer protection, and provides businesses with clear expectations.

 


Summary

 

The Legislature is right to revisit SB 24-205. But the current proposals risk over-correction: dismantling governance obligations without fixing definitional ambiguity or feasibility issues.

 

When your car has a flat tire, you don’t replace the engine. Yet that’s what lawmakers risk doing with the current bills.

 

It’s time for a meaningful compromise: one that preserves accountability, ensures consumer protection, and provides businesses with clarity and incentives to adopt best-practice AI governance. A hybrid safe harbor with defined terms and incremental progress requirements is the balanced path forward. This approach aligns business interests with consumer safety, ensuring that responsible AI governance is not left to political posturing.
