On May 17, 2024, Colorado Governor Jared Polis signed into law the Colorado Artificial Intelligence Act (SB 205) ("CAIA"), a measure passed out of the legislature on May 8 and scheduled to take effect February 1, 2026. The CAIA aims to combat intentional and unintentional algorithmic discrimination through new, broad-based notice, disclosure, risk mitigation, and opt-out requirements for developers and deployers of "high-risk" artificial intelligence ("AI") systems, as well as disclosure requirements applicable to AI systems generally.
Governor Polis' signing statement acknowledged his reservations in signing the bill, noting that the measure creates a "complex compliance regime" for AI developers and deployers operating in Colorado and interacting with Colorado residents, and could prompt additional states to take similar action, resulting in a patchwork of state laws that could hamper innovation and deter competition. In that regard, the governor also called for federal regulation of "nascent AI technologies … to limit and preempt varied compliance burdens on innovators and ensure a level playing field across states." The governor also encouraged the legislature to reexamine the scope of discriminatory conduct covered by the CAIA before it takes effect, noting that the CAIA deviates from the norm by prohibiting all discriminatory outcomes from AI system use, regardless of intent.
The law appears to build upon the profiling and automated decision-making rules that the Colorado Attorney General finalized under the Colorado Privacy Act. The Colorado AG will also have enforcement authority, as well as rulemaking authority to adopt rules implementing the CAIA's extensive requirements, so additional obligations may follow from that process. Developers and deployers may therefore be able to leverage some of their existing Colorado Privacy Act processes to comply with the CAIA.
The CAIA imposes substantial new restrictions and compliance obligations on developers and deployers of high-risk AI systems that are intended to interact with consumers and make or be a substantial factor in making "consequential decisions" in areas such as employment, insurance, housing, credit, education, and healthcare.
The CAIA requires the following:
We highlight these key aspects of the Act and address additional requirements below.
The CAIA applies broadly to developers and deployers of high-risk AI systems that are intended to interact with Colorado residents and make consequential decisions. Key definitions include the following:
On or after the CAIA's effective date, developers of high-risk artificial intelligence systems must make the following documents and information available to deployers or other developers of the high-risk AI system, as well as to the Colorado AG upon request, within 90 days of the request:
To the extent feasible, developers should make the above documentation and information available through artifacts currently used in the industry, such as model cards, dataset cards, or other impact assessments, to the extent necessary for a deployer, or a third party contracted by a deployer, to complete an impact assessment as required by the law.
Developers who also serve as deployers for high-risk AI systems are not required to generate the above documentation unless the high-risk AI system is provided to an unaffiliated entity acting as a deployer.
Under the CAIA, developers of high-risk AI systems must make available, in a clear and readily accessible manner on the developer's website or in a public use case inventory, a statement summarizing:
Developers of high-risk AI systems must disclose to the Colorado AG and to all known deployers or other developers of the high-risk AI system any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk AI system without unreasonable delay but no later than 90 days after the date on which:
Deployers of high-risk AI systems must implement a risk management policy and program to govern their deployment of such systems. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination, and it must be an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of a high-risk AI system.
A risk management policy and program implemented and maintained pursuant to the CAIA must be reasonable considering:
Deployers of high-risk AI systems must complete an impact assessment for the high-risk AI system at least annually and within 90 days after any intentional and substantial modification to the system. The impact assessment must include, at a minimum and to the extent reasonably known or available:
Deployers of high-risk AI systems must retain the most recently completed impact assessment, all records concerning each impact assessment, and all prior impact assessments for at least three years following the final deployment of the high-risk AI system. Deployers must also annually review the deployment of each high-risk AI system to ensure that it is not causing algorithmic discrimination.
An impact assessment prepared for the purpose of complying with another applicable law or regulation satisfies the CAIA impact assessment requirement if that impact assessment "is reasonably similar in scope and effect" to the one required by the CAIA. This means that deployers could, for efficiency, complete a single impact assessment that satisfies both the CAIA and Colorado Privacy Act requirements.
Before a consequential decision is made, deployers of high-risk AI systems must notify the consumer that the deployer has deployed a high-risk AI system to make, or be a substantial factor in making, that decision.
Deployers of high-risk AI systems must also provide the consumer with a statement disclosing the purpose of the high-risk AI system and the nature of the consequential decision, the contact information for the deployer, a description, in plain language, of the high-risk AI system (easier said than done), and instructions on how to access the statement.
Deployers must also provide the consumer with information, if applicable, regarding the consumer's right under the Colorado Privacy Act to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.
If a high-risk AI system makes or is a substantial factor in making a consequential decision that is adverse to the consumer, a deployer must provide the consumer with:
Deployers of high-risk AI systems must make available on the deployer's website a statement summarizing:
If a deployer discovers that the high-risk AI system has caused algorithmic discrimination, the deployer must provide notice disclosing the discovery to the Colorado AG without unreasonable delay but no later than 90 days after the date of discovery.
A deployer must also disclose the risk management policy implemented, impact assessment completed, or records maintained to the Colorado AG upon request, no later than 90 days after the request.
The CAIA also imposes a basic transparency obligation on developers and deployers using AI systems that interact with consumers. Specifically, a deployer or other developer who deploys or makes available an AI system intended to interact with consumers must ensure that the system discloses to each consumer who interacts with it that they are interacting with an AI system. The CAIA provides an exception: this duty does not apply where it would be obvious to a reasonable person that they are interacting with an AI system.
High-risk AI systems, as defined by the CAIA, do not include AI systems intended to perform a narrow procedural task, or to detect decision-making patterns or deviations from prior decision-making patterns, provided the system is not intended to replace or influence a previously completed human assessment without sufficient human review.
High-risk AI systems also do not include the following technologies unless the technologies, when deployed, make or are a substantial factor in making a consequential decision: anti-fraud technology that does not use facial recognition technology; anti-malware; anti-virus; artificial intelligence-enabled video games; calculators; cybersecurity; databases; data storage; firewall; internet domain registration; internet website loading; networking; spam and robocall filtering; spell-checking; spreadsheets; web caching; web hosting or any similar technology; and technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions, and that is subject to an acceptable use policy prohibiting the generation of discriminatory or harmful content.
Algorithmic discrimination, as defined by the CAIA, does not include the offer, license, or use of a high-risk AI system by a developer or deployer for the sole purpose of: the developer's or deployer's self-testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with Colorado and federal law; or expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination. It also does not include an act or omission by or on behalf of a private club or other establishment that is not in fact open to the public under Title II of the Civil Rights Act of 1964.
Documentation and disclosure requirements do not require developers or deployers to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.
A deployer's duties to establish a risk management policy and program, impact assessment, and website statement do not apply to a deployer if each of the following applies:
Other exceptions include, under certain circumstances, HIPAA-covered entities and banks.
The Colorado AG has exclusive authority to enforce the CAIA, and there is no private right of action. Violations of the CAIA are deemed per se unfair trade practices under Colorado consumer protection law.
If the Colorado AG commences an action, it is an affirmative defense that the developer or deployer: (a) discovers and cures a violation as a result of feedback that the developer or deployer encourages deployers or users to provide, adversarial testing or red teaming, or an internal review process; and (b) is in compliance with NIST's AI Risk Management Framework and ISO/IEC 42001, or another nationally or internationally recognized risk management framework for AI systems whose standards are substantially equivalent to or more stringent than the NIST AI RMF or ISO/IEC 42001.
The CAIA grants the Colorado AG rulemaking authority to implement and enforce the law, including rules regarding: the documentation requirements for developers; the content and requirements of the notices and disclosures; the content and requirements of the risk management policy and program; the content and requirements of the impact assessments; the requirements for the rebuttable presumptions; and the requirements for the affirmative defenses under the CAIA.
The CAIA represents the first attempt in the United States to impose a risk-based regime regulating AI and algorithmic discrimination, but it will almost certainly not be the last.
In the absence of federal action to regulate artificial intelligence technology (as well as privacy and data protection more broadly), states may follow Colorado's lead in pursuing broad-based regulation of AI systems that make decisions impacting consumers.
DWT's privacy and security team and AI team regularly counsel clients on how their business practices can comply with state privacy and AI laws. We will continue to monitor the rapid development of other state and new federal privacy and AI laws and regulations.