[Part 1] How to deploy high-risk artificial intelligence systems and still comply with Colorado SB-205
Managing compliance risk while delivering business value in the fractured American AI governance landscape.
Despite the efforts of many, the Colorado Artificial Intelligence Act (Senate Bill 24-205 or SB-205) is set to take effect on February 1, 2026.
I don’t think people yet understand, however, how big an impact this law will have on the American AI governance landscape.
Since you are probably reading Deploy Securely to get actionable information, I won’t get into the policy or politics. Rather, I’m happy to share this SB-205 compliance procedure. It takes the law’s requirements and consolidates them as much as possible into something you can actually use.
This procedure presumes:
Your company is only a “Deployer” of a “High-risk Artificial Intelligence System.” “Developers” have an additional set of requirements, which I will tackle in a different post.
You already have an ISO 42001-compliant or NIST AI RMF-adherent risk management policy and program “specify[ing] and incorporat[ing] the principles, processes, and personnel that the Deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of Algorithmic Discrimination” as required by Section 6-1-1703 (2) (a) of the law.
Your company doesn’t qualify for any of the (very complex and hard to understand) carve-outs in the law.
Identifying and selectively applying SB-205’s requirements only to Colorado “Consumers” (residents of Colorado) is too difficult, so you will apply them to all of your customers (except any notification requirements to the Colorado Attorney General).
It is February 1, 2026 or later.
Finally:
References to specific sections of the law are in brackets ([]).
Capitalized terms are defined in SB-205.
This is not legal advice.
Scope
All Artificial Intelligence Systems that impact (or could in the future impact) residents of the State of Colorado.
Requirements
The Chief Information Security Officer must:
Affirmatively document whether a given System is a High-Risk Artificial Intelligence System per SB-205.
If so, ensure adversarial testing or red teaming of the System at least annually [6-1-1706 (3) (a) (II)].
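If you track these determinations in an internal inventory, a minimal sketch of the two checks might look like the following. The record fields and function names are my own hypothetical choices, not anything SB-205 prescribes.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical inventory record; field names are illustrative, not defined in SB-205.
@dataclass
class SystemRecord:
    name: str
    is_high_risk: bool             # the CISO's documented determination per the SB-205 definitions
    determination_date: date       # when that determination was documented
    last_red_team: Optional[date]  # most recent adversarial test or red team exercise, if any

def red_team_overdue(record: SystemRecord, today: date) -> bool:
    """Flag High-risk Systems that have not been adversarially tested in the last year
    [6-1-1706 (3) (a) (II)]."""
    if not record.is_high_risk:
        return False
    if record.last_red_team is None:
        return True
    return (today - record.last_red_team) > timedelta(days=365)
```

You could run a check like this on a schedule and open a ticket for any System it flags.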
Data owners of High-risk Artificial Intelligence Systems must:
Use reasonable care—including ensuring a review of the System prior to use, and at least annually thereafter [6-1-1703 (3) (g)]—to ensure that the System does not cause Algorithmic Discrimination [6-1-1703].
Request from the System Developer information per this questionnaire [6-1-1702].
Affirmatively notify any user of the System that it is an Artificial Intelligence System [6-1-1704 (1)].1
Prior to use, within 30 days of the System’s Intentional and Substantial Modification, and at least annually thereafter, ensure documentation of an impact assessment for the System (a record-keeping sketch follows this list) covering its:
Purpose [6-1-1703 (3) (b) (I)].
Intended use case(s) [6-1-1703 (3) (b) (I)].
Following Intentional and Substantial Modification, a statement of how this is consistent with, or varies from, the Developer's intended use(s) [6-1-1703 (3) (c)].
Deployment context [6-1-1703 (3) (b) (I)].
Benefits afforded [6-1-1703 (3) (b) (I)].
Analysis of whether deployment poses any known or reasonably foreseeable risk(s) of Algorithmic Discrimination [6-1-1703 (3) (b) (II)].
If so, the nature of the Algorithmic Discrimination and the steps taken to mitigate the risk(s) [6-1-1703 (3) (b) (II)].
Categories of data:
Inputs to the System [6-1-1703 (3) (b) (III)].
Outputs produced by the System [6-1-1703 (3) (b) (III)].
Used in prompt engineering, retrieval-augmented generation (RAG), fine-tuning, or any other process to customize the System [6-1-1703 (3) (b) (IV)].
Metrics used to evaluate performance [6-1-1703 (3) (b) (V)].
Known limitations [6-1-1703 (3) (b) (V)].
Transparency measures, including measures to disclose use of the System [6-1-1703 (3) (b) (VI)].
Post-deployment monitoring and user safeguards, including processes related to [6-1-1703 (3) (b) (VII)]:
Oversight.
Use.
Learning process and continual improvement.
Retain impact assessments for at least 3 years following final System deployment [6-1-1703 (3) (f)].
Ensure the following are posted publicly on the COMPANY_NAME website and reviewed within 30 days of the System’s Intentional and Substantial Modification, and at least annually thereafter:
Types of High-risk Artificial Intelligence Systems deployed [6-1-1703 (5) (a) (I)].
System Description [6-1-1703 (4) (a) (II)].
Purpose of each System [6-1-1703 (4) (a) (II)].
Nature of the Consequential Decision(s) made by each System [6-1-1703 (4) (a) (II)].
Nature, source, and extent of the information collected and used by COMPANY_NAME, preferably via hyperlink to COMPANY_NAME privacy policy [6-1-1703 (5) (a) (III)].
COMPANY_NAME contact information [6-1-1703 (4) (a) (II)].
Instructions to opt out of the processing of personal data “for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the Consumer” per the Colorado Privacy Act, section 6-1-1306 (1)(a)(I)(C) [6-1-1703 (4) (a) (III)].2
How COMPANY_NAME manages known or reasonably foreseeable risks of Algorithmic Discrimination [6-1-1703 (5) (a) (II)].
As part of any communications regarding a Consequential Decision, notify the customer that COMPANY_NAME is using the System to make, or be a Substantial Factor in making, the Consequential Decision. Include a hyperlink to the above website [6-1-1703 (4) (c) (I)(A)].
If the Consequential Decision is adverse3 to the customer, also provide the following (a notice sketch follows this list) [6-1-1703 (4) (b)]:
A statement disclosing the principal reason(s) for the Consequential Decision [6-1-1703 (4) (b) (I)], including:
How and to what degree the System contributed to the Consequential Decision [6-1-1703 (4) (b) (I)(A)].
Data processed by the System in making the Consequential Decision [6-1-1703 (4) (b) (I)(B)].
Source(s) of this data [6-1-1703 (4) (b) (I)(C)].
An opportunity to correct any incorrect personal data processed as part of the Consequential Decision [6-1-1703 (4) (b) (II)].
An opportunity to appeal an adverse Consequential Decision. If technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the customer [6-1-1703 (4) (b) (III)].
Ensure all notifications to customers are:
In plain language [6-1-1703 (4) (c) (I)(B)].
In all languages in which COMPANY_NAME provides information to customers [6-1-1703 (4) (c) (I)(C)].
In a format that is accessible to customers with disabilities [6-1-1703 (4) (c) (I)(D)].
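To make the impact assessment requirement above easier to operationalize, here is a minimal record-keeping sketch. The field names map to the 6-1-1703 (3) (b) items but are hypothetical; adapt them to whatever documentation system you already use.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List, Optional

# Hypothetical impact assessment record; fields track 6-1-1703 (3) (b) but the names are illustrative.
@dataclass
class ImpactAssessment:
    system_name: str
    completed_on: date
    purpose: str                          # (3)(b)(I)
    intended_use_cases: List[str]         # (3)(b)(I)
    deployment_context: str               # (3)(b)(I)
    benefits: str                         # (3)(b)(I)
    discrimination_risks: List[str]       # (3)(b)(II): known or reasonably foreseeable risks
    mitigations: List[str]                # (3)(b)(II): steps taken to mitigate each risk
    input_data_categories: List[str]      # (3)(b)(III)
    output_data_categories: List[str]     # (3)(b)(III)
    customization_data_categories: List[str] = field(default_factory=list)  # (3)(b)(IV): RAG, fine-tuning, etc.
    performance_metrics: List[str] = field(default_factory=list)            # (3)(b)(V)
    known_limitations: List[str] = field(default_factory=list)              # (3)(b)(V)
    transparency_measures: List[str] = field(default_factory=list)          # (3)(b)(VI)
    monitoring_and_safeguards: List[str] = field(default_factory=list)      # (3)(b)(VII)
    variance_from_developer_intended_use: Optional[str] = None              # (3)(c), after a Modification

def retention_expires(final_deployment: date) -> date:
    """Impact assessments must be kept for at least 3 years after final System deployment
    [6-1-1703 (3) (f)]."""
    return final_deployment + timedelta(days=3 * 365)
```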
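And a comparable sketch for the adverse Consequential Decision notice. Again, this is a hypothetical structure, not a format the law prescribes.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical adverse-decision notice payload; the structure is illustrative only.
@dataclass
class AdverseDecisionNotice:
    principal_reasons: List[str]   # 6-1-1703 (4)(b)(I): principal reason(s) for the decision
    system_contribution: str       # (4)(b)(I)(A): how and to what degree the System contributed
    data_processed: List[str]      # (4)(b)(I)(B): data processed in making the decision
    data_sources: List[str]        # (4)(b)(I)(C): source(s) of that data
    correction_instructions: str   # (4)(b)(II): how to correct incorrect personal data
    appeal_instructions: str       # (4)(b)(III): how to appeal, with human review where feasible
    disclosure_url: str            # hyperlink to the public statement [6-1-1703 (4)(c)(I)(A)]
```

However you render this notice to the customer, it still has to meet the plain-language, translation, and accessibility requirements listed above.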
The General Counsel must:
Within 30 days4 of discovery of an instance of Algorithmic Discrimination impacting a Colorado resident, notify the Attorney General of Colorado in the manner that office prescribes [6-1-1703 (7)].
All employees and contractors must:
Report, within 24 hours5, to the General Counsel any known, suspected, or imminent occurrences of Algorithmic Discrimination.
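A small sketch of the resulting reporting timeline, assuming the internal 24-hour and 30-day targets above (the 90-day outer bound comes from the statute; see the notes below):

```python
from datetime import date, timedelta

def discrimination_reporting_deadlines(discovered_on: date) -> dict:
    """Internal and regulatory deadlines after discovering Algorithmic Discrimination.
    The 24-hour and 30-day targets are this procedure's stricter internal standards;
    the statutory outer bound for notifying the Attorney General is 90 days [6-1-1703 (7)]."""
    return {
        "report_to_general_counsel_by": discovered_on + timedelta(days=1),
        "notify_attorney_general_by": discovered_on + timedelta(days=30),
        "statutory_deadline": discovered_on + timedelta(days=90),
    }
```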
There are exceptions to this notification requirement in the law, but it is much easier to simply encode this requirement into the procedure (and product design) to avoid the risk of non-compliance.
“Profiling in furtherance of Decisions that produce legal or similarly significant effects concerning a consumer” as defined in the Colorado Privacy Act has a slightly different meaning than “Consequential Decision” does in the Colorado Artificial Intelligence Act. "Decisions that produce legal or similarly significant effects concerning a consumer" means "a decision that results in the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, health-care services, or access to essential goods or services" (StackAware emphasis added). The key difference is that a Consequential Decision covers more than just provision or denial; it also includes cost and terms. Thus, if you are only making a decision about cost and terms with a High-risk Artificial Intelligence System, you do not need to provide an opt-out.
SB-205 does not define “adverse,” so I recommend documenting an internal standard.
The standard in the law is “without unreasonable delay but no later than ninety days.” Thirty days gives you plenty of time to get everything in order while, in my assessment, avoiding any claim of “unreasonable delay.”
This is not a requirement of the law, but I propose treating known or suspected Algorithmic Discrimination as an “AI incident” and triaging it rapidly.