[Part 2] How to develop high-risk artificial intelligence systems and still comply with Colorado SB-205
A procedure for how to manage risk if your company is building "High-risk Artificial Intelligence Systems."
With the collapse of efforts to ban state-level AI regulation until 2035, the Colorado Artificial Intelligence Act (Senate Bill 24-205 or SB-205) will take effect on February 1, 2026.
As I wrote previously, SB-205 will have a massive impact on the American AI governance landscape. So to complement the Deployer compliance procedure I put together, I am now releasing a Developer version of it.
This procedure presumes:
Your company is only a “Developer” of a “High-risk Artificial Intelligence System.” If your company is also a “Deployer,” incorporate this procedure as well. Note that Data Owners are not required to create the documentation described below in this case [6-1-1702 (3) (b)].
You already have an ISO 42001-compliant or NIST AI RMF-adherent risk management policy and program. Interestingly enough, this is not required for Developers (like it is for Deployers), but having such a program is an affirmative defense against allegations of some violations under certain conditions [6-1-1706 (3)]. So it’s probably a good idea to have one.
Your company doesn’t qualify for any of the (very complex) carve-outs in the law.
Attempting to identify and selectively apply SB-205’s requirements to Colorado “Consumers” (residents of Colorado) per the law is too difficult, and you will just apply it to all of your customers (except any notification requirements to the Colorado Attorney General).
It is February 1, 2026 or later.
Finally:
References to specific sections of the law are in brackets ([]).
Capitalized terms are defined in SB-205.
This is not legal advice.
Scope
Ensure the responsible and ethical development of Artificial Intelligence Systems as well as compliance with the Colorado Artificial Intelligence Act (SB-205).
Scope
All Artificial Intelligence Systems COMPANY_NAME develops that impact (or could in the future impact) residents of the State of Colorado.
Requirements
The Chief Information Security Officer must:
Affirmatively document whether a given System is a High-Risk Artificial Intelligence System per SB-205.
If so, ensure adversarial testing or red teaming of the System at least annually [6-1-1706 (3) (a) (II)].1
Data owners of High-risk Artificial Intelligence Systems must:
Use reasonable care—including ensuring a review of the system prior to use, and at least annually thereafter [6-1-1702 (1)]—to ensure that the System does not cause Algorithmic Discrimination [6-1-1703].
Ensure any System with which an end user interacts discloses it is an Artificial Intelligence System [6-1-1704 (1)].2
Prior to allowing any third-party Deployer access to the System, provide the below to the Deployer and confirm receipt in writing [6-1-1702 (2)]:
System purpose [6-1-1702 (2) (b) (III)].
Intended System use [6-1-1702 (2) (c) (V)].
Prohibited System uses for Deployers [6-1-1702 (2) (c) (V)].
Intended benefits and uses of the System [6-1-1702 (2) (b) (IV)].
Intended System outputs [6-1-1702 (2) (c) (III)] and how to monitor them [6-1-1702 (2) (d)].
Reasonably foreseeable uses and known harmful or inappropriate System uses [6-1-1702 (2) (a)].
Known and reasonably foreseeable limitations of the System [6-1-1702 (2) (b) (II)].
Known or reasonably foreseeable risks of Algorithmic Discrimination arising from intended System use [6-1-1702 (2) (b) (II)].
Measures taken to mitigate such risks [6-1-1702 (2) (c) (IV)].
Methods to monitor such risks [6-1-1702 (2) (d)].
How the System was evaluated for mitigation of Algorithmic Discrimination [6-1-1702 (2) (c) (I)].
Recommend human oversight measures when making, or being a Substantial Factor in, a Consequential Decision [6-1-1702 (2) (c) (V)].
Summaries of the type of data used to train the System [6-1-1702 (2) (b) (I)].
How the System was evaluated for performance [6-1-1702 (2) (c) (I)].
Data governance measures for training data including [6-1-1702 (2) (c) (II)]:
Measures used to examine the suitability.
Possible biases.
Appropriate mitigation(s).
Any other information required for the Deployer to comply with the StackAware Colorado SB-205 Deployer procedure [6-1-1702 (2) (b) (V) and 6-1-1702 (3) (a)].
Prior to System deployment [6-1-1702 (4) (a)], Intentional and Substantial Modification [6-1-1702 (4) (b) (II)], and otherwise at least annually, review COMPANY_NAME web site and ensure the following are available:
System name and type [6-1-1702 (4) (a) (I) ]
Methods for managing known or reasonably foreseeable risks of Algorithmic Discrimination arising from System use [6-1-1702 (4) (a) (II)].
The General Counsel must:
Within 30 days of discovery of a known or reasonably foreseeable risk of Algorithmic Discrimination arising from the intended use of the System impacting a Colorado resident, notify [6-1-1702 (5)]:
All known Deployers of the System
All other Developers of the System
The Attorney General of Colorado in the manner that office prescribes.
All employees and contractors must:
Report, within 24 hours, to the General Counsel any known, suspected, or imminent occurrences of Algorithmic Discrimination.
This is not a requirement of the law, but I recommend doing this because, according to SB-205, Section 6-1-1706 (3):
“In any action commenced by the attorney general to enforce this Part 17, it is an affirmative defense that the Developer, Deployer, or other person…Discovers and cures a violation of this Part 17 as a result of…adversarial testing or red teaming, as those terms are defined or used by the National Institute of Standards and Technology.”
As of July 2025, NIST does not appear to define “adversarial testing” but defines “artificial intelligence red-teaming” as a “structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.”
Per 6-1-1704 (2), “Disclosure is not required…under circumstances in which it would be obvious to a reasonable person that the person is interacting with a Artificial intelligence system.” With that said, it’s just easier to implement a blanket disclosure requirement.