What deployers of high-risk AI systems can ask their vendors to help with Colorado SB-205 compliance
Managing risk in the face of gray areas.
Colorado’s Artificial Intelligence Act, Senate Bill 24-205 (SB-205), changes the game for AI governance.
Coming into force on February 1, 2026, it applies a range of requirements to “High-risk Artificial Intelligence Systems.” On top of the tight timeline for compliance, the State of Colorado’s own Artificial Intelligence Impact Task Force identified many gray areas in the law.
But that won’t stop enforcement.
Because many systems already in use meet SB-205’s definition of “High-risk,” businesses need to take action now.
So we put together a sample questionnaire that deployers (i.e., companies purchasing software) can send to developers (i.e., their vendors) to address some of the law’s requirements.
This doesn’t cover developer requirements, nor is it any sort of legal catch-all. But it gives companies a place to start the conversation with their suppliers about compliance.
Dear SUPPLIER_NAME,
We have identified PRODUCT_NAME as a “High-risk Artificial Intelligence System” under Colorado’s Artificial Intelligence Act, Senate Bill 24-205 (“SB-205”). SUPPLIER_NAME is the “Developer” according to the legislation, and we, as your customer, are a “Deployer.”
According to SB-205, “On and after February 1, 2026…a Developer of a High-risk Artificial Intelligence System shall make available to the Deployer or other Developer of the High-risk Artificial Intelligence System” a range of documentation and information.
Thus, in anticipation of these regulatory requirements, for your High-risk Artificial Intelligence System (“System”), please provide the following. Capitalized terms have the same definitions as in SB-205, unless noted otherwise in this letter.
A general statement describing reasonably foreseeable uses and known harmful or inappropriate uses.
Documentation disclosing:
The purpose of the System.
The intended benefits and uses of the System.
Known or reasonably foreseeable limitations and risks, including Algorithmic Discrimination.
Summaries of the type of data used in training.
All other information necessary to allow us to comply with the requirements of Section 6-1-1703 of SB-205.
Documentation describing:
Intended outputs of the System.
How the System should:
be used
not be used
be monitored when used to make, or when it is a Substantial Factor in making, a Consequential Decision.
How you evaluate the System for performance.
Data governance measures for System training data.
Measures used to examine the suitability of data sources and possible biases, and the mitigations taken.
Measures taken to mitigate known or reasonably foreseeable risks of Algorithmic Discrimination that may arise from System deployment.
Any additional documentation reasonably necessary to help us understand the outputs and to monitor for risks of Algorithmic Discrimination.
Documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for us to complete an impact assessment pursuant to Section 6-1-1703(3) of SB-205, including:
Categories of data the System processes as inputs.
A description of transparency measures taken for the System.
The public statement on your website summarizing:
The types of High-risk Artificial Intelligence Systems (per SB-205) you have developed or made an Intentional and Substantial Modification to and currently make available outside your organization.
How you manage known or reasonably foreseeable risks of Algorithmic Discrimination that may arise from the development or Intentional and Substantial Modification of the types of High-risk Artificial Intelligence Systems described above.
Your processes for updating the above statement:
To ensure it remains accurate; and
Within 90 days of the Intentional and Substantial Modification of any High-risk Artificial Intelligence System described within.
Your process for notifying us of known or reasonably foreseeable risks of Algorithmic Discrimination arising from intended uses of the System, within 90 days of:
Your discovery, through ongoing testing and analysis, that the System has caused or is reasonably likely to have caused Algorithmic Discrimination.
Credible reports that the System has been deployed and has caused Algorithmic Discrimination.