How StackAware does AI impact assessments for ISO 42001
Our step-by-step guide for meeting a key certification requirement.
What is an AI impact assessment?
The ISO 42001 standard says it is a way to
determine the potential consequences of an AI system’s deployment, intended use[,] and foreseeable misuse has on individuals or groups of individuals, or both, and societies. (Clause 6.1.4)
Compare this to an AI risk assessment, the way to
assess the potential consequences to the organization, individuals and societies that would result if the identified risks were to materialize. (Clause 6.1.2)
Combined with the fact that organizations “shall consider the results of the AI system impact assessment in the risk assessment,” (Clause 6.1.4) I found it quite difficult to separate the two concepts.
To deal with this challenge, StackAware assesses AI at three levels:
Organization - model, system, and risk assessments
Individual(s) - model, system, impact, and risk assessments
Society(ies) - model, system, impact, and risk assessments
ISO 23894, which gives additional guidance on AI risk management, mirrors this three-level structure (while using the term “communities” along with “societies”). And the StackAware AI risk assessment application programming interface (API) breaks them out in this way too.
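For illustration only (this is not the actual StackAware API schema), the three levels could be represented along these lines:

```python
from dataclasses import dataclass
from enum import Enum

class AssessmentLevel(Enum):
    """The three levels at which AI is assessed (hypothetical names)."""
    ORGANIZATION = "organization"   # model, system, and risk assessments
    INDIVIDUALS = "individuals"     # model, system, impact, and risk assessments
    SOCIETIES = "societies"         # model, system, impact, and risk assessments

@dataclass
class RiskEntry:
    """One identified risk, broken out by the level it affects."""
    level: AssessmentLevel
    description: str
    treatment: str = "TBD"
```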
How should I do an AI impact assessment?
Thankfully, Annex B of ISO 42001 and ISO 42005 (which is focused solely on AI system impact assessments) give some guidance. I have consolidated this into the procedure below.
1. AI Model Assessment
Evaluate:
Algorithm types
Optimization methods
Tools to aid in development
Also look at the underlying training data's:
Quality
Categories
Provenance
Intended use
Known or potential bias
Last update or modification
Conditioning tools & techniques
This spans ISO 42001 Annex A controls 4.2-4.4, 6.1.2-6.2.3, and 7.2-7.6. And it is very similar to the process described in ISO 42005’s Annex E.2.3-E.2.4.
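As a rough sketch, the model-level attributes above could be tracked in a worksheet structure like this (the field names are my own, not taken from either standard):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelAssessment:
    """Illustrative worksheet fields for step 1, the AI model assessment."""
    algorithm_types: List[str]
    optimization_methods: List[str]
    development_tools: List[str]
    # Underlying training data attributes
    data_quality: str
    data_categories: List[str]
    data_provenance: str
    data_intended_use: str
    known_or_potential_bias: Optional[str]
    last_updated: str                          # e.g. an ISO 8601 date
    conditioning_tools_and_techniques: List[str]
```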
2. AI System Assessment
Look at real-world deployment of the model along with supporting infrastructure, specifically evaluating:
Complexity
Intended purpose
Frequency/timing of use
Accessibility and usability
Health and safety impacts
Testing and release criteria
Accountability and human oversight
Data retention and disposal policies
Data classifications/sources processed
Transparency, explainability, and interpretability
Reliability, observability, logging, and monitoring
Physical location (including applicable jurisdictions)
Software & hardware for development & deployment
This overlaps with some model assessment-specific controls for ISO 42001 and also covers all of Annex A.6.
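Illustratively, the system-level attributes could be kept as a simple checklist so that anything left blank is visible later (the field names below paraphrase the list above and are not an official template):

```python
from typing import Dict

# Step 2 attributes as a flat checklist (illustrative names only)
SYSTEM_ASSESSMENT_FIELDS = [
    "complexity",
    "intended_purpose",
    "frequency_and_timing_of_use",
    "accessibility_and_usability",
    "health_and_safety_impacts",
    "testing_and_release_criteria",
    "accountability_and_human_oversight",
    "data_retention_and_disposal_policies",
    "data_classifications_and_sources",
    "transparency_explainability_interpretability",
    "reliability_observability_logging_monitoring",
    "physical_location_and_jurisdictions",
    "development_and_deployment_software_and_hardware",
]

def blank_system_assessment() -> Dict[str, str]:
    """Return an empty worksheet; gaps left here can surface as risks in step 4."""
    return {field: "" for field in SYSTEM_ASSESSMENT_FIELDS}
```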
3. AI Impact Assessment
Next, we determine if an impact assessment is appropriate based on the:
Criticality of the intended purpose and context in which the AI system is used
Complexity and level of automation of the AI system
Sensitivity of data processed by the AI system
For audit purposes, I recommend either:
Having a clear test for whether a system merits an impact assessment or not; or
Just doing impact assessments for every system
For example, a healthcare organization could limit impact assessments to systems that process protected health information (PHI).
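One way to make that test auditable is to encode it explicitly. The thresholds below (criticality, automation level, and PHI processing) are illustrative assumptions, not StackAware's actual criteria:

```python
def requires_impact_assessment(
    criticality: str,          # e.g. "low", "medium", "high"
    automation_level: str,     # e.g. "human-in-the-loop", "fully-automated"
    processes_phi: bool,       # sensitive data, per the healthcare example above
) -> bool:
    """Illustrative gate for step 3: any single trigger mandates an impact assessment."""
    return (
        criticality == "high"
        or automation_level == "fully-automated"
        or processes_phi
    )

# A PHI-processing system qualifies regardless of its other attributes
assert requires_impact_assessment("low", "human-in-the-loop", processes_phi=True)
```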
If a system qualifies, StackAware then evaluates the categories of people and societies impacted. Through that lens, we look at the implications for:
Economics, including:
Access to financial services
Employment opportunities
Commerce
Taxes
Trade
Environmental sustainability, such as:
Natural resource consumption
Greenhouse gas emissions
Legal, governmental, and public policy, as it relates to things like:
National security and criminal justice systems
Misinformation for political gain
Legislative processes
Normative, societal, cultural, and human rights, such as those touching on:
Psychological safety and well-being
Life opportunities
Cultural biases
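One compact way to record the findings is a per-group record keyed by the categories above (the structure itself is just a sketch, not a StackAware template):

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Category keys follow the four impact dimensions listed above
IMPACT_CATEGORIES = [
    "economic",
    "environmental_sustainability",
    "legal_governmental_public_policy",
    "normative_societal_cultural_human_rights",
]

@dataclass
class ImpactAssessment:
    """Impacts on a specific group of people or a society, by category."""
    affected_group: str                                   # e.g. "patients", "clinicians"
    findings: Dict[str, List[str]] = field(
        default_factory=lambda: {c: [] for c in IMPACT_CATEGORIES}
    )
```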
4. AI Risk Assessment
Using steps 1-3, we look at the probable frequency and magnitude of future loss.
Information gaps often become risks themselves, as do predictable failures.
For organizational risk, I use the "Rapid Risk Audit" approach from Doug Hubbard and Richard Seiersen.
This gives a quantitative annual loss expectancy (ALE), which is easy to compare to one's risk appetite.
I then compare individual and societal risks against the client's risk criteria to determine their acceptability.
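To make the ALE concrete, here is a minimal Monte Carlo sketch of how an annual loss expectancy could be computed from an event probability and a 90% confidence interval for loss. It is a simplified illustration in the spirit of a quantitative approach, not the full Rapid Risk Audit, and the inputs are made up:

```python
import random
import statistics
from math import log

def annual_loss_expectancy(
    p_event: float,      # estimated probability the loss event occurs in a year
    loss_lb: float,      # lower bound of a 90% confidence interval for the loss
    loss_ub: float,      # upper bound of the same interval
    trials: int = 100_000,
) -> float:
    """Simulate one year many times; loss magnitude modeled as lognormal."""
    mu = (log(loss_lb) + log(loss_ub)) / 2
    sigma = (log(loss_ub) - log(loss_lb)) / 3.29   # 3.29 ~ width of a 90% interval in std devs
    losses = [
        random.lognormvariate(mu, sigma) if random.random() < p_event else 0.0
        for _ in range(trials)
    ]
    return statistics.mean(losses)

# Illustrative inputs: 5% annual chance of an incident costing $50k-$500k
print(f"ALE ~= ${annual_loss_expectancy(0.05, 50_000, 500_000):,.0f}")
```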
Check out StackAware’s AI governance templates for purpose-built worksheets that track all of this.
Does this seem like an in-depth process, but you still want to get ISO 42001 certified?
Through the AI Management System (AIMS) Accelerator, StackAware helps AI-powered companies in:
Financial services
Healthcare
B2B SaaS
get ISO 42001-ready in 90 days.
Ready to get started?