How StackAware does AI impact assessments for ISO 42001
Our step-by-step guide for meeting a key certification requirement.
What is an AI impact assessment?
The ISO 42001 standard says it is a way to
determine the potential consequences an AI system’s deployment, intended use[,] and foreseeable misuse has on individuals or groups of individuals, or both, and societies. (Clause 6.1.4)
Compare this to an AI risk assessment, which is the way to
assess the potential consequences to the organization, individuals and societies that would result if the identified risks were to materialize. (Clause 6.1.2)
Combined with the fact that organizations “shall consider the results of the AI system impact assessment in the risk assessment,” (Clause 6.1.4) I found it quite difficult to separate the two concepts.
To deal with this challenge, StackAware assesses AI at three levels:
Organization - risk assessment only
Individual(s) - risk and impact assessments
Society(ies) - risk and impact assessments
ISO 23894, which gives additional guidance on AI risk management, mirrors this three-level structure (while using the term “communities” along with “societies”). And the StackAware AI risk assessment application programming interface (API) breaks them out in this way too.
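The three-level split above can be sketched as a small data model. This is a hypothetical illustration of the mapping, not StackAware's actual API (`AssessmentLevel`, `APPLICABLE_ASSESSMENTS`, and `requires_impact_assessment` are names invented for this sketch):

```python
from enum import Enum


class AssessmentLevel(Enum):
    """The three levels at which AI is assessed (hypothetical model)."""
    ORGANIZATION = "organization"
    INDIVIDUAL = "individual"
    SOCIETY = "society"


# Which assessment types apply at each level, per the list above:
# organizations get a risk assessment only; individuals and societies
# get both risk and impact assessments.
APPLICABLE_ASSESSMENTS = {
    AssessmentLevel.ORGANIZATION: {"risk"},
    AssessmentLevel.INDIVIDUAL: {"risk", "impact"},
    AssessmentLevel.SOCIETY: {"risk", "impact"},
}


def requires_impact_assessment(level: AssessmentLevel) -> bool:
    """Return True if an AI impact assessment applies at this level."""
    return "impact" in APPLICABLE_ASSESSMENTS[level]
```

Keeping the mapping in one table makes the separation between the two concepts explicit: a risk assessment always runs, while an impact assessment only applies at the individual and societal levels.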
How should I do an AI impact assessment?
Thankfully Annex B of ISO 42001 gives some guidance, which I have consolidated into this checklist:
Determine if an impact assessment is appropriate based on the:
criticality of the intended purpose and context in which the AI system is used
complexity and level of automation of the AI system
sensitivity of data processed by the AI system
If proceeding with the assessment, evaluate:
the system’s:
applicable jurisdictions
technical and societal context where deployed
intended use and foreseeable misuse
predictable failures, their potential impacts, and measures taken to mitigate them
complexity
transparency and explainability
security and privacy
accessibility
operating organization’s employment and staff skilling
how the AI system affects individuals, specifically their:
relevant demographic groups
legal position
life opportunities
safety and physical well-being
psychological safety and well-being
health
finances
human rights
specific protection needs such as for:
children
impaired persons
elderly persons
workers
how the AI system affects communities, societies, and universal human rights, specifically as it relates to:
environmental sustainability, including the impacts on:
natural resources
greenhouse gas emissions
economics, including:
access to financial services
employment opportunities
taxes
trade
commerce
government, including:
legislative processes
misinformation for political gain
national security and criminal justice systems
health and safety, including:
access to healthcare
medical diagnosis and treatment
potential physical and psychological harms
norms, traditions, culture and values, including misinformation that leads to biases or harms to individuals or groups of individuals, or both, and societies.
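The screening step at the top of this checklist, deciding whether an impact assessment is appropriate at all, could be sketched as a simple triage function. The scoring scheme below is purely illustrative and not from ISO 42001; the field names and threshold are assumptions made for this sketch:

```python
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    """Screening inputs from the Annex B checklist (scales are illustrative)."""
    purpose_criticality: int  # 1 (low) to 5 (high): criticality of intended purpose/context
    automation_level: int     # 1 (human-in-the-loop) to 5 (fully autonomous)
    data_sensitivity: int     # 1 (public data) to 5 (special-category data)


def impact_assessment_warranted(profile: AISystemProfile, threshold: int = 9) -> bool:
    """Hypothetical triage: proceed to a full impact assessment when the
    combined score of the three screening factors meets a threshold."""
    score = (
        profile.purpose_criticality
        + profile.automation_level
        + profile.data_sensitivity
    )
    return score >= threshold
```

For example, an autonomous system processing health data would score high on all three factors and trigger the full evaluation above, while a low-criticality internal tool running on public data might not. Any real scheme would need to be defined and justified in your AI management system documentation.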
Does that seem like an in-depth process? Do you still want to get ISO 42001 certified?
Through the AI Management System (AIMS) Accelerator, StackAware helps AI-powered companies in:
Financial services
Healthcare
B2B SaaS
get ISO 42001-ready in 90 days.
Ready to get started?