Seven ISO 42001 Annex A controls you might be able to exclude
Tailoring AI risk management to your unique situation.
You don’t need all Annex A controls for ISO 42001 certification. None of them are “required.” But you’ll need to justify their exclusion if you don’t think they apply.
One good reason?
If you aren’t training AI models yourself but rather consuming or wrapping them in applications.
This means you won’t have access to a lot of technical data about the training process. So excluding certain Annex A controls might make sense.¹
Here are seven you might consider excluding:
A.4.3: Data resources
This includes information about:
Retention
Intended use
Update/modification
Quality (duplicating A.7.4 in my opinion)
Provenance (duplicative as well, of A.7.3 and A.7.5)
of “data resources utilized for the AI system.” I’ll note it doesn’t specifically say “training,” so this can include data the AI system processes, not just data used to train it.
A.4.4: Tooling resources
Things like:
Algorithm types
Optimization methods
Data conditioning processes
are covered by this control. They are all difficult to get unless you train the model yourself.
A.4.5: System and computing resources
Documenting the:
Hardware used
Physical location
Resources required
for the AI system would fulfill this control. This is challenging to do unless you have a very transparent model provider.
A.7.3: Acquisition of data
A broad control you can summarize as “data governance.” It requires noting:
Sources
Categories
Quantities
Demographics/biases
Data rights/ownership
Privacy and security requirements
of information used with your AI systems. This is a toss-up in terms of exclusion, because I think you should be doing this stuff anyway.
Check out StackAware’s data classification and tagging practices for examples of how we do it.
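To make this concrete, here is a minimal sketch of what a per-source acquisition record could look like. The field names are my own illustrative mapping to the A.7.3 points above, not anything the standard prescribes:

```python
from dataclasses import dataclass

# Hypothetical record capturing the A.7.3 documentation points for one
# data source; all field names and values are illustrative assumptions.
@dataclass
class DataAcquisitionRecord:
    source: str                      # where the data came from
    categories: list[str]            # types of information involved
    record_count: int                # approximate quantity
    known_biases: str                # demographic skews or other bias notes
    rights_holder: str               # who owns or licenses the data
    privacy_requirements: list[str]  # applicable privacy regimes
    security_classification: str     # handling level

record = DataAcquisitionRecord(
    source="CRM export",
    categories=["customer contact info"],
    record_count=12_000,
    known_biases="skews toward US-based customers",
    rights_holder="StackAware",
    privacy_requirements=["GDPR", "CCPA"],
    security_classification="confidential",
)
print(record)
```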
A.7.4: Quality of data
ISO 25024 defines data quality as the degree to which the data’s characteristics satisfy stated and implied needs when used under specified conditions.
You may have an argument that excluding this requirement wouldn’t impact your AI management system (AIMS) if you aren’t training models yourself.
But the quality of the model itself (and provider, if applicable) becomes critical. So your risk and impact assessment here will need to be solid.
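If you do keep this control, even a lightweight automated check can show you are measuring quality characteristics against stated needs. A minimal sketch using pandas, with made-up columns and thresholds:

```python
import pandas as pd

# Toy dataset; the columns and thresholds below are invented for illustration.
df = pd.DataFrame({
    "email": ["a@x.com", "b@x.com", "b@x.com", None],
    "signup_date": ["2024-01-02", "2024-01-03", "2024-01-03", "2024-01-04"],
})

# Two common quality characteristics, checked against stated needs
completeness = df["email"].notna().mean()  # share of non-null values
uniqueness = df["email"].dropna().nunique() / df["email"].notna().sum()

assert completeness >= 0.70, "completeness below stated need"
assert uniqueness >= 0.60, "uniqueness below stated need"
```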
A.7.5: Data provenance
This is information about data’s:
Update
Creation
Validation
Abstraction
Transcription
Transfer of control
This is a toss-up in terms of being able to exclude. I think these practices are part of any sound data governance program. But you might be able to make the case they don’t apply.
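If you keep it, provenance tracking doesn’t need to be heavyweight. Here is a minimal sketch of an append-only provenance log covering the lifecycle events above; the dataset names, actors, and file path are made up:

```python
import json
from datetime import datetime, timezone

# Append one JSON line per lifecycle event (creation, update, validation,
# abstraction, transcription, transfer of control) for a given dataset.
def log_provenance_event(dataset_id: str, event: str, actor: str,
                         path: str = "provenance.jsonl") -> None:
    entry = {
        "dataset_id": dataset_id,
        "event": event,
        "actor": actor,  # person or system responsible
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_provenance_event("customer-crm-2024", "creation", "data-eng-pipeline")
log_provenance_event("customer-crm-2024", "validation", "jane.doe")
```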
A.7.6: Data preparation
This control requires documenting granular steps in the model training process like:
Encoding
Data cleaning
Normalization
It’s highly unlikely you would have direct access to this information unless you were training the model yourself.
Otherwise, excluding it is reasonable.
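For context, here is a toy sketch of what those three steps look like in practice if you were the one training the model; the columns and policies are invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "age": [34, None, 45, 29],
    "income": [52_000, 61_000, None, 48_000],
    "segment": ["smb", "enterprise", "smb", "mid-market"],
})

# Data cleaning: drop rows with missing values (one of many possible policies)
df = df.dropna()

# Normalization: rescale numeric features to the [0, 1] range
for col in ("age", "income"):
    df[col] = (df[col] - df[col].min()) / (df[col].max() - df[col].min())

# Encoding: one-hot encode the categorical feature
df = pd.get_dummies(df, columns=["segment"])
print(df)
```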
Need help deciding on (and implementing) your Annex A controls for ISO 42001 certification?
StackAware will be one of the first-ever companies to get certified under this global AI governance standard. On top of our own experience, we’re already getting other AI-powered companies ready for their audits as well.
So if you need help getting ISO 42001 certified:
¹ StackAware does not train models, but we did include all Annex A controls in our own ISO 42001 audit. We reviewed our subprocessors’ documentation (mainly OpenAI’s) and accepted any gaps as known risks.