How we implement ISO 42001 control A.10.3 and help clients do the same to manage AI vendor risk
AI risk management throughout the supply chain can be a compliance requirement.
ISO 42001 includes a set of suggested controls in its Annex A.
A.10.3 (Suppliers) focuses on the AI and data supply chain. Specifically, it requires certified companies to consider:
types of suppliers
what they supply
varying levels of risk posed
when determining the:
selection of suppliers
requirements placed on those suppliers
levels of ongoing monitoring and evaluation needed.
Here are three ways we do this at StackAware (and how we advise our clients to do so as well):
1. Data validation
If you are getting raw outputs from a third-party hosted AI tool, you may want to do some post-processing.
For example, if you need to classify the sentiment of a customer conversation with a Large Language Model (LLM) as exactly one of the following:
positive
neutral
negative
and the LLM instead responds “the customer seems happy with the interaction,” that output would not meet your business requirements (side note: classification isn’t a good use case for generative AI to begin with, but this is just an example).
Ways to handle these types of problems include (see the sketch after this list):
Requiring human intervention for failed validation
Treating failures as “worst case” (e.g. negative)
Giving feedback to the supplier
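To make this concrete, here is a minimal Python sketch of such validation. The label set comes from the example above; the function name, normalization, and worst-case fallback are illustrative assumptions, not a specific StackAware implementation:

```python
# Minimal sketch of post-processing validation for a third-party LLM's
# output. The label set comes from the example above; everything else
# (function name, normalization, worst-case fallback) is an assumption.
import logging

ALLOWED_LABELS = {"positive", "neutral", "negative"}

def validate_sentiment(raw_output: str) -> str:
    """Enforce that the model's response is exactly one allowed label."""
    label = raw_output.strip().strip('."').lower()
    if label in ALLOWED_LABELS:
        return label
    # Failed validation: log it (feedback for the supplier) and treat the
    # result as "worst case" (negative). Alternatively, queue the item
    # for human review instead of defaulting.
    logging.warning("LLM returned unexpected sentiment: %r", raw_output)
    return "negative"

print(validate_sentiment("negative"))  # -> "negative"
print(validate_sentiment("The customer seems happy with the interaction."))
# -> "negative", with a warning logged for follow-up
```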
2. Contractual terms
Nondisclosure agreements (NDAs) are data protection contracts.
But they are often vague about how you must protect confidential information, e.g. requiring only “commercially reasonable” measures.
So specifically with AI, StackAware is explicit that processing is allowed as long as the underlying model is not both:
trained on confidential information
available to anyone not bound by confidentiality
This forbids you from using ChatGPT in its default (training-enabled) mode, but Temporary Chats are okay.
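To make the “not both” logic concrete, here is a toy Python encoding of the clause. It is purely illustrative: the actual control is contract language, and the function and parameter names are my own, not anything from the standard or a real agreement:

```python
# Toy encoding of the clause above: processing confidential information
# with an AI tool is permitted unless BOTH conditions hold. Purely
# illustrative; the real control is contract language, not code.

def processing_allowed(trains_on_inputs: bool, model_available_to_third_parties: bool) -> bool:
    return not (trains_on_inputs and model_available_to_third_parties)

# ChatGPT in default (training-enabled) mode: both conditions true -> forbidden
print(processing_allowed(trains_on_inputs=True, model_available_to_third_parties=True))   # False

# Temporary Chat (inputs not used for training): allowed
print(processing_allowed(trains_on_inputs=False, model_available_to_third_parties=True))  # True
```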
I don’t give legal advice since I’m not a lawyer, but I did write an article on this topic with one.
3. Continuous AI risk and impact monitoring
AI-related:
terms
conditions
contractual standards
are changing all the time. Vendors are also rolling out new:
products
features
models
This means the risk picture is always in flux. A good third-party risk management program will ensure you stay on top of these things. But
wading through tons of dry legalese
tracking every new release
understanding the risk
can take up huge amounts of time.
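As a rough sketch of how part of this could be automated, here is a hypothetical Python job that diffs a vendor’s published model list against a stored snapshot and flags anything new for a risk review. The URL, JSON shape, and file path are assumptions for illustration, not a real vendor endpoint:

```python
# Hypothetical sketch: detect when a vendor ships new models by diffing
# a published list against a stored snapshot. The URL, JSON shape, and
# file path are illustrative assumptions, not a real endpoint.
import json
from pathlib import Path
from urllib.request import urlopen

SNAPSHOT = Path("vendor_models.json")
FEED_URL = "https://vendor.example.com/api/models"  # hypothetical

def fetch_current_models() -> set[str]:
    # Assumes the feed returns a JSON list of objects with an "id" field.
    with urlopen(FEED_URL) as resp:
        return {m["id"] for m in json.load(resp)}

def check_for_changes() -> set[str]:
    previous = set(json.loads(SNAPSHOT.read_text())) if SNAPSHOT.exists() else set()
    current = fetch_current_models()
    SNAPSHOT.write_text(json.dumps(sorted(current)))
    return current - previous  # anything here should trigger a risk review

if __name__ == "__main__":
    for model in sorted(check_for_changes()):
        print(f"New vendor model detected, review required: {model}")
```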
Need a better approach?
The StackAware AI risk assessment API lets you track AI risk from all of your vendors and open source models.