Responsible AI runbook: how to talk about AI to build trust and meet compliance requirements
Be transparent, avoid platitudes, and take feedback.
How a lot of company "responsible AI" statements sound:
Transparency: "We are transparent."
Explainability: "We explain."
Ethics: "We are ethical."
Fairness: "We are fair."
Security: "We are secure."
Privacy: "We are private."
An exaggeration, but not by much.
One company goes so far as to say in its “Commitment to Responsible Development and Use of AI and ML” that it “does not disclose the AI models used in our cyber security and protection services. These details are trade secrets.” That’s at the same time it says “[w]e are clear and honest about our use of AI systems.”
Why you should care about documenting responsible AI use
Annex A control 6.1.3 of ISO/IEC 42001:2023 requires organizations to “define and document the specific processes for the responsible design and development of the AI system.” So if you are going for certification and want to include this control (you probably should), you’ll need some sound documentation in place.
Furthermore, some specific topics to address according to the ISO 42001 standard include:
Change control and release criteria requirements.
Process for AI system impact assessments.
Training data expectations and rules.
Expertise/training for AI developers.
Engagement of interested parties.
Human oversight requirements.
Testing requirements.
Not everything described in ISO 42001 A.6.1.3 needs to be included in your documentation (or made publicly available). But here are some best practices:
Best practices for communicating responsible AI use
Be transparent about processes and techniques
Companies take radically different approaches here. The one I mentioned above affirmatively says it isn’t telling you anything about its AI models. Others, like Miro, take the completely opposite approach: Miro provides a complete list of the third-party (including open source) models in its product stack.
How detailed you want to get here is of course up to you. Weigh the risk of disclosure against the reward of increased trust.
At a minimum, explain whether and how you train on customer data. StackAware has a template for exactly how to communicate this information simply. If you want to get really advanced, you can use the CycloneDX SBOM format, like we do.
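For illustration, here is a minimal sketch of what a CycloneDX (1.5 or later) document listing a single third-party model might look like. The model name, version, and supplier below are placeholders, not real entries; a real BOM would enumerate every model in your stack:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "machine-learning-model",
      "name": "example-llm",
      "version": "2024-05-13",
      "supplier": { "name": "Example AI Vendor" },
      "description": "Third-party LLM used for in-product text summarization"
    }
  ]
}
```

Because the format is machine-readable, customers and auditors can diff it between releases instead of re-reading prose disclosures.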
If you are training on confidential data, explain how you are de-identifying or (pseudo)anonymizing the information. Especially for unstructured data like free text, images, audio, or video, this can be very tricky and prone to failure. Help customers understand what risks they face.
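To make the failure modes concrete, here is a toy Python sketch of regex-based redaction. It is a minimal illustration under assumed patterns, not a production approach: anything the patterns miss flows straight into your training data, which is exactly the risk to explain to customers.

```python
import re

# Toy redaction patterns (assumptions, not exhaustive). Real-world
# de-identification of free text needs far more than regex, e.g.,
# NER-based PII detection plus human review.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 555-123-4567."))
# -> "Reach Jane at [EMAIL] or [PHONE]."
# Note that "Jane" still leaks: names need entity recognition, not regex.
```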
Avoid platitudes
“We have high ethical standards.”
“Be a good human.”
“Eliminate bias.”
These are all things I have seen companies claim (or require) as part of their responsible AI documentation. And they are all pretty weak. As I’ve said before:
Ethics are just a business requirement.
“Good human” isn’t a legal standard.
Delete your model to eliminate bias.
If you are going to put a statement of your principles out there, be ready to back them up, in detail.
Take feedback
Having a way to take feedback about your use of AI systems:
Is a proactive way to demonstrate how responsible you are.
Can meet ISO 42001 control A.3.3’s requirements for reporting concerns.
Can satisfy the NIST AI RMF recommendation to collect feedback (GOVERN 5.1).
A vulnerability disclosure program (VDP) is the bare minimum here. And I recommend including unintended AI functionality as a reportable item as part of it.
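If you run a VDP, RFC 9116’s security.txt file gives researchers a machine-discoverable pointer to it. A minimal sketch with placeholder contact details and URLs, served at /.well-known/security.txt; the linked policy is where you would explicitly scope in unintended AI functionality as reportable:

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en
```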
Even better? A formal way of collecting comments from customers, researchers, and even the public prior to launching a new AI product or feature.
Need help deploying responsible AI?
Industry standards for:
Training on customer data
Who owns the resulting intellectual property
How to communicate about these types of things
are still in flux. Getting things right and avoiding public blowback requires a well-planned approach customized for your sector, customer base, and risk appetite.
So if you are an executive at an AI-powered company in:
Financial services
Healthcare
B2B SaaS
reach out to learn how StackAware helps companies innovate responsibly through AI governance program design and ISO 42001 preparation.