How to get auditors and customers off your back while winning with AI
AI mythbuster manual: 3 ways transparency saves time and builds trust.
Will transparency about how you use AI work against you?
Long-term, no.
While it may take some initial effort to calibrate your communications, in the end your openness will become an asset. Here are some common misconceptions:
Myth #1: auditors will ask tougher questions if we document AI-related controls
For standards like SOC 2, it is true that you define your own controls. And you might be tempted to simply omit references to AI security from your policies and procedures to protect yourself from scrutiny.
Auditors, however, are rapidly catching up and are likely to ask hard questions. The SOC 2 Trust Services Criteria, especially:
Security
Confidentiality
Processing integrity
are all impacted by AI use, and your security program needs to address them.
So if you are not prepared with a solid response, this can open the door to more probing.
In my experience, the opposite approach of “extreme compliance” has worked well. Whenever an auditor requests evidence of how you are meeting a certain requirement, drop an avalanche of data on them. They’ll either:
a) just assume you have your stuff together; or
b) go through it all and confirm this to be the case.
Your biggest friends here (see the sketch after this list):
Version controls
Structured databases
A single source of truth
Machine-readable formats
Auto-generated paper trails
Need evidence? The auditors of one of my clients told me as much.
Myth #2: customers will get suspicious the more we tell them about AI
If you hide the ball about your AI training and processing, you might be able to keep things under wraps…for a while. Customers - and CISOs - talk to each other, though.
And they are already asking hard questions.
You can’t indefinitely stonewall people who are paying you, so eventually you’ll need to give a clear answer. Better to get everything out there all at once rather than having a steady drip of revelations about what you are doing.
Legalistic or stealthy approaches - like tweaking terms of service or updating your documentation, but nothing else - are likely to blow up in your face.
And some companies have already received blowback over their rollout of AI features, or even mere announcements about them.
This article has some suggestions about how to address common concerns.
Myth #3: competitors will gain an advantage from learning about how we leverage AI
When was the last time you tried to reverse-engineer what a competitor was doing from their website or other public content?
If your answer was anything aside from “never,” was this exercise helpful?
To quote Eric Ries, author of The Lean Startup, “if you can’t out-iterate someone who is trying to copy you, you’re toast anyway.”
If you are providing extremely granular detail about your AI processing, like training data and its lineage, you can protect this information using NDAs.
But otherwise, focus your time and attention on delivering value. That’s what we do: StackAware builds in public and makes 90% of our documentation public.
Bottom line: transparency is the best policy
Given the reputational (and now, due to recent FTC threats, regulatory) risk of concealing AI training and processing, being up front is the way to go.
Are you about to launch AI-powered features or products?
Do you need help communicating about your AI governance and security practices?
That’s our bread and butter. So let us give you a hand here.