Securely fast-track AI projects in 5 quick steps
Breaking out of the "chicken and egg" paradox.
I recently spoke with a CISO about what he concisely described as the “chicken and egg” problem with AI governance. It goes like this:
Business teams are still figuring out their exact use cases for AI, and there will definitely be some abandoned projects along the way.
Security, legal, compliance, and privacy teams are concerned about sensitive data exposure and intellectual property risk.
Technology organizations (rightfully) don’t want to be the decision makers here but also want to avoid wasting precious resources on experiments that don’t pan out.
After reflecting on this paradox a bit, here is what I recommended:
1. Get a single business leader to spell out an AI-driven project’s potential outcomes.
This could take the form of three cases:
“bear” (no value generated)
“base” (industry-pacing value generated)
“bull” (home-run, paradigm-shifting value gains)
Ask this person to document these scenarios, preferably putting numbers (revenue gained, churn avoided, etc.) to them.
If you can’t get someone to sign up for this exercise, then I would say the proposed project is not worth anyone’s time.
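To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical probabilities and dollar figures) of what probability-weighting the three scenarios might look like:

```python
# Hypothetical bear/base/bull scenarios for a proposed AI project.
# The business leader supplies the real probabilities and values.
scenarios = {
    "bear": {"probability": 0.30, "annual_value_usd": 0},          # no value generated
    "base": {"probability": 0.55, "annual_value_usd": 500_000},    # industry-pacing value
    "bull": {"probability": 0.15, "annual_value_usd": 5_000_000},  # paradigm-shifting value
}

# The three cases should cover the full outcome space.
assert abs(sum(s["probability"] for s in scenarios.values()) - 1.0) < 1e-9

expected_value = sum(
    s["probability"] * s["annual_value_usd"] for s in scenarios.values()
)
print(f"Probability-weighted annual value: ${expected_value:,.0f}")  # $1,025,000
```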
2. Get this single business leader to identify an overall risk appetite.
Optimally this would be in quantitative (i.e. dollar) terms. But sometimes scenario-driven criteria are all you’ll be able to agree on. For example:
malicious theft of PII
“denial of wallet” attacks draining SaaS API credits
accidental exposure of intellectual property to 3rd party AI training
Get this business leader to map out what s/he could stomach, given the gains to be had (per step 1).
If this person says the risk appetite is “zero,” advise shutting down your company and hiding under a rock somewhere.
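If you can get to dollar terms, a rough annualized loss expectancy (ALE) comparison can anchor the conversation. A minimal sketch, with made-up frequencies and loss magnitudes:

```python
# ALE = expected events per year * expected loss per event (all figures hypothetical).
loss_scenarios_usd = {
    "malicious theft of PII":                  0.05 * 2_000_000,
    "denial-of-wallet attack on SaaS credits": 0.50 * 50_000,
    "IP exposed to 3rd party AI training":     0.10 * 1_000_000,
}

risk_appetite_usd = 300_000  # what the business leader says s/he can stomach

total_ale = sum(loss_scenarios_usd.values())
print(f"Total ALE: ${total_ale:,.0f} vs. appetite: ${risk_appetite_usd:,.0f}")
if total_ale > risk_appetite_usd:
    print("Risk exceeds appetite: mitigate, transfer, or revisit the step 1 gains.")
```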
3. Develop - and get buy-in for - a clear procedure for approving new experiments and prototypes.
This will require input from all the people I've mentioned so far, and can include steps like:
Documenting the business case
Analyzing cyber risk
Reviewing contracts
Assessing data privacy impacts
Decommissioning (if ending the project)
Granting official approval (if successful)
Get sign-off from everyone on the process itself, so you don’t need to reinvent the wheel every time. And don’t make innovators grind through a bureaucratic maze.
StackAware’s AI risk management SOP can serve as an example.
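One way to make such a procedure repeatable is to encode the gates as data. Below is a rough sketch; the stage names and owners are assumptions for illustration, not StackAware’s actual SOP:

```python
from dataclasses import dataclass

@dataclass
class ApprovalGate:
    name: str
    owner: str            # accountable team
    completed: bool = False

# Hypothetical gate sequence mirroring the steps listed above.
AI_EXPERIMENT_GATES = [
    ApprovalGate("Document the business case", "business"),
    ApprovalGate("Analyze cyber risk", "security"),
    ApprovalGate("Review contracts", "legal"),
    ApprovalGate("Assess data privacy impacts", "privacy"),
]

def ready_for_decision(gates: list[ApprovalGate]) -> bool:
    """Every gate must be complete before official approval or decommissioning."""
    return all(gate.completed for gate in gates)
```

Because the gate list is agreed once, each new experiment just moves through it instead of relitigating the process.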
4. Get clear success/failure criteria for each new experiment from the business team.
These AI pilot programs should end in one of two ways:
the business pulls the plug on experiments that don’t pan out.
winners emerge and become formal programs of record.
Ensure your process accounts for both outcomes. And don’t let the business keep projects that miss their success criteria going without subsequent, formal risk acceptance.
Put these low performers out of their misery.
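Codifying the criteria up front keeps both outcomes honest. A hedged sketch of what that could look like (metric names and thresholds are made up):

```python
# Hypothetical success criteria agreed with the business team before launch.
success_criteria = {
    "support_tickets_deflected_pct": 20.0,
    "avg_handle_time_reduction_pct": 10.0,
}

def pilot_decision(measured: dict[str, float]) -> str:
    """Promote the pilot or kill it; nothing in between without formal risk acceptance."""
    if all(measured.get(metric, 0.0) >= target
           for metric, target in success_criteria.items()):
        return "promote to program of record"
    return "decommission (or escalate for formal risk acceptance)"

print(pilot_decision({"support_tickets_deflected_pct": 25.0,
                      "avg_handle_time_reduction_pct": 8.0}))
# -> decommission (or escalate for formal risk acceptance)
```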
5. Continuously monitor productivity/innovation gains vs. cyber, privacy, and legal risk.
As part of the process developed in step 3, schedule regular check-ins with stakeholders to review agreed-upon metrics.
And just as you shouldn’t let the business move the goalposts of success without affirmatively signing off on the continued risk, be prepared to re-evaluate the risk of ongoing efforts.
If new risks emerge that exceed what the business leader approved, it should be up to him/her to make the call about whether to continue. If the answer is “keep going,” though, there had better be some newly-identified benefits from the project!
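A simple check like the following (all inputs hypothetical, reusing the step 2 quantification) can make the escalation trigger unambiguous:

```python
def needs_reapproval(current_ale_usd: float, approved_appetite_usd: float,
                     measured_value_usd: float, projected_value_usd: float) -> bool:
    """Flag the project for the business leader when risk now exceeds what was
    approved, or realized value is falling short of projections."""
    return (current_ale_usd > approved_appetite_usd
            or measured_value_usd < projected_value_usd)

# Hypothetical quarterly check-in figures.
if needs_reapproval(current_ale_usd=400_000, approved_appetite_usd=300_000,
                    measured_value_usd=200_000, projected_value_usd=500_000):
    print("Escalate: the business leader must re-accept the risk or wind down.")
```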
Bottom line
Innovating securely isn’t easy. There is going to be a give and take between key stakeholders throughout the process.
But security exists to enable the business, not get in the way of AI-powered innovation. So if you need help applying guardrails when it comes to:
Cybersecurity
Compliance
Privacy
without grinding progress to a halt, get in touch.