7 things nobody tells you about ISO 42001 certification
Hard-learned lessons from the AI governance trenches.
1. You have way more AI than you think
It’s not just what engineering deployed.
Marketing tools
Sales integrations
Hidden vendor features
Your AI attack surface is already bigger than your inventory.
If you don’t map it first, everything else breaks.
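The mapping step above is essentially a union across sources. A minimal sketch, with hypothetical system names, of why the engineering list alone undercounts the real surface:

```python
# Illustrative inventories from different sources; all names are hypothetical.
engineering = {"fraud-detection-model", "support-chatbot"}
marketing = {"copy-generator", "support-chatbot"}
vendor_features = {"crm-lead-scoring", "email-autocomplete"}

# The full AI surface is the union across every source,
# not just what engineering deployed.
inventory = engineering | marketing | vendor_features
print(len(engineering), len(inventory))  # the gap is your blind spot
```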
2. “High / Medium / Low” risk is useless
AI (or any) risk doesn’t fit clean labels.
Everything becomes “medium.”
The shift that works?
Quantify risk in dollars
When you do that:
You can justify controls
You can prioritize correctly
Leadership actually engages
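One common way to do this (a FAIR-style expected-loss calculation; the risks, likelihoods, and dollar figures below are purely illustrative) is to rank risks by annualized expected loss instead of a label:

```python
# Hypothetical risk register. Every number here is an assumption
# for illustration, not real data.
risks = [
    {"name": "Vendor LLM leaks customer data", "annual_likelihood": 0.05, "impact_usd": 2_000_000},
    {"name": "Marketing tool ships infringing content", "annual_likelihood": 0.20, "impact_usd": 150_000},
    {"name": "Sales copilot hallucinates pricing", "annual_likelihood": 0.50, "impact_usd": 40_000},
]

def annualized_loss(risk):
    """Expected annual loss in dollars: likelihood x impact."""
    return risk["annual_likelihood"] * risk["impact_usd"]

# Rank by expected dollar loss, not by High/Medium/Low.
for r in sorted(risks, key=annualized_loss, reverse=True):
    print(f"{r['name']}: ${annualized_loss(r):,.0f}/year")
```

Note how the "scary" high-impact risk and the "routine" high-frequency risk end up on one comparable scale, which is what lets you justify spending on controls.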
3. Overly strict policies create Shadow AI
“Don’t create IP risk with AI” sounds good.
But what does it actually mean?
The companies that win:
Involve the business
Create usable policies
Enable adoption within risk appetite
4. The standard tells you what, not how
ISO 42001 says:
“Do impact assessments.”
It doesn’t tell you:
What questions to ask
How to scope them
What auditors expect
This is where most DIY efforts die.
5. Security shouldn’t own risk decisions
Security advising ≠ security deciding.
If security owns risk, everything gets blocked (and, from security's seat, logically should be).

The model that works:
Security advises
Business decides
And owns accountability for outcomes
6. Certification is not the finish line
Most programs drift within 6 months.
Why?
Models update
Vendors change
Regulations shift
If your system isn’t continuously updated:
Your certification becomes fiction.
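One simple mechanical guard against drift is flagging register entries that haven't been reviewed recently. A minimal sketch, with hypothetical system names and a 90-day interval chosen purely for illustration:

```python
from datetime import date, timedelta

# Hypothetical review log: systems and dates are illustrative.
last_reviewed = {
    "customer-support-llm": date(2025, 1, 10),
    "sales-scoring-model": date(2024, 6, 1),
}

REVIEW_INTERVAL = timedelta(days=90)  # assumed cadence, tune to your program

def stale_entries(reviews, today):
    """Return systems whose last review is older than the interval."""
    return [name for name, d in reviews.items() if today - d > REVIEW_INTERVAL]

print(stale_entries(last_reviewed, date(2025, 3, 1)))  # ['sales-scoring-model']
```

Run something like this on a schedule and the register stays a living document instead of an audit artifact.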
7. The ROI is there
Yes, it reduces regulatory risk.
But the real upside:
Faster enterprise sales
Shorter security reviews
Clear answers during procurement
Bottom line
Many approach ISO 42001 like a compliance project.
The ones that succeed treat it like a business system for scaling AI securely, safely, and responsibly.
Need help getting ISO 42001 ready?