The 3 biggest mistakes security leaders make with AI governance (and how to avoid them)
The frustrating reality security leaders face
Have you felt off-track when trying to build out your AI governance, risk management, and compliance (GRC) program? You’re not alone. Despite their best efforts, even seasoned cybersecurity pros struggle.
Whether it’s having:
An incomplete (or no) AI asset inventory
A confusing data classification policy
AI policy loopholes and vagueness
the road to success is riddled with pitfalls and frustrations. But it doesn’t have to be that way.
Why you’re stuck (and what to do instead)
The reason so many people get stuck is that they’re making some common yet easily avoidable mistakes. Once you understand these missteps, you can start taking the right actions to get your governance program up and running.
Here are 3 of the biggest blunders holding most people back:
1. Incomplete (or no) AI asset inventory
It’s impossible to protect your data if you don’t know where it is. But unfortunately, this is the stark reality for many companies.
According to one survey, 8% of employees at companies banning ChatGPT admitted to using it anyway! While I don’t think blanket bans are the right approach, they are completely worthless if you don’t have any insight into what applications and models are actually in use.
And ungoverned AI tool use can be a huge liability. I estimate Amazon lost $1,401,573 from only 2 months of ChatGPT data leakage.
On top of “shadow AI” introduced by employees, even approved software often falls through the cracks. Another study found nearly one-third of organizations use 10 or more data sources to inventory their assets for security purposes.
This is another recipe for disaster.
Rather than falling into this trap:
Track all of your assets (not just AI) in a standardized format like CycloneDX (see the sketch after this list)
Use automation to update it continuously and completely
Create a clear procedure for onboarding new tools
Train your team on the dangers of shadow AI
Don’t just try to block every new AI app
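Here’s a minimal sketch of what a CycloneDX-style entry for AI assets could look like, written in plain Python so automation can regenerate it from whatever systems you already have. The structure follows the CycloneDX JSON format (which supports a machine-learning-model component type); the vendor, model, and “internal:*” property names are purely illustrative, not part of the spec.

```python
import json

# Minimal sketch of a CycloneDX-style inventory covering an approved AI app
# and a model reached through an integration. The "internal:*" properties
# are illustrative metadata you might track, not part of the CycloneDX spec.
inventory = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "application",
            "name": "ChatGPT Enterprise",  # approved SaaS AI tool
            "supplier": {"name": "OpenAI"},
            "properties": [
                {"name": "internal:approved", "value": "true"},
                {"name": "internal:max-data-classification", "value": "internal"},
            ],
        },
        {
            "type": "machine-learning-model",  # AI exposure via a fourth-party integration
            "name": "gpt-4o",
            "properties": [
                {"name": "internal:exposure", "value": "fourth party, via CRM vendor"},
            ],
        },
    ],
}

# Write the inventory so a scheduled job can refresh it continuously.
with open("ai-asset-inventory.cdx.json", "w") as f:
    json.dump(inventory, f, indent=2)
```

Because the output is a standard format, the same file can feed vulnerability management, procurement reviews, and AI policy enforcement, instead of maintaining 10 or more separate data sources.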
2. Confusing data classification policy
Companies often take unnecessarily complex and contradictory approaches to information labeling. While their formal policy may refer to classifications like:
Private
Restricted
Confidential
in practice, employees rarely use them. Even more confusing is when other documents refer to categories of information not defined in the data classification policy, like:
Internal
Sensitive
Proprietary
So make sure to define your terms.
If you have a policy that bans processing certain types of information with certain (or all) AI tools, employees need to be able to understand what these types of information are.
Otherwise they will just make things up as they go along. Or, more realistically, ignore your guidance altogether.
Instead, try this:
Create a single source of truth for all data classification requirements
Keep the number of different categories to a minimum
Tie classifications to handling procedures (see the sketch after this list)
Label information consistently
Automate where practical
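As a rough illustration, here’s what a single machine-readable source of truth could look like. The labels and rules are examples only, not a recommended scheme:

```python
# Sketch of one source of truth that ties each classification label to a
# concrete handling rule for AI tools. Labels and rules are examples only.
HANDLING_RULES = {
    "public":       {"ai_tools_allowed": True},
    "internal":     {"ai_tools_allowed": True},
    "confidential": {"ai_tools_allowed": False},
}

def can_send_to_ai_tool(classification: str) -> bool:
    """Return True only if the label is defined and explicitly permits AI processing."""
    rule = HANDLING_RULES.get(classification.lower())
    if rule is None:
        # Undefined labels ("sensitive", "proprietary", ...) fail closed and
        # surface a gap in the classification policy itself.
        raise ValueError(f"'{classification}' is not a defined classification")
    return rule["ai_tools_allowed"]

print(can_send_to_ai_tool("public"))        # True
print(can_send_to_ai_tool("confidential"))  # False
```

The point isn’t the code. It’s that every policy document, DLP rule, and AI tool approval reads from the same short list, so an undefined label surfaces as an error instead of a judgment call.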
3. AI policy loopholes and vagueness
Policies represent a declaration of your organization’s risk appetite.
But if your employees can’t decode what your policies mean, or, even worse, the policies are riddled with loopholes, you aren’t going to stay within that appetite.
I’ve seen AI policies that ban processing any confidential information with AI tools. But at the same time, already-approved enterprise applications are leveraging fourth-party AI integrations that do just that.
Similarly, I often see requirements in policies to “not create intellectual property risk” when using AI tools.
I agree this is a sound goal, but what exactly does it prohibit?
training models on copyrighted material?
using models known to be trained on copyrighted material?
introducing AI-generated material which may or may not be copyrightable into your products, services, communications, or content?
This isn’t legal advice, but there is certainly a lot of debate as to what the acceptable bounds are.
A better approach is:
Make your AI policy high-level but not vague; be clear on what is acceptable
Assign leaders to expand it (via standards) and enforce it (via procedures)
Wargame your policy to find loopholes ahead of time (see the sketch below)
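One lightweight way to wargame a policy, sketched below with made-up scenarios and labels: write down edge cases the way an employee would actually encounter them, run them against the written rules, and treat every scenario the policy can’t answer as a loophole to close.

```python
# Illustrative "wargame" of an AI policy: the rules, scenarios, and labels
# below are invented for the example, not a recommended template.
POLICY = {
    "public": "allowed",
    "internal": "allowed with manager approval",
    "confidential": "prohibited",
}

SCENARIOS = [
    ("Paste a customer contract into a public chatbot", "confidential"),
    ("Summarize a published press release with an AI tool", "public"),
    ("CRM's built-in AI feature reads support tickets", "sensitive"),  # label the policy never defines
]

for description, label in SCENARIOS:
    verdict = POLICY.get(label, "LOOPHOLE: the policy has no answer")
    print(f"{description} [{label}]: {verdict}")
```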
Start managing AI risk smarter today
AI governance, risk, and compliance doesn’t have to be an uphill battle. By avoiding the most common mistakes and implementing the steps above, you’ll be on your way to achieving your goals in no time.
There is also an easy button: working with StackAware.
We’ll build a governance framework for you, handling AI challenges related to:
Cybersecurity
Compliance
Privacy
Ready to make a move?