Compliance and AI: 3 quick observations
How to win with AI while keeping regulators and auditors happy
Check out the YouTube, Spotify, Apple podcast versions.
I recently spoke with a prospect at a publicly traded company. His (Big 4) auditor had asked him about his plans for AI governance…
…and as you might guess, that’s how he found himself on a sales call with me.
I had heard rumblings from auditors - and auditees - about the changes that rapid enterprise adoption of AI would bring, but this conversation spurred me to write down my thoughts. Here are my top three observations:
1. Auditors don’t (yet) have strong opinions on how to deploy AI securely
While frameworks like ISO 27001 are technology agnostic - and SOC 2 is somewhat control agnostic - auditors generally have thoughts on:
MFA
Encryption algorithms
Logging and monitoring retention periods
But I have yet to see an auditor express comparably firm opinions about specific AI deployment methods or how to secure them.
The only potential exception here might be ISO 42001 (for which StackAware is currently undergoing certification). But at this point, few “best practices” are firmly established.
2. Enforcement is here, just not evenly distributed
For example, the:
SEC has already settled its first “AI-washing” cases
FTC is threatening AI companies that silently change T&Cs
EU AI Act has passed, but implementation details are still forthcoming
With a dizzying array of regulators and requirements to consider, all of which are at different levels of maturity and clarity, it’s not clear what companies should make of the landscape yet.
Unfortunately, because so much of this is “regulation-by-enforcement,” companies will need to stay nimble and track what the authorities are actually focusing on when it comes to AI.
3. Integrating AI-specific requirements with existing security, privacy, and compliance ones isn’t going to be easy
One governance, risk, and compliance (GRC) automation company has rolled out support for ISO 42001, and I have no doubt others will follow.
Other security consultants appear to be spinning up AI security practices and offerings (but based on how many inbound requests I get to support them, these still look to be in the early stages).
Specialty AI security technology companies are springing up, but there are no clear winners yet.
The fact that there is market demand for all of these things suggests AI security isn’t going to be exactly the same as “normal” cybersecurity.
Although the same principles apply, there are enough wrinkles to make it a distinct discipline. Merging existing risk management practices and tools with newer ones will be necessary, but challenging.
Are you an AI-powered company? Need compliance help?
Staying up to date with compliance requirements and regulations, especially new AI-specific ones, can be a huge hassle that distracts your team and slows down enterprise sales processes.
StackAware identifies - and manages - these challenges for its customers. So if you need help: