1. “Generative AI SaaS is insecure”
ChatGPT and some other Software-as-a-Service (SaaS) AI tools do train on inputs by default. They can also retain data indefinitely even if not training on it.
These are real risks.
But there are ways to avoid them, whether through:
Enterprise versions
Zero data retention (ZDR)
And the alternative, self-hosting models, has its own problems: cloud misconfigurations become more likely, and those can lead to breaches.
The key here?
Developing a balanced, risk-based approach that incorporates:
Security
Compliance
Privacy
concerns, along with business needs.
And make sure you don’t accidentally become a data center company because you are afraid of SaaS risks.
2. “I should start my program with AI security tools”
Tools can help, but only on top of a firm foundation.
Security teams fixate on the shiny and new. But often it's the boring stuff that needs to come first, like:
Asset inventory
Vendor management
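To make the first item concrete, here is a minimal sketch of what an AI asset inventory entry might track, written in Python purely for illustration. The field names and the example values are assumptions, not a standard; a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass

# Illustrative only: field names and categories are assumptions,
# not a standard or a StackAware template.
@dataclass
class AIAsset:
    name: str                  # tool or model, e.g., "ChatGPT Enterprise"
    vendor: str                # who operates the model or service
    deployment: str            # "SaaS", "self-hosted", etc.
    data_classification: str   # most sensitive data it touches
    trains_on_inputs: bool     # does the vendor train on your data?
    retention: str             # contractual data retention terms
    business_owner: str        # who is accountable for the use case

# Hypothetical example entry
inventory = [
    AIAsset(
        name="ChatGPT Enterprise",
        vendor="OpenAI",
        deployment="SaaS",
        data_classification="Confidential",
        trains_on_inputs=False,  # enterprise tiers don't train on inputs by default
        retention="Zero data retention (ZDR) where negotiated",
        business_owner="Head of Marketing",
    ),
]

for asset in inventory:
    print(f"{asset.name}: trains on inputs = {asset.trains_on_inputs}")
```

Even a list this simple answers the questions a governance committee will ask first: what are we using, what data does it see, and who owns the risk.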
3. “Once I have an AI governance committee, I’m done”
A committee can help if structured the right way.
But if it turns into a debate club that can't make decisions, innovation will suffer.
Here is what I recommend for my clients:
Clear decision authority
An established risk appetite
Hard limits on time to respond to requests
And before you stand up a committee, consider assigning AI risk management (but not ownership) to a single business unit.
Having trouble sorting fact from fiction when it comes to AI governance?
There is a ton of fear, uncertainty, and doubt (FUD) out there about AI. And folks are making strange decisions as a result.
StackAware cuts through the noise and gives you tailored advice for your industry and use case. We help security and data leaders in:
Financial services
Healthcare
B2B SaaS
manage AI-related risk to:
Minimize data leaks
Build customer trust
Prevent lawsuits and regulatory fines
Ready to learn more?