A $6 billion tech company just put this in an article on “how to write a generative AI cybersecurity policy”:
But it’s a masterclass in what not to do. Here are 3 questions that show why:
1. What is “sensitive or private information”?
Does this mean:
Trade secrets?
Personal data per the GDPR?
Protected health information per HIPAA?
AI governance requires data governance, which needs a classification program and definitions.
And each type of information has its own handling requirements.
So be specific.
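For illustration only, here is a minimal sketch of what concrete classification definitions could look like. The labels, examples, and handling rules below are hypothetical, not from the policy quoted above; your own data governance program defines the real ones.

```python
# Hypothetical data classification definitions (illustrative only).
# Real labels and handling requirements come from your governance program.
DATA_CLASSIFICATIONS = {
    "public": {
        "examples": ["published marketing copy"],
        "ai_input_allowed": True,
    },
    "confidential": {
        "examples": ["trade secrets", "internal financials"],
        "ai_input_allowed": False,  # unless the platform meets contractual requirements
    },
    "regulated": {
        "examples": ["GDPR personal data", "HIPAA protected health information"],
        "ai_input_allowed": False,  # handling is driven by the applicable regulation
    },
}


def can_submit_to_ai(classification: str) -> bool:
    """Return whether data with this label may be sent to an AI platform."""
    return DATA_CLASSIFICATIONS[classification]["ai_input_allowed"]
```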
2. What are “public AI platforms”?
Does this include:
ChatGPT Plus?
ChatGPT Team?
ChatGPT Enterprise?
The OpenAI application programming interface (API)?
These are important questions because:
The first trains on your data by default and gives no confidentiality assurances.
The second does the opposite on both counts but retains data indefinitely.
The third doesn’t train on inputs and gives administrators control over retention.
The fourth doesn’t train, retains data by default for 30 days, and offers zero data retention (ZDR) for some use cases.
Again, you need to get into detail here. These are all very different!
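To make the point concrete, here is a hedged sketch of how a policy could encode these distinctions. The attributes simply mirror the claims above; verify them against your own agreements and OpenAI’s current documentation before relying on them, and treat the approval rule as a placeholder.

```python
# Hypothetical platform inventory (illustrative only; mirrors the claims above).
AI_PLATFORMS = {
    "ChatGPT Plus":       {"trains_on_inputs": True,  "retention": "default; no confidentiality assurances"},
    "ChatGPT Team":       {"trains_on_inputs": False, "retention": "indefinite"},
    "ChatGPT Enterprise": {"trains_on_inputs": False, "retention": "admin-controlled"},
    "OpenAI API":         {"trains_on_inputs": False, "retention": "30 days by default; ZDR for some use cases"},
}


def approved_for_confidential_data(platform: str) -> bool:
    """Placeholder rule: only platforms that don't train on inputs qualify."""
    return not AI_PLATFORMS[platform]["trains_on_inputs"]
```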
3. What is “outside the control of the enterprise”?
Does this exclude anything that isn’t:
a company under a nondisclosure agreement?
a model hosted within your virtual private cloud?
a system running on your own hardware/data center?
The line between “your” network and other people’s is blurry. Be clear about what you mean.
In any case, Software-as-a-Service (SaaS), which the above policy recommendations appear to refer to, can be the most secure deployment method, for AI systems or anything else. The ease of misconfiguration makes self-hosting models in Infrastructure-as-a-Service (IaaS) risky business for all but the most skilled teams.
Need a place to start for your AI policy?
I’m seeing a lot of bad advice from big brands when it comes to AI governance. Just tossing around FUD and vague terms doesn’t help.
The good news?
StackAware has a free template: