Answering the top 10 security questions non-technical executives ask about AI
Key steps to innovating securely with artificial intelligence.
Cybersecurity
Compliance
Privacy
Without further ado, below are his (slightly paraphrased) questions and my answers:
What are the 3 most important questions my CEO should be asking about security and privacy as it pertains to AI?
What are our business requirements when it comes to AI? ChatGPT pushed AI fully into the “mainstream” and led to an explosion in corporate interest. This hasn’t, however, necessarily led to the establishment of clear use cases for the technology. A key things to understand - from a security perspective - before embarking on an AI-related project is your goal.
Am I creating a chatbot so customers can ask questions about publicly available information on my web site? If so, the confidentiality isn’t much of a concern, but reputation damage from people posting screenshots of the chatbot saying crazy or offensive things is.
Are we using AI to provide medical diagnoses? In this case, confidentiality of protected health information (PHI) will be extremely important. At the same time, accuracy will also be a major concern because lives are on the line.
What are our AI-related regulatory and compliance obligations? Basically every large company with broad geographic reach will need to worry about things like the European Union (EU) General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). The Securities and Exchange Commission (SEC) has also been increasingly aggressive in terms of enforcing cybersecurity requirements and disclosures. Getting - and staying - on top of these things is mandatory.
What is our risk appetite and tolerance? Many organizations never state these things explicitly, making it very difficult for employees to understand what is acceptable behavior and what is not. Clearly defining how much cybersecurity risk you are able to stomach will make development and deployment decisions much easier, focus your approach, and allow you to quickly rule certain options out.
What are 3 things our company should be doing immediately in terms of AI security and privacy?
Develop and publish clear policy guidance. Some organizations are banning the use of certain AI tools and others are doing nothing to control them. The first approach is unrealistic. The second is unwise. Document your risk appetite in the form of written guidance and train your employees on exactly what it means.
Create and maintain an (AI) asset inventory. Alongside clear guidance should come a process for onboarding - and tracking existing - AI tools. Without a clear register of what systems you are using, “shadow AI” will proliferate. Tt will then become impossible to track how your company’s data is being used and retained.
Communicate clearly about your use of AI. As Zoom’s fiasco last summer demonstrated, concisely and proactively explaining how you are storing and training on customer information is vital to avoiding blowback. Even if you aren’t doing anything too aggressive with AI, perceptions matter. You can shape them by being transparent about exactly what you are doing.
Is OpenAI using my company data from user prompts to train ChatGPT?
Maybe. By default, the consumer version of ChatGPT does train on inputs provided to it, although you can opt-out. Team and Enterprise versions of ChatGPT do not train on user inputs. Neither does the GPT application program interface (API).
Amazon appears to have gotten burned by ChatGPT training on its confidential information.
Even if OpenAI isn’t training on my company data, aren’t they retaining user prompts?
ChatGPT’s consumer and Team versions likely retain prompts indefinitely, unless chat history is disabled (in which case they are retained for 30 days, barring a legal hold).
The GPT API also has a 30-day retention period and it is possible to request zero data retention (ZDR) for especially sensitive use cases (finance, healthcare, etc.).
ChatGPT Enterprise allows administrators to specify message retention periods.
Because of the granularity and volume of information stored in ChatGPT prompts, excessive data retention can create a major cybersecurity and compliance risk if not properly managed.
See this detailed guide for a breakdown of all of the major generative AI tool data retention policies.
If I remove personally identifiable and customer-specific information in a proposal, can I use ChatGPT to improve it?
This all comes down to your risk appetite. Especially if chat history and training are disabled, this seems like a relatively low risk move. In any case, ensure your policy and legal agreements are crystal clear with respect to what is acceptable and what is not.
Should we be worried about AI-powered cyber attacks?
Yes. Longer-term, AI-generated malware is likely to become a huge issue. Right now, the two biggest AI-powered threats are fraud and phishing. The former is going to radically change how business is done because of the ease with which people can mimic the voices of customers (to bypass authentication procedures) and generate authenticate-seeming images (to bypass know-your-customer checks).
Specifically, how real is the AI-powered phishing risk?
Serious, and growing. The National Security Agency (NSA) recently warned that attackers are using AI tools to refine their english grammar and spelling, as well as customizing these attacks more rapidly and convincingly.
What can be done about these AI-powered threats?
Employee training and awareness is the best defense against fraud and phishing. Because human decisions are often going to be the ones with the biggest impacts, especially financial ones like wiring funds or authorizing access, ensuring your workforce is properly trained should be priority #1.
Additionally, authentication methods such as facial and voice recognition are going to become easier and easier to defeat over time. Carefully consider the long-term risk/reward equation of continuing to use them in your products and services.
What is prompt injection and why should I care about it?
Prompt injection occurs when an attacker provides written inputs to a large language (LLM) and is able to force it to provide outputs which the designer of the model did not intend. This can include creating destructive or harmful content (e.g. instructions for how to commit crimes) or regurgitating sensitive information (personal identifiers, confidential intellectual property, etc.).
This can happen in two primary ways:
Directly, where the attacker’s action is the immediate cause of the model consuming the malicious prompt, whether directly or through intervening business logic.
Indirectly, it can occur when the operator of an LLM is a legitimate actor, whether human or acting autonomously, but is tricked into feeding it corrupted material. For example, this can occur when a browsing-capable AI agent visits a malicious web site and accidentally ingests - and acts on - instructions that violate the security policy of the model or organization hosting it.
The outcomes are generally the same in both cases, which can include:
Reputation damage from your LLM application offending or confusing customers.
Regulatory penalties due to unintended regurgitation of sensitive data.
Downstream impacts when LLMs are able to call functions or APIs.
Because AI systems are probabilistic, how is security monitoring different because of things like model drift, etc.?
Because AI systems are non-deterministic, traditional cybersecurity monitoring and analysis approaches will likely fall short. Understanding whether or not the system is - on average - operating within your defined risk appetite is more important than identifying and trying to remediate discrete vulnerabilities. This is especially true because AI models, especially ones available via Software-as-a-Service (SaaS), can “drift” as vendors make changes to them.
The Artificial Intelligence Risk Scoring System (AIRSS) is specifically designed to quantify your security risk in these situations.
I have more questions about AI-related security, compliance, and privacy. What should I do?
Paul didn’t ask this question, but if I had the time, I could have answered the top 100 or top 1000 ones I have gotten.
That’s because this is a complex topic. And it’s not getting any simpler.
So if you need help working through these types of issues:
Walter, thanks for putting time into thoughtful and succinct responses to these common questions..Paul