Patient protection playbook: 8 AI-related security risks in healthcare
How to deliver care securely with AI.
The American healthcare system cries out for automation and efficiency, considering how much we spend and how unimpressive the results are. That makes it an especially promising use case for AI. Because of the high stakes, though, deploying these systems securely is not optional.
Here are the top 8 AI-related risks, broken down by data attribute, and what healthcare security teams can do about them:
Confidentiality
Protected health information (PHI) is incredibly valuable to attackers to begin with, making it a big target. AI models trained on it - or de-identified data derived from it - are too.
That’s why these risk vectors are especially relevant:
1. Unintended training
The accidental introduction of PHI or other sensitive information into AI models - especially of the generative kind - could be catastrophic from a privacy perspective.
Especially if the healthcare provider loses control of the data entirely - e.g. if one of its employees accidentally trains ChatGPT on it - there is little that can undo the damage.
That’s why having effective:
AI policies
technical controls
data governance procedures
is table stakes when considering exposing AI models to patient data.
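As one concrete (and deliberately simplified) example of a technical control, a screening step like the sketch below could sit between employees and any external LLM, blocking obviously sensitive prompts before they ever leave your environment. The regex patterns here are illustrative assumptions and nowhere near a complete de-identification solution:

```python
import re

# Illustrative patterns only -- real PHI detection needs a vetted
# de-identification pipeline, not a handful of regexes.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b(DOB|date of birth)[:\s]+\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
}

def contains_phi(text: str) -> list[str]:
    """Return the names of any PHI patterns found in the text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

def safe_prompt(text: str) -> str:
    """Refuse to forward prompts that trip a PHI check."""
    findings = contains_phi(text)
    if findings:
        # Block (or route to a de-identification step) instead of sending
        # the data to an external model where it could end up in training.
        raise ValueError(f"Prompt blocked: possible PHI detected ({', '.join(findings)})")
    return text

# Example: this would raise before anything leaves your environment.
# safe_prompt("Patient MRN: 00123456, DOB: 01/02/1980, needs follow-up")
```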
2. Tainted trust boundaries
Even if you are able to avoid accidentally training a model on PHI or other sensitive info, it’s still possible to accidentally architect an application in a way that incorrectly exposes the information through retrieval-augmented generation (RAG) or similar processes.
The key here?
A security policy that is neutral to how the data is accessed: information retrieved through RAG should be subject to the same authorization rules as a direct query against the underlying records.
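In practice, that means the retrieval layer enforces the same access decision you would make if the user queried the records directly. Here is a minimal sketch; the `Document` shape, the `vector_store.search` call, and the role-based `user_can_access` check are all illustrative assumptions, not any specific product's API:

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set[str]  # access labels carried alongside the embedding

def user_can_access(user_roles: set[str], doc: Document) -> bool:
    """Same authorization decision you'd make for a direct record lookup."""
    return bool(user_roles & doc.allowed_roles)

def retrieve_for_user(query: str, user_roles: set[str], vector_store) -> list[Document]:
    """Filter RAG results so the model never sees data the user couldn't."""
    # `vector_store.search` is a placeholder for whatever retrieval API you use.
    candidates = vector_store.search(query, top_k=20)
    return [doc for doc in candidates if user_can_access(user_roles, doc)]
```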
3. External sensitive data generation
Assuming you’ve mitigated unintended training and tainted trust boundary risks, that doesn’t mean others won’t be able to infer sensitive information using their own AI tools. As research has shown, commercially available large language models (LLMs) can infer all sorts of information about people, potentially re-identifying them entirely.
This makes it extra important to be very careful with public disclosures that have any nexus to protected information. In addition to scrubbing it from:
Clinical study reports
Best practice documents
Even garden-variety press releases
consider red-teaming exercises using LLMs and other AI tools to see if it’s possible to recreate anything sensitive from seemingly innocuous documents.
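A low-effort way to start such an exercise: hand your own "scrubbed" documents to an LLM and ask what it can infer, then review the output with your privacy team. A rough sketch using the OpenAI Python SDK (the prompt wording and model name are assumptions; swap in whatever tooling your red team actually uses):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RED_TEAM_PROMPT = (
    "You are assisting an authorized privacy red-team exercise. "
    "Given the document below, list any individuals, locations, dates, or "
    "medical conditions you can infer, and explain your reasoning.\n\n{document}"
)

def red_team_document(document_text: str, model: str = "gpt-4o") -> str:
    """Ask an LLM what it can infer from a supposedly de-identified document."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": RED_TEAM_PROMPT.format(document=document_text)}],
    )
    return response.choices[0].message.content

# Review the output manually: anything the model infers correctly is a
# candidate for further scrubbing before the document is published.
```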
4. Model theft
Finally, powerful medical breakthroughs, like the COVID-19 vaccines, have apparently gotten the attention of state-sponsored attackers from countries like:
North Korea
Russia
China
Iran
So even properly sanitized and trained AI models will themselves be high-priority cyber espionage targets that need proper defenses.
Integrity
When lives are on the line, as they are in healthcare, you had better validate inputs into AI models. The key risks here:
5. Data poisoning
If you are training diagnostic or other models in-house, carefully vetting the data used is a vital security task. With recent breaches of incredibly detailed and sensitive data like genetic characteristics, it’s scary to consider what attackers might be planning to do with carefully poisoned data sets.
Combining stolen information with an understanding of how easy it is to corrupt AI model outputs could potentially lead to life-threatening consequences due to:
Missed diagnoses
Incorrect prescriptions
Completely “nerfed” models that waste time and resources
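Vetting doesn’t have to start with anything exotic. Provenance and distribution checks on each incoming batch of training data catch a lot of tampering; the sketch below illustrates the idea, with the expected hashes, label set, and baseline class balance all standing in as assumptions:

```python
import hashlib
from collections import Counter

# Hashes recorded when the dataset was originally received from a trusted source.
EXPECTED_SHA256 = {
    "labs_2024.csv": "placeholder-digest-recorded-at-intake",
}
EXPECTED_LABELS = {"benign", "malignant"}

def verify_provenance(path: str) -> bool:
    """Confirm the file matches the hash recorded at intake (fail closed on unknowns)."""
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return EXPECTED_SHA256.get(path.split("/")[-1]) == digest

def check_label_distribution(labels: list[str], max_shift: float = 0.10) -> list[str]:
    """Flag unexpected labels or a suspicious swing in class balance."""
    warnings = []
    counts = Counter(labels)
    unexpected = set(counts) - EXPECTED_LABELS
    if unexpected:
        warnings.append(f"Unexpected labels present: {unexpected}")
    # Compare against the class balance seen in previously vetted batches
    # (the 0.5 baseline is an assumption for this sketch).
    if labels and abs(counts.get("malignant", 0) / len(labels) - 0.5) > max_shift:
        warnings.append("Class balance shifted beyond tolerance; review batch manually")
    return warnings
```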
6. Corrupted model seeding
Using an open source model “out-of-the-box” or fine-tuning it?
Make sure you know where it came from!
StackAware partner Mithril Security demonstrated how easy it is to sneak a pre-poisoned model into the open source supply chain.
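At a minimum, pin the exact revision of any model you pull and verify file digests against values recorded when you first vetted it. A sketch using the `huggingface_hub` library, with the repository name, commit, and digest as placeholders:

```python
import hashlib
from huggingface_hub import hf_hub_download

# Placeholder values -- record the real commit hash and file digest when you
# first vet the model, and fail closed on any mismatch.
REPO_ID = "example-org/clinical-summarizer"
PINNED_REVISION = "0123456789abcdef0123456789abcdef01234567"  # exact commit, not a branch
EXPECTED_SHA256 = "placeholder-digest-recorded-at-vetting-time"

def fetch_vetted_model(filename: str = "model.safetensors") -> str:
    """Download a pinned model revision and verify its digest before loading it."""
    local_path = hf_hub_download(repo_id=REPO_ID, filename=filename, revision=PINNED_REVISION)
    digest = hashlib.sha256(open(local_path, "rb").read()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError("Model file does not match the vetted digest; refusing to load")
    return local_path
```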
7. Indirect prompt injection
Even if you aren’t exposing generative AI tools to customers and are just using them internally with trusted employees, that doesn’t mean you can ignore prompt injection.
Researchers have proven time and again how trivial it is to kick off unexpected function calls and similar events merely by putting malicious instructions into webpages that LLM-powered applications browse to.
Have controls to identify and block these attacks.
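Two controls that help in practice: screen retrieved web content for instruction-like text before it ever reaches the model, and keep a human in the loop for sensitive function calls the model proposes. The sketch below is a crude illustration; the pattern list and tool names are assumptions, and a production deployment would layer on a dedicated injection classifier and strict allow-lists:

```python
import re

# Crude heuristics for instruction-like text embedded in untrusted webpages.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"send .* to .*@", re.IGNORECASE),
]
SENSITIVE_TOOLS = {"send_email", "update_record", "export_data"}

def screen_retrieved_content(content: str) -> str:
    """Withhold retrieved text that looks like a prompt injection attempt."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(content):
            return "[content withheld: possible prompt injection detected]"
    return content

def approve_tool_call(tool_name: str, human_approved: bool) -> bool:
    """Require explicit human approval before sensitive functions execute."""
    if tool_name in SENSITIVE_TOOLS and not human_approved:
        return False
    return True
```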
Availability
Augmenting burnt-out doctors and nurses with AI applications can be a lifeline to organizations struggling with insufficient capacity...
...as long as those apps are operational when needed.
The Change Healthcare ransomware attack and others like it have shown how vulnerable we are to disruptions in these tools. So securing the underlying infrastructure for AI applications is table stakes.
8. Resource exhaustion
Even assuming that’s done, AI applications are vulnerable to resource exhaustion attacks.
Especially if they are customer-facing, rate limiting is a must.
Otherwise, attackers could bankrupt hospitals and physician offices with simple attacks that consume expensive application programming interface (API) credits.
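Even a basic per-client token bucket in front of AI endpoints blunts this kind of attack. A minimal sketch follows; the limits are illustrative, and most teams would lean on their API gateway’s built-in rate limiting rather than hand-rolling it:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client rate limiter: `capacity` requests, refilled at `rate` tokens per second."""

    def __init__(self, capacity: int = 30, rate: float = 0.5):
        self.capacity = capacity
        self.rate = rate
        self.tokens = defaultdict(lambda: float(capacity))
        self.last_seen = defaultdict(time.monotonic)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen[client_id]
        self.last_seen[client_id] = now
        # Refill proportionally to elapsed time, capped at bucket capacity.
        self.tokens[client_id] = min(self.capacity, self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1  # spend a token on this request
            return True
        return False  # reject: this caller has exhausted its budget

limiter = TokenBucket()
# In a request handler: if not limiter.allow(client_id), return HTTP 429.
```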
Accelerating care with secure and effective AI
AI will transform healthcare, but having the right controls in place should be step 0 of the process.
StackAware recently helped an AI-powered healthcare company securely deploy a new product through a comprehensive risk assessment and penetration test.
Are you a leader in healthcare and preparing to roll out an AI-powered product? Get in touch so we can ensure you have a successful and secure launch.