The top 3 data security concerns blocking AI adoption in healthcare (and how to mitigate them)
Unintended training, excessive data retention, and sensitive data generation.
Check out the YouTube, Spotify, and Apple podcast versions.
1. Unintended training
Tools like ChatGPT train on inputs by default.
So they can accidentally learn from sensitive data (like protected health information [PHI], which ChatGPT isn’t certified for).
And later surface it to unauthorized parties.
MITIGATION:
OpenAI application programming interface [API] endpoint (doesn’t train on inputs by default) +
Business associate agreement [BAA] +
Zero data retention [ZDR]
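In practice, that mitigation is mostly contractual: API inputs aren’t used for training by default, but the BAA and ZDR pieces are account-level agreements you negotiate with OpenAI, not parameters you set in code. A minimal sketch of routing a request through the API endpoint (instead of the consumer ChatGPT app), assuming the official `openai` Python SDK, an `OPENAI_API_KEY` environment variable, and placeholder model/prompt values:

```python
# Minimal sketch: send a clinical summarization request through the OpenAI API
# endpoint rather than the consumer ChatGPT app. The BAA/ZDR protections are
# arranged with OpenAI at the account level; nothing here toggles them.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative, already-minimized input; real PHI should be reduced or
# pseudonymized before it ever leaves your systems.
de_identified_note = "Patient [PT-001] reports improved mobility after 6 weeks of PT."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use the model covered by your agreement
    messages=[
        {"role": "system", "content": "Summarize the visit note for the care team."},
        {"role": "user", "content": de_identified_note},
    ],
)
print(response.choices[0].message.content)
```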
2. Excessive data retention
Even if an AI tool isn’t training on sensitive inputs, having a 3rd party store them creates security and compliance risk.
Inputs can be stolen from the 3rd party
Longer retention = greater discovery risk
Vendor personnel can review them
MITIGATION: Abuse monitoring opt-out with the Microsoft Azure OpenAI Service.
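Note that the opt-out itself is an approval process handled through Microsoft (sometimes called modified abuse monitoring), not a flag you set in code; once granted, prompts and completions aren’t stored for human review. A hedged sketch of calling an Azure OpenAI deployment with the `openai` Python SDK, where the endpoint, API version, and deployment name are placeholders:

```python
# Minimal sketch: call an Azure OpenAI deployment. The abuse-monitoring
# opt-out is approved by Microsoft for your Azure subscription; this code
# just shows the request path once that approval is in place.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",                               # placeholder API version
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder endpoint
)

response = client.chat.completions.create(
    model="my-gpt4o-deployment",  # Azure deployment name, not a raw model name
    messages=[{"role": "user", "content": "Draft a discharge summary template."}],
)
print(response.choices[0].message.content)
```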
3. Sensitive data generation
AI tools can aggregate information that seems innocuous, but which still lets the model infer personal identifiers or health conditions.
MITIGATIONS:
Machine unlearning
Pseudonymization
Anonymization (it’s different from pseudonymization; see the sketch below)
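To make that distinction concrete, here’s a minimal sketch: pseudonymization replaces direct identifiers with keyed tokens that can be re-linked by whoever holds the key, while anonymization drops or generalizes identifiers so re-linking is no longer possible. The field names and secret key are illustrative, not from any particular system.

```python
# Minimal sketch of pseudonymization vs. anonymization on a toy record.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # placeholder; keep out of source control

def pseudonymize(value: str) -> str:
    """Deterministic, keyed token for a direct identifier (e.g., an MRN)."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"mrn": "123456", "dob": "1984-03-02", "note": "Follow-up for hypertension."}

# Pseudonymized: still re-linkable to the patient by whoever holds SECRET_KEY.
pseudonymized = {
    "patient_token": pseudonymize(record["mrn"]),
    "dob": record["dob"],
    "note": record["note"],
}

# Anonymized: direct identifiers removed, quasi-identifiers generalized.
anonymized = {
    "birth_year": record["dob"][:4],
    "note": record["note"],
}

print(pseudonymized)
print(anonymized)
```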
Are you rolling out AI in healthcare?
While these are real risks, there are ways to manage them. And the opportunities for:
allowing providers to focus on higher-level tasks
improving patient experience
reducing costs
are immense.
So the best approach is to deploy AI tools alongside a comprehensive governance program. Need help building yours?