Why health-tech companies should get ISO 42001 certified
Protecting privacy, accelerating HIPAA and HITRUST compliance, and helping close deals faster.
1. Address patient privacy concerns
Healthcare is ground zero for fear, uncertainty, and doubt (FUD) when it comes to AI. It’s a high-risk, high-reward use case for the technology, combining the:
Privacy and compliance implications of protected health information (PHI)
Need to process unstructured information like doctors’ notes
Unclear and evolving standards for sound AI governance
Magnifying these issues, AI-powered health-tech platforms often combine:
Software-as-a-Service (SaaS) AI application programming interfaces (APIs)
Fine-tuned versions of these offerings or open-source models
Proprietary systems trained on patient or customer data
Such complexity increases data security risk and makes a sound AI management system (AIMS) based on ISO 42001 an excellent tool. These suggested controls from Annex A of the standard are especially relevant:
AI system verification and validation (A.6.2.4)
Data provenance and preparation (A.7.5-6)
Incident communication (A.8.4)
Having these controls and their effectiveness confirmed by an external auditor can go a long way toward building patient trust.
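To make the data provenance control (A.7.5) concrete, the sketch below shows one way a team might record where a training dataset came from and how it was transformed before use. It is a minimal illustration only; the structure and field names are hypothetical assumptions, not something the standard prescribes.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Hypothetical provenance metadata for a dataset used to train or
    fine-tune an AI model (illustrating the intent of Annex A control A.7.5)."""
    dataset_name: str
    source_system: str              # e.g., an EHR export or claims feed
    legal_basis: str                # e.g., BAA, patient consent, de-identified PHI
    transformations: list[str] = field(default_factory=list)
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ProvenanceRecord(
    dataset_name="discharge-notes-2024",
    source_system="ehr-export",
    legal_basis="de-identified PHI",
    transformations=["removed direct identifiers", "generalized dates to year"],
)
print(record)

Keeping records like this for every training dataset gives an external auditor something concrete to verify, rather than relying on institutional memory.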
2. Accelerate existing compliance efforts
Backstopping de-identification under HIPAA
Any covered entity or business associate must comply with the U.S. Health Insurance Portability and Accountability Act (HIPAA). This law regulates the handling of PHI through its security and privacy rules. The Department of Health and Human Services (HHS) also uses the latter to provide guidance on de-identification.
De-identification turns PHI into data that can be used for analysis, including training AI models. Those subject to HIPAA can use two methods for de-identification:
Expert determination
Safe harbor
Safe harbor requires removing 18 types of personal identifiers from PHI, and the organization using the method must have no “actual knowledge” that the resulting data could identify someone.
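For illustration, here is a minimal sketch of what safe harbor-style field removal might look like in code. The field names and the handful of identifier categories shown are hypothetical; a real implementation would need to cover all 18 identifier types, including identifiers buried in free-text notes.

# Minimal sketch of safe harbor-style de-identification (hypothetical field names).
# A real implementation must address all 18 HIPAA identifier categories, including
# free text such as clinical notes, which requires NLP-based scrubbing.

DIRECT_IDENTIFIERS = {"name", "street_address", "phone", "email", "ssn", "mrn"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed and
    dates and ZIP codes generalized, in the spirit of safe harbor."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "date_of_birth" in clean:
        clean["date_of_birth"] = clean["date_of_birth"][:4]  # keep year only
    if "zip" in clean:
        clean["zip"] = clean["zip"][:3] + "00"  # 3-digit ZIP (exceptions omitted)
    return clean

patient = {
    "name": "Jane Doe",
    "date_of_birth": "1984-07-12",
    "zip": "30309",
    "ssn": "123-45-6789",
    "diagnosis_code": "E11.9",
}
print(deidentify(patient))  # {'date_of_birth': '1984', 'zip': '30300', 'diagnosis_code': 'E11.9'}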
The expert determination method, however, is more flexible. It requires:
A properly trained and experienced person
Applying appropriate statistical principles and methods
To determine (and document) the risk of identification is “very small”
Doing all of this requires a comprehensive data and AI governance framework. ISO 42001’s Annex A controls, especially around data:
Acquisition (A.7.3)
Provenance (A.7.5)
Preparation (A.7.6)
are well-suited to supporting and justifying the expert determination method of de-identification under HIPAA.
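One statistical technique an expert might draw on (among others) is a k-anonymity check: measuring how many records share the same combination of quasi-identifiers, such as year of birth, 3-digit ZIP, and sex. The sketch below is illustrative only; the choice of quasi-identifiers, the acceptable threshold, and the supporting documentation are exactly what the expert, backed by a governance framework like an AIMS, needs to justify.

from collections import Counter

# Hypothetical quasi-identifiers; an expert would choose and justify these
QUASI_IDENTIFIERS = ("birth_year", "zip3", "sex")

def smallest_group_size(records: list[dict]) -> int:
    """Return k: the size of the smallest group of records sharing the same
    quasi-identifier values. Higher k means lower re-identification risk."""
    groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in records)
    return min(groups.values())

records = [
    {"birth_year": 1984, "zip3": "303", "sex": "F", "diagnosis": "E11.9"},
    {"birth_year": 1984, "zip3": "303", "sex": "F", "diagnosis": "I10"},
    {"birth_year": 1962, "zip3": "941", "sex": "M", "diagnosis": "J45.909"},
]

print(f"k = {smallest_group_size(records)}")  # k = 1: the 1962/941/M record is unique, so risk is high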
Helping with HITRUST
The Health Information Trust Alliance (HITRUST) publishes the non-regulatory Common Security Framework (CSF) specifically for healthcare companies. Recently, HITRUST announced companies could earn AI assurance reports as well.
Importantly, these will “share insights into how the organization is preparing to safeguard its data against AI risks in support of a trustworthy system.”
ISO 42001 bolsters these efforts because of its requirements to:
Document AI system design and development (Annex A.6.2.3)
Ensure responsible use of AI systems (Annex A.9.2-3)
Do AI risk/impact assessments (Clauses 6.1.2 and 6.1.4)
Using a globally recognized standard for these steps builds credibility for healthcare company security teams. Given HITRUST’s reputation for being a “significant emotional event,” they often need all the support they can get.
3. Drive sales by navigating security reviews
Health technology platforms working with other businesses (e.g. healthcare systems) undergo rigorous security reviews during and after the sales cycle. Especially given the FUD (and real risks) associated with AI, the scrutiny can be extreme.
ISO 42001 can speed the process along by assuring customers a company:
Identifies and fixes nonconformities (Clause 10.2)
Conducts internal audits (Clause 9.2.2)
Communicates clearly (Clause 7.4)
Has an AI policy (Clause 5.2)
Annex A’s proposed controls are also relevant, like those:
Creating a whistleblowing program (A.3.3)
Ensuring AI governance in the supply chain (A.10.3)
Matching responsible AI objectives to customer needs (A.10.4)
Are you a healthcare security leader looking to build an AI governance program?
StackAware puts in place:
Policies
Procedures
Technical guardrails
for health-tech companies leveraging AI to help patients.
And our core offering, the AIMS Accelerator, gets you to ISO 42001 readiness in 90 days or less. Want to learn more?