3 AI governance frameworks business leaders can use to accelerate sales and avoid regulatory fines
The NIST AI RMF, EU AI Act, and ISO 42001
Check out the YouTube, Spotify, and Apple podcast versions.
1. NIST AI RMF
The National Institute of Standards and Technology (NIST) Artificial Intelligence (AI) Risk Management Framework (RMF) launched in January 2023.
It has four functions:
Map
Measure
Manage
Govern
These lay out best practices at a high level. But the framework doesn’t offer many implementation details.
And as with all NIST standards, there is no way to be “certified” against this framework.
With that said, the NIST Cybersecurity Framework (CSF) has become something of a gold standard for the security side of things. Because of NIST’s existing credibility and the fact that the AI RMF was the first major AI framework published, using it as the basis for your governance program is a good way to build customer trust.
Appropriate for
Any company starting its AI risk management journey that needs a “checklist” of considerations.
Analogous standard
NIST CSF.
2. EU AI Act
Formally adopted in early 2024, the European Union (EU) AI Act is highly prescriptive and completely forbids certain practices such as:
Inference of non-obvious traits from biometrics
Real-time biometric identification in public
Criminal profiling not based on criminal behavior
Purposefully manipulative or deceptive techniques
Inferring emotions in schools/workplaces
Exploitation of a group’s vulnerabilities
Blanket facial image collection
Social scoring
It also heavily regulates AI systems involved in:
Safety components of products that are already EU-regulated
Criminal behavior risk assessment
Education admissions/decisions
Job recruitment/advertisement
Exam cheating identification
Public benefit decisions
Emergency call routing
Migration and asylum
Election management
Critical infrastructure
Health/life insurance
Law enforcement
Credit scoring
Fines for non-compliance can reach €35,000,000 or 7% of worldwide annual revenue, whichever is higher.
So ignoring the EU AI Act’s requirements can be costly.
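To see how the “whichever is higher” cap plays out, here is a minimal Python sketch (the revenue figure is hypothetical, purely for illustration):

```python
# Maximum EU AI Act fine for the most serious violations:
# the greater of a fixed €35 million or 7% of worldwide annual revenue.
FIXED_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07

def max_fine(worldwide_annual_revenue_eur: float) -> float:
    """Return the upper bound of the fine, whichever amount is higher."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * worldwide_annual_revenue_eur)

# Hypothetical company with €600 million in worldwide annual revenue:
# 7% of €600M = €42M, which exceeds €35M, so €42M is the ceiling.
print(f"€{max_fine(600_000_000):,.0f}")  # €42,000,000
```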
Appropriate for
Companies qualifying as any of these (according to the AI Act):
Provider
Deployer
Importer
Distributor
Product Manufacturer
Authorized Representative
Analogous standard
General Data Protection Regulation (GDPR).
3. ISO 42001
Published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) in December 2023, ISO/IEC 42001:2023 requires the establishment of an AI management system (AIMS) to effectively measure and treat risks to:
Safety
Privacy
Security
Health and welfare
Societal disruption
Environmental impact
An external auditor can certify compliance with this standard.
Additionally, compliance with a “harmonised standard” under the EU AI Act, which ISO 42001 is expected to become, gives you a presumption of conformity with many of the law’s provisions.
But ISO 42001 is not a silver bullet for EU AI compliance.
A U.S.-based company offering facial recognition for public places could be ISO 42001 certified but banned from operating in the EU.
And conversely, it’s possible to comply with the EU AI Act without ISO 42001 certification.
In any case, achieving certification is one of the few ways to get a third party to bless your AI governance program, which can increase customer confidence and accelerate sales.
Appropriate for
AI-powered B2B startups
Companies training on customer data
Heavily-regulated enterprises (healthcare/finance)
Analogous standard
ISO 27001.
Need help with one (or all) of these frameworks?
None of these standards is mutually exclusive. And each represents a different type of compliance framework:
The NIST AI RMF is a non-certifiable set of best practices
The EU AI Act is a legal obligation
ISO 42001 is a certifiable standard
StackAware helps organizations implement and adhere to all of them. And our specialty is the last one.
Want to learn more?