Everyone talks about AI "bias," but usually stops there.
Here’s the 3-level framework I use to think about it and manage the risk:
Level 1 - Reputation
The most restrictive "standard," if there even is one.
This addresses public (and customer) opinion, usually driven by activist groups and media reporting.
Risks at this level can materialize in the form of:
Image models showing non-whites as Nazi soldiers
Legal health insurance denials causing bad press
Chatbots saying offensive things (incited or not)
These aren't crimes or contractual violations, but somewhere between 1 and 8 billion people have a problem with them.
Because these requirements aren't written down, you'll need to read the societal "vibes."
And also understand your risk appetite for angering people. Good luck offending no one while deploying AI.
Deploy guardrails to stay below this appetite.
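In practice, an output guardrail is often just a scoring filter layered on top of the model, with a blocking threshold tuned to your risk appetite. A minimal sketch of the idea (the terms, weights, and threshold below are all hypothetical placeholders, not a production moderation system):

```python
# Toy output guardrail: score a model response against a denylist and
# block anything whose total score exceeds our risk appetite.
# All terms, weights, and the threshold are illustrative only.

DENYLIST = {
    "slur_example": 1.0,    # hypothetical high-severity term
    "insult_example": 0.5,  # hypothetical lower-severity term
}

RISK_APPETITE = 0.4  # maximum tolerable score before we block the output


def passes_guardrail(output: str) -> bool:
    """Return True if the output's risk score stays within appetite."""
    text = output.lower()
    score = sum(weight for term, weight in DENYLIST.items() if term in text)
    return score <= RISK_APPETITE
```

Real deployments typically swap the denylist for a moderation classifier, but the shape is the same: score, compare to appetite, block or release.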
Level 2 - Law
The minimum acceptable level of compliance for any AI-powered business is the letter of the law.
Examples include:
Colorado SB-205 banning algorithmic discrimination
NYC LL 144-21 requiring bias audits of automated employment decision tools
EU AI Act mandating data governance measures
The good news here is that these laws require a concrete set of steps.
Take them AND document what you have done.
The bad news is that what any of these laws mean usually only becomes clear from enforcement action.
Be conservative early on to avoid one, then learn from the misfortunes of others.
Detailed documentation of your AI governance program will help if you have the bad luck to be the test case.
This can take the form of your:
Level 3 - AI Management System (AIMS)
If you implement ISO/IEC 42001:2023, there are few hard requirements about what you must do in terms of bias.
It's the most flexible level of the framework. But you must have a coherent system for addressing it.
Depending on how you build your AIMS, its rules might (or might not) allow AI systems to:
Make it harder to apply for a job from certain states
Accept blocking some legit credit transactions
Charge personal training rates based on BMI*
* Not in WA, MI, San Francisco, or other jurisdictions where body weight is a protected characteristic.
You'll need to answer to your auditors here, as well as customers whose contractual restrictions you accept.
And no. None of this is legal advice. I'm not a lawyer.
Managing bias risk is complex
StackAware helps AI-powered companies measure and manage their compliance risk through ISO 42001 readiness, including:
Data governance, quality, and provenance measures
Responsible AI architecture and design
Measurement and management
We help you avoid embarrassments and regulatory penalties while accelerating deal closure and boosting sales.
Want to learn more?