People talk a lot about AI “bias.” But the conversation usually stops there.
Here’s a 3-level framework I use to manage risk, avoid costly fines, and prevent customer backlash.
And no. None of this is legal advice. I’m not a lawyer.
Level 1 - AI Management System (AIMS)
If you implement ISO/IEC 42001:2023, there are few hard requirements about what you must do in terms of bias (it’s covered by Annex A controls 4.3, 5.5, 6.1.2, and 7.3-7.4).
Under your AIMS, things that might be okay (or not) could be rules (dis)allowing AI systems to:
Adjust personal training rates based on a customer’s BMI.¹
Accept blocking some legitimate credit transactions.
Make it harder to apply for a job from certain states.
There don’t seem to be firmly established societal norms on these types of decisions, so they will mainly be business calls. That makes this the most flexible level of my bias framework.
But you still need a coherent system for addressing bias, because you’ll have to answer to your auditors here. And to any customers whose contractual restrictions you accept.
Level 2 - Reputation
This addresses public (and customer) opinion, usually driven by activist groups and media reporting.
Risks at this level can materialize in the form of:
Image models showing non-whites as Nazi soldiers.
Chatbots saying offensive things (provoked or not).
Legal health insurance denials causing bad press.
These aren't crimes or contractual violations, but somewhere between 1 and 8 billion people have a problem with them.
Because these requirements aren't written down, you'll need to read the societal "vibes."
And also understand your risk appetite for angering people. But good luck offending no one while deploying AI.
Deploy guardrails to stay below this appetite.
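As a toy illustration of what one such guardrail might look like (hypothetical topic list and canned refusal, not a production moderation system, which would typically use a dedicated moderation model or API):

```python
# Minimal output guardrail sketch: refuse responses touching
# high-risk topics. The topic list is illustrative only.
BLOCKED_TOPICS = {"self-harm", "medical advice", "political endorsement"}

REFUSAL = "I can't help with that. Let me connect you with a human."

def apply_guardrail(response: str) -> str:
    """Return the model's response, or a refusal if it trips a blocked topic."""
    lowered = response.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return response

print(apply_guardrail("Here is some medical advice on dosing."))  # refused
print(apply_guardrail("Your order ships on Tuesday."))            # passes through
```

The calibration point is the topic list itself: how long it is, and how aggressively it matches, is exactly where your risk appetite gets encoded.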
Level 3 - Law
The minimum acceptable level of compliance for any AI-powered business is the letter of the law.
Examples include:
Colorado SB 24-205 restricting “algorithmic discrimination.”
NYC LL 144 mandating employment tool bias audits.
EU AI Act mandating data governance measures.
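NYC LL 144, for instance, requires audits that compute selection rates and impact ratios for demographic categories. A minimal sketch of that calculation, using made-up numbers (the category names and counts are illustrative, not real audit data):

```python
# Impact ratio per NYC LL 144-style bias audits: each category's selection
# rate divided by the selection rate of the most-selected category.
# All applicant counts below are hypothetical.
applicants = {  # category: (selected, total)
    "group_a": (48, 120),
    "group_b": (30, 100),
    "group_c": (12, 60),
}

selection_rates = {g: sel / total for g, (sel, total) in applicants.items()}
top_rate = max(selection_rates.values())
impact_ratios = {g: rate / top_rate for g, rate in selection_rates.items()}

for group, ratio in impact_ratios.items():
    print(f"{group}: selection rate {selection_rates[group]:.2f}, "
          f"impact ratio {ratio:.2f}")
```

Auditors often flag ratios below 0.8 (the classic “four-fifths rule” from US employment law), though LL 144 itself mandates reporting the numbers rather than a pass/fail threshold.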
The good news here is that these laws require a concrete set of steps.
Take them AND document what you have done.
The bad news is that what any of these laws actually mean usually only becomes clear from enforcement actions.
Be conservative early on to avoid becoming the target of one, then learn from the misfortunes of others.
Detailed documentation of your AI governance program will help if you have the bad luck to be the test case.
Managing bias risk is complex
StackAware helps AI-powered companies measure and manage their compliance risk through ISO 42001 readiness, including:
Data governance, quality, and provenance measures
Responsible AI architecture and design
Measurement and management
We help you avoid embarrassments and regulatory penalties while accelerating deal closure and boosting sales.
Want to learn more?
¹ Not in WA, MI, San Francisco, or other jurisdictions where body weight, a key component of body mass index (BMI), is a protected characteristic.