AI ethics and AI governance are separate things.
The former should drive the latter, not vice versa. Because at the end of the day, ethics are just business requirements.
Want to save the whales?
Want to make as much money as possible?
Want to turn the entire world into a paperclip factory?
These are business requirements with varying magnitudes of ethical implications. And there are a lot of ways to achieve them.
Frankly, I am not even convinced “AI ethics” should be a separate field.
Ethicists generally say more about “what” and less about “how” (with some exceptions, like Kant).
All other things being equal, harpooning (or saving) whales has the same ethical implications whether you use AI or not.
On the other hand, AI is mostly about “how.”
So is governance:
How do existing and new laws impact use of this model?
How do we reduce the likelihood of regulatory action?
How does this tool fit into my risk appetite?
How does this model achieve ESG goals?
These are “how” questions.
The interplay between ethics and governance gets especially tricky when people talk about AI and “bias.” The only way to eliminate bias from an AI model is to delete the model itself. Disesdi Susanna expertly explains how bias is integral to AI in this YouTube short.
So the key ethical question is which types or levels of bias are unacceptable.
And to answer that question we need to understand what the outcomes of such bias would be, both good (e.g., being able to use AI in the first place) and bad. But that isn’t really different from any other sort of ethical analysis. AI is a way the outcomes are achieved. It’s not magic.
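As a concrete illustration, here is a minimal Python sketch of what “evaluating the outcomes” can look like in practice: comparing a model’s positive-outcome rates across groups. The decisions, group labels, and the question of what gap matters are all hypothetical placeholders, not a real model or a prescribed metric.

```python
# A minimal sketch of evaluating outcomes rather than "bias" in the abstract:
# compare a model's positive-outcome rates across groups.
# The data and group labels below are hypothetical placeholders.

def selection_rates(predictions, groups):
    """Fraction of positive outcomes (1s) per group."""
    totals = {}
    for pred, group in zip(predictions, groups):
        hits, count = totals.get(group, (0, 0))
        totals[group] = (hits + pred, count + 1)
    return {g: hits / count for g, (hits, count) in totals.items()}

# Hypothetical model decisions (1 = approved) and group memberships.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(predictions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                                 # {'a': 0.75, 'b': 0.25}
print(f"demographic parity gap: {gap:.2f}")  # 0.50

# Whether a 0.50 gap is acceptable is the ethical/business decision;
# the governance job is to surface the number, not to make the call.
```

The specific metric here (demographic parity) is just one of many; the point is that the output is a concrete outcome a decision maker can weigh, not an abstract verdict on “bias.”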
Unfortunately, AI governance folks get wrapped around the axle talking about “bias” as if it were inherently bad. They should focus on evaluating the outcomes that result from it, and communicating those holistically to the business decision maker. Of course they can and should make ethical recommendations in the process.
But they shouldn’t try to drive the decision themselves.