AI committees (and committees in general) are rather in vogue these days.
It’s certainly a good idea to ask for input from stakeholders, but here’s how to avoid a “perpetual brainstorming session” and get stuff done:
1. Designate a clear decision authority
If you stand up an AI governance committee that isn’t able to make decisions, what exactly is it doing? Unfortunately, I find many such bodies are simply set up in a way that prevents them from doing so. Here are my preferred structures for who should make the final call:
Strong 1st choice: single business leader who owns a P&L.
Distant 2nd choice: majority vote, odd number of people on committee.
Even more distant 3rd choice: majority vote, even number of people on committee.
Functionally unworkable, but common: every member (or select ones like legal, compliance, or security) has explicit or implicit veto authority without owning the business outcome.
For the last one, check out this post for why that setup might seem like a good idea but generally isn’t.
And more broadly, if you make decisions by committee, you also have accountability by committee. This usually ends up being no accountability at all.
2. Give firm timelines for (dis)approval
Technology is moving quickly. If your AI governance committee needs 3 months to approve a pilot program, something better might already have been released by then.
Document clear deadlines for responding to requests to do things like:
Test new tools
Train on new data
Make AI-powered applications publicly-facing
It’s always possible to ask for more information or research, but it’s never possible to recover the time lost doing so.
And if your committee just becomes a black hole for requests, risk-averse people will feel frozen and unable to innovate.
The more entrepreneurial ones will get hip to this quickly and just start evading the process, leading to shadow AI.
3. Explicitly lay out risk appetite, ahead of time
If you aren’t comfortable with any risk, you probably shouldn’t get out of bed in the morning. Assuming that isn’t the case, establishing a clear risk appetite for a given project will let people know that you will have their backs if:
An application has an outage
The business suffers downtime
Customers start complaining publicly
The press publishes an angry headline
You suffer a data breach stemming from AI use
Obviously none of these things are desirable, but every business implicitly accepts the risk of these happening.
State what you are willing to accept, preferably in quantitative terms (e.g. no more than a few hours of unplanned downtime per quarter). Doing this retroactively, after an incident, suggests you are just scapegoating people who get unlucky.
An alternative approach: committees as advisory bodies (only)
Another possible model is to have the committee be a purely consultative organization that makes recommendations but has no decision authority itself. This is how OpenAI structured their Safety Advisory Group (SAG). The SAG makes recommendations to the CEO (or designee), who makes the final call.
This is certainly a valid approach.
With that said, the three points above still apply:
Deciding which recommendations to make still needs a clear process.
Same goes for timelines. If you don’t know when you’ll hear back from the committee, no one will bother asking for its opinion in the first place.
If the committee only makes vague suggestions like “ensure ethical use” or “protect user data,” rather than proposing clear thresholds of risk, it will be equally ineffective.
Bottom line: don’t become champions of eternal deliberation
New technologies like AI require proper governance.
But merely standing up a committee doesn’t mean you automatically have it.
Are you concerned about AI risks? Want to know the right way to address them without getting bogged down in bureaucracy? Drop us a line, because that’s exactly what StackAware does: