Who should own AI governance?
AI governance guide: 5 options for managing artificial intelligence risk.
The very concept of AI governance is new enough that it’s not immediately clear 1) what it is or 2) who should do it. To answer #1, I define it as the practices and frameworks that allow for:
value delivery with systems that are not rules-based; and
managing and optimizing the associated risk.
I may refine that definition over time, but it’s a start.
#2 is the subject of this post.
I have seen a variety of approaches in the organizations StackAware has worked with or spoken to, and I think there are 5 common options.
I am intentionally excluding cross-functional business leaders from this list. These people are accountable for overall mission accomplishment, of which AI (in general, and governance in particular) constitutes only a part.
They should be the ultimate risk owners for anything related to AI systems and have final authority over relevant decisions. With that said, I wouldn’t expect them to be governance experts, given their relatively broad focus.
So here are some teams who can own AI governance:
1. Security
Pros:
A focus on protecting data confidentiality, integrity, and availability means they are well-equipped to mitigate many downsides of AI use.
Likely have existing tools, techniques, and procedures for onboarding and managing new technologies.
Cons:
Not necessarily well-equipped to address non-security risks of AI use, e.g. environmental, social, and reputation impacts.
Lack of deep technical familiarity with AI could cause mis-prioritization.
Inherently risk-averse posture can slow deployment.
2. Legal/Compliance
Pros:
Adept at navigating a complex regulatory landscape.
Able to parse complex terms and conditions, privacy statements, etc. and distill the “bottom line.”
Cons:
Almost certain to be less technical than security teams and may have difficulty understanding real-world impacts of new systems and products.
Probably even more risk-averse than security teams, and overwhelmingly focused on downsides rather than upsides.
3. Privacy
Pros:
Already versed in existing privacy regulations like the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA), which intersect closely with the new challenges AI brings.
Experienced at maintaining customer trust during the rollout of new tools and features, which can be tricky and requires clear messaging about privacy controls.
Cons:
Prioritizing privacy above all else can have functional impacts, reducing the value delivered to customers and captured by the business.
Not necessarily concerned with organizational trade secrets, which can lead to design compromises that exceed the business’s risk appetite for intellectual property.
4. Data Science / AI
Pros:
Technical fluency means they have a deep appreciation for the capabilities and limitations of AI systems.
As a result, they do not need “interpreters” to explain the real-world impacts of AI deployments and will have a more unvarnished view.
Cons:
Not as knowledgeable about security, compliance, or privacy as other stakeholders.
Likely heavily focused on the business outcome, reducing appreciation for competing priorities (and risks).
5. Dedicated AI governance
Pros:
Ability to specialize leads to deep subject matter expertise regarding AI risks, frameworks, and compliance obligations.
Can serve as neutral arbiter between security, privacy, legal, and other teams.
Cons:
Additional administrative overhead and expense.
Risk of siloing AI governance and development away from other technology and business efforts.
Why not everyone? AI committees
As I’ve written before, having an AI governance committee is not a bad idea. Problems emerge, however, when:
Decision-making processes and authorities are not clear.
Timelines for decisions are undefined or not abided by.
Risk appetite is described as “zero” or left vague.
Having a single pre-existing department own AI governance doesn’t necessarily address any of these problems, but it can streamline some of the bureaucracy inherent to a cross-functional committee.
Struggling with AI governance?
These are the types of questions we tackle frequently in client engagements. Tools and frameworks aren’t useful unless you have a solid foundation of risk management and accountability.
Need help making sense of AI-related risk?