How effective AI governance can improve cyber insurance coverage
Enhancing a key risk transfer option.
Joseph Breen is the latest guest contributor to Deploy Securely. In this article, he tackles the evolving landscape of cyber insurance with respect to AI governance.
As businesses rush to deploy AI, the risks grow—and cyber insurance isn’t keeping up.
Coverage for incidents involving autonomous decision-making, generative AI models, or failures tied to algorithmic outputs is by no means guaranteed. For companies that rely on AI to power customer service, fraud detection, or content generation, this creates a serious blind spot. While organizations may believe they’re covered by cyber policies, ambiguity or even AI-specific carve-outs can leave them exposed in exactly the kinds of scenarios AI introduces.
During our discussion in March 2025, John Czapko, CEO of CyberSecure, highlighted that most insurers still haven’t incorporated AI into their underwriting processes. He explained that while applications remain lengthy—often exceeding 10 pages—they typically include no reference to AI at all. Supplemental documentation about how a company governs its AI use is rarely requested or accepted. This hesitation stems in part from the insurance industry’s reliance on historical data. Underwriters need robust actuarial models to price risk accurately, but for newly deployed AI systems, such models are largely unavailable. Real-world claims involving AI failures are still too few and varied to create reliable benchmarks, leaving insurers in a state of uncertainty.
Adding to this challenge, a recent National Association of Insurance Commissioners (NAIC) report highlights how the rapid evolution of AI far outpaces insurers’ ability to develop comprehensive coverage frameworks, resulting in significant gaps in policy language and risk assessment. Furthermore, GlobalData research released in February 2025 points out that a pervasive lack of AI expertise within insurance firms is hindering adoption of AI-informed underwriting practices. Among more than 120 insurance professionals surveyed in the study, nearly 25% (24.4%) cited lack of AI expertise as the primary hurdle, while customer understanding (21.3%), skepticism about AI readiness (17.3%), and trust issues (13.4%) also featured prominently—deepening the divide between AI deployment and insurance readiness.
As a result, I haven’t yet seen major insurers roll out formal premium discounts tied to effective AI governance. But that doesn’t mean the industry isn’t moving in that direction.
Drawing parallels: how security controls shaped cyber insurance
The cyber insurance market rewards organizations that implement preventive controls like multi-factor authentication, endpoint detection and response, and internationally recognized certifications. In a 2024 blog post, Insicon highlights that achieving ISO 27001 in particular leads to significantly lower premiums, as acknowledged by underwriters such as Aon, Marsh, and Chubb. Similarly, a report by the U.S. Government Accountability Office (GAO) highlights how insurers use such criteria to determine coverage and pricing.
I expect insurers to extend this same logic to artificial intelligence. Companies that can clearly demonstrate how they identify, monitor, and mitigate AI-related risks—especially in areas like model transparency, data governance, and automated decision-making—will be better positioned to negotiate favorable policy terms as insurers evolve their underwriting frameworks to address AI-specific exposures.
Complexities of AI risk: from legal uncertainty to model opacity
AI introduces a host of new risks, from legal uncertainty to model opacity.
Generative AI, in particular, creates liability challenges because its outputs are probabilistic rather than deterministic, making them inherently unpredictable. It can produce content that is misleading or offensive—triggering clear reputational and legal risks—or content that may violate copyright law, a question at the center of the ongoing New York Times lawsuit against OpenAI. If those outputs cause harm, is the liability on the developer, the deployer, or the trainer? In some jurisdictions—such as under Colorado’s SB-205 or the EU AI Act—legal frameworks are beginning to clarify those roles and responsibilities. But elsewhere, the lines remain blurry, leaving insurers to grapple with how to assess and price these emerging exposures.
Traditional cyber policies were not built with AI in mind. Many insurers are now reevaluating how to handle exposures introduced by systems that operate with limited human oversight. In a December 2023 article on the Insurance Thought Leadership website, Christopher Gallo discusses some of the current blockers—specifically, how AI presents “complex risks” that are “difficult to understand and predict,” making it challenging to define coverage terms with confidence. The absence of historical loss data, combined with evolving legal frameworks, only adds to the ambiguity.
Exploring alternatives: captives vs. governance maturity
While claims arising from AI-related incidents likely fall into a gray area in traditional cyber insurance policies—or are even explicitly excluded—companies are beginning to explore alternative risk financing mechanisms, such as captive insurance, to close coverage gaps. However, this approach can be complex and expensive—especially for small and mid-sized businesses.
A more sustainable path? Reduce the actual (and perceived) risk of AI use through demonstrable governance.
The role of standards in building AI trust
This is where AI governance becomes essential—not just for compliance or ethical reasons, but for business continuity and insurability. Frameworks like ISO/IEC 42001, the first global standard for AI management systems, and the NIST AI Risk Management Framework (AI RMF), offer structured approaches for managing AI risk. These frameworks help organizations classify systems by risk level, assess potential impacts, and implement controls to minimize harm. They also emphasize transparency, documentation, and accountability—elements that are critical for both regulators and insurers.
While I don’t know of any insurers offering discounts specifically tied to adoption of ISO 42001 or the NIST AI RMF, that may change. A 2024 report by The Geneva Association suggests that insurers are starting to consider AI governance maturity—such as the presence of human oversight, transparent documentation, and clear accountability structures—as factors that could eventually inform underwriting models. Deloitte has also highlighted growing interest from insurers in understanding how AI is deployed and governed, particularly in high-risk sectors like healthcare and finance.
Preparing for the shift: practical steps for organizations
To prepare, organizations can strengthen their internal AI controls today. Key actions include:
Conducting model risk assessments to understand where failures could occur.
Maintaining audit trails that document how models are developed, trained, and updated.
Engaging legal, compliance, and security teams early in the AI development lifecycle.
Establishing oversight mechanisms for monitoring AI in production and intervening when things go wrong.
These steps not only reduce operational and reputational risk now—they also create the foundation for demonstrating governance maturity to insurers in the future.
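The audit-trail step above can be sketched in code as a minimal append-only event log for model lifecycle events. This is an illustrative assumption only: the `ModelEvent` fields, event names, and JSON Lines file format here are hypothetical choices, not drawn from ISO 42001, the NIST AI RMF, or any insurer requirement.

```python
# Minimal sketch of an AI model audit trail: an append-only JSON Lines
# log of lifecycle events. All field names and event labels are
# illustrative, not part of any governance standard.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class ModelEvent:
    model_name: str
    model_version: str
    event: str          # e.g. "trained", "deployed", "updated"
    detail: str
    timestamp: str = field(default="")

def log_event(path: str, record: ModelEvent) -> None:
    """Append one governance event to the audit log file."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

def read_log(path: str) -> list[dict]:
    """Return all recorded events, e.g. for review or insurer due diligence."""
    with open(path) as f:
        return [json.loads(line) for line in f]

# Usage: start from a clean log, record a training and a deployment event.
Path("audit.jsonl").unlink(missing_ok=True)
log_event("audit.jsonl", ModelEvent("fraud-detector", "1.2.0",
                                    "trained", "retrained on Q1 data"))
log_event("audit.jsonl", ModelEvent("fraud-detector", "1.2.0",
                                    "deployed", "rolled out to production"))
events = read_log("audit.jsonl")
print(len(events), events[0]["event"], events[1]["event"])
```

Even a record this simple answers the questions an underwriter is likely to ask: which model changed, when, and with what oversight; richer implementations would add approvals, data lineage, and evaluation results.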
From cyber to AI: defining the future of insurable innovation
In many ways, this mirrors the evolution of cybersecurity insurance. In the early 2010s, insurers were cautious about covering cyber risk due to a lack of data and standard practices. Over time, they began to recognize and reward companies that adopted controls like encryption, multi-factor authentication (MFA), and incident response plans. The same trajectory is now possible for AI—if the industry has the right signals to evaluate.
Ultimately, as AI becomes a core business function rather than a fringe experiment, the question isn’t whether insurers will react to AI-related risks. It’s how they will do it—and which organizations will be able to qualify for meaningful, affordable coverage. Those that invest early in AI governance may not only protect themselves from current gaps in coverage—they may help define the very standards that shape the future of insurable AI.