To date, it has been rare for insurance policies to contain any reference to AI. Where AI tools or systems have caused injury, damage or loss, cover has often arisen because the loss fell within traditional insuring clauses (and was not expressly excluded). For example, if an AI-enabled system malfunctioned and caused damage or injury, insurers would likely treat this in the same way as any other defective product or operational failure. This mirrors the early days of cyber risk, where losses were unintentionally picked up under legacy wordings.
However, the 'silent cover' position is changing. The insurance industry is moving swiftly to address AI exposures, particularly in circumstances where:
- courts are starting to hold companies responsible for the harms caused by their AI tools;
- the regulatory landscape for AI is rapidly evolving; and
- there is the potential for systemic loss across many insureds simultaneously.
We are now seeing AI exclusions starting to appear in certain types of policy wordings, and the breadth of these exclusions varies widely. These sorts of exclusions may allow insurers to deny cover for claims that:
- arise from AI-generated content (e.g. defamation, IP infringement);
- are linked to AI-driven decision-making errors; and/or
- can be traced to autonomous system outputs rather than human acts.
It is likely that these sorts of exclusions will become more common and will lead to insurers offering AI-specific endorsements or stand-alone cover. Lloyd’s-backed start-ups such as Armilla have already launched dedicated AI liability cover for algorithmic failures (covering issues like AI model errors and 'hallucinations').
In the immediate future, it is likely that many AI-related events will remain covered, absent specific policy exclusions, especially where AI is ancillary to a traditional operational activity. However, companies cannot assume AI is covered just because there is no specific AI exclusion. Silent coverage is starting to be closed off by insurers as claims volumes and systemic risk become clearer.
This tightening of cover intersects directly with D&O risk. As AI exclusions proliferate, boards are increasingly expected to understand what AI systems have been deployed. They are also expected to know where insurance coverage begins and ends, and to demonstrate active oversight of AI risk and disclosures. Insurers and regulators alike are signalling that AI governance failures may be treated as foreseeable and uninsured, rather than accidental.
Insurers are moving quickly to control exposure, and insureds who do not proactively review their liability programs may discover the gap only after a claim arises.
Given the rapidly evolving insurance response to AI risk, organisations should not treat coverage as static or incidental. As a minimum, boards and risk teams should actively map how AI is being used across the organisation, including where it influences decisions, content or outcomes rather than merely supporting workflow. In an environment where insurers are increasingly signalling that AI-related losses may be foreseeable and avoidable, or expressly excluded (absent a specific endorsement or standalone cover), proactive oversight of both AI risk and the insurance response is quickly becoming a key component of effective corporate governance.