When AI fails, insurance may pay – but who is at fault?

If an AI tool gives the wrong advice, who is actually responsible? The company using it? The developer behind it? Or no one at all? This is becoming a real question as tools like Claude and Copilot move into everyday work. Some companies are now insuring AI systems – covering the financial cost when things go wrong. But insurance doesn’t answer the harder question: who is at fault? I spoke to experts across security, strategy, and law to understand how this is actually being approached. The answer is less clear than you might expect.
When AI fails, insurance may pay – but who is at fault?

# Claude and other AI tools are now embedded in everyday work, but legal responsibility has not caught up with their us

image

Insurance-backed AI systems raise questions about responsibility when automated agents give incorrect or harmful advice

If an AI agent gives the wrong advice, who is responsible – the company using it, the developer behind it, or no one at all?

A new type of insurance is set to test the question in practice.

AI platform ElevenLabs has introduced insurance-backed protection for its AI voice agents, enabled through certification from the Artificial Intelligence Underwriting Company. The system is based on large-scale testing of how these agents behave before they are deployed into business environments.

This article looks at the problem through three perspectives: how AI systems fail, how they behave in practice, and who is held accountable.

How AI risk splits across systems, users, and law

“If an AI voice agent gives wrong advice or misleads a customer, it can be very serious,” says AI security leader Hammad Atta, “from financial loss to reputational damage, and even legal exposure in some industries.”

The risk of impersonation, he adds, “is very real. The technology can now replicate voice and behavior in a very convincing way.”

But is insurance the answer?

“Insurance doesn’t really shift day-to-day safety thinking,” says strategic AI advisor Avron Welgemoed. “It just adds an imagined layer of financial protection without changing the ‘let’s try this’ culture that is common in productivity setups.”

And from a legal standpoint, the arrival of insurance does not resolve a more fundamental question about accountability.

“There is currently a high degree of uncertainty regarding legal liability for AI harm,” says Dennis G. Jansen, AI-native Chief Legal Officer at J-Law. “We have no clear, universal standard yet.”

Welgemoed warns that insurance could worsen that uncertainty: “There is absolutely a risk that insurance normalises failure. There’s a real moral-hazard risk. If leaders think they’re insured, they can become less vigilant.”

image

From left, Avron Welgemoed, strategic AI advisor; Hammad Atta, AI security leader; Dennis G. Jansen, Chief Legal Officer of J-Law

Why AI testing breaks in the real world

AI insurance schemes like AIUC-1 rely on extensive pre-deployment testing, where systems are run through thousands of simulations designed to identify potential failures.

“When companies say ‘tested,’ it usually means they have run the system through structured scenarios, edge cases, safety checks, and sometimes adversarial prompts,” says Atta. “But real-world use is far less predictable.”

“Testing does not mean the system won’t behave unexpectedly — just that we have seen some of the ways it can fail,” he says. “[AI agents] can still hallucinate, misunderstand intent, or take actions they should not.”

The problem becomes clearer once AI systems are used in everyday environments, where behaviour is open-ended and unpredictable.

“A lot of issues only show up in messy, real-world conditions,” Atta says. “Real users do not behave like test cases — they jump between topics, reuse conversations, and interact over time.”

Welgemoed agrees, noting: “Real-world behaviour emerges from how people actually use it in messy, changing workflows, and no amount of pre-deployment testing fully captures that.”

image

AIUC-1 is the first standard for AI agents, covering safety, security, privacy, reliability and accountability

Risks we don’t fully understand yet

The gap between testing and reality is where the biggest unknown risks appear.

Welgemoed suggests the industry may be moving too quickly. “I see parallels between AI insurance and other industries where risk was financialised before being fully understood – think early cyber insurance and the first wave of autonomous-vehicle coverage. The same pattern is playing out here.

“We’re financialising a risk we still don’t fully understand and probably can’t predict.”

In his view, complexity increases when AI is no longer limited to narrow use cases. “Tightly scoped AI solutions are much easier to define,” he says. “But for those broad productivity tools, like Claude or Microsoft CoPilot, that most businesses rely on, ‘safe enough’ is harder to pin down.”

One of the key risks, according to Atta, only appears once systems are in constant use. “Overtrust is a big issue,” he says. “People assume the system understands more than it actually does.”

“AI agents can gradually drift in behavior or reasoning,” he says. “This kind of cognitive degradation can lead to unpredictable outcomes.”

Who is responsible when AI goes wrong?

image

Contracts and agreements often determine responsibility for AI failures before any court decision. Unsplash / Radission US

If some of the most serious risks only appear once systems are in the wild, the question of responsibility becomes harder to anchor.

Welgemoed takes a direct view. “The buck still stops with the people who built, bought, and deployed it, not the AI itself.”

Jansen says accountability is often set long before anything goes wrong. “Because AI providers frequently cap their liability in their Terms of Service, the liability often falls entirely on the company deploying the AI, regardless of actual fault,” he says.

In practice, he adds, “vendor contracts are your absolute first line of defense.”

He argues the system cannot be reduced to a single point of blame. “I advocate for a shared responsibility model based on control,” he says. “All parts of an AI-based ecosystem typically need to have more awareness and attention.”

That includes not only developers and companies, but also users interacting with AI systems in everyday contexts.

Pricing risk without defining blame

“Insurance creates a financial backstop, but it doesn’t necessarily clarify the underlying legal fault,” says Jansen.

In some cases, it can even add complexity to disputes, “because insurers bring their own definitions of ‘negligence’ and ‘defective products’,” he says.

This means responsibility can shift depending on how contracts, policies, and legal frameworks interact across different jurisdictions.

Despite this, insurance still plays a role in enabling adoption by reducing perceived risk. “Specialized AI insurance can support taking risks, ease negotiations, and thereby enable AI adoption,” Jansen says.

The legal system catching up to AI

image

Courts are still determining how to assign responsibility for AI-driven decisions and harm. Unsplash / Tingey Injury Law Firm

The deeper question is how courts will interpret responsibility when AI systems act with a degree of autonomy.

Jansen says the law is likely to evolve in different directions rather than settle on a single rule.

One possibility is stricter product liability, where harm caused by AI is treated like a defect in design or warning failure.

Another is treating AI output more like regular published content, where responsibility sits with whoever controls the final output or publishes it.

According to Jansen, traditional legal ideas struggle with systems that do not behave consistently. “If you ask the exact same AI the exact same question three times, you might get three slightly different answers,” he says.

Because of that unpredictability, legal responsibility remains difficult to pin down in practice.

Where this is heading

As AI systems become more widely used, disputes over responsibility are likely to become more common.

“When an AI agent makes a costly error, users will sue deployers and deployers will sue developers,” says Jansen.

He says the law is moving towards a more flexible approach, where responsibility would depend on how much control someone actually has over the AI. “A logical way to regulate liability could be to align the strictness of liability with the amount of control that is possible.”

Insurance may help absorb the cost when things go wrong. But when an AI agent gives the wrong advice, the question remains unresolved – whose head will be on the chopping block?


Write a comment
No comments yet.