Insurance Bad Faith Claims in the Age of A.I. Jim

Insurance Law360
January 29, 2018

By Dennis C. Anderson

On the evening of Dec. 23, 2016, at seven seconds after 5:49 p.m., the holder of a renter’s policy issued by upstart insurance company Lemonade tapped “submit” on the company’s smartphone app. Just three seconds later, he received a notification that his claim for the value of a stolen parka had been approved, wire transfer instructions for the proper amount had been sent and the claim was closed. The insured was also informed that before approving the claim, Lemonade’s friendly claims-handling bot, “A.I. Jim,” cross-referenced it against the policy and ran 18 algorithms designed to detect fraud.

In a blog entry posted the following week, Lemonade characterized the three-second claim approval as a world record. Others called it a publicity stunt. And a year later, it’s certainly old news in the insurance industry. But there is no dispute that A.I. Jim’s light-speed claim approval illustrates how “insurtech” companies — tech-oriented startups in the insurance sector — are sidestepping traditional insurers by using technology to reach customers, sell insurance products, and process insurance claims.

Insurtech represents a small sliver of the overall insurance industry, but its explosive growth is drawing a great deal of attention and capital investment, as well as a growing slice of market share. Algorithms are nothing new in the insurance industry, but historically they have been used primarily for risk assessment. The rise of insurtech, and the sector’s heavy use of algorithms in the claims-handling process, is raising questions about how traditional insurance law applies (or doesn’t) to this new paradigm, in which functions traditionally carried out by human beings are increasingly handled by bots like A.I. Jim and the algorithms that power them.

Take insurance bad faith, for instance. The basic framework of an insurance bad faith claim is that the policy entitles the insured to certain benefits, and those benefits have been unreasonably withheld by the insurer. Such claims can be brought under the common law duty of good faith and fair dealing, but are commonly brought under insurance bad faith statutes that prohibit specific conduct, such as denying a claim without explanation, or without first conducting a reasonable investigation.

The threshold prerequisite for an insurance bad faith claim is that the underlying insurance claim has been denied. When the denial results from a traditional insurer’s human-powered claims examination process, the human beings involved typically leave a trail of emails, reports, and other documents that explain their process. But suppose a claim is denied by a bot rather than a human. What then?

The modern inclination to trust computers to do things right — especially things we don’t understand ourselves — may deter bad faith claims when the denial decision is made by a computer. Indeed, to the casual observer, algorithmic processing of insurance claims might seem like the gold standard in objectivity and even-handedness. But that’s not necessarily the case.

In her book Weapons of Math Destruction, mathematician Cathy O’Neil argues that people are often too willing to trust mathematical models because they believe that math will remove human bias, when in fact, algorithms may only be as even-handed as the people who create them. Media outlets describe this phenomenon as “algorithmic bias,” and many experts believe we are only in the early stages of understanding it. As the concept of algorithmic bias gains acceptance, the popular perception that algorithms are fair and objective will likely erode. And in a marketplace where insurers are often regarded as making money by denying claims, policyholders may suspect that algorithms built by insurers are biased in favor of denying claims.
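To make the concern concrete, consider a toy sketch, written in Python and not based on any insurer’s actual system, of how bias embedded in historical claim decisions can pass straight through to a model trained on them. The groups, rates and data below are invented purely for illustration:

```python
# Toy sketch of "algorithmic bias" in claims scoring.
# The groups, rates and data are invented for illustration only.
import random

random.seed(42)

# Simulated claim history: every claim is equally likely to be legitimate
# (90%) regardless of group, but past human reviewers denied an extra 20%
# of Group B claims for reasons unrelated to merit.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    legitimate = random.random() < 0.90
    biased_extra_denial = (group == "B") and (random.random() < 0.20)
    denied = (not legitimate) or biased_extra_denial
    history.append((group, denied))

# A naive "model" trained on those labels simply learns each group's
# historical denial rate and uses it to score new claims.
def learned_denial_rate(g):
    decisions = [denied for (grp, denied) in history if grp == g]
    return sum(decisions) / len(decisions)

for g in ("A", "B"):
    print(f"Group {g}: learned denial propensity = {learned_denial_rate(g):.1%}")

# Approximate output:
#   Group A: learned denial propensity = 10.0%
#   Group B: learned denial propensity = 28.0%
# The underlying legitimacy rate was identical for both groups; the model
# inherited the bias from the labels it was trained on.
```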

Algorithmic claims handling also presents practical challenges in a litigation setting. Attorneys defending insurance bad faith claims routinely rely on information gathered and analyzed by human claims examiners to show how the facts of the loss justify a denial of the claim. The humans involved in the process are available to explain and defend their decisions, and can usually point to notes, emails and other documents for support. But when a bot denies a claim, an insurer’s legal team may face the challenge of explaining to a jury how the bot arrived at that decision, and persuading jurors that they should trust the bot’s impartiality. In other words, they may have to explain in layperson’s terms how the underlying algorithms work, in hopes of persuading human jurors that a computer-generated bot acted in good faith.

As if that prospect weren’t daunting enough, consider the complexity of the algorithms themselves. Several experts have opined that the algorithms we rely on in everyday life are growing so complex that even their creators can’t understand them. In a 2016 commentary in Forbes magazine, Kalev Leetaru explained the many ways in which the complexity of algorithms can outpace the forward-thinking abilities of their human creators, leading to unintended, even tragic, outcomes. Given the boundless factual complexity of insurance claims, the likelihood of unintended outcomes in bot-reviewed claims seems great.

Yet another layer of uncertainty is created by the fact that algorithms are not static. One of the great strengths of a good algorithm is its ability to “learn” from its experiences. But what a bot will learn is not always clear — or positive.
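As a toy illustration, again hypothetical and not modeled on any insurer’s system, consider a fraud threshold that updates itself based on the feedback it receives. If that feedback is skewed, the threshold marches steadily in one direction without anyone changing a line of code:

```python
# Hypothetical sketch of "learning" drift: a self-updating fraud threshold.
# The rule, scores and feedback stream are invented for illustration.

threshold = 0.50        # claims scoring above this are treated as suspicious
LEARNING_RATE = 0.01

def update(threshold, score, was_actually_fraud):
    """Nudge the threshold toward whatever the latest feedback suggests."""
    if was_actually_fraud and score < threshold:
        return threshold - LEARNING_RATE   # missed fraud: get stricter
    if not was_actually_fraud and score >= threshold:
        return threshold + LEARNING_RATE   # false alarm: get more lenient
    return threshold

# If the feedback stream is skewed (say, only confirmed-fraud reports ever
# reach the bot), the threshold ratchets downward and the bot grows ever
# more suspicious of ordinary claims.
for score, was_fraud in [(0.10, True)] * 40:
    threshold = update(threshold, score, was_fraud)

print(f"Threshold after skewed feedback: {threshold:.2f}")   # roughly 0.10
```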

Perhaps the most highly publicized example of a bot run amok is “Tay,” a chatbot designed by Microsoft as an experiment in “conversational understanding.” When Microsoft launched Tay on Twitter in March 2016, it was supposed to learn to engage in “casual and playful conversation” by interacting with other Twitter users. Learn Tay did, but what it learned was neither casual nor playful. Within 24 hours, Tay had learned to parrot the racist, anti-Semitic, misogynist rants showered on it by Twitter trolls. Microsoft pulled Tay down, deleted the worst of the tweets and apologized.

Tay is a high-profile example of a flawed bot that was exposed to a wave of negativity in what some believe was a coordinated attack. But that experience has become a cautionary tale of how machine learning can run off the rails. In the context of insurance claims, the risk that an initially fair and impartial bot could develop an unfair bias over time cannot be dismissed. In the blog post announcing A.I. Jim’s world-record claims processing time, Lemonade noted that “A.I. Jim is still learning” under the guidance of “real Jim,” Lemonade’s chief claims officer, Jim Hageman.

Human supervision of a bot’s education is surely well-advised. Insurtech companies that launch a bot and leave it to its own devices may find themselves exposed to bad faith claims because their bot fell in with the wrong crowd. And if their bots apply the bad lessons they have learned too broadly, insurers may find themselves grappling with claims of “institutional bad faith” that implicate their practices and procedures on a broad scale, not just on a claim-by-claim basis. The litigation costs could pale in comparison to the public-relations costs.

Voices in the insurtech industry have played down the risks of bot-based claims handling. But the risks are not merely hypothetical. A cursory (and admittedly unscientific) survey of online ratings for insurtech companies turns up numerous complaints about claims denied without explanation, or without investigation. Those are precisely the types of conduct prohibited by the model Unfair Claims Settlement Practices Act, which has been adopted in one form or another by almost all 50 states.

As noted above, algorithms have long been a fixture in risk assessment, but using them in the claims-handling process poses new risks and challenges, including the risk of bad faith claims that could be very difficult to defend. As insurance companies know better than anyone, identifying risk is the first step in avoiding it. There are indications that the creative minds that hatched the insurtech model are also leading the way in addressing the risk of insurtech bad faith claims. One way to do that is to limit a bot’s authority to approving only clear-cut claims, like the case of the stolen parka, and to program it to route dubious claims into human hands rather than denying them.
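One way to picture that design is a simple triage rule under which the bot may approve but never deny: a claim is paid automatically only when every automated check comes back clean and the amount is small, and everything else is handed to a human examiner. The sketch below is a hypothetical illustration, not Lemonade’s or any insurer’s actual logic; the field names and thresholds are invented:

```python
# Hypothetical claim-triage sketch: the bot may approve, but never deny.
# Field names, thresholds and checks are invented for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float            # claimed loss in dollars
    covered_by_policy: bool  # did the automated coverage check pass?
    fraud_score: float       # 0.0 (clean) to 1.0 (highly suspicious)

AUTO_APPROVE_LIMIT = 2_000.00   # only small, clear-cut claims qualify
FRAUD_SCORE_CUTOFF = 0.10

def triage(claim: Claim) -> str:
    """Approve only clear-cut claims; route everything else to a human."""
    if (
        claim.covered_by_policy
        and claim.amount <= AUTO_APPROVE_LIMIT
        and claim.fraud_score < FRAUD_SCORE_CUTOFF
    ):
        return "auto-approve"
    # The bot never returns "deny": borderline or suspicious claims go to a
    # human examiner, whose investigation and reasoning can be documented.
    return "escalate-to-human"

print(triage(Claim(amount=900.00, covered_by_policy=True, fraud_score=0.02)))
# -> auto-approve
print(triage(Claim(amount=15_000.00, covered_by_policy=True, fraud_score=0.40)))
# -> escalate-to-human
```

Under a design like this, a denial can only come from a human being, so any later bad faith dispute turns on a human examiner’s documented investigation rather than on the inner workings of an algorithm.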

Referring questionable claims to real people is already part of the game plan for Lemonade. In the same blog post that trumpeted A.I. Jim’s “world record” claim approval, the company noted that “real Jim” — the company’s flesh-and-blood chief claims officer — “is by far the more experienced claims officer,” and that A.I. Jim “often escalates claims to real Jim. That’s why not all Lemonade claims are settled instantly.” Whether other insurtech companies use the same approach is not clear. But considering insurtech’s creative track record, it’s likely that as new problems and risks become apparent, solutions will follow close behind.

Dennis C. Anderson is an associate with Zelle LLP in Minneapolis, Minnesota. 

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
