Four challenges that insurers face when using artificial intelligence in claims

The use of artificial intelligence in claims represents a ‘significant opportunity’ for the sector, said the insurer Ageas in a recent press announcement. And they’re right: expectations are high that artificial intelligence (AI) will produce cost savings in claims management. Yet AI could also introduce some slippery slopes for insurance claims. Here are four to watch out for.

Challenging claims decisions

It’s common now for insurers to justify an underwriting decision by telling the customer that it’s because of new data or new systems. Will the same thing happen in claims?

Remember that views differ about how correct some claims decisions are. That’s one reason why brokers exist, and why organisations like the UK’s Financial Ombudsman Service (FOS) exist too. Given FOS uphold rates on some lines of personal insurance, it’s clear that insurers don’t always get it right.

So how will an insurer using artificial intelligence in claims respond to a reasonable challenge from the policyholder (or the likes of FOS) to the correctness of a claims decision? How will the insurer explain the decision-making process within its AI? As the Information Commissioner has made clear to insurers, big data is not a game played by different rules. Fairness and transparency are two such rules.

Will customers trust the results?

Research by a leading insurer found that customers fear discrimination from artificial intelligence more than from a human operator. Unfortunately, they haven’t published the research, so I can’t say more than that. Nevertheless, it points to an important question: the decision on a claim might be fast and clever, but will it be trusted? Perceptions matter.

Moving fraud management beyond human understanding

To what degree can you let artificial intelligence replace human intelligence in an emotive issue like insurance fraud? Of course human judgement is not perfect, but then neither are artificial judgements (think of the historical data they’ve been trained on). The public will view someone accused of fraud by AI differently from someone similarly accused by a human.

If you’re a claims fraud strategist, how will you assess the significance of AI-driven correlations for particular claimants? It’s not something you can leave to the AI supplier.

Remember that if your rules for defining fraud are narrower than those used in courts of justice, then your handling of fraud judgements will need to be beyond reproach. The test will come when someone of public note is accused of insurance fraud and decides to take it public.

Exploiting your AI investment

Artificial intelligence is an expensive investment, and there will be pressure on the insurer to generate as much value from it as possible. So the possibilities presented by claims optimisation will undoubtedly be considered at some point. How do you weigh up the pros and cons of such a controversial practice?

If your firm is happy to optimise premiums, it might seem only a small step to optimise claims. It will, however, be a very significant one, the detail and implications of which will need to be scrutinised by senior directors. Will that happen, and if so, against what ethical criteria?

Artificial intelligence presents insurance claims with some powerful tools, and the choice of which ones to adopt, and how to adopt them, will present claims directors with some complex decisions. And woven through those decisions will be the ethical questions that the deployment of AI often raises. How they are addressed will be a measure of the ethical leadership in insurance claims today.

Boost what you know about the ethics of insurance

...and stand out as the person who understands trust

Join me every week for posts full of insight, guidance and challenge.