May 20, 2019

Why the FCA should not use claims complaints as a value measure

Complaints are important. Not just for resolving an individual problem, but for tracking the health of the policyholder/insurer relationship. And the UK regulator is now proposing to introduce claims complaints data as one of its new value measures. On the face of it, this makes sense, yet it is likely to have serious consequences for policyholders.

Some history first, and then a look into the near future. Five years ago, the FCA conducted a thematic review of household and travel claims. And in the main, they were pretty relaxed about what they found.

Yet one statistic from that report has remained lodged in my mind. It showed that even when policyholders had had a successful claim, 15% still wanted to make a complaint. And of those, half then went on to actually make one.

So 1 in 7 of successful claimants still felt unhappy enough with matters that they wanted to complain. That just seems incredible. Even with their claim successfully settled, the friction involved was enough for them to still want to go to the effort of submitting a complaint. There are two elements here: they could have been unhappy with the service, with the settlement, or with both.

Further on in that report was another telling finding. During their investigation, the FCA had looked at an insurer’s best practice guidance and found a striking instruction to claims staff. Their ‘best practice’ was to settle a particular category of claim according to whether the claimant was likely to make a formal complaint or not.

Optimising Outlays

Let’s look into the near future now, starting with a paper by Rick Swedloff, a professor at Rutgers University Law School in the US. His paper ‘Regulating Algorithmic Insurance’ looks at how insurance regulators might respond to the raft of new underwriting and claims practices being introduced as a result of big data analytics.

The section on claims explores the influence that algorithms could have on how claims are settled. He talks of claims departments exploring the use of artificial intelligence (AI) to find out how settlement offers might be optimised. The aim is to reduce the outlay without triggering negative feedback, in this case a complaint.

Let’s look at an example of how this would work, using high volume claims of mid to low value. The ‘Claims AI’ would make, say, 1,000 offers, each marginally lower than usual, and watch for any signs of negative feedback. If none emerged, it would log two things: its claims complaints target had been met, but its claims ratio could still be improved.

So the Claims AI takes another 1,000 claims and arranges for their offers to again be marginally lower. Again it watches for any negative feedback like complaints. If none is forthcoming, it will repeat the process on further batches of claims, until it senses a tipping point in that feedback, with complaints starting to come in. At this point, the Claims AI raises settlements a little so as to stay just below those triggers of negative feedback.
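To make those mechanics concrete, here is a minimal sketch in Python of that feedback loop. Everything in it is my own illustrative assumption: the batch size, the 2% step, the complaint tolerance and the toy complaint model are invented for the example, not drawn from any real insurer’s system.

import random

# A minimal sketch of the batch-by-batch loop described above.
# All names, thresholds and the complaint model are illustrative
# assumptions, not any insurer's actual system.

BATCH_SIZE = 1000            # "another 1,000 claims" per round
STEP = 0.02                  # shave a further 2% off offers each round (assumption)
COMPLAINT_TOLERANCE = 0.01   # how much extra complaining goes unnoticed (assumption)

def complaint_rate(offers, fair_values):
    """Toy stand-in for observed feedback: the further an offer sits
    below the fair value, the likelier a complaint."""
    complaints = 0
    for offer, fair in zip(offers, fair_values):
        shortfall = max(0.0, (fair - offer) / fair)
        if random.random() < shortfall * 0.5:   # invented probability model
            complaints += 1
    return complaints / len(offers)

def optimise_outlay(fair_values, baseline_rate):
    """Keep lowering offers batch by batch until complaints tick up,
    then settle just below that tipping point."""
    discount = 0.0
    while True:
        batch = random.sample(fair_values, BATCH_SIZE)
        offers = [fair * (1 - discount - STEP) for fair in batch]
        if complaint_rate(offers, batch) > baseline_rate + COMPLAINT_TOLERANCE:
            return discount   # tipping point sensed: stay just below it
        discount += STEP      # no pushback, so shave the next batch further

if __name__ == "__main__":
    claims = [random.uniform(500, 5000) for _ in range(20_000)]
    print(f"Learned safe discount: {optimise_outlay(claims, baseline_rate=0.005):.0%}")

Note that nothing in that loop refers to what the claims are actually worth to the claimant; the only signal it optimises against is whether anyone complains.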

A Tale of Two Outcomes

The result for the insurer is no perceptible change in complaint levels, but an improved claims ratio. For the policyholder, the result is rather different: a settlement lower than they were entitled to.

Let’s bring in that FCA report from 2014 again. That insurer’s best practice guidance stated that claims were to be settled at below the correct amount if staff felt the claimant would not complain. This is simply the analogue equivalent of the Claims AI process I’ve just described.

Am I extrapolating too much from that one case? I don’t think so, for I’m also reminded of those people who had had a successful claim but still wanted to complain. And what starts to come together, from what I hear around the market, is a development referred to as ‘willingness to accept’.

Willingness to Accept

Just as underwriting has willingness to pay (aka price optimisation), so claims has willingness to accept (aka claims optimisation). The data that insurers collect about us allows them to move the notion of a claims settlement away from what the loss was worth, and towards what the insurer believes each of us, in our own individual way, would be willing to accept in settlement.

And that reference to ‘each of us in our own individual way’ is important. For the Claims AI will sense that a particular type of person could be more willing to accept a lower settlement. This may be because of their character, their credit position, their family circumstances, and so on. So your settlement is personalised, but not necessarily in the way you would like it to be.
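Here is a hedged sketch, again in Python, of how that personalisation might look. The per-claimant acceptance models, the 90% acceptance threshold and the 70% floor are all hypothetical, invented purely to illustrate the mechanism.

def personalised_offer(fair_value, accept_probability, floor=0.7):
    """Toy illustration: find the lowest offer, down to a floor, that a
    per-claimant model still predicts this person would accept.

    accept_probability(offer_ratio) is assumed to be a model trained on
    the claimant's personal data (entirely hypothetical here)."""
    offer_ratio = 1.0
    while offer_ratio > floor and accept_probability(offer_ratio - 0.05) > 0.9:
        offer_ratio -= 0.05   # this claimant is predicted to accept less
    return round(fair_value * offer_ratio, 2)

# Two identical losses, two invented claimant profiles
pushes_back = lambda ratio: 0.2                           # predicted to resist any shortfall
time_poor   = lambda ratio: 1.0 if ratio >= 0.8 else 0.3  # predicted to accept down to 80%

print(personalised_offer(10_000, pushes_back))   # 10000.0 : full value
print(personalised_offer(10_000, time_poor))     # 8000.0  : same loss, lower offer

Two claimants with identical losses come away with different settlements, for no reason other than what the model predicts about their willingness to push back.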

As I’ve said before, claims optimisation is a dangerous practice for insurers to engage in. It directly feeds into the notion held by some that insurers always offer less than a fair settlement. It could also represent the Achilles’ heel of insurance fraud initiatives (more here).

“We are All Vulnerable Now”

It is the sort of practice that would set off all sorts of alarm bells for consumer groups and regulators looking at how vulnerable consumers are handled. It doesn’t take a rocket scientist to see how ‘willingness to accept’ connects pretty directly with vulnerability. If claims optimisation takes hold in UK insurance then, to quote the Chairman of the UK Competition and Markets Authority at a seminar I attended earlier this month, “we are all vulnerable now”.

Now I know several claims directors who would, hand on heart, state categorically that they wouldn’t do that sort of thing. Which is great of course, but the danger doesn’t necessarily recede as a result. For how you train your Claims AI, how you test it, how you monitor its performance, how you let it learn for itself: all these are deeply technical questions that data scientists make decisions on each day.

It would be surprisingly easy to set parameters for your Claims AI that told it not to let complaints rise, and also to target a certain claims ratio. And off goes the Claims AI, to learn for itself a way through a forest of claims to arrive at a harmonious balance between those two parameters.
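Those two parameters need be nothing more elaborate than a short objective function. Here is a sketch of how innocuously they might be written down, with targets and penalty weights that are my own assumptions:

def claims_ai_objective(claims_ratio, complaint_rate,
                        target_ratio=0.60, max_complaint_rate=0.02):
    """Score a candidate claims strategy. The targets and the penalty
    weight are illustrative assumptions, not real parameters."""
    score = max(0.0, target_ratio - claims_ratio)            # reward a lower outlay
    if complaint_rate > max_complaint_rate:
        score -= 10 * (complaint_rate - max_complaint_rate)  # punish rising complaints
    return score

An optimiser handed that objective will do precisely what this post describes: push settlements down to the point just before complaints start to rise. No one has to instruct it to underpay anyone.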

It may not be what the firm wanted its Claims AI to do, but unless it is overseen actively, comprehensively and ethically, it might all too easily end up doing it anyway.

A Value Measure?

So, back to the FCA. It recently announced that it would be introducing claims complaints as a percentage of claims as one of its value measures. And it doesn’t take a rocket scientist, let alone a data scientist, to see a target complaints level being set as a parameter in the Claims AI. And then another target… well, I think you’ve got the picture.

Everyone’s attention may be on pricing at the moment, but it is important for claims people to remember that exactly the types of practices causing problems in underwriting seem to be appearing in claims. If price optimisation exposes the sector to accusations of unfairness and discrimination, then claims optimisation does so too, and more.

Insurers need to take a critical look at their plans for artificial intelligence in claims. They need to understand the questions that need asking, and learn what differentiates a good answer from a poor one. Claims is too important to become buried in the technicalities of artificial intelligence. It encompasses vital services that policyholders pay for in the hope that they won’t need to make use of them. Let’s make sure that if they do, they come away from the experience with a fair settlement and a satisfactory service. It’s not a lot for them to ask.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.