How good are your firm’s algorithms? Can you prove it?
The use of algorithms in core insurance functions like claims and underwriting is growing steadily. And the claims about what they’re achieving are becoming ever more confident. Words like ‘transformational’ are now commonplace. Yet how credible are such claims? Can they be trusted?
Trust in your algorithm matters because a lot of people will rely on it. There’s the claimant whose claim is being assessed for fraud or settlement. There’s the regulator interested in whether it is making underwriting decisions that are fair and non-discriminatory. And there’s the investor, who wants to know that their investment is being well used and is secure.
You can also go one step back from your own algorithms and think of how you weigh up the trustworthiness of the claims that an external algorithm supplier is making to you. Is it that good? Is it worth paying that much for?
Addressing that Trustworthiness Question
These are questions worth thinking about because there’s a person on your firm’s SMCR responsibility map who is now personally accountable for knowing the answers. Let’s look at how you can start fashioning those answers.
There are examples from comparable sectors that illustrate how this question of the ‘trustworthiness of the algorithm’ can be addressed. The one I’ll choose is algorithms being used in medical evaluations. After all, life and health insurers will be encountering these more and more.
The pharmaceutical sector has a well-established approach to evaluating the trustworthiness of claims being made about drugs. It has four phases – safety (initial testing on human subjects), proof of concept (estimating efficacy), randomised controlled trials (comparison against existing treatment in a clinical setting), and post-marketing surveillance (for long-term side effects).
A Phased Evaluation
Leading academics are now proposing a similar structure for algorithms, one that has come out of their use in medical evaluations. Here’s how the four phases are being described:
1: Digital testing – how does the algorithm perform on test cases/data;
2: Laboratory testing – how does the algorithm compare with human decisions (user testing etc);
3: Field testing – putting the algorithm through controlled trials of impact (measuring the benefits and the harms);
4: Routine use – monitoring the algorithm for ongoing problems.
How might this phased approach be used by an insurance firm? Well, it could be used as a check on the claims being made by external suppliers about their algorithms. To what phase have the claims about their algorithm been tested? And what level of certainty attaches to those results?
Then there’s the development and evaluation of your own in-house algorithms. To what phase should an algorithm’s testing be taken before it can be considered by the project sponsor, by the senior management team, by the board?
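To make that concrete, here is a minimal, hypothetical sketch (in Python) of how a firm might record each algorithm against the four phases in an internal register, so that “to what phase has this been evidenced?” has an auditable answer. The class names, fields and example values are illustrative assumptions, not an established standard.

```python
# A minimal, hypothetical sketch of recording the four evaluation phases
# against each algorithm in an internal register. Names and fields are
# illustrative assumptions, not an established standard.
from dataclasses import dataclass, field
from enum import IntEnum


class EvaluationPhase(IntEnum):
    DIGITAL_TESTING = 1      # performance on test cases/data
    LABORATORY_TESTING = 2   # comparison with human decisions
    FIELD_TESTING = 3        # controlled trials of impact (benefits and harms)
    ROUTINE_USE = 4          # ongoing monitoring in production


@dataclass
class AlgorithmRecord:
    name: str
    supplier: str
    # Evidence held for each phase, e.g. links to test reports.
    evidence: dict = field(default_factory=dict)

    def highest_phase_evidenced(self) -> int:
        """Return the highest consecutive phase for which evidence is held."""
        phase = 0
        for p in EvaluationPhase:
            if p in self.evidence:
                phase = int(p)
            else:
                break
        return phase


# Example: a supplier's claims have only been evidenced up to phase 2.
record = AlgorithmRecord(
    name="claims-fraud-scorer",
    supplier="ExampleVendor Ltd",
    evidence={
        EvaluationPhase.DIGITAL_TESTING: "test-report-2024-03.pdf",
        EvaluationPhase.LABORATORY_TESTING: "user-trial-summary.pdf",
    },
)
print(record.highest_phase_evidenced())  # -> 2
```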
Removing the Hype
A structured approach like this helps remove the hype about an algorithm’s potential and forces the developers to test their claims at increasing levels of sophistication.
Think of it this way. Would you take a drug that had only been tested at phase 1 for safety? It’s unlikely, isn’t it? Yet many of the claims currently being made about algorithms have only been tested as far as phase 1, sometimes phase 2. Would you introduce them into the decision systems for core functions like underwriting and claims?
The regulator is worried that this might already have happened, that inadequately tested algorithms are influencing outcomes for insurance consumers. And one of the signals for this was the Liberty Mutual case, in which software that analysed voice patterns for fraud indicators ended up creating a surge in justified complaints. Only then did the insurer wake up to what had happened.
Now some of you might think that there’s not enough knowledge of algorithms for this level of openness to benefit the sector. Yet is that really the case? Is there not a danger of ‘over-trust’ in the algorithm? This stems from a perception that algorithms are so clever and complicated, so objective in their statistical method, that we can simply trust them to be right.
Intelligent Openness, Please
That line of thinking ignores the nature of statistics, at the heart of which lies mathematical uncertainty. What this phased approach is asking of algorithm suppliers is to measure and explain their own uncertainty about what they claim their algorithm can achieve at its current level of development.
Clearly, in each phase, more is learnt and uncertainty is reduced. It’s a form of what’s called intelligent openness. And research points to this type of confident openness about uncertainty actually increasing trust, not reducing it. It’s what audiences invariably expect.
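As a small illustration of what measuring and explaining that uncertainty could look like in practice, here is a hedged sketch: instead of quoting a bare accuracy figure from phase 1 digital testing, the supplier reports it with a confidence interval. The Wilson score interval used below is one standard choice among several, and the numbers are invented for illustration.

```python
# A minimal sketch of 'intelligent openness': reporting an algorithm's
# measured performance together with its statistical uncertainty rather
# than as a bare number. The Wilson score interval is one standard choice;
# the figures are made up for illustration.
from math import sqrt


def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a proportion (e.g. accuracy on test cases)."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half_width = (z / denom) * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - half_width, centre + half_width


# Phase 1 (digital testing) result: 348 correct decisions out of 400 test cases.
low, high = wilson_interval(348, 400)
print(f"Accuracy 87.0%, 95% CI [{low:.1%}, {high:.1%}] on 400 test cases")
```

Reported that way, the claim carries its own health warning: the fewer the test cases, the wider the interval, and the more tentative the claim should be.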
The great danger is that insurers will feel under pressure to deploy algorithmic systems rapidly, in order to earn a return on the investment that’s been poured into them. Suggesting that such pressures should be ignored seems a bit Canute-like.
Instead, organise a framework to guide decision makers through the phases of an algorithm’s development and deployment, and start using control standards to clearly signal performance expectations. This is not rocket science. After all, insurers do this all the time with human decision makers, through performance management and competency frameworks.
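By way of illustration, here is one hypothetical shape such a control standard could take: a handful of minimum performance expectations, checked before deployment and re-checked during routine use. The metrics and thresholds below are assumptions made up for the example, not regulatory requirements.

```python
# A hypothetical sketch of a 'control standard' for an algorithm: minimum
# performance expectations that must be met before deployment and re-checked
# in routine use. Metric names and thresholds are illustrative assumptions.

CONTROL_STANDARD = {
    "accuracy": 0.85,             # minimum accuracy versus adjudicated outcomes
    "false_positive_rate": 0.05,  # maximum rate of claims wrongly flagged as fraud
    "complaint_uplift": 0.10,     # maximum rise in upheld complaints vs baseline
}


def control_breaches(measured: dict) -> list:
    """Return the list of control-standard breaches, empty if none."""
    breaches = []
    if measured["accuracy"] < CONTROL_STANDARD["accuracy"]:
        breaches.append("accuracy below minimum")
    if measured["false_positive_rate"] > CONTROL_STANDARD["false_positive_rate"]:
        breaches.append("false positive rate above maximum")
    if measured["complaint_uplift"] > CONTROL_STANDARD["complaint_uplift"]:
        breaches.append("upheld complaints rising above tolerance")
    return breaches


# Monthly monitoring figures from routine use (phase 4), made up for illustration.
breaches = control_breaches(
    {"accuracy": 0.88, "false_positive_rate": 0.07, "complaint_uplift": 0.03}
)
print(breaches or "within control standard")  # -> ['false positive rate above maximum']
```

The point is not the particular numbers, but that expectations are written down in advance and checked, just as they would be for a human decision maker.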
Algorithms are here to stay. Yet like other aspects of business performance, they have to show that they’re up to the job expected of them. Welcome to the ‘prove it’ world.