Insurers have always used claims data to understand the future likelihood of one type of policyholder making one type of claim. So should emerging signs of insurers using predictive analytics in underwriting be met with an expectant shrug, or should they raise ethical concerns? In this second post about predictive analytics, I’ll outline why it is a bit of both.
Predictive analytics involves applying mathematical models to a collection of datasets in order to draw forward-looking conclusions from them. Insurers can use it to categorise risks in new and interesting ways and so compete more confidently for those that fit their underwriting strategy. There’s an ethical upside to this: used responsibly, big data and predictive analytics can help reduce insurers’ dependence on some of the risk categories that have attracted criticism on ethical grounds in the past.
Like many an innovation, however, the stress should be on responsible use. Applied without such consideration, predictive analytics has a dark side, becoming a tool not for rational decision making, but for division and exclusion.
If predictive analytics can identify one in ten claims as fraudulent, as has been claimed recently, then one use of such intelligence is to take those indicators of fraud and apply them not just to how new claims are assessed, but to how policies are underwritten at inception and renewal. Some insurers are talking about doing just this, seeing it as a more effective point at which to drive forward the ‘fight against fraud’.
You can’t label a policyholder’s request for cover as fraudulent on the basis of predictive analytics, no matter how high the measured probability. That’s because predictive analytics can only (and only ever will) measure correlation, not causation. In other words, it will give you the probability that certain indicators of fraud are present, but it will not be able to tell you that that particular person, the one whose quotation request is in front of you, is a fraudster. Inferring causation from correlation will take you into unethical territory and put a hole in your Treating Customers Fairly programme below its waterline.
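A quick worked example shows why a high-scoring fraud indicator still can’t brand an individual a fraudster. This is a minimal Bayes’ rule sketch with entirely hypothetical numbers (the base rate and indicator accuracy figures are assumptions chosen only to make the arithmetic concrete, not industry statistics):

```python
# Illustrative Bayes' rule calculation: if a policyholder trips a fraud
# indicator, how likely is it that they are actually a fraudster?
# All three input figures below are hypothetical assumptions.

base_rate = 0.02        # assumed share of policyholders who are fraudsters
sensitivity = 0.90      # assumed P(indicator present | fraudster)
false_positive = 0.10   # assumed P(indicator present | honest policyholder)

# Overall probability that the indicator fires, across both groups
p_indicator = sensitivity * base_rate + false_positive * (1 - base_rate)

# P(fraudster | indicator present), by Bayes' rule
posterior = (sensitivity * base_rate) / p_indicator

print(f"{posterior:.1%}")  # roughly 15.5% under these assumptions
```

Even with a seemingly strong indicator, the low base rate of genuine fraud means the large majority of flagged policyholders would be honest customers, which is exactly the gap between “indicators of fraud are present” and “this person is a fraudster”.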
This can work in two ways. Firstly you can’t refuse to renew a policy just because predictive analytics tells you that that policyholder has several attributes of a fraudster. That’s inferring causation from correlation and leaves the policyholder struggling to find cover on a legitimate basis.
Secondly, you can’t refuse to renew just because a policyholder has done something that predictive analytics says is an indicator of fraudulent activity. The classic example is walking away from a claim. Policyholders walk away from claims for a variety of reasons, one (but only one) of which is because their attempt at insurance fraud has become too risky. Common sense says that you can’t extrapolate from that one correlation to an overall judgement on one particular policyholder.
Insurers set great store by whether someone seeking cover has been refused it in the past. A past refusal usually results in them being ‘no quoted’ by most of the market. So refusing to renew because predictive analytics tells you that there’s a certain probability that that person falls into a certain category you have concerns about is an ‘ethically controversial’ step that should be thought through with very great care.
Predictive analytics is by its very nature an inexact science. It concentrates on what is known about something, ignoring what is not known about it. And the models may be sophisticated, but they are still simplifications of the real world. Underwriters need to proceed with care.