Prediction and the Cold Hand
Several years ago, I looked at how the overall field of analytics can be broken down into four categories:
- descriptive analytics – looking at what happened;
- diagnostic analytics – considering why it happened;
- predictive analytics – judging what will happen;
- prescriptive analytics – judging what should happen and making it happen.
The first two are of course backward looking, while the third and fourth are forward looking.
Yet consider these words from an Oxford professor’s article in Wired, which I wrote about earlier this year…
“These predictive analytics are conquering more and more spheres of life. And yet no one has asked your permission to make such forecasts. No governmental agency is supervising them. No one is informing you about the prophecies that determine your fate. Even worse, a search through academic literature for the ethics of prediction shows it is an underexplored field of knowledge. As a society, we haven’t thought through the ethical implications of making predictions about people - beings who are supposed to be infused with agency and free will.”
If an insurer is using its analytics predictively or prescriptively, then across a cohort of consumers, it will be influencing the very thing it is seeking to analyse. People will do less of one thing or more of another. They will tend to go less often to one place, more often to another. They will tend to engage more often in one activity, less often in another.
In other words, the more you move from predicting the actions of a pool of people, to predicting the actions of individuals within that pool of people, the more you start to influence the very outcomes you’re trying to predict. The very act of prediction changes the predicted. It is not a neutral activity.
Two Sides to Prediction
There’s certainly a positive side to this. Consumers eating unhealthy food less often, engaging less often in risky activities, staying away from less safe areas – all these are good for underwriting results.
Yet there’s certainly a negative side as well. It’s sometimes referred to as the ‘cold hand of insurance’. People become discouraged from doing something outside the usual, and from doing something new, lest their insurance become too expensive, or even unavailable.
Yet if we shy away from doing risky things, the sum of human endeavour will be depleted. New places won’t be explored, new research won’t make life enhancing discoveries, new ideas won’t get shared and tested.
I once came across a paper on insurance, data and Marie Curie. Its central point was that predictive analytics would have labelled her as a very bad risk, to the point perhaps that her work would have been made much more difficult. Yet her scientific work resulted in discoveries that have been hugely beneficial for insurers.
In a recent article, I looked at how insurers assess policyholders for counter fraud purposes. Based on the sentiment sometimes heard within the sector that once a fraudster, always a fraudster, anyone with a prior conviction (even if ‘spent’ under legislation) will be rated and serviced accordingly. Yet the rehabilitation of offenders is very much in insurers’ interests. They want past offenders to be rehabilitated back into society and to lead a conviction-free life.
Feedback Loops
My point is that it is very easy for analytics to create feedback loops, whereby decisions taken at the level of the individual can, at the community level, easily undo what the insurer was striving to achieve. Your analytics nudges what you’re looking for towards becoming a self-fulfilling prophecy.
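This dynamic can be sketched as a toy simulation. The model below is purely illustrative and not from the article: the insurer predicts each period’s risk from the last period’s observation, prices accordingly, and policyholders respond by cutting back the priced activity, so the risk the insurer then observes drifts away from what it originally measured. The function name, pricing rule, and all parameters are hypothetical.

```python
def simulate_feedback(rounds=6, initial_risk=0.10, response=0.3):
    """Toy feedback loop: the act of predicting and pricing risk
    changes the behaviour being measured. All numbers hypothetical."""
    observed = initial_risk
    path = [round(observed, 4)]
    for _ in range(rounds):
        prediction = observed                 # model predicts what it last saw
        premium = prediction * 1000           # hypothetical pricing rule
        # Behavioural response: the higher the premium, the more
        # policyholders cut back the priced activity, lowering the
        # risk the insurer observes in the next period.
        observed = observed * (1 - response * (premium / (initial_risk * 1000)))
        path.append(round(observed, 4))
    return path

print(simulate_feedback())
```

Each round, the observed risk falls, not because the underlying population changed on its own, but because the prediction itself altered behaviour – the mechanism described above.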
Insurers need to follow the advice of Professor Véliz and think through the social and ethical implications of making predictions about people. It’s not just their autonomy that is impacted. It is the endeavour that comes out of the society of which they are all part, of which insurance people are all part.