The Institute’s guide is called ‘Supporting customers with mental health problems’. Inside is exactly what it says on the tin – lots of ways in which insurance people across a firm can work a bit smarter for people with mental health problems, as part of their ‘customer duty’ work.
It also reads as a gesture – here’s how we can help you help the people who matter to us; in return, please work with us on making this happen. Just the sort of partnership that business and society should come together on.
So all’s rosy then? Unfortunately, no. The problem for the Institute and others like it is that the approach around which their guide has been built has a shrinking shelf life. The way in which insurance is changing means that the very basis upon which insurance people and people with mental health problems engage is shifting.
While the Institute’s guide will deliver lots of value in the short term, that value will diminish over the mid to long term. And I believe the Institute is clued in enough to realise this. So this second strand of their campaign (the ‘engage with us’ part) will need a third strand ready to be brought in as and when needed. I’m going to first examine what the aims of that third strand will be, and then look at when it is likely to emerge.
Decreasing Disclosure
One of the terms most frequently used in the Institute’s best practice guide is disclosure (or some close variant of it). That’s because many of the challenges that people with mental health problems encounter are to do with some form of disclosure or communication. Yet a key thrust of many an insurer’s digital transformation is to reduce the amount of disclosure and rely instead on what can be found out about someone from within the huge data lakes that insurers have been assembling.
The good news for people with mental health problems is that the difficult disclosure issues they encounter will progressively diminish in frequency and scope. Insurers will learn what they want to know from their own collection of data sets, made up of a variety of primary and secondary data. So long as they can identify who you are, most of what they want to know will be ready and waiting to be used in whatever underwriting is needed.
‘What’s the problem then?’, some of you will be saying. Doesn’t this make it all much easier, less stressful and more convenient for both sides? Not really, for those issues around the understanding and interpretation of mental health will simply have been tucked away inside a rather opaque digital decision system.
Unless insurers get access to digital medical records (not on the immediate horizon), all of the data used by underwriting, claims and counter-fraud people for customers with mental health problems will be indirect data, obtained from secondary sources. And all the insight drawn from that data will come from analytical decision systems designed to identify correlations and act upon them.
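To make that concrete, here is a minimal, hypothetical sketch in Python of what correlation-driven underwriting on secondary data can look like. Every data source, field name and weight below is invented for illustration; it describes no real insurer’s system.

```python
# Hypothetical sketch: correlation-driven underwriting on secondary data.
# All data sources, field names and weights are invented for illustration.

# Secondary data assembled about an applicant, with no direct disclosure.
applicant = {
    "prescription_purchase_flag": 1.0,  # e.g. from a marketing data broker
    "gym_membership_lapsed": 1.0,       # lifestyle proxy
    "night_time_app_usage": 0.8,        # behavioural proxy, normalised 0-1
}

# Weights a decision system might have learned from historical correlations.
# They encode correlation with past outcomes, not clinical understanding.
learned_weights = {
    "prescription_purchase_flag": 0.45,
    "gym_membership_lapsed": 0.15,
    "night_time_app_usage": 0.25,
}

def risk_score(features: dict, weights: dict) -> float:
    """Linear score over proxy features; the applicant is never asked anything."""
    return sum(weights[name] * features.get(name, 0.0) for name in weights)

print(f"Underwriting score: {risk_score(applicant, learned_weights):.2f}")
```

Notice that at no point does a conversation with the customer happen: the ‘disclosure’ has been replaced by whatever the proxies happen to correlate with.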
Good Enough for Good Outcomes?
The question this then raises is one of quality and completeness. Where has this data come from? How representative is it? How complete is it? Is it reliable enough to base important decisions upon? And how well trained are the analytical decision systems? Have they been tested upon the data of people with mental health attributes? As there’s lots of unstructured data around, how has that analytical learning been supervised?
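Those testing questions can be made concrete. What follows is a hedged sketch of the kind of subgroup audit they imply; the records, scores and threshold are all invented for illustration.

```python
# Hedged sketch of a subgroup audit: has the system actually been tested
# on people with mental health attributes? All figures here are invented.

import statistics

# Each record: (model_score, actual_outcome, has_mental_health_attribute)
test_records = [
    (0.2, 0, False), (0.7, 1, False), (0.3, 0, False), (0.8, 1, False),
    (0.6, 0, True),  (0.9, 1, True),  (0.7, 0, True),  (0.4, 1, True),
]

def accuracy(records, threshold=0.5):
    """Share of records where the thresholded score matches the real outcome."""
    return statistics.mean(
        int((score >= threshold) == bool(outcome))
        for score, outcome, _ in records
    )

overall = accuracy(test_records)
subgroup = accuracy([r for r in test_records if r[2]])

print(f"Overall accuracy:  {overall:.2f}")   # 0.62 on this toy data
print(f"Subgroup accuracy: {subgroup:.2f}")  # 0.25 - a gap worth explaining
```

On this toy data the system looks passable overall, yet performs poorly for the very subgroup it was never properly tested on – exactly the gap the questions above are probing.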
These questions need to be extended beyond the data and analytics. Around what parameters have the decision systems been designed? And has the science underlying these technologies been properly scrutinised?
As an example, consider this research funded by a leading European insurer into how smiles can be a leading indicator of mental health problems. All it takes is for some image analytics to feed this into an underwriting decision system and suddenly you’re dealing with an old problem in a very different guise.
Move forward a few years from that research and we find that same insurer saying, in the context of digital trends in insurance, that emotions do not lie. In other words, what their analytics learns from your selfies about your mental health can be relied upon in underwriting and claims decisions. That is, until you listen to the Information Commissioner’s Office, which has judged emotion AI to be a technology fit for nothing more than playing games at office parties.
Tension Ahead
What we have then is a great deal of tension building up around the insurance sector and its approach to decisions that touch on mental health. For sure, better service is needed now and that’s what the Institute’s guide addresses. But looking ahead a few years, a third strand is likely to emerge, in which particular data sources and particular technologies are critiqued in relation to their efficacy for decisions within an insurance context.
The big danger for insurers is that their decision systems will be found wanting, for how people present mental health problems is complex, to say the least. Will that danger be realised then? From what I see and hear, yes, it will. I won’t list the contributing factors here, but none of them should be a surprise.
A Thought for Sponsors and Designers
I’m going to end with a thought for the sponsors and designers of these digital decision systems. Look around your management team, your board, your data science team and the like for a minute. Then think of the widely accepted view (shared by the UK Government) that one in four people will at some point in their life experience mental health problems. And then remember that this one-in-four figure means that, on average, three out of a team of twelve will experience the very thing your system is expected to produce good outcomes for. And then remember that every person in that team will have someone in their family or close friends who will experience that same thing in some way.
Stand back and decide whether what you’re doing and how you’re going about it is thorough enough, researched enough, tested enough to deliver the results that you would be happy discussing openly with family and close friends. And then go one step further – would you like your mental health to be underwritten in this way?
We’re more in this together than most people think.