Tensions Around Insurance and Mental Health Apps
Every so often, insurers highlight the work they’re doing around mental health. Two examples stand out for me: this insurer’s funding of research into facial image analysis, on the premise that scientists can apparently predict someone’s future mental health from how they smile; and this reinsurer’s involvement in an app used to support the mental health of their clients’ employees.
The Pressures of Personalisation
Reinsurers and insurers clearly have an interest in improving the mental health of the people they insure. This is part of the ‘prevent rather than pay out’ trend, and there’s a lot to be said for it, as more and more people experience mental health challenges. The numbers are high. In the UK, it is widely accepted that 1 in 4 of us will experience, or know someone who will experience, a mental health challenge at some point in our lifetimes.
One outcome of that ‘prevent rather than pay out’ trend is of course that claims could be prevented. It sounds like a win-win for insurer and insured: one gets lower claims while the other gets better mental health. Yet just how impactful will this be? Will portfolio performance see tangible returns from providing that support for mental health? I suspect that such returns will take a long time to have a discernible, let alone significant, impact on portfolios.
So what happens in the meantime? That will be influenced by another big trend in insurance: the trend towards ever greater personalisation, driven in large part by ever increasing volumes and varieties of data. This creates constant pressure to identify those at higher risk and put them on a higher rate. It will be clear to any underwriting director that portfolio returns would be improved more discernibly, and sooner, by putting those with mental health challenges on a rate commensurate with their higher life and health risk. This is fairness of merit in action.
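To make that fairness-of-merit mechanism concrete, here is a minimal sketch of risk-rated pricing. The base rate, risk bands and loadings are entirely invented for illustration; real underwriting models are of course far more granular.

```python
# A minimal sketch of 'fairness of merit' pricing: the higher the
# assessed risk, the higher the premium. All figures are hypothetical.

BASE_ANNUAL_PREMIUM = 300.0  # hypothetical base rate

RISK_LOADINGS = {            # hypothetical loadings by assessed risk band
    "standard": 1.00,
    "moderate": 1.25,
    "elevated": 1.75,
}

def merit_priced_premium(risk_band: str) -> float:
    """Premium rises in step with assessed risk: fairness of merit."""
    return BASE_ANNUAL_PREMIUM * RISK_LOADINGS[risk_band]

for band in RISK_LOADINGS:
    print(f"{band:>9}: {merit_priced_premium(band):.2f}")
```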
GDPR and Personal Data
This, however, would put underwriting directors on a collision course with those wanting to emphasise the support element of the insurer’s approach to mental health. After all, who’s going to engage with, say, a mental health support app provided to them by their employer, when the personal data being collected is then used for underwriting purposes? Few, if any, I suspect.
And a recent survey in the UK supports this. It found that 79% of those surveyed did not believe their employer when it discussed or promoted wellbeing initiatives. So what might be behind this? I strongly suspect that the lack of trust stems from concerns about how employers might use the information collected through such initiatives.
One route commonly taken to navigate away from this problem is for those running the mental health support service (now usually app based) to emphasise that the service is delivered in accordance with the requirements of the GDPR. Personal data is protected and anonymised; nothing is left to tie the mental health information gathered through the app to an identifiable person working at such and such a firm.
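As a minimal sketch, with hypothetical field names, the anonymisation claim amounts to something like this: strip out the direct identifiers and keep everything else.

```python
# A simplified sketch of the anonymisation claim: direct identifiers
# are stripped before the app's data is stored or analysed. All field
# names and values here are hypothetical.

DIRECT_IDENTIFIERS = {"name", "email", "employee_id", "employer"}

def anonymise(record: dict) -> dict:
    """Drop direct identifiers; every other attribute is kept untouched."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "A. Person",
    "email": "a.person@example.com",
    "employee_id": "E1234",
    "employer": "Such-and-Such Ltd",
    "age_band": "35-44",
    "mood_score": 4,
    "sleep_hours": 5.5,
}
print(anonymise(record))
# -> {'age_band': '35-44', 'mood_score': 4, 'sleep_hours': 5.5}
```

Note what survives: the attribute values, and every relationship between them, remain fully intact. That is exactly where the problem below begins.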
For simplicity, let’s put aside questions about the effectiveness of anonymisation and take as given that personal data is protected in accordance with legislation. End of problem then? Not at all.
Affinity Profiling
Privacy scholars know that, for all its strengths, the GDPR has flaws. One of its main flaws relates to group-level data and affinity profiling. Group-level data is created by aggregating large amounts of anonymised personal data. None of it is personal data as defined by the GDPR, but all of it has at some time come from a person. While the identifiers have been removed, the relationships between the remaining data points are still present.
This allows those with access to the mental health app’s anonymised data to analyse it for patterns, in the form of statistically significant relationships. Out of this comes a deeper understanding of the people using the app (as a group): how one thing leads to another, how one aspect ties in with another, and so on.
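A minimal sketch of what that analysis could look like, using synthetic records and hypothetical attribute names. The point is that significance testing needs no identifiers at all; the retained attributes are enough.

```python
# Pattern-mining on anonymised records: no identifiers remain, yet the
# relationships between the retained attributes are intact and can be
# tested for statistical significance. Data below is synthetic.

from itertools import combinations
from scipy.stats import pearsonr

records = [
    {"mood_score": 2, "sleep_hours": 4.5, "sessions_per_week": 5},
    {"mood_score": 3, "sleep_hours": 5.0, "sessions_per_week": 4},
    {"mood_score": 5, "sleep_hours": 7.0, "sessions_per_week": 2},
    {"mood_score": 7, "sleep_hours": 7.5, "sessions_per_week": 1},
    {"mood_score": 8, "sleep_hours": 8.0, "sessions_per_week": 1},
    {"mood_score": 4, "sleep_hours": 6.0, "sessions_per_week": 3},
]

attributes = list(records[0])
for a, b in combinations(attributes, 2):
    xs = [r[a] for r in records]
    ys = [r[b] for r in records]
    r, p = pearsonr(xs, ys)  # correlation coefficient and its p-value
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{a} vs {b}: r={r:+.2f}, p={p:.3f} ({flag})")
```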
An Insurance Example
Let’s illustrate this with a simplified example from the world of insurance. We’ll assume that a mental health app has been provided to employees of a large UK organisation. Over time, the anonymised data is collected and analysed in all sorts of fancy ways for statistically significant relationships. Let’s say one trend points to people with a particular mental health challenge (let’s call it XXXX) having raised levels of mortality and morbidity. Through the app, they get support for their condition and many improve, but not all (apps can only achieve so much).
The insurer, concerned about the impact that condition XXXX could have on its overall life and health (L&H) portfolios, decides to use what it knows about how XXXX presents in a large population like that organisation’s employees. For example, it identifies from that analysis a cluster of data points that are statistically significant predictors of a person at some point facing condition XXXX. The relationships within that cluster then become part of the core underwriting model used across its L&H portfolios.
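Sketched in code, the move from group-level finding to individual underwriting could look something like this. The attribute names, weights and loading formula are all hypothetical: a logistic score built from the XXXX cluster, applied to an applicant who never used the app.

```python
# A sketch of how group-level findings could leak into individual
# underwriting: coefficients derived from the anonymised app population
# are applied to a new applicant who never used the app. All names,
# weights and thresholds are hypothetical.

import math

XXXX_CLUSTER_WEIGHTS = {     # hypothetical 'XXXX cluster' coefficients
    "sleep_hours": -0.6,     # less sleep -> higher modelled XXXX likelihood
    "job_strain_index": 0.8,
    "gp_visits_per_year": 0.4,
}
INTERCEPT = -1.5

def xxxx_likelihood(applicant: dict) -> float:
    """Logistic score for condition XXXX from the cluster's data points."""
    z = INTERCEPT + sum(w * applicant[k] for k, w in XXXX_CLUSTER_WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

applicant = {"sleep_hours": 5.0, "job_strain_index": 2.0, "gp_visits_per_year": 3.0}
p = xxxx_likelihood(applicant)
loading = 1.0 + 0.75 * p  # hypothetical premium loading scaled by the score
print(f"XXXX likelihood: {p:.2f}, premium loading: x{loading:.2f}")
```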
This is what is called affinity profiling. And it is something that, as per this research paper, is seen to be inadequately catered for by data protection and discrimination legislation. Furthermore, so long as the insurer is directly involved with the mental health app’s design and delivery, it might also mean that the data so collected is not considered secondary data under the proposed legislation for the European Health Data Space (more here).
This raises the obvious question of just how much involvement insurers might have in the design and delivery of such mental health apps. That involvement will of course vary, from significant (as per here) to little more than ‘off the shelf’.
More than One Fairness
I mentioned earlier that at the heart of the sector’s logic for ever greater levels of personalisation is something called fairness of merit. In other words, higher risks attract higher premiums. There is, however, more than one type of fairness.
Within an overall equality of fairness (more here) lie other types, such as fairness of merit and fairness of access. That underwriting director needs to think carefully about the 1 in 4 figure I mentioned earlier: people who will themselves experience a mental health challenge in their lifetime, or will know someone within their family who will. The grim reality of that figure is that the underwriting director, or someone in their family, will in all likelihood experience a mental health challenge.
This means that the dilemma about how to balance fairness of merit with, for example, fairness of need, becomes both a professional one and a personal one.
An Ethical Framework
I’ve written before about how loss prevention programmes are often Trojan horses for acquiring the behavioural data for which the insurance sector has an increasingly voracious appetite (more here). The great danger is that the same will be true of the sector’s interest in mental health.
Last year, the behavioural people at Swiss Re issued an interesting paper about the importance of behavioural work being undertaken within a clear ethical framework. My main issue with it was that the many judgements required for such a framework to work would be taken predominantly by the insurer.
“Behavioural fairness is not something that can be determined by the insurer alone, for their views on it are subject to many interests and judgements.”
If ever there was a situation where this type of ethical framework could be fully and cooperatively put to use, it is this whole field of mental health and insurance. A number of mental health charities are closely following what is happening in insurance, and I’m sure they would bring a lot of expertise and insight to the table. Now is the time for this to happen.
Watchwords
There are three steps that I think every insurer engaging, in whatever form, with mental health should take.
The first is to be extremely clear and candid with itself about the scope and depth of that engagement, most obviously in relation to any direct or indirect links with underwriting. Bring in an independent perspective to help with this; otherwise self-interest will skew the findings. Insurers need to do this because at some point, someone within their firm will come asking challenging questions.
The second is to think more critically about affinity profiling. There’s a possibility that an EU push against online behavioural advertising will result in a generic restraint on affinity profiling being added to, or aligned with, the GDPR. Insurers will be caught up in the ripples that flow from this.
The third is for the insurer to be prepared to have its approach to ‘mental health and underwriting’ examined in rather challenging terms in the court of public opinion. There are a variety of routes by which I believe some questionable practices will come into the limelight. I won’t go into them here, other than to say that insurers will struggle to counter them.
The watchwords across these three steps are: be more aware, be more self-critical, and be better prepared.