Feb 12, 2019 8 min read

How do you feel about insurers tracking and analysing your emotions?

Photo by Nathan Dumlao / Unsplash

How many of us have appeared in a photo that has been posted online? Or spoken to a voice assistant like Alexa or Siri? Or simply spoken over the phone to a company? Most, if not all, of us will have done so at some point over the last few years. And that means you’ve been leaving a data trail that is now being used to understand and track your emotions. So how are insurers engaging with this, and what are the ethical implications?

One of the big technology trends at the moment is the surge of voice-based web interactions. Predictions vary, but suggest that voice will account for between 30% and 50% of all web browsing by 2020 – note, that’s next year. To those voice interactions we can add the good number of years we’ve been putting our photos online. Together, these voice and image records are rapidly becoming a huge part of our digital footprint.

Immensely Revealing

Those voice and image records are immensely revealing. While retail purchases and location data tell a firm what we do and where we do it, voice and image data tells a firm much more about why we did what we did, and how we felt when we were doing it. Voice and image data is opening a window into our emotional lives.

This will come as no surprise to some insurance people. After all, voice analytics has been in use to detect claims fraud for several years now. When we’re asked over the phone to explain the circumstances of the loss in our own words, we are in effect being subjected to a remote polygraph test.

There is something almost old fashioned about that now though. The new focus is less on such person to person activity and more on passive data collection. This shift has occurred because the artificial intelligence around which voice and image analytics is built needs to be trained as comprehensively as possible. The more data it can feed on, the more it can learn about us. And the better it is then able to move from understanding how we are now, to perceiving how we might be in the future.

A Shift towards Social Listening

We are in the middle of a significant shift towards ‘social listening’. What we say in social media, how we look in that selfie, and how we respond to an online post, are increasingly being recorded. Remember UK insurer Admiral’s foray into our musings on Facebook as indicators of motor risk. That was back in 2016.

On top of all this is the data that our various devices are collecting, be it an Apple Watch given to you by your employer or insurer, or the telematics device in your car. Life insurers are now ready to make full use of this river of data for underwriting you.

What binds all this data together, and what gives it value, is the artificial intelligence (AI) used to find all those insightful patterns and trends. And the value it brings to an insurance firm could be enormous. I say ‘could’ though, because like many decisions in life and in work, there are choices. And how we respond to those choices marks us, and our firm, in the eyes of the insurance buying public. There’s lots we could do, but that’s not the same as what we should do. Therein lies the ethical challenge at the intersection of data about our emotions, the analytics in artificial intelligence and the interests of insurance strategists.

The Challenges in Emotional AI

As voice and image data fills insurers’ data lakes, the application of insurance AI to our emotional lives creates one of the fundamental issues facing the sector. So just how should the sector respond to the challenges inherent in emotional AI?

That response needs to take into account most functions within a typical insurer. Emotional AI is already being used in underwriting – think of all that ‘psychological pricing’ that’s being introduced. And in claims, it’s being used to gauge how much a claimant might accept in settlement. Marketing of course knows all too well how important emotion is in purchasing decisions. And each of these functions is weighing up how to move from understanding our emotional present, to predicting our emotional future.

Let’s look at some examples. One leading insurer has been funding research into what photos might say about your mental health. Not just your mental health now, but how it might evolve in the future. They’re doing this through facial analysis, and in particular through how you smile.

Then there’s the insurer with a very large personal motor portfolio who wants to use the data streaming out of their policyholders’ telematics boxes to gauge the driver’s susceptibility to stress. Important insight if you’re also providing them with life or health cover.
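To make that a little more concrete, here’s a rough and entirely hypothetical sketch (in Python) of how a stream of telematics events might be boiled down into a crude ‘stress proxy’ score. The field names, weights and thresholds are invented for illustration; no insurer’s actual scoring is implied.

```python
# A hypothetical sketch only: turning telematics events into a crude
# 'stress susceptibility' proxy. Field names, weights and thresholds are
# invented for illustration; no insurer's actual scoring is implied.

def stress_proxy(events: list[dict]) -> float:
    """Score a journey from 0 (calm) to 1 (stressed) using harsh-event counts."""
    if not events:
        return 0.0
    harsh_braking = sum(1 for e in events if e.get("decel_g", 0) > 0.4)
    harsh_accel = sum(1 for e in events if e.get("accel_g", 0) > 0.35)
    late_night = sum(1 for e in events if e.get("hour", 12) >= 23 or e.get("hour", 12) < 5)
    score = (2 * harsh_braking + harsh_accel + 0.5 * late_night) / (3.5 * len(events))
    return min(score, 1.0)

journey = [
    {"decel_g": 0.5, "hour": 23},  # harsh braking, late at night
    {"accel_g": 0.4, "hour": 23},  # harsh acceleration, late at night
    {"decel_g": 0.1, "hour": 8},   # nothing notable
]
print(round(stress_proxy(journey), 2))
```

The point is not the arithmetic, which is trivially simple here, but that a handful of driving events can quietly become an emotional label attached to a life or health risk.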

And then of course there’s the very topical one in UK insurance at the moment: the way in which policyholder data is being used to gauge the premium to charge someone before they start to think of switching insurers.

Putting a Value on Emotions

These examples all make use of the data being collected about our emotions. And they tell us that our emotional lives are being assigned an economic value. Quite a big one in fact, given how expensive data collection and AI are nowadays. And the return insurers will expect to earn from it will be twofold: market share and portfolio profitability. In other words, they will want more business, and better business.

And what’s wrong with that, you might ask. It’s a question worth asking, but remember, it’s a question that needs to be asked not just within the framework of short term returns, but within that of long term returns too (more on that here). After all, insurers will be paying for those data lakes and AI over the long term too.

There’s something else that is important over the long term: trust, and the reputation of the market. Emotional AI raises some hugely significant ethical issues, and how the sector responds to them will have an impact on the sustainability of those long term returns. As I’ve said before, tools like AI may lead insurers to think that they’re getting closer to their customer. Yet that is confusing proximity with intimacy. The latter is all about whether the customer wants to get closer to you. And insurers earn that by their trustworthiness.

Predicting Mental Health

When you read about emotional AI, it sounds like very clever stuff. Just scan a photo and we will predict your future mental health. Turn that steering wheel and we will tell you what type of person you are. There’s an emphasis, a sense of certainty, in how the insight into our emotional life will be revealed. As one leading European insurer wrote last week, “emotions can in no way lie”.

Alas, if only humans were so clear, so transparent. That insurer is confusing our emotions with their ability to read and interpret emotions. In other words, they’re conflating who we are with who they think we are. ‘Hey, just live with it’, you might think. Yet if that insurer is so adamant in what their emotional AI is telling them about a policyholder, how are they going to settle that policyholder’s claim fairly? It’s a difference that matters (recall my recent post on identity).

And it’s a difference that has its roots in scientific understanding of our emotions. Or I should say, scientific understandings, for there are two broad schools of thought when it comes to understanding emotions. One is the categorical and the other is the dimensional.

Two Schools of Thought

The categorical approach argues that there are a number of primary basic emotions that are hard wired into our brains and which can be universally recognised. The Facial Action Coding System is an example of this. It’s been developed around a taxonomy of human emotions and facial expressions. And this systematisation of facial expressions has proved attractive to business, for it fits neatly with all that clustering, categorising and correlating at the heart of data and AI.
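To see why that systematisation appeals, here’s a deliberately simplified and entirely hypothetical sketch (in Python) of the categorical logic: a handful of facial ‘action units’ mapped onto supposedly universal emotion labels. The codings below are illustrative only, not the actual FACS codings used by any insurer or vendor.

```python
# A hypothetical sketch of the categorical approach: facial 'action units'
# (numbered muscle movements in the FACS taxonomy) are mapped onto a small
# set of supposedly universal basic emotions. Codings are illustrative only.

BASIC_EMOTIONS = {
    frozenset({6, 12}): "happiness",    # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",   # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({4, 5, 7, 23}): "anger",
}

def classify_expression(detected_action_units: set[int]) -> str:
    """Return the basic-emotion category whose action units best match."""
    best_label, best_overlap = "neutral", 0
    for units, label in BASIC_EMOTIONS.items():
        overlap = len(units & detected_action_units)
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

# A detected smile (action units 6 and 12) gets bucketed as 'happiness',
# regardless of social context -- exactly what the dimensional school disputes.
print(classify_expression({6, 12}))
```

It’s that neat bucketing, every face resolved into a category, that makes the approach so easy to plug into an underwriting or claims model.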

The dimensional approach rejects the idea of basic emotions. Instead, it sees emotions as linguistically and socially constructed. Take smiles for example. In Japan, smiles depend very much on the social context and are driven by unique and complex display rules. Some for example can indicate negative emotions.

This difference is significant for two reasons. Firstly, the categorical approach has proved attractive to business and is behind much of the emotional AI that sectors like insurance are adopting. Yet it is disputed, to no small degree, and the dimensional approach has significant support. So the danger for insurance is that firms may be adopting an approach to emotional AI that gives them the certainty most businesses desire, yet is ultimately wrong. Or at least not so certainly right as the providers of emotional AI systems claim.

The Implications for Privacy

The second reason why the difference between the categorical and dimensional approaches is significant relates to privacy. Categorical thinking sees emotions as leaks: in other words, our faces are saying things about us that we might not want to reveal. And businesses like this, for it means that they can rely on what those emotional leaks are signalling, rather than on what we ourselves are saying.

That ‘emotions as leaks’ argument is significant, for it leads to the view that our facial expressions are public. They’re not private because they are universal to us all. And this argument allows providers of emotional AI to justify the harvesting of facial expressions on a mass scale.

Yet what about GDPR, you might ask. Surely our facial expressions are personally identifiable information (PII)? Well, they needn’t be, if you organise your data and analytics so as to sit just outside the GDPR. Data that is identifiable is described as ‘toxic’, because it requires a business to treat it differently and pay extra for it, despite it providing little to no extra functional benefit.

Circumventing GDPR

Our facial emotions are being gathered and analysed using small group targeting and inferential analytics. So long as the data does not connect with an ‘identified or identifiable person’, the GDPR does not apply. Inferential analytics is then used to reconnect what is learnt through small group targeting back to you as a marketing, underwriting or claims ‘target’.
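A rough sketch, purely for illustration, of how that two-step might work: emotion scores are aggregated at group level only, then inferred back onto an individual via whichever segment they fall into. All the segment keys and numbers below are invented.

```python
# A hypothetical sketch of 'small group targeting' followed by inferential
# analytics. Emotion scores are aggregated per segment, never stored against
# an identified person; an individual's likely score is then inferred from
# whichever segment they fall into. Segment keys and scores are invented.
from collections import defaultdict
from statistics import mean

# Step 1: aggregate emotion readings at group level only.
readings = [
    ("SW1_30s", 0.62), ("SW1_30s", 0.71), ("SW1_30s", 0.58),
    ("M4_20s", 0.35), ("M4_20s", 0.41),
]
group_scores = defaultdict(list)
for segment, stress_score in readings:
    group_scores[segment].append(stress_score)
segment_profile = {seg: mean(scores) for seg, scores in group_scores.items()}

# Step 2: infer back to an individual 'target' via their segment,
# without any emotion data ever being tied to their identity.
def inferred_stress(postcode_area: str, age_band: str) -> float:
    return segment_profile.get(f"{postcode_area}_{age_band}", 0.5)

print(inferred_stress("SW1", "30s"))  # the policyholder inherits the group's score
```

At no point in that flow is an ‘identified or identifiable person’ stored alongside an emotion score, which is precisely how the data is kept sitting just outside the GDPR while still ending up attached to you.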

Data about our emotions is sensitive – no one can argue otherwise. Yet that doesn’t mean it’s personal. The difference is crucial.

Let’s turn this round. While the privacy of our emotional lives is, I would argue, important to us all, there’s another way of thinking about this. That other way talks about respect, self-guidance and choice. It involves putting our right to autonomy before the benefits of insurance and the direction being taken by the sector. Our faces and emotions may be seen by some as commodifiable, but do we want them to be intruded upon in ways that are becoming increasingly apparent?

Five Dangerous Words

Now insurers might say that they have little choice. If there’s a risk of someone else using emotional tracking techniques to learn more about policyholders’ future moods, then market competition says that they must do likewise. Yet remember: that well known investor and insurance CEO, Warren Buffett, has described that phrase ‘everyone else is doing it’ as the five most dangerous words in business.

In deciding to introduce technological capabilities like emotional AI, it’s important that insurers do not forget that the public, and their representatives in government, will view such developments through the lens of social and ethical values, not business values. Market pressures will count for little if insurers start deploying emotional AI to predict our mental health and adjusting their products and prices accordingly. As I said at the start, there are two questions here, not one: ‘can we’ and ‘should we’. Never ask the former without the latter.

Acknowledgement: to learn more about emotional AI, read this book by Andrew McStay. It informed a lot of what I’ve covered in this post.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.