May 26, 2021

New regulations will test insurers’ use of emotional AI

The road to the digital future of insurance is often referred to as a superhighway. The notion presented is of a safe and fast track heading straight into the future. It’s neat but wrong. The reality for digital strategies is more like a country lane – fast and slow sections, with twists, turns and the occasional dead end. Insurers who designed their digital strategies around that notion of a superhighway now find that they have a cattle grid to negotiate. In other words, the road is still open, but they need to slow down and handle some noisy feedback. That cattle grid is the EU’s recent proposals for the regulation of artificial intelligence (AI).

The proposed AI regulation is voluminous, so I’m going to focus on how it handles two particular themes: emotional AI and biometric categorisation.

Emotional artificial intelligence is widely recognised as a huge growth area. It involves digital systems gathering, interpreting and applying voice and image data to find out not just what we do, but why we do it and how we feel while doing it. It is about interpreting our feelings, emotions, moods and intentions, and applying them to decisions about the insurance we are offered, when, how and at what price.
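To make that concrete, here is a minimal sketch of the kind of pipeline such a system might use: numeric features derived from a voice recording are mapped to an emotion label, which then feeds a downstream decision. Everything here is invented for illustration – the features, labels, figures and thresholds are assumptions, and real systems are far more elaborate.

```python
# Illustrative sketch only: toy voice features mapped to a toy emotion label.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per call recording.
# Columns: mean pitch (Hz), pitch variability, speech rate (words/sec).
X_train = np.array([
    [220.0, 45.0, 3.8],   # agitated caller
    [180.0, 12.0, 2.1],   # calm caller
    [240.0, 60.0, 4.2],   # agitated caller
    [170.0, 10.0, 1.9],   # calm caller
])
y_train = np.array([1, 0, 1, 0])  # 1 = "stressed", 0 = "calm" (toy labels)

model = LogisticRegression().fit(X_train, y_train)

# Score a new claim call and route it accordingly.
new_call = np.array([[230.0, 50.0, 4.0]])
p_stressed = model.predict_proba(new_call)[0, 1]
if p_stressed > 0.5:
    print(f"Flag for review (p={p_stressed:.2f})")   # e.g. a counter-fraud queue
else:
    print(f"Standard handling (p={p_stressed:.2f})")
```

It is precisely this step – an inferred emotional state silently steering an insurance decision – that the proposed transparency obligations would drag into the open.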

Biometric categorisation uses data relating to the physical, physiological or behavioural characteristics of a person. This covers a lot of system types, but ones certainly in use by insurers today include voice recognition, facial recognition and keystroke analysis.
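Keystroke analysis is a good example of how behavioural biometrics work. The timing patterns of someone’s typing form a signature that can be compared against a stored profile. The sketch below is illustrative only; the event data is invented.

```python
# Illustrative sketch of keystroke analysis: timing patterns between key
# presses form a behavioural signature. Event data invented for illustration.
key_events = [  # (key, press_time_ms, release_time_ms)
    ("p", 0, 95), ("a", 140, 230), ("s", 290, 370), ("s", 430, 520),
]

# Dwell time: how long each key is held down.
dwell_times = [release - press for _, press, release in key_events]

# Flight time: gap between releasing one key and pressing the next.
flight_times = [
    key_events[i + 1][1] - key_events[i][2]
    for i in range(len(key_events) - 1)
]

# A real system would compare such features against a stored profile
# to categorise or identify the typist.
print("dwell:", dwell_times)    # [95, 90, 80, 90]
print("flight:", flight_times)  # [45, 60, 60]
```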

New Transparency Obligations

The EU proposes to introduce new transparency obligations on firms using systems to detect emotions or determine associations with categories based on biometric data. Firms will be required to notify anyone exposed to such systems. And that notification needs to be in formats accessible to people with disabilities.

Compliance will rely on regulators being granted access to the source code and the training and testing datasets for such systems. And remedies such as system shutdown and fines up to 6% of total worldwide annual turnover are being suggested.
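To put that upper bound in perspective: for an insurer with, say, €10 billion of worldwide annual turnover, 6% would mean a maximum fine of €600 million.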

These are of course still just proposals, but even so, it’s worth noting two things. Firstly, the emphasis placed on emotional AI and biometric categorisation is significant. And secondly, the means of compliance is another sign that regulations are being aligned with supervisory technologies.

So how relevant is all this to insurance? It’s very relevant. Both emotional AI and biometric categorisation are being used by several insurers to influence a range of underwriting, claims, counter-fraud and marketing decisions. I examined this in more detail in this early 2019 post.

More than Just Transparency

Some of you may see this new transparency obligation resulting in even longer terms and conditions governing how consumers interact with insurers. Even more text that won’t get read, let alone understood. Yet will this be more than broadcast compliance? In other words, the insurer simply telling the consumer what it is doing. Or will it be more engaging and organised around consent? Bearing in mind the sensitivity of emotion and biometric data, I expect the latter. In which case, the label may say transparency, but the tin contains a lot more.

Transparency is one thing; understanding can be quite another. And if consent is woven through the new AI regulations, then the understanding to be achieved will have to deliver informed and explicit consent. As I’ve said recently, insurers need to take a serious look at their consent strategies and decide if they are still fit for purpose. Personally, I suspect they’re past their use-by date.

Snakes and Ladders

One question people looking at insurance might ask is whether insurers actually need to use emotional AI and biometric categorisation. The response from the market will be a resounding ‘yes’. This is because insurance has long been linked with moral hazard, which in turn is linked with character, which in turn is signalled by emotions and other such data. The sector will want to do everything possible to retain access to, and use of, emotion and biometric data.

Will they succeed? As I outlined in this earlier article, insurers have said that their interest in behavioural fairness grew out of concerns about the way in which discrimination legislation was being developed. To be blunt, that came across as a bit like moving the argument rather than addressing the issue. In ‘snakes and ladders’ terms, the refocus on behavioural fairness may have moved the sector up a ladder. Unfortunately, these EU proposals might just take the sector down a snake.

Questions around fairness and non-discrimination will need to be readdressed. And the questions will not just be about whether behaviour is a fair way of pricing policies. They will also cover how insurer systems are interpreting behaviours, interpreting character and assigning judgements.

One line of argument that will emerge in the market will say that digital insurance is what consumers want, so why are regulators getting in the way? Unfortunately, while consumers do want good user experiences, they also have big concerns about what insurers do with their data. Recent research by the Association of British Insurers was very clear about that.

The Can of Worms

How might this turn out then? In broad terms, insurers will need to prepare for greater openness about not just what their systems do with emotion and biometric data, but about the insurance thinking that underlies how those systems are configured the way they are. And in turn, this will cause consumers to question some of the assumptions about character and behaviour that, hitherto, insurers have been able to decide on their own.

This reminds me of a simple test for weighing up the ethics of a situation. It’s sometimes called the ‘dinner table conversation’, sometimes the ‘front page story’. Would you be happy telling people around the dinner table what your systems are doing with their voice and image data? Would you be happy seeing your explanation as a front page story in a leading business newspaper? If not, then think again.

Some insurers may prefer to ignore or fudge such considerations, thinking that they’ve negotiated similar problems in the past. That logic may no longer be valid, for regulators increasingly have the supervisory technologies to answer those questions for themselves. The big question then is just how willing regulators are to fulfil the ‘SupTech’ promises they’ve been making and actually deploy those technologies.

Emotional AI is often seen as game changing for insurance. That might well be the case, but for the wrong reasons.

Postscript

Just as I released this blog post, Lemonade was marketing how it analyses ‘non-verbal cues’ in the videos that claimants have to submit with their claims. After coming in for a lot of criticism, it then denied using emotional AI. As Lemonade operates in the Netherlands and Germany, these proposed regulations are pretty pertinent.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.