If you’re an insurer underwriting risks and settling claims in the personal, commercial and long term markets, then you will at some point need to engage with the implications of emotional AI in cars. Yes, across all three markets. In particular, you will be facing some significant ethical dilemmas. How you resolve them needs to be approached with care and documented well.
That’s because the ethical risks associated with emotional AI are what I would describe as very raw. They touch on highly personal and emotive themes that everyone, from the chair of the board down to the new intern, will have feelings about.
A quick point first about which markets need to understand the implications of emotional AI in cars. Personal and fleet motor are obvious, but why the long term markets? The reason is best illustrated by a comment a few years ago from the CEO of one of the largest auto insurers in the US. He said that telematics was especially valuable because it not only told you how a person was driving, but lots of other things relevant to life and health underwriters. He said he would not hesitate to extract value from selling that data on into the long term market.
Note that he was talking about telematics data. How you drive can be interpreted for signals about your emotional state. So insurers will need to think about the ethical issues associated with emotional AI if they’re working with any data coming from a car, not just image or voice data.
In-Car Sensing
In-car sensing is the front end of how emotional AI is integrated into cars. It’s a booming aspect of motoring for reasons I’ll come to in a minute. For that growth to be truly sustainable though, it must generate revenue. That’s because the installation and maintenance of sensors in cars is expensive. Insurance is one of the ways in which that revenue will be generated.
It’s not just about premium revenue though. Insurers have a keen interest in autonomous driving, seeing it as vital for reducing accidents and making sense of claims. And autonomous driving relies not just on sensors that point outwards, but also on sensors that monitor the inside of the car. That means what drivers and passengers are doing.
I’m going to rely heavily in this article on a recent paper by Andrew McStay and Lachlan Urquhart, entitled “In Cars (are we really safest of all?): Interior Sensing and Emotional Opacity”. Their paper is based upon interviews with 13 experts on the technical, industrial, legal and ethical aspects of in-car sensing. I was one of those experts.
The paper covered many aspects of in-car sensing, so I’ll concentrate on those with implications for insurers.
What Sensing is Happening
Firstly though, something about the sensing taking place. McStay and Urquhart talk about…
“Inward sensors seek to detect specific states such as fatigue, drowsiness, intoxication and stress; affective states (such as excitement and relaxation) and expressions of emotions (such as fear, anger, joy, sad, contempt, disgust and surprise).”
And sensors do this by tracking…
“…occupancy in seats, distraction of driver, liveness and emotion state, presence of child seats and drowsiness through sensing of the presence of faces and ‘key body points’, smartphone use and micro-expressions. Other measures to sense in-cabin could include heart rate variability, respiration, …motion detection, voice and touch-sensors on sites such as the driving wheel and seats (including dedicated child seats).”
Suppliers and motor manufacturers like to talk about an “interactive automotive assistant that understands drivers’ and passengers’ complex cognitive and emotional states from face and voice, and adapts behaviour accordingly.”
The overall drive behind these systems from motor manufacturers and technology firms is to improve safety, meet regulatory standards and enhance the in-cabin experience.
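To make that list of signals more concrete, here is a minimal sketch of the kind of structured record an in-cabin system might produce once raw camera, voice and biometric feeds have been reduced to scores and flags. The field names, ranges and thresholds are my own illustrative assumptions, not any supplier’s actual schema.

```python
from dataclasses import dataclass


@dataclass
class CabinSensingRecord:
    """Illustrative only: a simplified record of the in-cabin signals
    described above, after raw feeds have been reduced to structured values."""
    seats_occupied: int                 # occupancy detection
    child_seat_present: bool
    driver_distracted: bool             # e.g. smartphone use, gaze off-road
    drowsiness_score: float             # 0.0 (alert) to 1.0 (asleep)
    stress_score: float                 # 0.0 (calm) to 1.0 (highly stressed)
    detected_expression: str            # e.g. "neutral", "anger", "fear"
    heart_rate_variability_ms: float    # from seat / steering wheel sensors


def driver_state_flags(record: CabinSensingRecord) -> list[str]:
    """Turn continuous scores into the discrete states the paper describes
    (fatigue, stress, distraction). Thresholds are invented for illustration."""
    flags = []
    if record.drowsiness_score > 0.7:
        flags.append("fatigue")
    if record.stress_score > 0.6 or record.heart_rate_variability_ms < 20:
        flags.append("stress")
    if record.driver_distracted:
        flags.append("distraction")
    return flags


# Example: a drowsy, mildly stressed driver with a phone in hand.
example = CabinSensingRecord(2, False, True, 0.8, 0.4, "neutral", 35.0)
print(driver_state_flags(example))  # ['fatigue', 'distraction']
```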
Some Examples
Let’s look at some examples.
You put an airport carpark into your navigation system and, during the journey, the telematics and in-car sensing pick up a certain nervousness in your driving. As you park, the car prompts you to buy some insurance.
Then there’s the telematics data that, when combined with steering wheel sensor data, points to you being particularly stressed at the moment. As the car is owned by your employer, this data is fed through to the firm’s group benefit insurers.
More indirect means will also be used. So, for example, if your car’s sensors think you’re starting to become drowsy, they will lower the cabin temperature and change your playlist to something more upbeat. It doesn’t need the personal data from those camera images to be used, just data about the events triggered by changes in that personal data.
And in a more forward-looking situation, your car may be operating at some level of autonomous driving when the outward sensors indicate a problem. However, the inward sensors suggest you are not in the right emotional state to take over control of the car, so it converts to an ultra-safe mode and pulls over to the side of the road.
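Here is a rough sketch of how that event-triggered logic might look, assuming hypothetical names for the vehicle controls. The point to notice is that only derived events and actions come out of it; the raw camera or biometric data behind the scores is never passed on.

```python
# A sketch of event-triggered responses. The control actions
# ("lower_cabin_temperature", "enter_ultra_safe_mode_and_pull_over", etc.)
# are hypothetical stand-ins, not a real vehicle API, and the thresholds
# are invented for illustration.

def respond_to_driver_state(drowsiness_score: float,
                            stress_score: float,
                            autonomy_engaged: bool,
                            external_hazard: bool) -> list[str]:
    """Map in-cabin scores (0.0 to 1.0) to actions; only events and
    actions leave this function, never the underlying sensor data."""
    actions: list[str] = []

    if drowsiness_score > 0.7:
        # Indirect intervention: change the environment, not share the driver data.
        actions += ["lower_cabin_temperature", "switch_playlist_to_upbeat"]

    if autonomy_engaged and external_hazard and (drowsiness_score > 0.7 or stress_score > 0.6):
        # Driver judged not ready to take back control of the car.
        actions.append("enter_ultra_safe_mode_and_pull_over")

    return actions


# Example: drowsy driver, autonomy engaged, hazard detected outside.
print(respond_to_driver_state(0.8, 0.3, autonomy_engaged=True, external_hazard=True))
```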
A Mass of Competing Interests
The emotional AI behind much of in-car sensing is therefore both a solution and a problem. Policy makers want cars to become safer, but only if data protection regulations are complied with. They want AI that complies with equalities legislation, but know that both motor and tech markets are highly opaque. They want insurance premiums and settlements to be fair, but at both the level of the individual and society.
On top of equalities legislation, there’s data protection and the GDPR, there’s fairness and the proposed AI Act, and there’s car safety and the EU’s Vision Zero initiative. Attempts are being made to resolve the inherent tensions between some of these competing regulatory interests, such as the idea of ‘sensing not storing’ in relation to privacy. Yet this then encounters the tensions between the in-car processing of personal data and its de-identified transmission to other systems for affinity profiling.
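To illustrate that tension, here is a minimal sketch of what might leave the car under a ‘sensing not storing’ approach. The field names and the hashing step are my own illustrative assumptions; the point is that even a de-identified summary like this can still support affinity profiling, and hashing an identifier pseudonymises rather than anonymises it.

```python
import hashlib
from statistics import mean

# A sketch of the 'sensing not storing' tension: personal signals are
# processed in the car, and only an aggregated summary is transmitted on.
# Field names are illustrative assumptions, not a real data schema.

def trip_summary_for_transmission(vehicle_id: str,
                                  stress_scores: list[float],
                                  fatigue_events: int,
                                  distraction_events: int) -> dict:
    return {
        # Pseudonymised rather than anonymised: the hash is still a stable identifier.
        "vehicle_ref": hashlib.sha256(vehicle_id.encode()).hexdigest()[:12],
        "avg_stress_score": round(mean(stress_scores), 2),
        "fatigue_events": fatigue_events,
        "distraction_events": distraction_events,
    }


print(trip_summary_for_transmission("ABC123", [0.2, 0.5, 0.7],
                                    fatigue_events=1, distraction_events=2))
```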
Add in the tensions between tech firms, motor manufacturers and the insurance sector around who controls what within all this and what emerges is a series of ethical dilemmas. An overstatement you may think, but remember this. What we’ve just covered involves questions of personal and public safety pitched against issues like fairness, bias, autonomy, privacy and consent.
And between transparency and user experience as well. After all, who wants to listen to and accept a long and complex consent notice before the car allows you to drive off? But then who’s happy with recent research that found that all motor manufacturers failed consumer privacy tests in relation to the data they collect from car owners?
Key Questions for Insurers
Let’s focus in on what insurers should be thinking about in relation to emotional AI and in-car sensing. Here are some key points for them to consider…
You may just be collecting driving behaviour data via your telematics app, but if you are then using that data to judge emotional states, or selling it to others under terms that allow that purpose, how are you ensuring that existing UK data protection law (let alone the forthcoming EU AI Act) is being complied with? Remember the ICO’s view, set out in October 2022, that none of the emotional AI technologies it had seen passed data protection legislation.
How have you been training your telematics and/or emotional AI algorithms in relation to key (for motor or life insurance) emotional states? Has due consideration been given to, for example, people with disabilities? As about one in ten new cars in the UK are bought by the Motability scheme (yes, you read that right), that’s both a pretty important segment of the market to be strong in, and a pretty important constituency to get this right for. The same goes for other protected characteristics (a simple per-group check is sketched after these questions).
If you’re forming partnerships with other firms in the in-car sensing supply chain, what steps are you taking to ensure some sort of balance between the strong tendency in those firms towards opacity, and the right level of due diligence needed for compliance with internal standards and external obligations?
How are you weighing up the context within which your telematics and/or emotional AI is telling you things? What are the limitations that underwriters need to consider, and which people in claims need to take care over?
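On the training question above, here is a minimal sketch of the kind of per-group check an insurer might ask for, assuming a hypothetical labelled evaluation set. It simply compares misclassification rates across groups; a large gap between groups is a prompt for further investigation, not proof of unfairness on its own.

```python
from collections import defaultdict

# A minimal sketch of a per-group error-rate check for an in-car emotion or
# drowsiness classifier. The 'group' label might be a protected characteristic
# or a proxy such as Motability-scheme drivers; the samples and model outputs
# here are assumed, not real.

def error_rates_by_group(samples: list[dict]) -> dict[str, float]:
    """Each sample: {'group': str, 'predicted': str, 'actual': str}.
    Returns the misclassification rate per group."""
    errors: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for s in samples:
        totals[s["group"]] += 1
        if s["predicted"] != s["actual"]:
            errors[s["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}


# Example with two invented groups and a handful of labelled predictions.
samples = [
    {"group": "A", "predicted": "fatigue", "actual": "fatigue"},
    {"group": "A", "predicted": "neutral", "actual": "neutral"},
    {"group": "B", "predicted": "fatigue", "actual": "neutral"},
    {"group": "B", "predicted": "neutral", "actual": "neutral"},
]
print(error_rates_by_group(samples))  # {'A': 0.0, 'B': 0.5}
```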
Follow the Money
As revenue opportunities put pressure on those using in-car sensing to increase the scope of what their systems are thought capable of, how are you ensuring that this keeps within your product governance requirements? How much of what you’re contemplating doing is being held up to independent scrutiny? The three lines of defence model is not as reliable as people like to think.
In some insurers there will be a narrative along the lines of ‘if it makes the car safe, then what’s wrong with us using it as we wish?’. How are you addressing this type of post-development rationalisation within your firm’s culture? What does the firm’s board do to ensure that the right culture is in place around highly emotive and increasingly debated issues like emotions and AI?
And finally, if your firm is dealing with technologies that are thought to be opening up new horizons around emotions and data, what processes does it have in place to weigh up the often significant questions about the science behind such technologies? What sort of oversight is being used to ensure that the firm does this within the spirit and the letter of existing regulations?
Objects of Emotion
Cars are objects that bring up all sorts of emotions. They’re also computers on wheels, and as Volkswagen found with ‘diesel-gate’, misuse of those systems can have huge financial and reputational impacts.
Insurers and cars have a long joint history, and while the sector will want to be at the front of vehicle innovation, it also needs to do so within regulatory and legal systems that are paying far more attention now to the implications of AI and big data. Add in the mandatory nature of motor insurance, and all this needs to be done while ensuring maximum access to the market.
‘Could we; should we’ becomes the key question. And it’s a key question that might best be tested on members of your firm’s board. Some of them are very likely to be supporters of charities engaging with people with mental health or disability issues. Asking them a ‘could we; should we’ question in relation to extracting and applying data from cars about the driver’s emotional state could be a valuable litmus test, even if configured around just that data’s use on an affinity basis. It’s also a question that can be given added piquancy by including passengers as well.
Begin Here
Each of those seven ‘questions for insurers’ outlined above involves some form of ethical dilemma. At the same time, they may seem a rather overwhelming set of questions to start on. I would suggest choosing the one question that resonates most and getting some experience of resolving the ethical dilemma it presents. And if resolution is not possible just now, then give your people some experience of moving the ethical dilemma forward and gaining confidence in it being managed as best as circumstances allow.