ICO Consultation on Biometrics: Key Points for Insurers
The ICO is consulting on this guidance, and while it’s called a draft, it very much has the feel of a final version. The way the guidance is structured, and the way it is tied into particular aspects of UK data protection legislation, gives it a certainty not usually found in other regulators’ consultations.
I would liken this consultation to the ICO showing business the hoops through which it will soon need to jump. The message is: ‘This is how you should be thinking about biometric data and systems’.
And that impression is reinforced by the ICO’s use of the Ada Lovelace Institute’s Citizens’ Biometrics Council (CBC). As a representative sample of the UK public, supported by academic expertise, the CBC is in effect acting as a yardstick for what the public thinks about biometrics. From what I’ve read of its output, the CBC will be a hard body for any sector lobbying to shift, if it can be shifted at all. There’s a lot of insight and challenge in what it has been saying.
So what of the guidance itself? I’m no data protection expert, but a close reading of it highlighted for me a number of points.
Firstly, the emphasis on purpose. If the purpose for which data is used would allow a firm to identify a person, or make it possible for a person to be identifiable, then that data is personal data. This means that you don’t have to actually be identifying someone at a particular stage in a business cycle: so long as the data would allow someone to be identifiable at any point in that cycle, it is personal data.
This is Biometric Data
So how does personal data become biometric data? This is what the draft guidance has to say:
“...personal data is only biometric data if it...
> relates to someone’s behaviour, appearance, or observable characteristics (e.g. the way someone types, a person’s voice, fingerprints, or face);
> has been extracted or further analysed using technology (e.g. an audio recording of someone talking is analysed with specific software to detect things like tone, pitch, accents and inflections); and
> can uniquely identify (recognise) the person it relates to.”
Those three stages are explained and emphasised further in the guidance.
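To make that three-limb test concrete, here’s a minimal sketch in Python. This is my own illustration, not anything from the guidance; the field names are invented, and the point is simply that all three conditions must hold together.

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    relates_to_traits: bool     # behaviour, appearance or observable characteristics
    technically_analysed: bool  # extracted or further analysed using technology
    uniquely_identifies: bool   # can recognise the specific person it relates to

def is_biometric(item: DataItem) -> bool:
    # All three limbs of the draft guidance's test must hold together;
    # fail any one and the data is (at most) ordinary personal data.
    return (item.relates_to_traits
            and item.technically_analysed
            and item.uniquely_identifies)

# A raw voice recording, not yet analysed: personal data, but not biometric.
print(is_biometric(DataItem(True, False, False)))  # False
# The same recording after voiceprint extraction that can recognise the speaker.
print(is_biometric(DataItem(True, True, True)))    # True
```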
Here’s an example of how the ICO is thinking. The way in which someone types on a keyboard is collected and analysed in a way that makes it possible for that person to be identifiable; this makes it biometric data. Furthermore, that biometric data may be special category biometric data if it could (and ‘could’ is all it takes) reveal information about someone’s racial or ethnic origin, or information about their health, sex life or sexual orientation.
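For readers wondering how mere typing can identify someone, here’s a minimal sketch of the kind of features a keystroke dynamics system typically extracts. The event format and numbers are my own assumptions, purely for illustration; real systems build far richer profiles, but the principle is the same: the timing pattern becomes a behavioural signature.

```python
from statistics import mean

def keystroke_features(events):
    """Derive simple keystroke-dynamics features from (key, down_ms, up_ms) events.

    Dwell time  = how long each key is held down.
    Flight time = the gap between releasing one key and pressing the next.
    A stable pattern of these timings can act as a behavioural signature.
    """
    dwells = [up - down for _key, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {"mean_dwell_ms": mean(dwells), "mean_flight_ms": mean(flights)}

# Illustrative events for typing "cat": (key, key-down time, key-up time) in ms.
sample = [("c", 0, 95), ("a", 140, 230), ("t", 290, 370)]
print(keystroke_features(sample))  # {'mean_dwell_ms': 88.33..., 'mean_flight_ms': 52.5}
```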
Given the nature of biometric data, and the emphasis on words like ‘allow’ and ‘possible’, the guidance has the feel of firms being expected to show that their data is not special category, rather than the other way round.
Let’s look at two areas to which insurers need to pay particular attention.
Profiling And Consent
Insurance has been categorised as a high risk activity in relation to profiling and automated processing of personal data. So, to use our earlier example again, if a system carries out keystroke analysis from the moment someone starts to type their personal details into a quote engine, then you have biometric data being created as part of a profile of an identifiable person, which is then subject to automated decision making.
Another example can be found in a telephone-based claims management service, in which the person’s voice is analysed for signals of what type of character they might be. There are very obvious questions around consent in activities like this. In my experience, consent here receives little attention.
Back to the Purpose
So what, some of you may ask: aren’t insurers meant to watch out for these things? And so they are, so long as they do so in relation to risk, and not, for example, in relation to optimisation practices. Given that many insurers are engaging in price and claims optimisation, this raises the question: are they doing so legally?
Does their data protection impact assessment detail this optimisation? Does their policy document confirm it? If so, is it a valid use? If not, someone may ask why the firm is processing biometric data outside its stated purpose.
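One way to picture that question is as a simple purpose-limitation audit: does every processing activity map back to a purpose declared in the DPIA and policy documents? Everything named below is hypothetical; the sketch just shows the shape of the check.

```python
# Purposes declared in the (hypothetical) DPIA and policy documents.
DECLARED_PURPOSES = {"fraud_prevention", "claims_handling"}

# Actual processing activities and the purpose each one serves (all invented).
ACTIVITIES = {
    "keystroke_fraud_score": "fraud_prevention",
    "voice_claims_triage": "claims_handling",
    "price_optimisation_profile": "price_optimisation",  # never declared
}

for activity, purpose in ACTIVITIES.items():
    status = "ok" if purpose in DECLARED_PURPOSES else "OUTSIDE STATED PURPOSE"
    print(f"{activity}: {status}")
```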
The ICO makes an interesting point at this stage:
“In most cases, explicit consent is likely to be the only valid condition for processing special category biometric data. Where there is an imbalance of power between you and the person, you should carefully consider whether relying on explicit consent is appropriate.... You must offer a suitable alternative to people who choose not to consent and ensure they do not feel under pressure to consent.” (ICO’s emphasis)
In a sector like insurance, with an inbuilt imbalance of power, this stacks up a whole series of hoops for an insurer to jump through.
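In practice, that passage implies something like the consent gate sketched below, with a genuine alternative route for those who decline. This is my own illustration of the logic, not a prescribed design; the key point is that ‘declined’ must lead to a fully functional fallback, not a degraded service.

```python
from enum import Enum

class Consent(Enum):
    EXPLICIT = "explicit"   # freely given, specific, informed, unambiguous
    DECLINED = "declined"

def route_claims_call(consent: Consent) -> str:
    """Route a claims call based on the caller's consent to voice analysis.

    Per the draft guidance, explicit consent is likely the only valid condition
    for processing special category biometric data, and a suitable alternative
    must exist so that no one feels pressured to consent.
    """
    if consent is Consent.EXPLICIT:
        return "handle call with voice analytics enabled"
    # The alternative must be a real, equivalent service.
    return "handle call through the standard process, with no biometric analysis"

print(route_claims_call(Consent.DECLINED))
```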
Fraud
The ‘get out of jail free’ card when it comes to automated profiling using biometric data is of course that insurers are using it for the ‘prevention and detection of unlawful acts’. It’s a card that has its limitations though:
“You must be able to show that using special category biometric data is “necessary” both for the prevention and detection of crime and for reasons of substantial public interest. To satisfy this condition, you should demonstrate you are using biometric data in a targeted and proportionate way to deliver the specific purposes set out in the condition, and that you cannot achieve them in a less intrusive way.” (ICO emphasis)
The challenge that insurers will need to face up to is application fraud. Every single person seeking a personal lines quote is given a fraud score, which starts to be calculated from the moment they begin typing. Is this ‘targeted and proportionate’? Some will see this as people being deemed guilty until the automated system judges them innocent.
What’s more, the crossover between guilt and innocence is set by the insurer, not by the legal system, and it is less fixed than people may think. How else could a near threefold increase in suspected cases of motor application fraud happen between 2017 and 2019, without someone turning the dial that marks the crossover from innocence to guilt? (more here)
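To see how easily that dial can be turned, consider a toy simulation. The score distribution and the two cut-off settings below are pure invention, but they show how a modest change to a threshold can multiply the number of applications flagged as suspect, with no change at all in the applications themselves.

```python
import random

random.seed(42)
# 100,000 hypothetical application fraud scores, skewed towards low (benign) values.
scores = [random.betavariate(2, 20) for _ in range(100_000)]

for threshold in (0.22, 0.17):  # two invented cut-off settings
    flagged = sum(score >= threshold for score in scores)
    print(f"cut-off {threshold:.2f}: {flagged:,} applications flagged as suspect")
```

Nothing about the applicants changes between the two runs; only the chosen cut-off does.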
Am I saying that we should stop application counter fraud measures? Not at all. What I am saying is that the opacity with which they are carried out and tracked is a significant exposure for the sector. This guidance could become a lever with which that opacity is challenged.
The Technologies
It’s worth reminding ourselves of what the ICO found when it lifted the lid on a range of biometric technologies. This is from their October 2022 announcement:
“Developments in the biometrics and emotion AI market are immature. They may not work yet, or indeed ever. While there are opportunities present, the risks are currently greater. At the ICO, we are concerned that incorrect analysis of data could result in assumptions and judgements about a person that are inaccurate and lead to discrimination. The only sustainable biometric deployments will be those that are fully functional, accountable and backed by science. As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area. The ICO will continue to scrutinise the market, identifying stakeholders who are seeking to create or deploy these technologies, and explaining the importance of enhanced data privacy and compliance, whilst encouraging trust and confidence in how these systems work.” (my underlining) (more here)
That’s about as blunt as it gets. They’ve yet to see any emotion AI technology that satisfies data protection requirements. And note their point about ‘proportionality, fairness and transparency’. For sure, the ICO is a data protection regulator, but data protection is being seen and judged through an increasing number of lenses.
Some Steps to Start Off With
- Appoint someone as the firm’s biometrics expert. Ideally situate them within the data ethics team or a wider ethics committee.
- Check that the firm is challenging itself sufficiently on the proportionality of how it uses biometrics. Is everyone just nodding along, whether for an easy life or for the opportunity, or is someone actually asking hard questions? The questions being raised by the Citizens’ Biometrics Council are a good starting point.
- Prepare for greater scrutiny. Initially this will be about ‘showing your cards’, but it will soon extend to ‘showing your workings’. Then take the ICO’s warnings about biometric technologies as the template by which those cards and workings will be weighed up.