Feb 19, 2025

The Social Scoring Ban – Confusion for Insurers

Should insurers feel reassured by EU guidance about the prohibition on social scoring in the AI Act? It would be easy for them to think that way. The problem is that the guidance has some flawed assumptions about insurance. When those are corrected, the prohibition should be taken very seriously.

Is personalised underwriting toast? No, but it is in danger of landing sticky side down.

The EU has recently issued detailed guidance on the prohibitions on certain uses of artificial intelligence set out in the AI Act. Each element of the prohibition clause wording is examined, with several examples provided of what would and would not be lawful.

The section on social scoring covers about fourteen pages. Insurance is referenced on several occasions in that section, yet inconsistently so. It’s used to illustrate an unlawful underwriting practice in life insurance, and it also appears in broader statements like the following…

“(175) … Recital 31 AI Act, in particular mentions that the prohibition ‘should not affect lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law’. For example, credit scoring and risk scoring and underwriting are essential aspects of the services of financial and insurance businesses. Such practices, as well as other legitimate practices (i.e. to improve the quality and efficiency of services, to ensure more efficient claims handling, …., fraud prevention and detection, ….), are not per se prohibited, if lawful and undertaken in line with the AI Act and other applicable Union law and national law...”

Unacceptable Underwriting

There you have it: risk scoring is an essential practice of an insurance business and, when undertaken in line with EU and national law, is out of the scope of the AI Act’s prohibition on social scoring. OK, so now let’s bring in that life insurance example, categorised as an “unacceptable social scoring practice” in paragraph 170 of the guidance:

“An insurance company collects spending and other financial information from a bank which is unrelated to the determination of eligibility of candidates for life insurance and which is used to determine the price of the premium to be paid for such insurance. An AI system analyses this information and recommends, on that basis, whether to refuse a contract or set higher life insurance premiums for a particular individual or a group of customers.”

The difference between these two references to underwriting practices can be explained in this way. The ‘risk scoring’ reference in clause 175 is a high level generic description of underwriting – it’s about scoring risks for transfer from consumer to insurer. The life insurance example in clause 170 is much more specific. It focusses on ‘spending data’ obtained by a life insurer from a bank for underwriting purposes. The Commission sees what you spend in the shops as out of context for life risk scoring and so falling within the prohibition.

What is Risk Data?

Insurers will look at those two clauses (170 and 175) and say ‘hold on – our analysis shows that spending data is correlated with risk and so should be allowed’. And they will point to the fact that spending data sourced by insurers from banks has been used in annual and long term policy underwriting across the EU and UK for at least a decade, if not two. And no regulator has challenged that.

The problem the EU has created here is that in clause 175, it is thinking of insurance in relatively traditional terms – for example, when motor underwriting used about a dozen data points relating to the driver and the car (model, occupation, experience, etc.), reviewed annually.

In its life insurance example, the Commission comes at insurance from a different angle. There, it took a particular category of data and said that it should not be used to risk score in life insurance. And I fully expect that if it were to do the same with other categories of data (travel, social media, etc.), it would likely come to a similar conclusion.

This is why insurers need to weigh up this guidance on prohibited practices very carefully. For sure, risk scoring is allowed, but what exactly does the guidance mean by risk scoring? Insurers now think along the lines that all data is risk related if it can be correlated with loss. And working on that basis, they have justified to themselves the right to collect, analyse and score all data relating to an individual or group of individuals. And this is not just underwriting – the same view exists for claims and counter fraud.

Think of it this way. In the past, underwriters thought in terms of certain risk features and then applied statistical correlations to data relating to those risk features. Now, underwriters think in terms of correlations and then ascribe the risk label to those that interest them. It’s a small but fundamental change (more here).  

What is clearly needed is some basis upon which to address the different layers between risk scoring per se (the top generic layer) and the collection and analysis of specific types of data (the particular layer at the bottom). Luckily for policy makers, that was researched a few years ago by Prof. Barbara Kiviat of Stanford University in the US. Her paper on insurance, data and consumer attitudes emphasised the notion of logical relatedness, and I would very strongly recommend that you read it (more on why here).

In between that top generic layer of risk scoring per se, and the particular layer at the bottom about using this or that type of data, lies a range of practices built upon the latter in order to personalise the nature of the former. Most of these practices rely on secondary data: in other words, data originally collected as primary data in one context (shopping, for example) that is deemed unrelated to the context in which insurers are using it. And that is why we now turn to context.

Context Counts

The guidance makes clear that for social scoring to be lawful under the AI Act, it must be done:

“…for a specific purpose in the related context as that in which the personal data used for the score were collected…”.

In my opinion, these four modern insurance practices fail that test:

  • Price walking – when an insurer uses data to gauge the willingness of different categories of consumer to accept a price increase;
  • Price optimisation – when an insurer uses data to gauge the reaction of an individual consumer to a particular price;
  • Claims walking – when an insurer uses data to gauge the willingness of claimants with a particular type of claim to accept a lower settlement without this leading to a complaint;
  • Claims optimisation – when an insurer uses data to gauge the reaction of an individual consumer to a particular settlement offer.

Specific Purpose

Let’s now bring in a key clarification in the guidance. In clause 147, it says:

“…the prohibition (on social scoring) is not intended to affect lawful practices that evaluate people for specific purposes that are legitimate and in compliance with Union and national law, in particular where those laws specify the types of data relevant for the specific evaluation purposes and ensure that any resulting detrimental or unfavourable treatment of persons is justified and proportionate... .”

Can insurers interpret this along the lines of ‘if what we are doing is lawful at the moment, then we can carry on doing it’? These four points are worth emphasising here.

Firstly, correct me if I’m wrong, but I don’t think many laws in the EU (or the UK for that matter) specify the types of data relevant to risk scoring. It would be like trying to legislate for a moving Tower of Babel.

Secondly, nearly all law about people and data focusses on personal data. Group data is hardly ever addressed, yet group data is hugely significant when it comes to applying secondary data for the underwriting of a portfolio.

Thirdly, the danger here is if that phrase in clause 147 is misinterpreted as ‘if we are able to do what we are doing at the moment, then it must be lawful.’ The pricing super-complaint showed the market just how dangerous ‘pricing group think’ can be.

Legitimacy

And lastly, bear in mind that word ‘legitimate’. Then read this…

“The severity of the impact and the interference with the fundamental rights of the person concerned resulting from the social scoring compared to the gravity of the social behaviour of the person should determine whether such treatment is disproportionate for the legitimate aim pursued, taking into account the general principle of proportionality.”

In other words, the gravity of the social behaviour being datafied must be weighed in relation to proportionality and legitimacy. I’m not sure that the underwriter tracking a thousand rating factors for motor insurance will be able to put together a convincing case. And if ultimately they can’t, then is personalised underwriting, to all intents and purposes, steering insurance towards a cliff?

To Sum Up

The EU’s guidance on prohibitions in the AI Act is a good example of legislators thinking they know how a sector like insurance works, when in fact their understanding is more than a little out of date. What EU legislators think of as risk scoring is about twenty years behind current practice.

As a result, the guidance sends a confused message to the sector: it talks about risk scoring as if it’s predominantly about a few dozen risk factors (and so acceptable), when nowadays it’s predominantly about the scoring of huge amounts of data very loosely associated with risk and mostly interpreted out of its original context (which the guidance sees as social scoring).

What this adds up to is the need for EU insurers to be much more challenging about what they do and how they do it, in the context of this new legislation. On the surface, it all looks fine, but dig a little and the questions quickly stack up.

Over the next five to eight years, this legislation has the potential to flip digital underwriting practices and land them sticky side down. Insurers will be left with quite a mess to clear up.

Feel free to get in touch if you would like to explore the implications for insurers of the social scoring prohibition further. My independent perspective will help inform your decision making.
Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.