The EU’s Social Scoring Ban – What Does It Mean?

It’s not uncommon, after a big piece of legislation launches, for questions to be asked about what exactly this or that clause means in a particular context. With the EU’s AI Act, we are now entering such a period of interpretation. The social scoring prohibition within it is going to be one of those clauses seeing a lot of attention.

That’s because the clause is worded in a somewhat broad way. There is more than a hint of symbolic gesturing sitting behind its wording. As a result, a vagueness has crept into it. So the question becomes: is it a woolly prohibition that just needs to be tightened up here and there? Or could it turn out to be a loose cannon, primed and waiting for anyone wanting to fire off a challenge at a questionable digital practice?

Let’s consider the wording first. This is Article 5(1)(c) of the Act...


Article 5: Prohibited AI Practices

The following AI practices shall be prohibited:

(c) the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;

(ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity;


Do Insurers Socially Score?

So how might such a wording be interpreted in relation to what modern insurers typically get up to in these digital times? In essence, do insurers socially score?

Think of it this way. Do insurers evaluate and classify people, either individually or in groups? Of course they do.

Do insurers do this over time, as opposed to just as a one-off? Of course they do, for the policy is an annual one and there’s an ongoing obligation in it to declare material changes.

Do insurers do the above in relation to the consumer’s social behaviour or known, inferred or predicted personal or personality characteristics? That is what modern insurance is all about, so the answer is yes. Twenty years ago, a yes answer might have been in doubt, but not now.

Does this lead to detrimental or unfavourable treatment of individuals or groups in social contexts that are unrelated to the contexts in which the data was originally generated or collected? There are two questions here.

The first relates to treatment. Given that differentiation of risk is at the heart of insurance, and that being deemed a higher risk will have a detrimental effect on your premium and/or cover, the answer here seems to be yes.

The second part of the question compares the context that is insurance rating with the context in which the data for that rating was generated or collected. Are those contexts different? In the past, most insurers relied on primary data, being data given by a consumer to an organisation for use by that organisation in providing a product or service to that consumer.

That’s all changed now. In the present day, insurers rely a lot more on secondary data, being data given by a consumer to one organisation for use by that organisation, but which is then sold to a different organisation for designing and delivering their products and services to that consumer and others. And it seems obvious that with regard to secondary data, the two contexts referred to above are different. Data given to a food retailer, travel firm or social media platform is not data given for insurance purposes.

And finally, does the social scoring lead to detrimental or unfavourable treatment of individuals or groups that is unjustified or disproportionate to their social behaviour or its gravity? In short, are insurers using a mountain of secondary data to crack a rating nut? Or to put it in policy-making terms, do the benefits of rating outcomes enabled by social scoring outweigh the impact of such scoring on citizens’ fundamental rights? Well, given the overall wording of the Act, I’m not sure that policy makers would look that favourably on insurers. That’s because they will struggle to see the point of an insurer having 100+ rating factors for a line of business like motor or household.
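Pulling those steps together, Article 5(1)(c) reads as a cumulative test: three threshold conditions that must all hold, plus at least one of the two harm limbs. Here is a minimal sketch of that structure in Python (the field names and the simple yes/no framing are my own simplification for illustration, not anything defined in the Act):

```python
from dataclasses import dataclass

@dataclass
class ScoringPractice:
    """Simplified, hypothetical model of an AI scoring practice."""
    evaluates_or_classifies_persons: bool   # evaluation/classification of people or groups
    over_a_period_of_time: bool             # ongoing, not a one-off assessment
    based_on_behaviour_or_traits: bool      # social behaviour or known/inferred/predicted characteristics
    detriment_in_unrelated_context: bool    # limb (i): harm in a context unrelated to data collection
    detriment_disproportionate: bool        # limb (ii): harm unjustified or disproportionate

def caught_by_article_5_1_c(p: ScoringPractice) -> bool:
    """All three threshold conditions must hold, plus either harm limb (i) or (ii)."""
    threshold = (p.evaluates_or_classifies_persons
                 and p.over_a_period_of_time
                 and p.based_on_behaviour_or_traits)
    harm = p.detriment_in_unrelated_context or p.detriment_disproportionate
    return threshold and harm

# Applying the answers argued above for typical data-driven insurance rating:
insurance_rating = ScoringPractice(
    evaluates_or_classifies_persons=True,
    over_a_period_of_time=True,             # annual policy, ongoing duty to declare changes
    based_on_behaviour_or_traits=True,
    detriment_in_unrelated_context=True,    # secondary data gathered for other purposes
    detriment_disproportionate=True,        # the '100+ rating factors' question
)
print(caught_by_article_5_1_c(insurance_rating))  # True, on this reading
```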

Expert Opinion

Let’s return to our two opening questions. Does social scoring happen in insurance? In my opinion, the answer seems to largely be yes. And was it the intention of legislators to address modern insurance practices through the social scoring ban? To answer this question, I helped organise a webinar in early September 2024 to bring together the views of academics in the fields of law and sociology. Legal scholars seemed obvious participants in a discussion like this. And sociologists are just as valuable, for they have lots of insights into the nature of society and the nature of scoring practices that can shape it.

Attendees were:

  • Marvin van Bekkum ; PhD Candidate ; Radboud University ; Netherlands    
  • Marta Infantino ; Associate Professor of Comparative Law ; University of Trieste ; Italy
  • Liz McFall ; Professor in the Sociology of Markets ; University of Edinburgh ; Scotland
  • Gert Meyers ; Assistant Professor, Digitalisation of Health and Wellbeing ; Tilburg University ; Netherlands
  • Brend Plantinga ; Senior Adviser, Algorithms and AI ; Autoriteit Persoonsgegevens (Dutch Data Protection Authority) but attending in a personal capacity ; Netherlands
  • Frederik Zuiderveen Borgesius ; Professor of ICT and Law ; Radboud University ; Netherlands
  • and myself.

Thoughts from the Webinar

The social scoring ban grew out of a broad concern about practices that the authorities didn’t want to see happen in the EU. The idea was to distance the EU from the type of social scoring that has been emerging in China and Russia. It was a statement about “pushing back on the stuff that’s super bad”. There was also a concern that existing legislation didn’t provide protection for consumers against social scoring type practices.

The definition adopted for social scoring was judged to be far from clear. The “evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour” was too vague and left legislators’ intentions unclear. This made it difficult to see how the social scoring ban would then be applied.

This then raises questions about what this part of the AI Act was there for. Was it only there for markets that aren’t yet established and for which there are no regulations at present?

This vagueness means that it could be difficult to apply the ban to insurance. Legislators are unlikely to have intended it to catch insurance, and so are unlikely to support its enforcement in that way. At the same time, the sector’s legal advisers will be telling their clients that what they do is fundamentally different to what was intended with the ban. In short, they shouldn’t worry.

Yet, might there be practices within insurance that could trigger the clause? Social scoring based on data collected in one context and used in another context was viewed as an exposure. That is certainly the direction in which modern insurance has taken its digital transformation, so a lot will then hinge on how an ‘unrelated context’ is interpreted.

Sector AI Problems

The discussion then looked at the type of AI uses that might cause a problem for the sector. These were highlighted:

  • irresponsible pricing practices;
  • rating factors that no one can explain in a reasonable risk context;
  • the use of emotion AI and forms of biometric categorisation;
  • claims optimisation, whereby you profile a claimant in order to understand how low a settlement they would be prepared to accept;
  • claims walking, where you move volume settlements down until complaints go up;
  • fraud scoring, where a person’s trustworthiness is judged, often from when they first seek a quote;
  • the use of genetic test data in claims settlements.

What this points to, then, is a number of practices, now common across modern insurance and reliant on secondary data, that could turn into exposures for insurers in relation to this ban.

Negotiating the AI Act involved a fair amount of give and take, and the social scoring ban seems to have secured its place in Article 5’s list of prohibited practices as a signal to consumer groups that the AI Act wasn’t going to be too business friendly.

Yet at the same time, the prohibition was worded in relatively extreme terms for legislation like this. This points to the ban being targeted at almost science fiction level extremes. That said, legislators will be aware that ‘science fiction level’ developments have a habit of going live sooner than is often anticipated. The future is nearer than most people think.

My Thoughts After the Webinar

So what counts as extreme, and who gets to decide? These questions matter because they could well be the touchstones for an insurance practice being challenged.

We are seeing several challenges being put to the sector at the moment. Discriminatory practices are the main one, with a second being concerns about access to insurance for those less well off. These challenges are very much wrapped up with the digital practices many insurers have been adopting.

Challenges is a polite word. The views that underlie many of them refer to outcomes judged unacceptable in today’s society. Evidence has been put into the public domain showing that people are paying more because they are black, or poor, or both. And that’s if cover is available at all.

What we have, then, is science fiction that could already be upon us, and practices viewed as extreme in relation to the outcomes they generate. This positions the social scoring ban as a door through which these challenges could be driven.

In short then, the social scoring ban shouldn’t worry insurers doing business in the EU, except where they’re doing things (like those listed earlier) that others strongly believe to be producing unacceptable outcomes. At that point, the worry indicator should rise, for the ban presents an opportunity for those mounting challenges to seek legal redress.

To do this, those challenging would need to assemble evidence and data to support their case. And we are seeing this happening in jurisdictions like the UK and the US: one example is this case and another is this case. So while in the past it was left to the regulator to assemble such evidence, consumer groups and campaigning law firms are now initiating this themselves.

An Example

Let’s turn to an example to illustrate all this. Counter fraud is a key function now within every insurer. Its people are described as being engaged in a ‘war on fraud’. There are two particular concerns shaping insurance counter fraud agendas at the moment.

The first concern is to spot fraudsters before they ‘get on your books’. This means the use of a lot of inferential analytics and biometrics, applied from the moment a consumer starts typing their quote details into a website. If that person’s ‘fraud score’ turns out to be above a certain threshold, then that person is ‘no quoted’ or quoted a sum so high that they would never take out the policy.
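To make the mechanics of that threshold gate concrete, here is a minimal sketch (the signal names, weights and cut-offs are entirely invented for illustration; no real insurer’s or vendor’s scoring logic is implied):

```python
# Hypothetical threshold-based fraud scoring at quote stage.
# All signal names, weights and thresholds below are invented for illustration.

NO_QUOTE_THRESHOLD = 0.8   # assumed cut-off above which no usable quote is offered
PENALTY_LOADING = 5.0      # assumed multiplier used to price a suspect applicant out

SIGNAL_WEIGHTS = {
    "typing_hesitation": 0.3,   # inferred from how the quote form is filled in
    "detail_amendments": 0.4,   # e.g. repeatedly editing declared details
    "device_risk": 0.3,         # device or biometric risk signal
}

def fraud_score(signals: dict[str, float]) -> float:
    """Combine behavioural/inferential signals (each in [0, 1]) into one score."""
    score = sum(w * signals.get(name, 0.0) for name, w in SIGNAL_WEIGHTS.items())
    return min(max(score, 0.0), 1.0)

def decide_quote(base_premium: float, signals: dict[str, float]) -> float | None:
    """Return a premium, a deliberately loaded premium, or None ('no quote')."""
    score = fraud_score(signals)
    if score > NO_QUOTE_THRESHOLD:
        return None                            # the applicant is silently declined
    if score > 0.5:
        return base_premium * PENALTY_LOADING  # priced so high it won't be taken up
    return base_premium

# A high-scoring applicant never sees a usable price:
print(decide_quote(400.0, {"typing_hesitation": 1.0,
                           "detail_amendments": 1.0,
                           "device_risk": 0.9}))  # None
```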

The second concern is to identify and address new types of fraud threat. This means the use of many forms of analytics and wide ranging data to spot patterns as early as possible.

With both of these concerns, the ‘utopia’ is to achieve “…fully reflexive, dynamic and easy to change customer... journeys based on risk signals”. In short, using fraud scores to immediately and continually tailor products, delivery and services. And attendees at a recent industry briefing thought that “…that utopia is now well within its grasp”.

What does this add up to then? Everyone engaging with an insurer (and I mean everyone - you, me, your granny) has their trustworthiness automatically scored, in real time, across all the transactions making up the customer journey.

There’s clearly an upside for consumers from this. Fraud costs are controlled and premiums lowered. Yet as I’ve said for ten or more years, tackling fraud is an ethical thing to do, but how insurers tackle fraud also has an ethical side to it. The downside is that people unknowingly experience detriment, in the form of financial and emotional outcomes that may seem micro to the insurer, but could well be macro to the individual.

Artificial intelligence is now at the heart of counter fraud initiatives in insurance. The question then becomes: is there sufficient governance of how it is used to contain that downside? And for many years, I and others have raised concerns about governance being, well, let’s just say, less than it should be.

In short, the sector is doing things, like fraud scoring, that expose it to a challenge being directed through the social scoring ban.

Of course, all this is about an EU Act. What about here in the UK? Well, the best way to answer that is through the example of this class action case in the US, involving insurers and providers of claims and counter fraud analytics. Many of the providers operate in the EU and UK as well.  

To Sum Up

I wrote ten years ago that social scoring could turn into a nightmare for insurers. So does this ban on the use of AI to socially score represent that nightmare?

I think that long established methods of rating policies and settling claims fall well outside of the social scoring ban. Attendees at the recent webinar thought so too. The problem lies with extensions of those core practices enabled by AI, like claims optimisation and the use of emotion technologies. All agreed that such practices could fall within the ban.

Alongside this are a series of challenges being put to the sector, about some of the outcomes being experienced by groups in society. If those challenges aren’t fully engaged with by the sector, they risk turning into confrontations. In such situations, the AI Act presents a clear opportunity for such practices to be legally challenged. Therein lies the nightmare for the sector.

If you'd like to explore the issues raised here in greater detail, get in touch.