Feb 20, 2018

The Ethical Problems at the Root of the Mohammed and John Quotations

Two UK insurers have been accused of charging people more for motor cover if their name was Mohammed than if it was John. The accusation has been vehemently denied by people across the sector, yet the story has attracted widespread attention. And it’s unlikely to be a one-off either: the sector should expect others like it to crop up over the next few years. Here’s why.

Insurance people have based their rebuttal of the story on the public not understanding how motor insurance is priced nowadays. Central to this are the real-time fraud assessments made at the quotation stage. Anyone seeking quotes that are identical in all but the proposer’s name (as the original investigating journalists did) could trigger fraud warnings. Insurers are saying that this fraud angle accounts for the difference in quotes between Mohammed and John.

I can see where they’re coming from, but it’s not a convincing rebuttal. The premium differences between the quotes are significant, yet nowhere near what you would expect when the fraud warning light is flashing red. I’ve also read conversations on social media in which experienced technologists went to considerable lengths to neutralise that fraud angle, and still found the Mohammed and John premium difference easy to reproduce.

Easy to Reproduce

And therein lies the reputational risk for insurers. Journalists have found the premium difference to be relatively easy to reproduce. As a result, they’ve been drawing similar conclusions with relative ease, all of them bad for insurers.
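To make that reproducibility concrete, here’s a minimal sketch of such a paired test in Python. The get_quote function and the profile fields are hypothetical stand-ins for whatever comparison site or insurer service the journalists actually queried.

```python
# A minimal sketch of a paired quote test: two requests identical in
# every field except the proposer's first name. get_quote() is a
# hypothetical stand-in for a real quotation service.
import copy

def get_quote(profile: dict) -> float:
    # Stand-in: replace with a real call to a quotation API.
    return 0.0

base = {
    "first_name": "John",
    "last_name": "Smith",       # every other detail held constant
    "postcode": "M1 1AE",
    "vehicle": "Ford Focus",
    "age": 35,
    "claim_free_years": 5,
}
variant = copy.deepcopy(base)
variant["first_name"] = "Mohammed"

diff = get_quote(variant) - get_quote(base)
print(f"premium difference attributable to the name alone: {diff:+.2f}")
```

If the name were feeding only a fraud check, one would expect a test like this to produce either no price difference or an outright refusal to quote, not a stable premium gap.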

Another defence put forward on behalf of the insurance sector is that “we’re not like that; we wouldn’t do that.” And yes, the sector is full of good people, but that doesn’t mean they’re incapable of making some poor decisions. The ‘good people’ defence is also a weak one when viewed through the lens of equality legislation, which says that intention is not a defence; it’s the outcomes being experienced that matter.

So what is causing this Mohammed and John premium difference? I’m certain that it is not down to any racist intent within insurance circles. I am convinced that its origins lie in the increasingly complex underwriting systems being deployed by many insurers.

At the heart of many underwriting systems now are algorithms looking for patterns of risk and opportunity in the pricing of new policies. Insurers have been investing huge amounts in algorithm-based underwriting, in the hope that it will identify new insights for profitable exploitation.

Algorithmic Underwriting

Consider two factors important for these algorithmic insights: clustered correlations and training data. Algorithms learn to focus not simply on individual correlations, but on clusters of correlations. From these, the algorithm ‘learns’ something new and insightful about the risk and then remembers it for next time.

This brings into being a piece of ‘manufactured information’ (more here) that the machine remembers and uses the next time that cluster of correlations emerges. It doesn’t have to record it in a data field – it is simply learnt and remembered for next time. And that manufactured information can be pretty sensitive, like a pregnancy, or personal, like race or gender.
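To illustrate, here’s a minimal sketch in Python, on made-up synthetic data, of how a cluster of individually innocuous features can encode a sensitive attribute. The three proxy features and their strengths are illustrative assumptions, not real rating factors.

```python
# No single column names the sensitive group, yet a simple model can
# reconstruct it from the cluster of proxies taken together: the kind
# of 'manufactured information' described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)   # sensitive attribute, never a rating factor

# Three proxy features, each only weakly tied to the group on its own.
postcode_band   = 0.6 * group + rng.normal(0, 1, n)
occupation_code = 0.6 * group + rng.normal(0, 1, n)
vehicle_segment = 0.6 * group + rng.normal(0, 1, n)
X = np.column_stack([postcode_band, occupation_code, vehicle_segment])

X_tr, X_te, g_tr, g_te = train_test_split(X, group, random_state=0)
clf = LogisticRegression().fit(X_tr, g_tr)
print("group recovered from proxies:", clf.score(X_te, g_te))
# Accuracy comes out clearly above the 0.5 chance level: the cluster
# jointly reveals what none of its members reveals alone.
```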

Algorithms learn through being fed lots of training data. Such data is drawn from the deep well of digital decisions that people now make as part of their day-to-day online lives. The problem is that researchers have found that such training data can contain all sorts of bias, in the same way that the society from which it is drawn contains all sorts of bias (more here).
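Here’s a second minimal sketch, again on made-up synthetic data, of how bias baked into historical premiums can pass through such proxies into a pricing model that never sees the sensitive attribute itself. The £80 loading and the proxy strengths are illustrative assumptions only.

```python
# Historical premiums carry a baked-in bias against one group. A model
# trained only on proxy features, never the group column, still
# reproduces part of that bias in its prices.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)                               # sensitive attribute
proxies = 0.6 * group[:, None] + rng.normal(0, 1, (n, 3))   # e.g. postcode, job, car

true_risk = rng.normal(0, 1, n)
historic_premium = 500 + 100 * true_risk + 80 * group       # biased training labels

pricer = LinearRegression().fit(proxies, historic_premium)  # group never seen
pred = pricer.predict(proxies)
print("mean predicted premium, group 0:", round(pred[group == 0].mean(), 1))
print("mean predicted premium, group 1:", round(pred[group == 1].mean(), 1))
# A gap remains between the two groups' average prices: the proxies
# carry part of the historical bias into the new model.
```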

What does this add up to then? It points to some insurers using sophisticated underwriting systems without properly understanding and addressing the ethical issues that they can give rise to. In such circumstances, the ‘good people’ defence, like the ‘the public doesn’t understand insurance’ defence, is weak at best. In fact, the latter tempts people to ask whether insurance people themselves understand how their underwriting works.

That’s a reasonable question to ask. Some years ago, underwriters began talking about how they no longer understood how their premiums were calculated. Their remarks struck me as odd at the time, and now seem prophetic of the problems confronting underwriters.

It’s all about Outcomes

Underwriting systems, be they simple or sophisticated, should not produce outcomes that are discriminatory. This would be illegal and is of course deeply unethical. Equally, underwriting systems that have anti-fraud systems working in tandem with them should not produce outcomes that are discriminatory. It’s as simple as that.

Any system used by insurers, be it in underwriting, claims, marketing or anti-fraud, should have been checked for discriminatory outcomes before being released. There are tools with which such systems can be tested and then audited, to guard against outcomes like these being generated.
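As one illustration, here’s a minimal sketch of the kind of automated check that could sit in a pre-release test suite: compare outcomes across groups on a matched audit panel and fail the release if the gap breaches a tolerance. The 5% tolerance and the panel idea are illustrative assumptions, not a regulatory standard.

```python
# A simple disparate-outcome check for a pricing engine's output.
import numpy as np

def outcome_gap(premiums: np.ndarray, groups: np.ndarray) -> float:
    """Relative gap between the highest and lowest group-mean premium."""
    means = [premiums[groups == g].mean() for g in np.unique(groups)]
    return (max(means) - min(means)) / min(means)

def assert_no_disparate_outcome(premiums, groups, tolerance=0.05):
    gap = outcome_gap(np.asarray(premiums, dtype=float), np.asarray(groups))
    if gap > tolerance:
        raise AssertionError(
            f"group premium gap of {gap:.1%} exceeds the {tolerance:.0%} tolerance"
        )

# Usage: price a panel of otherwise-identical risks that differ only in
# the attribute under test, then run the assertion before release.
# assert_no_disparate_outcome(panel_premiums, panel_groups)
```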

The big question then is – why didn’t this happen? Or if it did, why didn’t it pick up on these outcomes? They were, as previously mentioned, so easily reproducible. Do insurers have sufficiently robust testing regimes? Have they been scoped with the right range of test criteria?

The fact that such questions arise also points to potential weaknesses in the robustness of management systems at some insurers. This of course has implications for regulators, who may in turn have to consider the extent to which their work on insurer management systems has addressed testing regimes. It’s a systemic risk that’s been talked about for a while.

Such questions will also have implications for investors, some of whom will be noticing that this is not the first time that new approaches to underwriting have grabbed the wrong headlines (more here). And then there are the non-executive directors at these insurers, who will be experiencing the challenges of providing oversight of underwriting that is increasingly algorithmically driven.

Finally, consider the external auditors, signing off the income and risk sides of these insurers’ financial reports. No wonder then that last week, the UK’s Financial Reporting Council identified the financial sector’s use of big data and artificial intelligence as a ‘hotspot’ risk to the quality of actuarial work. Let’s hope they’re equally focused on auditor independence, for many big auditing firms are also prominent in advising insurers on data analytics and AI.

Five Challenges

So what should we look for going forward? Here are five challenges that insurers need to recognise:

  • Insurers need to approach algorithmic accountability in a structured manner, and they need to do so in both operations and oversight. There’s plenty of material available for them to draw on for such accountability frameworks.
  • They need to take a serious look at the ethical risks that such frameworks are addressing. Recent events suggest that some insurers are just not seeing clear risks.
  • They need to react with some sense of urgency, to show boards, regulators and investors that they have control of the situation, and to head off the possibility of some form of super-complaint being lodged against them.
  • They need to devote sufficient time and resources to what could well turn out to be a highly complex problem. To illustrate this, consider the problems Google must have encountered when trying to stop the auto-tagging of photos of African-Americans as gorillas. The solution they came to rely on was to delete that term as an allowable auto-tag, rather than change their algorithms to avoid it in the first place. Insurers shouldn’t underestimate the mountain that underwriting algorithms could well have created.
  • Insurers need to take a closely considered look at the implications of making their core functions seriously complex. Any underwriting or claims director talking about no longer knowing how their premiums or settlements are calculated should be taken as a warning by the wider business. It is no longer a sign of sophistication.

Insurers can expect to experience a series of reputational challenges in which consumer detriment is transparent and easily reproducible. They need to raise their game: in how they recognise such challenges, in how they respond to them, and in how they configure their processes to avoid them in the first place.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.