The insurance market is entering a new phase, in which underwriting and claims practices are being transformed through access to vastly greater amounts of data about customers. This will bring a whole raft of benefits to consumers and, through new services and products, could transform public perceptions of the sector. However, this transformation has a number of ethical issues associated with it and, if not handled properly, those issues could turn out to be the greatest ethical challenge facing the sector over the next five years.
A ‘nightmare’ ethical risk that might emerge out of this ‘big data transformation’ of insurance is something called social sorting (which I explain in this earlier blog). The possibility that underwriting and claims practices become warped by discriminatory analysis is, in traditional risk assessment language, a risk that carries a high score for ‘significance’. So how might it score at the moment on ‘likelihood’?
In weighing up that question, two things come to mind. Firstly, most big ethical problems in sectors like insurance have their origins not in bad people doing bad things, but in good people making bad choices. So while I recognise that UK insurers are full of good people for whom the prospect of discrimination through something like social sorting is anathema, I also recognise that the sector has in the past made choices poor enough to position it below banking in surveys of public trust.
And the second thing that comes to mind is a remark heard increasingly often: that personal lines underwriters no longer understand how the end premiums of their products are calculated, because the process for calculating them has become so complicated. If you then read about some of the accusations levelled against data brokers serving the US financial services market, and put these two things together, it’s hard not to worry that some firms may be unintentionally straying into a ‘big data’ ethical minefield. I think it would be a brave compliance person who scored the likelihood of ethical risks from big data as ‘low’.
Some positive signs that ethical risks associated with big data are on the sector’s radar have emerged in recent months, such as this speech by the ABI’s Huw Evans (I talk about its mixed message in this blog). And a few months ago, I began looking for other signs of how the market might be addressing these ethical risks. I contacted six insurers with a substantial presence in the UK and asked to see their corporate policies on equality.
My thinking was this: every UK insurer will undoubtedly have taken steps to ensure that equality is a watchword in managing employees, and in how those employees deal with individual customers. But I wanted to look in their corporate policies for signs that the firm recognised the importance of equality when it came to consumers overall. Customer contact people will be well versed in equality issues when dealing with this or that customer, but how are all those behind-the-scenes people in underwriting and claims taking account of equality issues in their sifting and sorting of all that big data?
An equality policy may give only an incomplete picture of how insurers are approaching this big ethical issue, but I think it’s a picture worth taking a look at.
The response I received from the six insurers was mixed. None of them had put their equality policy online, despite the law on equality covering a firm’s engagement with both the public and its employees. So when I emailed these six insurers (following up with a phone call when requested), here’s what came out:
Insurer 1… provided their equality policy, which was written around its commitment to employees, but which made passing references to a more diverse and inclusive workforce being good for customers as well.
Insurer 2… wouldn’t release their equality policy and referred me to the diversity part of their corporate responsibility webpages. That was very much written around employees, but did make passing reference to equality both internal and external to the firm.
Insurer 3… wouldn’t release their equality policy to me.
Insurer 4… provided me with relevant policies, which were written around its commitment to employees, but which made passing references to a more diverse and inclusive workforce delivering better ideas, service and products for customers.
Insurer 5… didn’t provide me with their equality policy, but did talk me through a number of ways in which they were weighing up the wider implications of big data for their business. While they were aware that it had both an upside and downside for their reputation, they hadn’t yet taken that thinking through into policy or project initiatives.
Insurer 6… didn’t provide me with their equality policy, but told me about some projects to ensure equality is built into how individual customers are handled.
So what does this add up to? Well, it shows the sector is still largely in the mindset of equating equality with employees. These particular insurers recognise that a diverse and inclusive staff benefits customers, but phrase that benefit as a spin-off from their focus on employees. Clearly, more needs to be done to frame equality policies so that they reflect the wider responsibilities of firms.
At the same time, what this does not add up to is a lack of concern about discrimination. People I speak with talk about their firm being wholeheartedly against it. And that’s great, but it’s not enough. That commitment has to break out of the ‘equality equals employees’ mindset and ensure that the firm’s big data programme is not inadvertently introducing discrimination through the back door (more on this in a forthcoming blog). That, after all, is what leadership on ethics involves: ensuring that the decisions people make are in line with the firm’s values, be it in how an individual’s claim is dealt with, or in how its analytics sort and segment communities and society.
Given that over the past few years, people I know working on data projects at UK insurers have likened them to a ‘massive gold rush’, this overall response sits alongside other signs of a sector that is focussing very much on the opportunities, but paying inadequate attention to the risks and side effects. The ‘gold rush’ metaphor is a telling one, likening data to a natural resource ripe for exploitation in the name of economic growth and private gain.
The next few years are important. As insurers go live with technologies for handling all that data flowing from the ‘internet of things’, there’s still an opportunity for them to develop principles that capture the promise of big data without losing sight of important social values.
As I mentioned at the outset, big data offers the insurance market a unique opportunity to forge a new type of relationship with consumers, one that delivers innovative new products and services within a renewed framework of engagement and trust. However, for that trust to become a reality, attention has to be paid to ethical values like privacy, identity, fairness and equality. That’s because these are long established social values that the public continue to hold dear.
At the moment, attention to those ethical values is barely emerging out of insurers’ big data gold rush. More needs to be done, both by individual insurers and the sector overall, to raise awareness, debate the issues and frame a response. Let’s get on with it.
Duncan is the founder of the Ethics and Insurance blog and the author of its many posts. He's a Chartered Insurance Practitioner, having worked 18 years in the UK market. As an adviser to many firms on ethics issues, as well as a regular conference speaker, he is one of the leading voices on ethics and insurance.