The Problem with Most Responsible AI Initiatives
It’s not surprising, perhaps, that much of what is being done around responsible AI at the moment is focused on the technical. That’s often the case in the early surge of any new initiative. It’s clearer, less contested, and people working on AI feel there are fixes they can readily apply.
Alongside this, some people will be thinking: if artificial intelligence is just a piece of technology, then shouldn’t ‘responsible AI’ focus on the technical fixes that make that technology align with the responsibilities being emphasised? This is a good point, but an incomplete one.
Don’t get me wrong – there’s nothing wrong with applying these types of technical fixes to AI. OK, a little bit of me thinks that some of these fixes should have been applied a lot earlier, perhaps even from the outset. Let’s put that grump aside though and focus on the key point of this article: are the technical fixes currently underway around ‘responsible AI’ enough? If an insurer concentrates on those technical fixes, have they got their scoping right; are they on course for ‘job done’? Or is there something more?
There is More
I believe there is. And that ‘something more’ is very important. I’m going to explore it through a very interesting paper by Professor Barbara Kiviat of Stanford University. The paper’s title may not immediately grab you, but what her research explores can be of great help to insurers framing the scope and depth of their responsible use of AI. All the quotes here are from her paper.
A key attribute of algorithms is that they work “…by comparing people to one another, by detecting patterns across individuals. To do so, they necessarily rely on rendering people as cases—as discrete entities with particular attributes…” And this of course is inevitable. An algorithm is just a mathematical computation and only works with numbers that have been assigned to those attributes.
At the same time, “this way of organizing cognition about people runs counter to a fundamentally different but no less prevalent way of understanding individuals, what they have done, and what they are likely to do in the future: as actors in the unfolding narratives of their lives.”
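To make the ‘rendering people as cases’ idea concrete, here is a minimal, hypothetical sketch in Python. The attributes, weights and scoring rule are all invented for illustration; the point is simply that once a person is reduced to a fixed set of numbers, the only thing a model can do is compare those numbers across individuals.

```python
# A minimal, hypothetical sketch of 'rendering people as cases':
# each person is reduced to a fixed set of numeric attributes,
# and the model can only compare those numbers across individuals.

from dataclasses import dataclass

@dataclass
class Case:
    age: int                 # the person, reduced to...
    purchases_last_30d: int  # ...a handful of attributes;
    postcode_risk_band: int  # context, intent and narrative are absent

def likelihood_score(case: Case, weights: dict) -> float:
    """Illustrative scoring: a weighted sum of attributes, nothing more.

    The output only says how this case compares with other cases;
    it says nothing about the unfolding story behind the numbers.
    """
    return (weights["age"] * case.age
            + weights["purchases"] * case.purchases_last_30d
            + weights["postcode"] * case.postcode_risk_band)

# Two people with very different life stories become directly
# comparable, because only their attributes are visible to the model.
weights = {"age": 0.1, "purchases": 0.5, "postcode": 0.3}
print(likelihood_score(Case(29, 14, 2), weights))
print(likelihood_score(Case(61, 2, 4), weights))
```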
Example: Target
Let’s look at a few examples of these two ways of thinking about people. Take the famous Target case from the early 2010s. Target’s analytics tracked shoppers who were very likely to be pregnant, and sent them money-off coupons to encourage them back into its stores. The algorithm had used certain data points to render the shopper in this case as someone with a certain percentage likelihood of being pregnant. The whole process was less about her and more about her data, but for her, it resulted in those coupons arriving.
The problem was that she knew she was pregnant, but it was early days and she’d decided not to tell anyone about it at that point. That was blown away by those coupons landing on her doormat. This was the narrative side of her story, involving judgements, autonomy, social contexts, decisions and actions.
The Target algorithm seriously undermined her agency, in terms of what, how, when and whether she shared her sensitive personal information with other people. In contrast, a narrative way of rendering people brings that agency to the foreground and would reflect how her situation unfolded over time and the context in which it happened. In the end, the situation became a media disaster for Target, who were roundly condemned in the court of public opinion and forced to apologise.
Example: Fraud Scores
Another example, this time from insurance, could involve the use of fraud scores. These ascribe to a consumer a likelihood of acting fraudulently. They rely entirely on rendering consumers as cases, to be scored, compared and judged for their fraud potential. As a result, premiums may move higher, or lower, or disappear altogether.
Yet what happens when a claim is made? The danger is that the statistical fraud score influences the loss settlement. The actual person and the context around their loss are minimised; the fraud score takes precedence. After all, it’s less expensive and if they have a problem, they can always complain, goes the narrative.
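A hypothetical sketch of that danger, again in Python, may help. The threshold values and routing outcomes are invented for illustration; what matters is that a purely numeric score steers the handling of a claim before anyone has looked at the claimant’s actual story.

```python
# Hypothetical sketch: a statistical fraud score, built from case
# attributes alone, steering how a claim is handled. Thresholds and
# outcomes are illustrative, not any insurer's real process.

def route_claim(fraud_score: float, claim_amount: float) -> str:
    """Illustrative routing rule based only on the numeric score.

    Nothing here knows the context of the loss; the same score
    produces the same treatment whatever the narrative behind it.
    """
    if fraud_score >= 0.8:
        return "refer to special investigations"
    if fraud_score >= 0.5:
        return "negotiate a reduced settlement"
    return f"settle in full ({claim_amount:.2f})"

# Two claimants with identical scores get identical treatment,
# even if their circumstances are entirely different.
print(route_claim(0.55, 1200.00))
print(route_claim(0.55, 4800.00))
```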
The Moral Dimension
OK, so there are two ways of looking at something. Nothing new in that. Yet there’s more to this than simply comparing two different approaches, one numbers-based and the other narrative-based. I’ll explain the implications here with further points from Professor Kiviat’s paper.
Rendering someone as a case means slotting them into discrete, regularised and atemporal categories (atemporal meaning ignoring the time dimension). And in so doing, you organise cognition about them. And because of how you organise your thinking about them, you funnel your moral judgements about them in particular directions. One just has to look at the labels that data brokers assign to people with particular combinations of attributes to see this taking place.
So why does this happen? It happens because data organises one’s view towards a single meaning that can itself then be reused by another part of a firm’s analytics (those fraud scores influence your quote as well as your claim). When instead you frame someone in a narrative way, you use not one point of meaning but a multitude of such points: in essence, a web of meaning. And in that web of meaning, context becomes king; time becomes key; emotions and intentions rise to the surface.
These aspects are much better at surfacing why things happen. That’s because narratives are much better at revealing someone’s inner mental life. And this helps assign responsibility and blame in more accurately representative ways. This then funnels your moral assessment in ways often quite unlike those from ‘having rendered someone as a case’.
Consequences and Justice
In short, the way in which you choose to understand someone influences the moral judgements made about them, and from those judgements, the decisions you take about them. These are what researchers call the ‘material consequences of representational practices’.
Think of it this way. If you look for meaning in data and algorithms, you orientate your thinking around comparative notions of justice. So for example, you’re a higher risk than that person, so you should pay a higher premium – what is called the fairness of merit.
If, however, where you look for meaning includes narratives as well, you orientate your thinking to include non-comparative forms of justice. So for example, your economic position makes annual premium payments difficult, forcing you to pay through instalment plans that are disproportionately expensive for someone on a low wage. This is a story about fairness of access.
Why This Matters
This difference matters, and insurers need to understand it and learn how to handle it. Here are a few examples to explain why.
In the UK, a variety of issues brought together as ‘the poverty premium’ have steadily moved up policy makers’ agendas. This resulted this month in the Government telling the regulator that it ‘should have regard to reinforcing financial inclusion’ as part of its remit, something the regulator had long resisted.
What you had here was a narrative around people suffering various forms of financial exclusion, supported by relevant analysis, capturing policy makers’ attention and influencing Government policy. Insurers lost influence because their case was too orientated around comparative justice.
In the US earlier this year, a court ruled that insurers’ ‘objective actuarial data’ did not take precedence over the requirements of the federal Fair Housing Act (which contains obligations not to discriminate unfairly). In other words, the comparative justice in that objective actuarial data did not take precedence over the non-comparative justice in the many narratives around discriminatory practices that the Fair Housing Act seeks to tackle. In my words, and in this case, the fairness of access and fairness of need arguments came before fairness of merit arguments.
Tactically, insurance lobbyists nailed their colours to the comparative justice mast and in so doing, lost their case. Many people would have seen this as an obvious outcome, so why didn’t insurers? Because their culture was orientated around the rendering of people as cases through objective actuarial data. It was not enough.
In the EU over the last 15 years, several directives have positioned fundamental rights ahead of the insurer’s right to underwrite. In the language of Professor Kiviat’s paper, non-comparative justice was given greater weight than the comparative justice favoured by insurers.
What I hope these examples illustrate is that this is not a regional thing, nor a recent thing. It has been a progressively building trend across regions, over a decade or two, to the point now that it feels almost like the default position being taken by policy makers.
Two Sides of the Coin
That reference a few paragraphs above to ‘not enough’ was deliberate. No one is saying that the rendering of people as cases is wrong and should be stopped. Governments, firms and organisations do it all the time. The key point is that it is only one side of the coin. On the other side of the coin are the narratives that insurers need to tune into, because those narratives are capturing the attention of policy makers and influencing the scope and depth of legislation and regulations.
A final example to illustrate this. The FCA analysed household inception and renewal pricing in 2015 and in the main, saw nothing wrong with what their detailed analysis uncovered. Then jump forward to 2018 and Citizens Advice’s argument that price walking was wrong. What I was seeing at the time was lots of analysis at the FCA that concluded ‘no problem’ and enormous amounts of customer engagement at Citizens Advice that concluded ‘big problem’.
In the language of Professor Kiviat’s paper, the FCA were looking at the situation in terms of comparative justice, while Citizens Advice was looking at the situation in terms of non-comparative justice. In short, their case was that price walking was just not fair. Ultimately, the FCA had to agree. It learnt to look at the situation through both the lens of comparative justice and the lens of non-comparative justice.
And insurers must learn to do so too. Here are a couple of steps for them to consider.
Two Things for Insurers to Do
In addressing ‘responsible AI’, it is important that insurers encompass both forms of fairness and justice: the comparative sort and the non-comparative sort. Together, the two give you the full picture. The former on its own gives you only half of it.
In public policy terms, insurers need to engage with those arguing for greater fairness in insurance by reference to non-comparative forms of justice. At the moment, those non-comparative forms are capturing policy makers’ attention far more effectively than insurance lobbyists are. Insurers need to tune into them, and not just as some form of ESG campaign.
Why bother, some of you may ask. Because issues around fairness are going to be bubbling around insurance for a good many years and if insurers want to actually be heard, they need to evolve how they shape their approach. For sure, this is likely to then mean they’ll have to give greater recognition to the fairness arguments that those other stakeholders are presenting. That is, I would say, inevitable. Many insurers have yet to grasp that though.
To Sum Up
Fairness is the big social justice issue facing 21st century insurance. To understand it and engage with others on it, insurers need to recognise that how they position data and the insight drawn from it will have consequences. The datified ‘rendering of people as cases’ approach needs to be balanced with the ‘understanding people through narrative’ approach. This will lead to better decision making, more effective engagement and more understanding from policy makers. All of which are central to the future of insurance.
If this doesn’t happen, the future of insurance will hit a series of legislative and regulatory brick walls erected to channel the sector towards what the public expect from insurers. It needn't be that way.