How Rules Reflect the Ethical Concerns of Consumers
In the main, rules relating to conduct, behaviour and decisions exist to form boundaries that delineate what is acceptable from what is not. And firms benefit from this – it shows them what they can and can’t do.
This is especially true when a development like artificial intelligence starts to raise a myriad of questions, leaving firms unsure of what is allowed or how grey areas should be approached. And clearly, the more complex the development, the more ‘developed’ the rule set needed to deal with it.
This is sometimes referred to as a constraint on business, adding to the expense and complexity of delivering a product or service. Rules may well increase operational costs, but they can just as well reduce the uncertainty that investors have about the firm’s exposure to the issues at hand. So, in the round, for firms, rules tend to have both an upside and a downside.
There are of course good rules and bad rules. I won’t give examples, but there have clearly been occasions when a new rule set didn’t deliver its expected outcomes. Yet if you think about it, do all firms deliver their expected outcomes for investors? It feels like firms and rules have something in common.
While this initial point is about rules not necessarily being bad per se, my main point in this article is to encourage firms to think about rules in a more ethics-related way. And the EU AI Act is a good example around which to make this point.
Rights and Narratives
The AI Act exists for two fundamental reasons. One is the increased prevalence of artificial intelligence across many walks of life and work, with more promised to be on the way. The other is the growing range of questions being raised about it.
You’ll be familiar with the first of those two fundamental reasons. The second becomes evident as you read through the documentation that fed into the Act’s development. Through various routes, and in various styles, questions were raised with policy makers that directly shaped the final wording of the Act.
An example that should be of interest to insurers is the ban on social scoring. During 2023, it underwent a number of redrafts, each of which widened the scope of the ban. What started out as a ban relating to counter-fraud activities by public bodies ended up covering any form of social scoring by any form of organisation.
Those redrafts were powered by the evidence and narratives of civil society groups wanting to ensure that artificial intelligence did not undermine the fundamental rights of citizens. Examples of such rights are fairness, autonomy and equality.
Where Rules Come From
What I’m saying here, then, is that what ended up as rules began as the ethical concerns of civil society. So rules may be rules, but they are also the ethical concerns of your customers, in this case about how firms like yours use artificial intelligence. They are the legislated voice of your concerned customers. This makes them worth listening to. After all, if you don’t account for what your customers are concerned about, where are you heading?
On one level, this may seem, to use an old phrase, ‘bleeding obvious’. Yet at the same time, it seems worth raising, on account of the numerous occasions when developments like the AI Act are discussed with little or no reference to consumers.
Are policymakers the true and representative voice of consumers? There are certainly occasions when doubts can be raised about that, but by and large, I think it’s fair to see them as ‘taking on board that voice’.
More Rules Then?
This then raises the obvious question: are more rules therefore a good thing for consumers? A number of factors influence how this might be answered.
Firstly, consumers are no more fans of unnecessary rules than firms are.
Secondly, a good proportion of the ethical concerns of consumers are down not to the absence of rules, but to the ineffective application and policing of existing rules.
Thirdly, another good proportion of rules exist to close off work-arounds being used to circumvent the intent of existing rules. These would be unnecessary if firms worked more to the spirit of the law than just the letter.
And fourthly, some existing rules become out of date due to technological advances.
What might then be called ‘new new’ rules are less common than people think.
Organised Disruption
New rules can be disruptive for business. For sure, they come with implementation lead time, but even with two to three years’ preparation, they can still devour a lot of time and resource. This matters when sectors are evolving at some pace, as insurance seems to be at the moment.
This reminds me of the orienteering I used to do a lot of. I was never a quick runner, but I was a good navigator. I learnt that faster runners may seem to be making progress, but that it wasn’t always in the right direction (so don’t follow them). Many a time I arrived at the next control at the same time as the fast runner who had bounced all over the place on their way there. So, evolving at pace is important, but rules help ensure you’re going in the right direction.
Another way to look at rules is that they are a form of organised disruption. The new rules give you the framework and the firm delivers around it. Disorganised disruptions come with no framework and little preparation time. They often happen against the background of a culture fashioned around the defence of a certain practice. The disruption just isn’t seen before it lands.
Radar Needed
So what sort of radar works for sensing out the emerging situations of organised disruption? The problem here is that to predict legislative developments, firms often turn to law firms. After all, who knows more about the law than they do?
That’s seeing such situations the wrong way round. If firms want to understand what rules might be in the making, the people to listen to are those closest to the ethical concerns being raised by civil society. They are not the people at law firms. They are consumer groups for the raw material, ethics people like me who translate that raw material into sector-relevant implications, and academics who often construct the frameworks that help shape the eventual rules.
I’m going to beat my own drum for a minute with the example of the ban on social scoring set out in the EU AI Act. Many years ago, I read various academic papers on social scoring and understood almost immediately the implications for insurers. Note that none of those papers were about China; all were about data and analytics.
That’s why I wrote this article back in 2014 – ‘Social sorting: could it be the stuff of nightmares for insurance firms?’ That was ten years ago, long before the AI Act was even a twinkle in the eye of an EU policy maker.
To Sum Up
Let’s be blunt. Rules may be rules, but they begin as social signals about ethical concerns. This matters because it allows firms to recognise them earlier, and because it shines a light on what matters to consumers. The better your ethical radar, the easier it is to fit in with those new rules and the more customer-centric your firm becomes.
This, of course, only delivers value if your firm wants to be customer-centric. That, however, is something for another article.