ChatGPT has become the most talked-about digital development of the 2020s to date. Large numbers of people have been asking it searching questions and been amazed by the results. They are finding that it produces well-structured responses to questions about complex situations, and revolutions in customer engagement and service delivery are being talked about.
The one big, obvious opportunity that ChatGPT offers for ethics and insurance is simply that anyone can ask it questions about the topic, tailored to their own interests and objectives. That allows many more people to learn something quickly about the ethical side of their work in insurance. Such accessibility can quickly tune a lot of people into the topic, which is great.
Yet accessibility must always be weighed alongside other factors. Take reliability, for instance. Is what ChatGPT tells you something that you can rely on? Given how wide-ranging the questions asked of it have been, does it have the same scope to deliver output that you can actually act upon?
Who knows? The firm may be called OpenAI, but it has revealed little about how the model was trained, upon what data, and how it actually analyses that data. This makes it not a million miles away from being a black box technology.
Yet what if it were so good that this became less important? That's the wrong question to ask. How good an answer is rests in large part upon how easily others can reproduce it. If they can't, then 'good' is not the right way to describe its output.
Hallucinating and Beyond
Take its habit of 'hallucinating': in other words, introducing a fake fact into its output. Scientists testing GPT-4 have found that it would skip a step or introduce a false fact in order to complete its output. Its answers looked convincing, but it was prepared to make errors in order to produce a more polished result. Not a good idea when it comes to something like chemical analysis or drug development.
Why is this so? In large part because its analysis of previous word relationships has signalled a benefit in doing so. It has no capability to decide whether doing so is right or wrong. It simply looks at what it has found in the past and replicates the action based upon a statistical analysis. The problem for users is that there is little in fields like business and science that doesn't have a moral dimension. And quite obviously, combining its black box nature with its willingness to omit inconvenient details makes its reliability questionable.
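To make that 'statistical analysis' point concrete, here is a minimal sketch in Python. It is purely a toy illustration under my own assumptions (a tiny made-up training text and a simple bigram count), not a description of how OpenAI actually builds its models: it chooses the next word only by how often it has followed the previous word before, with no sense of whether the result is true.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram 'model' that predicts the next word purely
# from how often word pairs appeared in its (made-up) training text.
training_text = (
    "insurers assess risk fairly insurers price risk carefully "
    "insurers assess claims quickly"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Pick the next word in proportion to past frequency - nothing more."""
    options = follows[word]
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

print(predict_next("insurers"))  # e.g. 'assess' or 'price', chosen by frequency alone
```

The sketch has no notion of 'correct', only of 'frequent', which is exactly the limitation described above.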
Am I being pernickety? Not when you consider the case of Google Bard, the recent launch of which ended up wiping $100 billion off the firm's market value. The launch demo of Google's competitor to ChatGPT featured a simple question about astronomy, which it answered incorrectly. No one at Google noticed, until social media erupted in derision.
Perpetuating Bias
It could also make it dangerous. ChatGPT feeds upon what has already been spoken or written about something. So if that earlier output includes views, statements and data that perpetuate long-standing injustices and biases, then clearly ChatGPT will perpetuate them too. And if too few people know how it does this, or where it learnt it, then that perpetuation will be difficult to identify and challenge.
Let’s move on to consider its capabilities around ethics and insurance. I’ll take a rather circuitous route, so please bear with me. Let’s start with mobile apps for identifying plants. You point your phone’s camera at a plant and the app tells you what it is. Well, it will, so long as the plant has actually been tagged (let’s assume correctly). And the people who do most of the tagging are those with an economic interest – the sellers of such plants.
The result is that these apps are very good at recognising images of plants that are for sale, and very poor at recognising wild plants, because those aren’t on sale and so haven’t been tagged.
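Here is a minimal sketch of that tagging bias, again in Python and with entirely hypothetical plant descriptions and labels of my own invention: the 'app' can only return labels that exist in its seller-tagged catalogue, so a wild plant comes back unrecognised.

```python
# Toy illustration of training-data bias (hypothetical data, not any real app):
# the 'model' only knows the plants that sellers had an incentive to tag,
# so anything outside that catalogue goes unrecognised.
tagged_by_sellers = {
    "glossy leaves, red bracts": "Poinsettia",      # widely sold, heavily tagged
    "trailing vine, heart leaves": "Philodendron",  # widely sold, heavily tagged
}

def identify(plant_features: str) -> str:
    # The app can only echo back what was in its (biased) training data.
    return tagged_by_sellers.get(plant_features, "Unknown - never tagged")

print(identify("glossy leaves, red bracts"))     # Poinsettia
print(identify("small white umbels, hedgerow"))  # Unknown - never tagged (a wild plant)
```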
Knowledge of a Kind
I’ve raised this to illustrate a key point about ChatGPT: it is good at telling us what’s been said before. It’s knowledgeable about popular, pre-packaged knowledge. It’s poor at understanding the underlying nuances of why ‘such and such’ is so. And remember, it only draws upon things that were written or said before September 2021.
Just like other forms of plagiarism, ChatGPT lacks intelligence about what it is outputting. It justifies what it does by what someone has described as the ‘just following orders’ defence – I’m doing this because others have done it before me. It is both knowledgeable and ignorant, in that it knows things, but doesn’t actually know what they mean or why it is saying them.
It may be able to tell you things about the ethical dimension of insurance, but it will not know what they mean. It may tell you the ‘how’ but is blank on the ‘why’. And if there’s one topic for which the ‘why’ is central, it is ethics. Its strength in word association shapes its weakness in understanding.
The many people at recent insurance conferences who have attended ChatGPT sessions on ethics and insurance will end up knowing more about the topic than they did before. They cannot, however, rely on what they’ve heard to shape, for example, their firm’s data ethics strategy. They cannot trust it to provide a balanced and comprehensive overview of ethical risks. And they can never be sure that a fact hasn’t been omitted, or a hallucination introduced, to make the argument more convincing.
Interesting versus Intelligent
In short, it’s fun, it’s interesting and of course it’s good at introducing you to a subject like ethics and insurance. Beyond that, you need intelligence and insight to actually achieve the outcomes for which you are engaging with ‘ethics and insurance’ in the first place. ChatGPT cannot provide that.
Remember as well that one of the four pillars of trustworthiness is goodwill. This addresses the reasons why you are building trustworthiness in the first place. Wanting to be trusted can never be separated from why that trust is wanted. That is something insurers have to work out for themselves.