Self-driving cars: a classic ethical question for insurers

Self-driving cars will transform personal travel and, in doing so, will pose some interesting questions for insurers. One question that insurers seem not to have addressed so far concerns the ethical issues raised by self-driving cars. One particular ethical issue could have a significant influence on the liability exposures these cars present.

Picture yourself relaxing in the back of a self-driving car. You’ve just dropped off your son, who has run off along the pavement ahead of you. Your car pulls out and accelerates, but suddenly six cyclists veer into its path. A collision is imminent and your self-driving car’s computer has to make a split-second decision. Should the car swerve out of the way of the cyclists, so saving their lives, but in doing so mount the pavement and kill your son? Or should it carry on and plough into the cyclists, so saving your son’s life?

Remember that the decision is not yours: it’s to be taken by your self-driving car’s computer. Should the computer be programmed to reduce the overall number of casualties (and so avoid the cyclists, but kill your son), or should it be programmed to put your interests first (and so collide with the cyclists)?

Classic ethical scenario

Some of you will recognise this as one of the classic scenarios used to stir debate in philosophy and ethics. It illustrates two ethical positions: utilitarianism and deontology. The former would say ‘swerve’, for six lives are saved at the cost of one. The latter would say ‘carry on’, for your interests are being put first.
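To make that concrete, here is a deliberately crude sketch, in Python, of how an ethical stance could end up as an explicit parameter in a car’s collision logic. Everything in it (the function names, inputs and policy labels) is invented purely for illustration; it describes no manufacturer’s actual code.

```python
# A toy illustration: the ethical stance is just a parameter that a
# human programmer chose, not a law of nature.

UTILITARIAN = "utilitarian"       # minimise total casualties
DEONTOLOGICAL = "deontological"   # put the occupant's interests first

def choose_manoeuvre(casualties_if_swerve, casualties_if_continue, policy):
    """Return 'swerve' or 'continue' when a collision is unavoidable.

    The casualty figures are the car's estimates for each option;
    the policy is the ethical stance someone decided to program in.
    """
    if policy == UTILITARIAN:
        # Minimise the overall number of casualties, whoever they are.
        if casualties_if_swerve < casualties_if_continue:
            return "swerve"
        return "continue"
    # Deontological, as framed in this scenario: protect the
    # occupant's interests, so stay on course.
    return "continue"

# The scenario above: swerving kills one (your son), carrying on kills six.
print(choose_manoeuvre(1, 6, UTILITARIAN))    # -> swerve
print(choose_manoeuvre(1, 6, DEONTOLOGICAL))  # -> continue
```

The point is not that anyone writes code this crude, but that a choice like that policy parameter has to exist somewhere in the software, whether stated this baldly or buried deep in the design.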

The purely financial implications for insurers are clear: a self-driving car programmed according to utilitarian ethics will carry a lower liability exposure than one programmed according to deontological ethics. Will we see insurers turning to philosophers for help in deciding which car models fall into which rating categories?

Programmed by humans

The key point here, though, is not the employment prospects of philosophers, but the recognition that all those algorithms underpinning the decisions made by self-driving cars will be programmed by human beings full of opinions, preferences and prejudices. Their subjectivity will influence the decisions your self-driving car takes.

And as the big data supporting those decisions builds, so will the complexity that those algorithms have to handle. So, for example, if the six cyclists were wearing health-tracking devices that told your self-driving car’s computer that they were all octogenarians, should it still swerve into the path of your only child?
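Returning to the toy sketch above, that extra data could surface as nothing more than a human-chosen weighting. Again, the weighting scheme below is entirely hypothetical; the point is that someone has to pick it.

```python
def weighted_casualties(people, weight):
    """Sum casualty 'weights' rather than simply counting heads.

    people is a list of dicts like {"age": 82}; weight is a function
    that a human programmer chose. Both are invented for illustration.
    """
    return sum(weight(person) for person in people)

# One hypothetical (and ethically loaded) weighting choice: do older
# lives count for less? Here a programmer has decided they do.
def age_discounted(person):
    return 0.5 if person["age"] >= 80 else 1.0

cyclists = [{"age": 82}] * 6   # six octogenarian cyclists
son = [{"age": 7}]             # your only child on the pavement

print(weighted_casualties(son, age_discounted))       # 1.0
print(weighted_casualties(cyclists, age_discounted))  # 3.0
```

With these particular weights, swerving still ‘costs’ less, but change the discount and the decision can flip. The ethics live in the weights, and the weights are chosen by people.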

The permutations are endless, but one dimension is fixed: insurers using big data for underwriting and claims decisions need to recognise that choices are going to be embedded in those algorithms, and that those choices often have an ethical dimension. That dimension needs to reflect both the values of the insurer and the regulatory framework it operates within. Simply saying, as some insurers now do, that ‘it was the data that made the decision’ will not hold water.


About the Author: Duncan Minty

Duncan is the founder of the Ethics and Insurance blog and the author of its many posts. He's a Chartered Insurance Practitioner, having worked 18 years in the UK market. As an adviser to many firms on ethics issues, as well as a regular conference speaker, he is one of the leading voices on ethics and insurance.
