The insurance sector is on the cusp of fundamental change. And at the heart of that change is the sector’s use of data, primarily for the underwriting of the risks we present. Consumers are told that this will allow insurers to offer them ever more personalised products and services. There’s a lot to be said for that, but like many things, there are two sides to it. In this post, I want to explore the implications of what I’m calling the atomisation of insurance.
A powerful argument for ever more personalised underwriting lies in what insurance people refer to as adverse selection. An underwriter whose premiums are based more on a pooled rate than a personalised rate will find high risks attracted to her portfolio, and low risks leaving it, relative to a competing underwriter offering a more personalised rate. Personalising rates is seen as only fair, and it then seems just as fair to personalise them that little bit more.
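The mechanism can be made concrete with a toy model. Everything here is an illustrative assumption, not market data: two risk classes, a pooled insurer charging one average rate, and a personalised competitor charging each driver their own expected cost.

```python
# Toy illustration of adverse selection. One insurer charges a single
# pooled rate; a competitor charges each policyholder their individual
# expected claim cost. Low risks migrate to the cheaper personalised
# rate, leaving the pooled book loss-making. All figures are invented.

def expected_loss(risks):
    """Average annual expected claim cost across a book of policyholders."""
    return sum(risks) / len(risks)

# A book with equal numbers of low-risk (200/yr) and high-risk (800/yr) drivers.
book = [200] * 50 + [800] * 50

pooled_premium = expected_loss(book)      # 500 for everyone
# The personalised competitor simply charges each driver their expected cost.

# Low risks leave (their own cost of 200 beats the pooled 500);
# high risks stay (the pooled 500 beats their own cost of 800).
remaining = [r for r in book if r > pooled_premium]

print(pooled_premium)            # 500.0
print(expected_loss(remaining))  # 800.0 -- the pooled book is now all high risk
```

The pooled insurer's premium of 500 now sits well below the 800 expected cost of the book it retains, which is the pressure that pushes every underwriter toward that little bit more personalisation.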
As opportunities to enhance the personalisation of your underwriting increase (from all sorts of big data that suppliers are offering), so do the pressures to personalise your rates that little bit more. Ultimately this becomes a never-ending process until, of course, you end up with one-to-one underwriting. Personalisation becomes individualisation: a unique premium rate for each policyholder.
Yet even then, we’re still looking at this through a traditional lens. Adverse selection would still be taking place along the dimension of time. Why individualise a person’s premium while still offering them an annual renewal? This then would drive premium adjustments towards a monthly and then weekly basis. And who’s then to argue against daily adjustments? Why not hourly? After all, some would see it as unfair not to. The internal logic of adverse selection then seems to become unstoppable, especially (and here’s the vital link) in a big data world in which firms have the opportunity to do something about it.
Think of telematics and the constant stream of driving data flowing from the black box in your car to the motor underwriter. Why total up its premium consequences in chunks rather than byte by byte? Delay starts to look like an anachronism.
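What event-level pricing might look like can be sketched as follows. The event types, rates, and surcharges here are hypothetical assumptions for illustration, not any insurer's actual model:

```python
# Hypothetical sketch of continuous, event-level premium accrual from a
# telematics data stream. Each driving event adjusts the running premium
# immediately, rather than waiting for an annual renewal.
# All rates and event types are invented.

BASE_RATE_PER_KM = 0.02   # assumed base cost per kilometre driven
EVENT_SURCHARGE = {       # assumed per-incident surcharges
    "harsh_braking": 0.50,
    "speeding": 1.00,
}

def accrue(premium, event):
    """Fold one telematics event into the running premium total."""
    kind, value = event
    if kind == "km_driven":
        return premium + BASE_RATE_PER_KM * value
    return premium + EVENT_SURCHARGE.get(kind, 0.0) * value

# A short stream of (event type, quantity) pairs from the black box.
stream = [
    ("km_driven", 30),
    ("harsh_braking", 2),
    ("km_driven", 12),
    ("speeding", 1),
]

premium = 0.0
for event in stream:
    premium = accrue(premium, event)

print(round(premium, 2))  # 2.84
```

The point of the sketch is the shape of the loop, not the numbers: once pricing is a fold over an event stream, there is no natural place for an annual, monthly, or even daily boundary.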
Add to this an argument developing in insurance circles that anyone who doesn’t give underwriters unfettered access to their data must of course be a higher risk, and so warrant a higher premium. So there are predictions of the insurance market splitting into those who free up their data and get individualised premiums, and those who hold back their data (or don’t generate enough of it) and are automatically charged more on the assumption that they are higher risk. That sounds like a vulnerability and consent nightmare.
These sorts of developments can only be fulfilled in an era of machine-driven underwriting. The human touch will be gone from all but a minute handful of policies. And even then, that handful may simply be offered a renewal price designed to move them off the insurer’s books. After all (goes the argument), why incur an expense whose tab low-risk policyholders would have to pick up?
This is not something unique to insurance. Many business sectors (and particularly financial ones) are on a micro-temporal trend, moving from the macro to the micro, even to the nano. While the change seems inevitable, it could nevertheless have consequences for insurance that are unique.
So what sort of market will emerge out of what I’m calling the atomisation of insurance? Here are four features of the insurance landscape of the future.
So where do we go from here? Research – absolutely. Inclusive debate – double absolutely. Will it happen – that only gets a question mark at the moment. Is it now time then for the insuretech debate to enter a new maturity?