Chad Karls, principal and consulting actuary at Milliman, talks about the importance of data — and new ways for insurers to make the best use of it
What technology trends do you expect to shape the insurance industry in the near future?
The two that are going to continue to emerge and become increasingly important are, first, the application of artificial intelligence and, in particular, machine learning.
Computer algorithms are able to create data that previously didn't exist. I think that's a tremendous opportunity for the insurance industry.
And the idea that we can extract some intelligence out of various sources of non-traditional and unstructured information and turn it into data, I think that's going to be increasingly important.
Then the other part of that is what do you do with that data? So, the second trend that's going to continue to gain momentum is the whole concept of predictive analytics.
Its importance is going to grow with the idea that we can build predictive models that better replicate and model the real world.
So far, this has been used mostly on the underwriting side, but I think there's a tremendous opportunity on the claims side.
Why is data so important for insurers currently?
Data is the raw material for intelligence. You have all this data and you want to put it together and you want to extract the intelligence from it. The industry has more data available now and different types of it.
There's a reason why Major League Baseball teams are playing defence differently. Not because they thought it would be interesting, but because the data told them to do so. And it definitely works. The same can be applied in the insurance industry. Really listen to your data, and you'll be able to gain some competitive advantages that others won't.
Data is also important from the claims side.
Claims folks will often say every case is unique and that's certainly true. But when we as an industry settle hundreds of thousands of claims every year, I'm confident we can learn things from those experiences and I'm confident we can extract data from those claims and identify best practices.
Is there a best practice when it comes to writing say a motion for summary judgement? We as an industry write hundreds of thousands of those every year and the fact is some insurers, and in particular some defence counsel, have a better success rate at getting those granted than others.
Can we learn from those and maybe develop some best practices around that, for example? That's just one small example, but I think it replicates itself many times over on the claims side of the house.
What are the key non-technological challenges you're looking to help insurers with now?
For a lot of people on the claims side, embracing technology may be outside their comfort zone. From a non-technical perspective, we're trying to help them gain confidence in their data and listen to it, and to not allow recent or outlier experiences to overly influence their decisions.
Back to my baseball analogy, sometimes when the defence shifts, the batter will hit the ball where the defence would have been traditionally and instead of getting the batter out he gets a hit. That happens. But that's the outlier. That's not what happens most of the time. We're trying to help clients and instil in them the confidence to listen to their data so that they can make better decisions over the long term.
Did you make any upgrades or improvements to your technology in the last 12 months?
We've continued to expand the data sets that we're looking at. And we have deployed some additional machine learning based algorithms on the claim notes to try to extract some additional intelligence out of them. The idea is to figure out which of that information is predictive of claims outcomes—particularly litigated claims. We've made a concerted effort to continue to expand the data sets that we're able to examine and get some additional predictive intelligence from them.
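The idea described here — turning free-text claim notes into features that predict whether a claim will be litigated — can be sketched in a few lines. This is a minimal, purely illustrative example, not Milliman's actual method: the claim notes, labels, bag-of-words features, and tiny from-scratch logistic regression are all hypothetical stand-ins for what would, in practice, be far larger data sets and more sophisticated models.

```python
import math
import re
from collections import Counter

# Hypothetical claim notes, labelled 1 if the claim went to litigation.
NOTES = [
    ("claimant retained attorney, demand letter received", 1),
    ("minor damage, settled quickly with claimant", 0),
    ("attorney involved, disputed liability, demand exceeds limits", 1),
    ("straightforward property claim, paid within policy limits", 0),
    ("suit filed, defence counsel assigned", 1),
    ("no injuries reported, claim closed without payment", 0),
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

# Build a vocabulary and simple bag-of-words vectors from the notes.
vocab = sorted({w for note, _ in NOTES for w in tokenize(note)})

def vectorize(text):
    counts = Counter(tokenize(text))
    return [counts.get(w, 0) for w in vocab]

X = [vectorize(note) for note, _ in NOTES]
y = [label for _, label in NOTES]

# Train a tiny logistic regression by stochastic gradient descent.
weights = [0.0] * len(vocab)
bias = 0.0
lr = 0.5
for _ in range(200):
    for xi, yi in zip(X, y):
        z = bias + sum(w * v for w, v in zip(weights, xi))
        p = 1.0 / (1.0 + math.exp(-z))
        err = p - yi
        bias -= lr * err
        weights = [w - lr * err * v for w, v in zip(weights, xi)]

def litigation_probability(note):
    """Estimated probability that a claim with this note is litigated."""
    z = bias + sum(w * v for w, v in zip(weights, vectorize(note)))
    return 1.0 / (1.0 + math.exp(-z))

# On this toy data, words like "attorney" and "demand" push the
# probability up, while "settled" and "closed" push it down.
print(litigation_probability("claimant attorney sent a demand"))
print(litigation_probability("minor damage, settled, claim closed"))
```

In a real deployment the bag-of-words step would be replaced by richer text features and the model validated against held-out claims, but the shape of the pipeline — unstructured notes in, a predictive score out — is the same.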
How do you keep pace with the increased complexity in the modelling world?
That is a constant challenge. There is no such thing as a perfect model; we'll never build one that perfectly replicates reality. A model is, by definition, a simplification. But the good news is we don't have to get it perfect. We just have to build a model that improves upon what we have been doing, and one that does it better than our competitors'.
From a modelling perspective, you need lots of good data. And then you have to apply the right modelling techniques. And frankly the modelling techniques are very good nowadays.
If we have sufficient data, I'm really confident we can build a model that improves upon what we did in the past. It goes back to expanding your dataset, making sure your dataset is clean and comprehensive and then applying the modelling techniques.