Actuaries selecting predictive models for commercial challenges have been urged to factor into their decisions how explicable a model is to less mathematically adept colleagues, rather than defaulting to the most complicated model.
Xavier Marechal, chief executive officer of Belgian actuarial consultancy Reacfin, said one indicator of success for an actuary should be the ability to explain the model used to a spectrum of colleagues, who "may be from sales or marketing or management and not [be] mathematical".
Being able to explain a model to them is important, Marechal told the International Congress of Actuaries in Sydney earlier this month, "because at the end of the day you do not develop a model for the beauty of the art, but, of course, to help in [the] decision-making process. You are not alone in your company."
Marechal said that whether selecting a generalised linear model (GLM) or a machine learning (ML) technique, for instance, actuaries had to balance three elements: the model's predictive power, the ability to base "sound decisions" on it, and its explicability.
The latter helps, he added, "to involve people from across the business from the very beginning, to understand the problem, involve them in the project and share results with them."
Marechal said it was therefore crucial that actuaries understand from the outset exactly who will use a model's results, and the purpose and potential impact of the project.
Sometimes this may lead actuaries to select a more complicated model, because the goal is to obtain the best prediction. But where the goal is to explain the results to other people, the best model may be "the one most easily interpretable and explicable".
The increased predictive power that ML techniques ushered in has come "at the expense of a certain loss of interpretability," Marechal said.
Therefore actuaries might elect not to completely replace traditional models such as GLM with ML techniques.
One might instead combine ML techniques with traditional models in a project, for example using ML to identify the few most relevant variables from a long candidate list, something a GLM might struggle with on its own.
"It can be a good strategy to first enter one's variables into a ML model and extract the most relevant, and only use, for example, [the] 20 most relevant in your GLM model," Marechal said.
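The two-step strategy Marechal describes can be sketched in a few lines of Python. This is an illustrative sketch only, not Reacfin's methodology: it uses scikit-learn's random-forest feature importances as the ML ranking step and a Poisson GLM (a common choice for claim frequency) as the interpretable model, on synthetic data with hypothetical variable names.

```python
# Illustrative sketch, not Reacfin's actual approach: rank many candidate
# rating variables with an ML model, keep the top 20, then fit a
# transparent GLM on those alone. Data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n, p = 2000, 100                         # 100 candidate variables
X = rng.normal(size=(n, p))
# Assume only the first five variables actually drive claim frequency.
mu = np.exp(0.3 * X[:, :5].sum(axis=1))
y = rng.poisson(mu)

# Step 1: ML model ranks variable relevance.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X, y)
top20 = np.argsort(forest.feature_importances_)[::-1][:20]

# Step 2: fit an interpretable Poisson GLM on the top 20 variables only.
glm = PoissonRegressor().fit(X[:, top20], y)
print("variables retained for the GLM:", sorted(top20.tolist()))
```

The resulting GLM has only 20 coefficients, each attached to a named rating variable, which is far easier to walk a non-technical audience through than the full forest.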