Ralf Korn and Gerhard Stahl discuss their novel investigation into the quality of Solvency II capital models and reveal a simple but powerful method for challenging model complexity
In this article we summarise our detailed discussion paper1 about the adequacy and forecast quality of capital models applied under Solvency II.
This contribution has two aims: first, to convince a wider audience of practitioners that it is possible to challenge Solvency II models and modellers in just 10 minutes, avoiding cumbersome discussions about the underlying mechanics.
Second, we offer an alternative approach to calculate an economically realistic solvency capital requirement (SCR), which simplifies strategic steering under Solvency II.
Models are the price of the diversification benefit
Models are fantastic tools. Like an imaginary sixth sense, they serve as an interface between humans and the real world, enabling us to adapt our behaviour and optimise our decisions.
In financial risk management, stochastic models play a prominent role. They are at the foundation of Solvency II's "diversification benefit": the capital reduction that can be achieved by taking on a wide variety of risks. Though the benefits of diversification can hardly be overestimated, the complexity that comes with the non-linear results of stochastic models challenges our common sense. In addition, the concept of probability is non-linear by its very nature.
Under Solvency II, a whole range of stochastic risk models is in place: full internal models, partial models, the so-called standard formula, and versions thereof enhanced by company-specific parameters.
Solvency II was a no-data problem
The early days of Solvency II were characterised by a certain lack of empirical data. This was a consequence of the simultaneous introduction of the concept under discussion here – the Solvency II balance sheet – and the associated risk measures.
Hence, discussions with stakeholders were and remain dominated by formal, technical details related to the modelling process. Many insurers' own risk and solvency assessment (Orsa) reports bear witness to this phenomenon.
Our proposal shifts the discussion from causal modelling to the level of consequences and, hence, decisions.
Backtesting the SCR
After almost a decade of Solvency II practice, we now have the necessary data to study. We analysed a quarterly time series of about 30 SCR predictions and the associated changes in own funds.
In order to understand model performance, it makes sense to compare the model forecast (denoted by F) with the realised events, denoted by x. A suitably chosen function T ties the prediction-realisation pairs (F,x) into a single number T(F,x).
In our case, T is the ratio of the change in own funds over one year to the predicted SCR. Under very mild technical conditions, T(F,x) is approximately normally distributed.2
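The statistic T can be sketched in a few lines of code. The figures below are invented for illustration only and are not data from the paper; the function `backtest_ratio` is a hypothetical name, not one used by the authors.

```python
# Hypothetical illustration of the backtest statistic T(F, x):
# the realised one-year change in own funds divided by the SCR forecast.

def backtest_ratio(own_funds_change: float, predicted_scr: float) -> float:
    """T(F, x): one-year change in own funds over the predicted SCR."""
    if predicted_scr <= 0:
        raise ValueError("predicted SCR must be positive")
    return own_funds_change / predicted_scr

# Quarterly observations (invented): (one-year change in own funds, SCR), EUR m
pairs = [(-420.0, 1500.0), (310.0, 1480.0), (150.0, 1520.0), (-80.0, 1510.0)]
ratios = [backtest_ratio(dx, scr) for dx, scr in pairs]
```

Repeating this for each quarter yields the sample of roughly 30 ratios whose distribution can then be compared with the Gaussian benchmark.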
Common sense, graphically displayed
The well-established tool of the QQ plot provides an easily applicable diagnostic instrument for checking the stochastic essentials of T, i.e. how closely the empirical distribution comes to the ideal Gaussian distribution.
The assumptions underpinning such a statistical analysis are not only reasonable from a statistician's point of view, but also have a level of regulatory acceptability; such backtesting techniques are applied in regulatory frameworks (such as the 1997 Basel Market Risk Amendment) in order to determine capital add-ons for models lacking sufficient forecast quality.
The QQ plot allows us to judge both the model's adequacy and its precision. When the data in the QQ plot scatter closely around the first diagonal, the forecast quality is close to perfect. The tightness of the data cloud around its regression line allows inferences about the adequacy of the stochastic model, while the slope of the regression line indicates the model's precision, i.e. overestimation or underestimation of risk.
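The two diagnostics above can be sketched numerically: pair the sorted sample with standard-normal quantiles and fit a least-squares line through the cloud. This is a minimal sketch assuming the ratios T(F,x) are already computed; the sample values are invented, and the helper names are ours, not the authors'.

```python
# A minimal QQ-plot diagnostic in code, using only the standard library.
from statistics import NormalDist, mean

def qq_pairs(sample):
    """Pair sorted sample values with standard-normal plotting quantiles."""
    n = len(sample)
    xs = sorted(sample)
    # Theoretical quantiles at plotting positions (i + 0.5) / n
    qs = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
    return qs, xs

def regression_slope(qs, xs):
    """Least-squares slope of the QQ cloud; a slope well below 1 points
    to data compressed relative to the benchmark, i.e. overestimated risk."""
    qbar, xbar = mean(qs), mean(xs)
    num = sum((q - qbar) * (x - xbar) for q, x in zip(qs, xs))
    den = sum((q - qbar) ** 2 for q in qs)
    return num / den

sample = [-0.9, -0.3, -0.1, 0.0, 0.2, 0.4, 0.5, 0.8]  # invented T values
qs, xs = qq_pairs(sample)
slope = regression_slope(qs, xs)
```

A tight cloud with a slope well below one would reproduce, in miniature, the pattern discussed for Figure 1.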
As the following example shows, the QQ plots allow us to draw inferences about the economic and stochastic adequacy of a model. This refers to both internal models and the standard formula.
Starting a dialogue
Figure 1 depicts the QQ plot against the standard Gaussian distribution for an insurance group that applies an internal model to determine the SCR.
Figure 1: QQ plot for the data (quarterly log-increases of the own funds) of an insurance group that applies an internal model
Obviously, the data scatter very tightly around the regression line, indicating the adequacy of the underlying internal model. Comparing the scale of the x-axis with that of the y-axis, however, shows a significant deviation from the first diagonal. More concretely, the narrow range of the y-axis reveals an overestimation of risk by the model of more than 100%.
Furthermore, the outlier on the left grabs some of the reader's attention. Both features may serve as a starting point for decision makers to pose questions to modellers.
Such dialogues should provide quantitative insights that are very relevant for the Orsa process and discussions with regulators or other external stakeholders. In addition, the exchange with modellers will provide decision makers with qualitative insights into the company's risk management culture.
Often, passive, compliance-oriented mindsets dominate risk departments. The overshooting of SCRs is accepted because it satisfies the requirement to be compliant with regulations. The economic rationale for overshooting is not questioned, and a causal explanation for the (perhaps unintended) conservatism cannot be given.
A renaissance for Solvency II?
Overshooting by internal models or the standard formula is not uncommon in practice. Of course, some factors might have a dampening effect on the observed overshooting in our examples: for instance, the size of the insurer (our analysis was mostly based on large groups) and the country it is based in (since government bonds are considered risk-free in the standard formula, their default risk is not mirrored in the SCR).
However, given the likely imprecision of SCR figures, the economic need for an easy-to-calculate – but more precise – SCR is at hand.
To this end, we are going to apply the following universal principle (analogous to calculating the SCR for equities under Solvency II) for calculating the capital requirements for an asset:
capital required = value × volatility × level of significance (1)
where the net position (assets minus liabilities) of the Solvency II balance sheet is considered as the value of the company.
The first two components are determined empirically, hence the inherent model risk is low. The setting of the level of significance is normative, but very transparent.
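Equation (1) reduces to a one-line calculation. The sketch below uses invented inputs: own funds of EUR 10bn, an assumed 8% annual volatility of own funds, and a multiplier of 2.58, roughly the one-sided 99.5% quantile of the standard normal distribution; none of these figures come from the paper.

```python
# A sketch of the proxy SCR from equation (1):
# capital required = value x volatility x level of significance.

def proxy_scr(value: float, volatility: float, significance_mult: float) -> float:
    """Proxy capital requirement: value times volatility times
    the normative significance multiplier (all inputs assumed given)."""
    return value * volatility * significance_mult

# Invented example: EUR 10,000m own funds, 8% volatility, 2.58 multiplier
scr = proxy_scr(10_000.0, 0.08, 2.58)  # EUR m
```

Because each input is either observed (value), estimated from data (volatility) or set transparently (multiplier), stressing any one of them is a matter of changing a single number.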
The advantage of such an empirical model is that the SCR can be determined in minutes, stress tests are easily incorporated, and communication at every level is instantaneous. We believe this proxy fosters better risk-informed decisions.
The value of internal information
For insurance groups that are listed companies, the share price is an additional data point. Substituting the share price for the value in (1) and adapting the volatility accordingly yields a third risk measure.
These two evaluations of the same company differ with respect to the amount of information used. For the share price, only public information comes into play, whereas the own funds are derived from internal information underpinning the Solvency II balance sheet.
The examples considered in our paper show that the volatility of the share price is about three times that of the own funds. In the light of this relationship, the overshooting of the SCR figures highlights that the available internal information is far from fully exhausted.
The SCR thus yields a closer proxy for the risk in the share price than for the risk in the Solvency II balance sheet. These relations call into question the implicit assumption behind applying internal models for regulatory purposes – i.e. that regulatory and economic capital requirements should converge.
Key findings
- The volatility of the equity price is approximately three times the volatility of the own funds
- The volatility of the internal model SCR is approximately two times the volatility of the own funds
Conclusions
Our study – the first analysis of this kind – shows that the volatility of the SCR produced by the internal model is closer to the volatility of the company's shares than to the volatility of the Solvency II own funds. This is surprising, and gives a strong indication that SCR overshooting occurs because the internal information underlying the Solvency II balance sheet is not properly mirrored in the internal model.
Furthermore, following the dialogue with the modellers, decision makers should have an opinion about the following questions:
- Is your Solvency II model conservative?
- Why and by what amount?
- Is this reflected in the Orsa?
- Is your risk culture agile?
- Is your capital allocation efficient?
This short summary is but an amuse-bouche and may raise more questions than it answers.
The interested reader will find some answers in our discussion paper. We hope this short note fuels further debate about the Solvency II framework and its risk culture.
Ralf Korn holds a chair for Financial Mathematics at RPTU in Kaiserslautern. Gerhard Stahl is chief research and development officer at Talanx. To contact the authors email: [email protected], [email protected]
Footnotes
1 Insights from Backtesting under Solvency II: Overdose or implicit Change of Paradigm?
See also "A first look back - model performance under Solvency II", which has been accepted for publication by the European Actuarial Journal