Model validation is the set of processes and activities intended to verify that models are performing as expected. The Supervisory Guidance on Model Risk Management (SR 11-7), issued by the Federal Reserve, defines it as “the set of processes and activities intended to verify that models are performing as expected, in line with their design objectives and business uses. Effective validation helps ensure that models are sound. It also identifies potential limitations and assumptions, and assesses their possible impact.”
In practice, during model validation we verify that the model behaves as intended. A typical validation process covers the following aspects:
- verifying that the model input and model output are clean (i.e. do not contain outliers or missing data), and that sufficient controls are in place to deal with occasional data quality issues and sparsely missing inputs
- checking that the model input is stable and representative, meaning that e.g. the dataset on which a model is trained is representative of the data on which the model is executed
- verifying the model implementation, i.e. testing that the code behaves according to the model's specification and expectations
- analyzing model performance
- comparing the model with alternatives (so-called benchmarks) to analyze the impact of changing model assumptions
- analyzing the stability of the model as well as robustness of the calibration procedure
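The first two checks above (input cleanliness and representativeness) lend themselves to simple automated tests. Below is a minimal sketch using pandas and numpy; the z-score threshold, bin count, and PSI cut-offs are illustrative assumptions, not prescribed by SR 11-7:

```python
import numpy as np
import pandas as pd

def check_input_quality(df: pd.DataFrame, z_thresh: float = 4.0) -> dict:
    """Per numeric column, count missing values and gross outliers (|z-score| > z_thresh)."""
    report = {}
    for col in df.select_dtypes(include=[np.number]).columns:
        x = df[col]
        sigma = x.std()
        outliers = int((np.abs((x - x.mean()) / sigma) > z_thresh).sum()) if sigma > 0 else 0
        report[col] = {"missing": int(x.isna().sum()), "outliers": outliers}
    return report

def population_stability_index(train: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between the training and live input distributions.

    A common rule of thumb: PSI < 0.1 means stable, > 0.25 signals material drift.
    """
    edges = np.quantile(train, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch live values outside the training range
    p = np.histogram(train, bins=edges)[0] / len(train)
    q = np.histogram(live, bins=edges)[0] / len(live)
    p, q = np.clip(p, 1e-6, None), np.clip(q, 1e-6, None)  # avoid log(0)
    return float(np.sum((p - q) * np.log(p / q)))
```

A representativeness check would then compare the distribution each feature had at training time against the distribution seen in production, flagging features whose PSI exceeds the chosen threshold.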
Model performance is often analyzed on historical data, which we call backtesting. However, best-practice model validation should also include the evaluation of models on novel (synthetic) scenarios to understand under what conditions the model ceases to function correctly. This is done, for instance, in stress-tests.
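The distinction between backtesting and stress-testing can be sketched as follows. The `model` callable, the use of mean squared error, and the multiplicative input shocks are all assumptions chosen for illustration; real stress scenarios would be designed by the validator:

```python
import numpy as np

def backtest_mse(model, X_hist: np.ndarray, y_hist: np.ndarray) -> float:
    """Backtest: mean squared error of the model's predictions on historical data."""
    return float(np.mean((model(X_hist) - y_hist) ** 2))

def stress_test(model, X_hist: np.ndarray, y_hist: np.ndarray,
                shocks=(1.0, 1.5, 2.0, 3.0)) -> dict:
    """Re-evaluate the model on synthetic scenarios obtained by scaling the inputs,
    to see how quickly performance degrades outside the historical regime."""
    return {s: backtest_mse(model, X_hist * s, y_hist) for s in shocks}
```

Plotting the error against the shock size shows under which conditions the model ceases to function acceptably.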
The deliverables of a model validation process are:
- a validation report describing both findings and methodology
- a set of challenges, for instance, questions and suggestions for improvements to the model developer
- a set of limits, describing under what conditions the model can be used
A model has to be revalidated when:
- the model implementation, i.e. the code, has changed
- the context in which the model is being used has changed
- too much time has passed since the previous model validation, as determined by the model validation frequency
Many validation procedures include the determination of the so-called model risk tier. This is a discrete indicator that summarizes the amount of model risk carried by the model under validation. The model risk tier is computed by combining qualitative and quantitative features. Qualitative features include, e.g., model complexity and regulatory impact; quantitative features include, e.g., model performance and robustness against outliers.
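One possible shape for such a tiering scheme is sketched below. The ordinal scales, the weights, and the tier boundaries are purely illustrative assumptions; each institution defines its own scheme:

```python
def model_risk_tier(complexity: int, regulatory_impact: int,
                    performance_score: float, outlier_robustness: float) -> int:
    """Combine qualitative and quantitative features into a discrete risk tier.

    Assumed conventions (illustrative only):
      - complexity, regulatory_impact: ordinal 1 (low) to 3 (high)
      - performance_score, outlier_robustness: 0.0 (poor) to 1.0 (excellent)
      - returned tier: 1 = highest risk, 3 = lowest risk
    """
    qualitative = (complexity + regulatory_impact) / 6.0           # scaled to (0, 1]
    quantitative = 1.0 - (performance_score + outlier_robustness) / 2.0
    risk = 0.5 * qualitative + 0.5 * quantitative                  # equal weights
    if risk >= 0.66:
        return 1
    if risk >= 0.33:
        return 2
    return 3
```

A high-risk tier typically triggers more frequent revalidation and stricter usage limits, tying the tier back to the deliverables listed above.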
Interested in learning more? Watch a demo of Chiron, our flagship product.