In this post, I would like to share a few thoughts on scenario generation, which I believe is crucial when analysing mathematical models.
Property-based testing
First of all, as with any piece of software, an algorithm requires extensive testing. One particularly useful approach is so-called property-based testing. The canonical library implementing this line of thinking is QuickCheck:
QuickCheck is a library for random testing of program properties. The programmer provides a specification of the program, in the form of properties which functions should satisfy, and QuickCheck then tests that the properties hold in a large number of randomly generated cases. Specifications are expressed in Haskell, using combinators provided by QuickCheck. QuickCheck provides combinators to define properties, observe the distribution of test data, and define test data generators.
Generating the actual test cases is an important feature of the property-based testing approach. Traditionally, testers use existing (i.e. historical) data or, alternatively, a generator to create synthetic cases.
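The workflow can be sketched in a few lines of Python: a hand-rolled miniature of what QuickCheck automates, with illustrative function names and a toy linear-interpolation routine standing in for a real model.

```python
import random

def linear_interp(x0, x1, w):
    """Toy routine under test: linear interpolation with weight w in [0, 1]."""
    return x0 + w * (x1 - x0)

def check_property(prop, gen, n_cases=1000, seed=42):
    """Evaluate a property on n_cases randomly generated inputs.

    Returns the first counterexample found, or None if the property
    held on every sampled case.
    """
    rng = random.Random(seed)
    for _ in range(n_cases):
        args = gen(rng)
        if not prop(*args):
            return args
    return None

# Property: an interpolated value never leaves the interval [x0, x1].
def stays_in_range(x0, x1, w):
    return min(x0, x1) <= linear_interp(x0, x1, w) <= max(x0, x1)

# Generator: two arbitrary levels and a weight in [0, 1).
def gen_case(rng):
    return rng.uniform(-100, 100), rng.uniform(-100, 100), rng.random()

counterexample = check_property(stays_in_range, gen_case)
```

A real library adds what this sketch omits: a rich vocabulary of composable generators, shrinking of counterexamples to minimal failing cases, and tools to observe the distribution of the generated test data.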
These ideas are directly relevant to financial engineering. When building, for example, an FX option volatility surface calibration, it makes sense to verify that the fitting algorithm works as well on a Monday as it does on a Friday. This is a test that can easily be run using historical data. On the other hand, when we want to verify that a new ML-based recommendation engine works correctly on a noisier dataset, this can be tested using synthetic data generators.
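As a sketch of the second idea, a noisier synthetic variant of a dataset can be produced by perturbing each observation with zero-mean Gaussian noise. The function name and noise level below are illustrative, not from any particular library.

```python
import random

def add_noise(points, noise_std, seed=0):
    """Return a noisier synthetic copy of a dataset by adding zero-mean
    Gaussian noise with standard deviation noise_std to each observation."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, noise_std) for x in points]

clean = [0.01 * i for i in range(100)]    # stand-in for a historical series
noisy = add_noise(clean, noise_std=0.05)  # synthetic, noisier variant
```

The model under test can then be rerun on the noisy variant to check that its output degrades gracefully rather than breaking outright.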
Libraries like QuickCheck or ScalaCheck can be used to generate scenarios for testing mathematical algorithms. However, the off-the-shelf generators in these open-source projects are still fairly basic and do not allow for more subtle quantitative analysis, so custom work is still needed.
There is, however, another reason why scenario generation is essential. Analysing the outcomes of plausible scenarios is extremely helpful when making informed decisions; it is also one of the basic concepts of risk management. When managing model risk, it is therefore crucial to understand how mathematical models behave under various scenarios: rather than using scenarios only to detect bugs in algorithms, we can also use them to understand the behaviour of the models we are testing. A few considerations illustrate the added value of scenario generation in the context of risk management:
- When validating mathematical models, it is important to determine under what conditions they break. This was illustrated during the great financial crisis, when many banks were unable to use their derivatives pricing models because the market had dislocated into a state that was unreachable under the underlying diffusion models. Determining this so-called region of validity requires efficient generation of test scenarios.
- Regulators analyse the capital requirements of financial institutions through stress tests. To do this well, it is important to create a sufficiently complete set of scenarios that samples possible future outcomes as accurately as possible.
- Sensitivity analysis of a model helps to understand which parameters play an important role. More often than not, however, sensitivity analysis is only performed in the neighbourhood of the model's current operating point, in which case the reaction of the algorithm to larger changes is hard to predict. Scenario generation, on the other hand, can be used for global sensitivity analysis.
- From a model risk management point of view, it is important to understand the impact of changing the assumptions of a model, which is why we create benchmark models. Once they are created, it is crucial to understand where the models diverge from each other. Mapping this out precisely requires generating a dense enough set of scenarios.
- Finally, in model development, there may not be enough data to train an algorithm accurately. In that case, data augmentation techniques can be used to generate alternative datasets.
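A minimal sketch of the benchmark-divergence and global-sensitivity points above: sample the parameter space globally and report the scenario on which two models disagree most. Both models and the parameter ranges here are illustrative, not taken from any real pricing library.

```python
import math
import random

def model_a(spot, vol, t):
    """Primary model (illustrative): an exponential growth proxy."""
    return spot * math.exp(0.5 * vol ** 2 * t)

def model_b(spot, vol, t):
    """Benchmark model (illustrative): first-order approximation of model_a."""
    return spot * (1.0 + 0.5 * vol ** 2 * t)

def max_divergence(model_x, model_y, n_scenarios=10_000, seed=7):
    """Sample scenarios globally and return the largest absolute gap
    between the two models, together with the scenario producing it."""
    rng = random.Random(seed)
    worst_gap, worst_scenario = -1.0, None
    for _ in range(n_scenarios):
        scenario = (rng.uniform(50.0, 150.0),  # spot level
                    rng.uniform(0.05, 0.80),   # volatility
                    rng.uniform(0.1, 5.0))     # maturity in years
        gap = abs(model_x(*scenario) - model_y(*scenario))
        if gap > worst_gap:
            worst_gap, worst_scenario = gap, scenario
    return worst_gap, worst_scenario

gap, scenario = max_divergence(model_a, model_b)
```

On this toy pair, the worst divergence is found toward the high-volatility, long-maturity corner of the space, exactly the kind of region that a local sensitivity analysis around today's operating point would never probe.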
For all of these reasons, scenario generation is a prerequisite for proper model risk management.
Being able to test models on various scenarios, both historical and synthetic, is an important aspect of validation. We see interesting projects in this space, but a lot of work is still needed to make efficient scenario generation easily accessible to both model developers and validators.
Interested in learning more? Download here our white paper The Evolution of Model Risk Management.
Interested in learning more? Watch a demo of Chiron, our flagship product.