Why we are disrupting model risk management

Two years ago, we embarked on a fantastic adventure to build a platform for better model risk management. Although this is often considered a niche topic, our determination was rooted in the firm belief that we need better tools to build robust models. The journey since then has only confirmed and consolidated that view. With the advent of machine learning and AI applications, the challenge of creating high-availability algorithms (with a probability of failure below 0.001%, as required in business-critical applications such as health care, self-driving cars, or finance) has made that lack of tools even more acute.

This contribution is written for the model risk executive who is looking for information on where this field is headed in order to build a future-proof strategy. Our vision will clarify why disruption is needed to allow people to manage the risk of advanced analytics.

"All models are wrong" (George Box)

This quote from George Box, taken from his 1976 paper "Science and Statistics", points clearly to the inevitability of model risk. Model risk management is concerned with managing exactly this risk: that models will inevitably produce false results. Embedding this certainty in a company's approach to analytics is what the discipline is about.

Since model failure is an inseparable part of modeling, the scope of model risk management is extremely wide and is certainly not restricted to regulatory models. A proper model risk management framework therefore covers all analytics: credit and liquidity risk, scorecards, decision models, fraud and AML detection, valuation and market risk, chatbots, and marketing analytics, to name a few. In addition, managing this risk efficiently requires sound governance that impacts the entire organization.

This is why proper risk management is governed through the so-called three lines of defence. The first line is responsible for delivering an exhaustively tested and documented model. The second line assesses this independently and challenges the first line in case of doubt. The third line verifies, both qualitatively and quantitatively, that the first and second lines work together correctly, according to company-wide standards, both in design and in actual operations. In other words, model risk is everybody's responsibility, from the model user, over the quant, to the manager and the board member.

Although these principles are well established, on the ground we notice that model risk management is sometimes narrowed down to a set of monotonous tasks: ticking regulatory boxes, performing repetitive jobs such as reimplementing a model somebody else has created, and generating massive amounts of documentation that no human can ever consume in its entirety. Because of this aura of boredom, many talented people prefer to move into model development, which is considered the place where the action is and where budgets are allocated, while model risk teams face massive challenges finding and retaining good people. At the managerial level too, we often see that businesses consider building new analytics, ideally incorporating AI, as the primary route to competitive advantage. In model building, budgets are large and can be allocated fast; model risk, and especially the second and third lines of defence, is considered a mandatory cost center.

Over the last 15 years I have worked in institutions both large and small, developing models that managed some of the most complicated derivatives portfolios and engineering algorithms that automatically controlled hundreds of MW of industrial power consumption. Looking back at this, and comparing it with the tools that are available in 2019, I would like to argue that this view is flawed.

First of all, thanks to the large efforts of the open source community, there is nowadays an abundance of high-quality analytics. This is especially true in machine learning, where all the important technology firms have open sourced considerable parts of their algorithmic frameworks (see e.g. TensorFlow and the Microsoft Cognitive Toolkit). Even the more classical fields, such as valuation models, now have a fairly rich set of libraries (such as QuantLib and ORE). All of this means that using sophisticated analytics will very soon stop being a competitive advantage. This transformation is accelerating due to the advent of auto-ML. These frameworks, both commercial and open source, allow virtually everybody who has a dataset to train hundreds of sophisticated machine learning algorithms (such as neural networks) and deploy these models instantaneously. A modeling team that is ready to leverage those tools and that has the data can build advanced models in days.
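The essence of what auto-ML automates can be sketched in a few lines. The following is a deliberately toy illustration (not tied to any specific framework, and with entirely synthetic data): enumerate a family of candidate models, fit each on training data, and promote the one that scores best, exactly the loop these frameworks run at scale over hundreds of sophisticated algorithms.

```python
# Toy sketch of the auto-ML loop: search a model family, keep the best.
# The "model family" here is just threshold classifiers on synthetic data.
import random

random.seed(0)

# Synthetic binary labels: 1 when x exceeds 0.6, plus ~10% label noise.
xs = [random.random() for _ in range(400)]
labelled = [(x, int(x > 0.6 or random.random() < 0.1)) for x in xs]
train, valid = labelled[:300], labelled[300:]

def accuracy(threshold, samples):
    """Fraction of samples correctly classified by 'predict 1 if x > threshold'."""
    return sum(int(x > threshold) == y for x, y in samples) / len(samples)

# The "search space": candidate thresholds between 0 and 1.
candidates = [i / 100 for i in range(0, 100, 5)]

# Fit on training data, i.e. select the candidate with the best train score.
best = max(candidates, key=lambda t: accuracy(t, train))

print(f"selected threshold: {best:.2f}")
print(f"validation accuracy: {accuracy(best, valid):.2f}")
```

Real auto-ML systems add cross-validation, feature engineering, and deployment on top of this loop, but the point stands: once the data is in place, the search itself is mechanical.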

However, there is a vast gap between using a (ML) algorithm in a one-off proof-of-concept mode and running that algorithm stably in production. The former simply requires one to collect the data and perform a few trial-and-error iterations. Serving algorithms in production, on the other hand, means managing the dataflow, guarding data quality, defining fall-back strategies, monitoring model performance continuously, and retraining or recalibrating the model whenever needed. This is a daunting task, because even simply guaranteeing reproducibility of a model (a prerequisite to more subtle issues like bias and explainability) is often very hard to realize. This is why the field of machine learning is currently contributing to a reproducibility crisis in science.
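Two ingredients of reproducibility can be sketched concretely: seed every source of randomness from the run's configuration, and fingerprint everything that determines the run (data, hyperparameters, seed) so a model can later be re-trained identically. The helper names below are illustrative, and the `train` function is a stand-in for a real fit.

```python
# Minimal sketch of reproducible training: deterministic seeding plus a
# fingerprint of the inputs that define the run.
import hashlib
import json
import random

def run_fingerprint(dataset, config):
    """Hash the data and configuration that define this training run."""
    payload = json.dumps({"data": dataset, "config": config}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def train(dataset, config):
    # Seed all randomness from the config so repeated runs agree exactly.
    rng = random.Random(config["seed"])
    # Stand-in for a real fit: the "model" is a reproducible random weight.
    return {"weight": rng.random(),
            "fingerprint": run_fingerprint(dataset, config)}

config = {"seed": 42, "learning_rate": 0.01}
dataset = [[0.1, 1], [0.4, 0], [0.9, 1]]

model_a = train(dataset, config)
model_b = train(dataset, config)
assert model_a == model_b  # identical inputs reproduce an identical model
print(model_a["fingerprint"][:12])
```

In a production setting the same idea extends to pinning library versions and hashing the actual data files, so that any deployed model can be traced back to, and rebuilt from, the exact run that produced it.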

To highlight the challenges of robust analytics even more, I would like to point out that many front-office quant teams (i.e. the first line of defence) in banks often become an inseparable part of their own analytics. These teams are constantly needed to fix issues as they appear, to fine-tune calibrations, and to make small modifications that deal with additional edge cases, in a continuous fashion. When the quants are gone, the models have to shut down. We call this the hybrid human-algo approach.

Transitioning to the AI era

This trivialization of model development is contributing to the high growth rate of the number of models found in financial institutions. In a recent McKinsey study, the yearly growth rate was estimated at approximately 20%. This implies that the hybrid human-algo approach cannot be maintained and that we need to transition to highly integrated model risk management. In other words, an institution that endeavours to capture the full potential of machine learning will have to put model risk management first.

Let me detail how this would work in practice. At the beginning of the model development cycle, a project team is assembled. In the first design stage, this team studies the potential introduction of a new model to solve a concrete business problem. When requirements are gathered, the team should immediately take into account the fact that this model will at some point fail. In order to manage that risk, the design of the model should focus on risk management: studying data quality, quantifying model risk, and determining the feasibility of monitoring. Overlaying those risks and challenges with the estimated benefits and the risk appetite of the bank will allow the team to decide quickly what solution (a complex model, a simple one, or no model at all) is fit for purpose. This exercise at the beginning of the cycle will yield a design that allows models to be deployed in a robust fashion, with clearly defined limits that can be monitored and managed in a completely automated fashion.
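What such an automated model limit might look like can be sketched with a standard drift measure: the population stability index (PSI), which compares the live distribution of model inputs or scores against the distribution seen at development. The thresholds below (0.1 and 0.25) are common rules of thumb, not a prescription, and the distributions are illustrative.

```python
# Sketch of an automated model limit: PSI drift check on binned score
# distributions, with rule-of-thumb alerting thresholds.
import math

def psi(expected_fracs, actual_fracs):
    """Population stability index between two pre-binned distributions."""
    eps = 1e-6  # guard against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected_fracs, actual_fracs)
    )

training_dist = [0.25, 0.25, 0.25, 0.25]  # score distribution at development
live_dist     = [0.40, 0.30, 0.20, 0.10]  # score distribution in production

value = psi(training_dist, live_dist)
status = "ok" if value < 0.1 else "warn" if value < 0.25 else "breach"
print(f"PSI = {value:.3f} -> {status}")
```

A check like this, recomputed on every batch of live data, is exactly the kind of limit that can trigger alerts, fall-back strategies, or recalibration without any human in the loop.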

Thanks to the abundance of open source analytics and auto-ML solutions, the subsequent implementation stage will again be mostly concerned with model risk related topics. Key points to address here are the quality and representativeness of the data, the explainability of the model, and the level of testing and documentation that is feasible. In other words, at this stage the team can build new models in days, but the challenge is to determine which model fits best within the risk framework defined in the first stage of the project. By putting model risk management first, more people will find their way into the second and third lines, since it will suddenly be clear that even more ingenuity is needed to understand the risks of models breaking down and to detect and explain model failure.

This vision can only be realized when an institution has the technology to manage its model life cycle correctly. A platform that integrates model development with validation and monitoring, that allows for agile workflows and close interaction between first, second and even third line. A platform that takes away the monotonous, repetitive tasks and allows risk managers to deep-dive into models, dissecting algorithm failure, explaining decisions to clients, and detecting issues in real-time.

Such a platform should also provide a more interactive view into models. If we want to manage the certainty that our models will fail, we have to replace static documents with dashboards showing real-time model health, interactive views into complex data pipelines, and visualizations of model limits. As the world slowly discovers that building a mathematical model is trivial, the industry is going to shift towards agile technology platforms that give institutions the freedom to manage algorithms the same way technology giants currently deploy code continuously.

The benefits

Recent advances in both technology and algorithms have shown previously unimaginable results, ranging from discovering new chess strategies to generating text that reads as if it were written by a human. This full potential of AI can only be unlocked through a model-risk-centric approach. Model risk management makes the risks clear and allows us to think about mitigation strategies. The added value of AI is often incremental: we build better credit risk models, detect more fraud, or price derivatives faster. Capturing that incremental value sustainably over time means avoiding the model failures that would annihilate it instantaneously.

Showing consistent behavior will also generate trust in AI; lack of trust is another barrier to its wide adoption. People have to trust a machine, and this is only possible when ML behavior can be explained and when it shows consistent performance over a long period of time. When we board an aeroplane, we put our faith in the hands of the engineers who designed the machine by accurately controlling the risks involved. When we build AI to perform surgery, we need a similar mindset that puts the risks first.

This is our vision. This is why we have created Yields.io.

Jos Gheerardyn, May 28 2019

