Automated Model Validation: Challenges & Considerations

Since 2017 we have been on a mission to build a platform for better Model Risk Management, rooted in the firm belief that better tools are needed to build robust models. The journey since then has only confirmed and strengthened that view. Organisations' reliance on models has grown, and the introduction of advanced-analytics techniques has further increased the scope and complexity of those models. Since the financial crisis, regulatory scrutiny of this topic has increased considerably, and it has become imperative for banks to manage and monitor their Model Risk Management activities. Using correct, fit-for-purpose models is key for organisations to foster effective and responsible decision-making.

Not all models are created equal, and not all organisations follow the same model lifecycle. In a typical set-up, however, model developers and validators work hand in hand throughout the model lifecycle to define which models to develop, their scope, and the level of risk involved. Generally speaking, in the model validation process, validators answer questions such as, “Is the model fit for purpose?”, “Does it satisfy the requirements established a priori?” and “Are all underlying assumptions valid?”. To answer these questions, they must execute rigorous tests and carry out in-depth analyses.

Depending on the complexity of the models involved, model validation can be a time-consuming process. With so much at stake, regulations are in place to help organisations mitigate risk. A model tiering process categorises models by risk level: models tagged as ‘high-risk’ naturally take longer to validate and require additional human resources to conduct more checks and analyses.
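
As a minimal sketch of how such a tiering rule could be encoded, the snippet below scores a model on three dimensions and maps the total to a tier. The dimensions, tier names, and thresholds are illustrative assumptions, not a regulatory standard.

```python
# Illustrative model-tiering sketch: the scoring dimensions and thresholds
# below are hypothetical assumptions, not a regulatory standard.

def assign_tier(materiality: int, complexity: int, regulatory_impact: int) -> str:
    """Assign a risk tier from three 1-5 scores (higher means riskier)."""
    total = materiality + complexity + regulatory_impact
    if total >= 12:
        return "Tier 1 (high risk - full-scope validation)"
    if total >= 8:
        return "Tier 2 (medium risk - targeted validation)"
    return "Tier 3 (low risk - lighter review cycle)"


print(assign_tier(materiality=5, complexity=4, regulatory_impact=5))  # Tier 1
print(assign_tier(materiality=2, complexity=2, regulatory_impact=3))  # Tier 3
```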

Today, the risk involved in deploying models is higher than ever, and inefficient model validation processes can be very costly. The main challenge many organisations face is a lack of resources to monitor and validate their models efficiently relative to demand. Combined with the increasing complexity of models brought about by new technologies such as machine learning and AI, this results in unwanted backlogs and significant delays. Validating a complex AI model can take 6 to 12 months, while model complexity is estimated to double roughly every 3.5 months.

The automation dilemma

For many types of models, such as IRB models, validators are required to perform the same tests at a fixed periodicity. This regular monitoring and these repetitive analyses are time-consuming and leave little room for proper in-depth quantitative analysis.
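
To make this concrete, the sketch below shows what one such recurring check might look like for a PD model: a discriminatory-power test that recomputes AUC and Gini on the latest observation window and flags a breach. The data, the 0.70 threshold, and the use of scikit-learn's `roc_auc_score` are assumptions for illustration, not a prescribed IRB test suite.

```python
# Hypothetical recurring discriminatory-power check for a PD model.
# Data, threshold, and flagging logic are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

AUC_THRESHOLD = 0.70  # assumed internal performance floor


def discriminatory_power_check(defaults: np.ndarray, pd_scores: np.ndarray) -> dict:
    """Compute AUC / Gini on the latest observation window and flag breaches."""
    auc = roc_auc_score(defaults, pd_scores)
    gini = 2 * auc - 1
    return {
        "auc": round(auc, 3),
        "gini": round(gini, 3),
        "breach": auc < AUC_THRESHOLD,  # escalate to the validator if True
    }


if __name__ == "__main__":
    rng = np.random.default_rng(seed=42)
    defaults = rng.integers(0, 2, size=1_000)                            # observed default flags
    pd_scores = np.clip(defaults * 0.3 + rng.random(1_000) * 0.7, 0, 1)  # mock PD estimates
    print(discriminatory_power_check(defaults, pd_scores))
```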

Due to the growing reliance on models, large banks also often lack a clear picture of precisely how many models they have, and validation management is typically decentralised and organised heterogeneously across the organisation. Scaling up validation activities therefore becomes a challenge.

To prevent the quality of model validation processes from plummeting, organisations are forced to think outside the box and leverage new technologies. These technologies have the potential to automate model validation workflows and improve reporting and documentation processes, thereby preventing backlogs as well as other common weaknesses such as deficiencies in model documentation. The good news is that more and more organisations are opening up to the idea of automating repetitive and routine tasks in model validation. This gives model validators ample time and space to prepare for and carry out crucial tasks such as monitoring model performance and behaviour. It also frees up time to ensure validation consistency and to improve the validation workflow and time management per model.
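
As an illustration of the reporting side, the sketch below turns a set of test results into a short Markdown section of a validation report. The result structure, field names, and wording are hypothetical and not the output of any particular tool.

```python
# Illustrative sketch: assembling a validation report section from test results.
# The result structure and wording are assumptions, not a specific tool's output.
from datetime import date


def render_report(model_id: str, results: dict[str, dict]) -> str:
    """Render periodic test results as a Markdown section for the validation report."""
    lines = [f"# Validation summary for {model_id} ({date.today().isoformat()})", ""]
    for test_name, outcome in results.items():
        status = "PASS" if not outcome["breach"] else "FAIL - escalate"
        lines.append(f"- **{test_name}**: {status} (value={outcome['value']})")
    return "\n".join(lines)


results = {
    "Discriminatory power (AUC)": {"value": 0.74, "breach": False},
    "Population stability (PSI)": {"value": 0.28, "breach": True},
}
print(render_report("pd_model_retail_v3", results))
```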

Some argue that automation can only go so far. In our experience, however, organisations that do not rely on automation are more likely to suffer from inconsistencies and a lack of reproducibility and auditability. Automation is therefore definitely the way forward.

Interestingly, many organisations do attempt to automate individual tasks, but they are held back from implementing an effective automation framework for model validation for two main reasons: a lack of internal expertise and a technology mismatch.

While it is clear that automation capabilities enable significant cost efficiencies for financial institutions, there is typically a long debate about whether an organisation should build software solutions with in-house IT resources or buy market-ready products. Below are a few critical aspects of the decision-making process:

  • Evaluate whether you have access to a multidisciplinary team (data scientist, front-end developer, back-end developer, DevOps engineer, data engineer, architect, project manager, business analyst, etc.) that is competent and available to build a state-of-the-art solution.
  • Make sure you have a complete overview of the costs for maintenance, support, and future feature developments and updates.
  • Create a roadmap that includes a realistic estimation of build-time and consider variables that may impact your delivery time requirement.
  • Thoroughly assess your own processes, limitations and capabilities, and see how they compare to market-ready solutions.
  • Compare the return on investment: will an in-house developed solution create enough value to justify the higher build and maintenance costs? A back-of-the-envelope comparison is sketched after this list.
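
The sketch below illustrates one way to frame that comparison over a multi-year horizon. Every figure in it is an assumed placeholder; substitute your own cost and value estimates.

```python
# Back-of-the-envelope build-vs-buy comparison. All figures are assumed
# placeholders; substitute your own estimates.

def total_cost_of_ownership(build_cost: float, annual_run_cost: float, years: int) -> float:
    """Initial cost plus recurring maintenance/licensing over the horizon."""
    return build_cost + annual_run_cost * years


horizon_years = 5
in_house = total_cost_of_ownership(build_cost=1_200_000, annual_run_cost=400_000, years=horizon_years)
vendor = total_cost_of_ownership(build_cost=150_000, annual_run_cost=250_000, years=horizon_years)

annual_value = 600_000  # assumed efficiency gain from automation, per year
for label, cost in {"in-house": in_house, "vendor": vendor}.items():
    roi = (annual_value * horizon_years - cost) / cost
    print(f"{label}: total cost {cost:,.0f}, ROI over {horizon_years}y = {roi:.0%}")
```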

Conclusion

The advent of machine learning and AI models has changed the dynamics of model validation, and organisations are increasingly struggling to cope with the demand due to limited capacity and resources, which, in turn, results in unwanted backlogs and delays. Model validators and developers also face mounting internal and external deadlines and requirements. This challenges decision-makers to consider automating repetitive functions within the process to prevent a capacity overload and give validators more time to focus on other crucial tasks.

Technological advancements take centre stage in model risk management. With this in mind, it must be acknowledged that automation has never been more relevant and accessible to most organisations. So, should organisations embrace automated model validation? The answer is simple: definitely!


About the Author

Delphine Draelants has four years of experience spearheading validation teams at top-tier financial institutions. Her extensive knowledge of and passion for model risk management have helped various organisations streamline their processes efficiently and meet their goals. Today, Delphine works at Yields.io as the Customer Success Director.

About Yields.io 

Yields.io is a technology company that provides enterprise model risk management solutions to banks and financial organisations. Today, Yields.io is a leading player in model risk, pioneering award-winning enterprise solutions for banking and finance that are sustainable and easy to maintain for teams of data scientists, model developers, and model validators.

World class Model Risk Management Technology

Yields.io is the leading technology provider for model risk management. Yields.io’s model risk technologies, Chiron App and Chiron Enterprise, empower model validation teams at G-SIBs worldwide.

Top-notch Model Validation Service

Yields.io developed Chiron App, an award-winning data science platform that helps users accelerate model validation across their organisation.