Managing AI Risks

Introduction

In the rapidly evolving landscape of artificial intelligence (AI), managing the associated risks has become paramount for organizations aiming to leverage these technologies responsibly and effectively. This whitepaper delves into the critical aspects of AI risk management, comparing it to traditional model risk management (MRM) and highlighting the unique challenges posed by AI models, particularly Large Language Models (LLMs).

Both traditional and AI models share fundamental risk management principles, requiring robust frameworks that encompass governance, identification and risk tiering, model lifecycle management, and risk management strategies. However, AI models introduce additional complexities due to their advanced technical requirements, more dynamic development cycles, and broader application scopes. This paper outlines these complexities and provides practical examples to illustrate the necessity of tailored risk management approaches for AI.

The layout of our whitepaper is as follows. We start by showing, through examples, that the challenges of managing classical models and AI use cases are very similar. We then discuss how to implement AI risk management, outlining the essential components of an AI risk management framework: governance, identification and risk tiering, model lifecycle management, and risk management. Specific considerations for AI include the use of Continuous Integration and Continuous Deployment (CI/CD) pipelines and the need for pilot usage in scenarios where traditional backtesting is not feasible.
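To make the CI/CD point concrete, the following is a minimal sketch of an automated validation gate that a deployment pipeline could run before promoting a model. It is illustrative only: the metric names, thresholds, and the shape of candidate_metrics are assumptions for the example, not prescriptions from this whitepaper.

```python
# Minimal sketch of a CI/CD validation gate for a candidate model.
# Metric names and thresholds below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Gate:
    metric: str
    threshold: float
    higher_is_better: bool = True

    def passes(self, value: float) -> bool:
        return value >= self.threshold if self.higher_is_better else value <= self.threshold


# Release criteria a pipeline could enforce before deployment.
GATES = [
    Gate("auc", 0.80),
    Gate("demographic_parity_gap", 0.05, higher_is_better=False),
    Gate("population_stability_index", 0.10, higher_is_better=False),
]


def validate_candidate(candidate_metrics: dict) -> bool:
    """Return True only if every gate passes; a CI job would fail otherwise."""
    ok = True
    for gate in GATES:
        value = candidate_metrics.get(gate.metric)
        if value is None or not gate.passes(value):
            print(f"GATE FAILED: {gate.metric} (threshold {gate.threshold}, got {value})")
            ok = False
    return ok


if __name__ == "__main__":
    # Metrics produced by an upstream evaluation step (illustrative numbers).
    metrics = {"auc": 0.83, "demographic_parity_gap": 0.03,
               "population_stability_index": 0.07}
    assert validate_candidate(metrics)
```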

The growing reliance on third-party AI applications necessitates robust risk management practices to mitigate associated risks. We discuss the challenges of managing third-party models, particularly in complex ecosystems like Banking as a Service (BaaS), and emphasize the importance of third-party risk management strategies.

We conclude by discussing how to manage foundation models and the use cases built on them. We address the unique challenges of managing LLMs, such as defining and controlling use cases, measuring and monitoring performance, and weighing the risks of in-house hosted models against third-party vendor models. This section provides a roadmap for organizations to effectively govern and manage LLMs, ensuring their safe and beneficial use.

By examining these key areas, this whitepaper aims to provide a comprehensive guide for organizations seeking to implement robust AI risk management practices. Through practical examples and detailed analysis, we highlight the importance of proactive risk management to harness the full potential of AI technologies while mitigating associated risks.

Similarities between AI and model risk management

Bias

Managing the risks associated with classical and AI models presents similar challenges, with bias being a critical concern for both. Despite the different methodologies and complexities involved, both kinds of models are susceptible to biases that can lead to unfair or incorrect outcomes. Bias can stem from various sources, including the data used to train the models, the assumptions made during model development, and the ways in which models are implemented and used. Understanding these challenges is essential for effective risk management, especially as AI continues to integrate more deeply into various sectors.

The Ofqual Scandal: A Classical Bias Issue

A notable example of bias in classical models is the Ofqual scandal in the UK.¹ In 2020, the Office of Qualifications and Examinations Regulation (Ofqual) used an algorithm to predict students’ grades after exams were canceled due to the COVID-19 pandemic. The algorithm relied on historical data, including students’ previous performance and their school’s historical results, to predict grades. However, this approach led to significant biases, disproportionately disadvantaging students from lower-performing schools, often in less affluent areas.

The algorithm’s reliance on historical data perpetuated existing inequalities, resulting in public outcry and the eventual reversal of the decision to use the algorithmically predicted grades. This incident underscores how classical models can embed and amplify biases present in historical data.
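To see how such a mechanism embeds historical bias, consider the stylized sketch below. It is a toy model, not a reproduction of Ofqual's actual algorithm: it simply assigns a school's historical grade distribution to current students in order of teacher ranking, which caps even the strongest student at a historically weak school at that school's past results.

```python
# Toy illustration (not Ofqual's actual algorithm): assigning a school's
# historical grade distribution to current students by teacher ranking.

def predict_grades(teacher_rankings, school_historical_grades):
    """Best-ranked student receives the school's best historical grade, etc."""
    grades = sorted(school_historical_grades, reverse=True)
    # Student indices ordered from best rank (1) to worst.
    order = sorted(range(len(teacher_rankings)), key=lambda i: teacher_rankings[i])
    predicted = [None] * len(teacher_rankings)
    for grade, student in zip(grades, order):
        predicted[student] = grade
    return predicted


rankings = [1, 2, 3, 4]  # identical teacher rankings at both schools
strong_school_history = [9, 8, 7, 7]  # historically high results
weak_school_history = [6, 5, 4, 3]    # historically low results

print(predict_grades(rankings, strong_school_history))  # top student gets 9
print(predict_grades(rankings, weak_school_history))    # same profile gets 6
```

The two top-ranked students are equally able, yet the one at the historically weaker school can never be awarded more than that school's past ceiling. This is exactly the pattern that triggered the public outcry.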

[Image: protest sign reading "Your algorithm does not know me"]

The Incident with Google Gemini: An AI Bias Issue

A recent example of bias in AI models is the incident with Google Gemini.² Gemini, an advanced generative AI model, produced World War II images depicting German soldiers of color. The images were historically inaccurate and highlighted the challenges of alignment, since the model is modified in a post-training step to reduce bias. The incident sparked discussion about the effectiveness of such post-processing steps, given that it is nearly impossible to fully de-bias the vast amounts of training data while remaining historically accurate.

Increased Awareness of Bias Risk in AI

Bias risk is more prominent in AI due to the broad range of use cases and the granular nature of AI predictions. AI models are increasingly used in sensitive areas such as hiring, law enforcement, and healthcare, where biased decisions can have profound impacts on individuals and society. The granularity of AI predictions means that even subtle biases can lead to significant and widespread consequences. Therefore, there is a heightened awareness and a pressing need to address bias risk in AI models to ensure fair and equitable outcomes.
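As an illustration of what subgroup-level monitoring could look like in practice, the sketch below computes selection rates per group and a disparate-impact ratio for a deployed classifier. The group labels, decisions, and the four-fifths threshold are illustrative assumptions, not figures from the whitepaper.

```python
# Minimal sketch of subgroup monitoring for a deployed binary classifier.
# Data and the 0.8 ("four-fifths rule") threshold are illustrative.
from collections import defaultdict


def selection_rates(decisions, groups):
    """Fraction of positive decisions per subgroup."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}


def disparate_impact_ratio(rates):
    """Lowest subgroup selection rate divided by the highest.
    Ratios below roughly 0.8 are commonly treated as a red flag."""
    return min(rates.values()) / max(rates.values())


# Hypothetical decisions logged from a production hiring model.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)
print(rates)                          # {'a': 0.6, 'b': 0.4}
print(disparate_impact_ratio(rates))  # ~0.67, below 0.8 -> investigate
```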

In conclusion, both classical and AI models face significant challenges related to bias. The Ofqual scandal and the Google Gemini incident serve as critical reminders of the importance of rigorous risk management practices to identify, mitigate, and prevent bias in all types of models. As AI continues to evolve and expand into new areas, addressing bias risk will remain a top priority to ensure that these powerful tools are used responsibly and ethically.

¹https://en.wikipedia.org/wiki/2020_United_Kingdom_school_exam_grading_controversy
²https://www.nytimes.com/2024/02/22/technology/google-gemini-german-uniforms.html

More about the whitepaper

In the whitepaper “Managing AI risks”, Jos Gheerardyn discusses how model risk management (MRM) provides a proven framework for managing the risks of AI. After highlighting the main components of a sound MRM framework, he discusses which aspects need to be fine-tuned to address the specificities of AI models.

These include, among others, stakeholder management, dealing with technological complexity, and managing third-party risk. He concludes with an overview of the main regulatory initiatives and advice on how to navigate their complexity.

Author

Jos Gheerardyn has built the first FinTech platform that uses AI for real-time model testing and validation on an enterprise-wide scale. A zealous proponent of model risk governance & strategy, Jos is on a mission to empower quants, risk managers and model validators with smarter tools to turn model risk into a business driver. Prior to his current role, he has been active in quantitative finance both as a manager and as an analyst.