Understanding Explainability in Machine Learning
Explainability in machine learning involves making AI models transparent and understandable to various stakeholders. This is particularly important in financial services, where automated decision-making directly impacts consumers. Clear explanations of how decisions are made help build trust and ensure fair practices. Below is a summary of a webinar conversation with Kelly Thompson Cochran, Deputy Director of FinRegLab.
FinRegLab is an independent, nonprofit research organization based in Washington, D.C. It evaluates how new technologies and data can make financial services more responsible and inclusive, working with industry, academic, and policy partners to test these innovations against real-world questions.
Key Areas of Focus
- Credit Underwriting: Research in this area includes using alternative data sources, such as bank account transaction data, to supplement traditional credit bureau information. This approach aims to fill gaps in credit data and improve fairness in credit decisions.
- Model Risk Governance: Ensuring AI models are used responsibly involves understanding their inputs and outputs. Studies in this area focus on promoting transparency and accountability in model development and deployment.
The Importance of Explainability
Explainability is a central challenge in machine learning: the underlying algorithms are often complex, and the resulting models can behave as black boxes. For high-risk or high-impact decisions, being able to explain a model's output is crucial for several reasons. First, consumers trust decisions more when the process behind them is clear, which leads to greater acceptance of outcomes. Second, financial institutions must comply with regulations that demand transparency in decision-making. Finally, explainable models are vital for identifying and mitigating bias, ensuring fairness and accountability in how consumers are treated.
Explainability techniques help stakeholders understand how a model arrives at its predictions. Two broad approaches stand out:
- Simpler Model Architectures: Simpler models are inherently more transparent. This is the realm of inherently explainable ML methods, such as gradient-boosted trees (e.g., XGBoost) constrained to shallow depth; see the PiML toolkit for example implementations, and the first sketch after this list. Another technique that, according to FinRegLab's research, works well for consumer disclosures is grouping similar inputs together.
- Post-Hoc Analysis: These techniques analyze a model after it has made decisions to determine which inputs were most influential. They come in two major types: local explainability techniques such as SHAP and LIME, which explain an individual prediction by examining the model's behavior in the neighborhood of that prediction, and global techniques such as partial dependence plots, which illustrate the model's behavior across the input space as a whole (see the second sketch below).
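To make the first approach concrete, below is a minimal sketch of a simpler architecture: a gradient-boosted classifier constrained to shallow trees. The data objects (`X`, `y`) and their column names are hypothetical placeholders for illustration, not FinRegLab's data or a prescribed setup.

```python
# Minimal sketch: a shallow gradient-boosted model as a more transparent architecture.
# Assumes `X` is a pandas DataFrame of applicant features and `y` a binary target;
# both are hypothetical placeholders.
import xgboost as xgb
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Shallow trees keep every decision path to at most two splits, so each
# prediction can be traced through a handful of readable rules.
model = xgb.XGBClassifier(
    max_depth=2,        # shallow depth -> simpler, more inspectable trees
    n_estimators=200,
    learning_rate=0.1,
    eval_metric="logloss",
)
model.fit(X_train, y_train)

# Gain-based importances give a first, global view of which inputs drive the score.
importances = dict(zip(X_train.columns, model.feature_importances_))
```

Constraining depth trades some predictive power for decision paths short enough to describe in plain language, which is ultimately what consumer disclosures require.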
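For the second approach, here is a minimal sketch of post-hoc analysis, reusing the fitted `model` and data splits from the previous snippet. SHAP supplies local, per-prediction attributions, while scikit-learn's partial dependence display gives a global view; the "income" column is a hypothetical feature name, not one taken from the webinar.

```python
# Minimal sketch: post-hoc explanations for the model fitted above.
import shap
from sklearn.inspection import PartialDependenceDisplay

# Local explanation: per-feature contributions (in log-odds) to one applicant's score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
first_applicant = dict(zip(X_test.columns, shap_values[0]))

# Global explanation: how the predicted risk changes as a single input varies,
# averaged over the rest of the data ("income" is a placeholder column name).
PartialDependenceDisplay.from_estimator(model, X_test, features=["income"])
```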
Challenges and Future Directions
As AI technologies advance, there is an increasing need for clear guidelines and principles to manage risks and ensure responsible use. Collaborative efforts between industry stakeholders and regulatory bodies are essential to develop effective metrics and benchmarks for AI applications in financial services.
According to Kelly, work in this field will focus on several areas:
- Adverse Action Notices: Providing accurate and meaningful explanations to consumers when credit is denied or priced higher due to risk profiles.
- Fair Lending Compliance: Adapting fair lending practices to incorporate new technological capabilities.
- Consumer Education: Developing tools and methodologies to help consumers understand and navigate AI-driven decisions.
Conclusion
The insights shared in this webinar highlight the importance of explainability in machine learning within the financial sector. As AI continues to evolve, ongoing research, collaboration, and regulatory clarity will be crucial to ensure these technologies are used responsibly and effectively.