
Adapting model risk management for financial institutions in the generative AI era

October 24, 2024
Behnaz Kibria

Director, Government Affairs and Public Policy, Google Cloud

Jo Ann Barefoot

Co-Founder and CEO, Alliance for Innovative Regulation


Generative AI (gen AI) promises to usher in an era of transformation for quality, accessibility, efficiency, and compliance in the financial services industry. As with any new technology, it also introduces new complexities and risks. Striking a balance between harnessing its potential and mitigating its risks will be crucial for the adoption of gen AI among financial institutions. 

Historically, regulators and the financial services industry have developed various model risk management (MRM) frameworks to address the potential risks that arise from the use of models in decision-making. These principles-based frameworks typically encompass:

  • Model validation: A rigorous assessment of a model's accuracy, reliability, and limitations. This often involves testing the model with various datasets and scenarios to ensure it performs as expected and identifies any potential biases or weaknesses.
  • Governance: Clear roles and responsibilities for model development, implementation, and monitoring. This often includes establishing processes for approving models, tracking changes, and ensuring ongoing oversight.
  • Risk mitigation: Identifying and managing potential risks, such as model bias, data quality issues, and misuse. This often involves developing strategies to address risks, such as implementing bias detection techniques, data quality checks, and user access controls.
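To make the risk-mitigation bullet concrete, here is a minimal illustrative sketch (our own, not from any MRM guidance) of one bias-detection check a validation team might run: the gap in approval rates between two hypothetical applicant groups, flagged when it exceeds a tolerance.

```python
# Illustrative sketch only: a toy fairness check, one example of the
# bias-detection techniques mentioned above. Groups, decisions, and the
# tolerance are hypothetical.

def approval_rate(decisions):
    """Fraction of approvals (True) in a list of model decisions."""
    return sum(decisions) / len(decisions)

def approval_rate_gap(group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Hypothetical model decisions for two applicant groups.
group_a = [True, True, False, True]    # 75% approved
group_b = [True, False, False, False]  # 25% approved

gap = approval_rate_gap(group_a, group_b)
if gap > 0.2:  # tolerance chosen for illustration
    print(f"Flag for review: approval-rate gap {gap:.2f}")
```

Real validation programs would use richer metrics and datasets; the point is that such checks can be codified and run repeatably as part of model validation.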

Our previous paper, written in partnership with the Alliance for Innovative Regulation (AIR), sought to assess the relevance of MRM for AI and machine learning (ML) models. Our latest joint paper expands on that foundation by exploring how MRM frameworks and established governance practices can be applied to manage risks in gen AI contexts.

Specifically, the paper proposes that regulators acknowledge best practices, provide enhanced regulatory clarity, and establish expectations in the following four areas: 1) model governance; 2) model development, implementation, and use; 3) model validation and oversight; and 4) shared responsibility in third-party risk management.

Understanding gen AI and its potential impact

Gen AI has the potential to contribute significantly to the economy, with estimates suggesting an addition of up to $340 billion annually to the banking sector alone. Financial institutions are already taking advantage of gen AI-based solutions to enhance efficiency, increase productivity among employees, improve customer engagement, and mitigate fraud and security risks.   

Gen AI distinguishes itself from traditional AI by moving beyond analysis and prediction to creating new content. These models utilize probabilistic assessments, meaning they don't produce a single definitive output but rather a range of possibilities based on the patterns they've learned. This capability unlocks new potential for human-computer interaction, allowing for more dynamic and creative applications.
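The probabilistic nature described above can be sketched in a few lines. This is an assumed, simplified illustration (not code from the paper): a model assigns probabilities to candidate next tokens and samples from that distribution, which is why the same prompt can yield different outputs.

```python
# Illustrative sketch: how probabilistic output selection works in
# principle. Candidate tokens and scores are hypothetical.
import math
import random

def softmax(scores, temperature=1.0):
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next tokens with raw model scores.
candidates = ["approve", "review", "decline"]
scores = [2.0, 1.0, 0.5]

probs = softmax(scores)
# Sampling means the output varies run to run -- a range of
# possibilities weighted by learned probabilities, not one fixed answer.
choice = random.choices(candidates, weights=probs, k=1)[0]
```

A higher `temperature` flattens the distribution (more varied outputs); a lower one concentrates it (more deterministic outputs), which is one lever institutions can tune when predictability matters.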

Adapting model risk management for gen AI

While gen AI applications offer significant potential benefits, the technology also has unique characteristics and risks that should be assessed and mitigated. Importantly, existing MRM frameworks, designed to ensure the reliability and transparency of financial models, are flexible enough to accommodate gen AI deployment within financial institutions.

To reduce uncertainty about how model risk management should account for these unique aspects of gen AI, regulators could anchor to industry best practices and standards, treating conformance with them as strong, perhaps presumptive, evidence that the requirements of MRM frameworks have been met.

Our new paper posits that clear governance frameworks that define roles, responsibilities, and accountability will be essential for effective oversight of gen AI. We highlight three key topics where additional regulatory clarity can benefit all stakeholders:

  • Documentation requirements – We recommend updating and clarifying model risk management guidance to specify documentation expectations for gen AI models.
  • Model evaluation and grounding – We recommend that regulators take into account developers’ use of practices such as grounding and outcome-based model evaluations, in addition to model explainability and transparency, in establishing the safety and soundness of gen AI-based models.
  • Controls for safe and sound AI implementation – We recommend that regulators recognize a set of controls, including continuous monitoring, robust testing protocols, and human-in-the-loop oversight, that are appropriate for ensuring the responsible deployment of gen AI in financial services.
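One of the controls listed above, human-in-the-loop oversight, can be illustrated with a minimal sketch (our own assumption of how such a control might be codified, not a recommendation from the paper): auto-release high-confidence outputs and escalate the rest to a human reviewer.

```python
# Illustrative sketch: a simple human-in-the-loop routing rule.
# The confidence threshold is hypothetical and would be set per use case.

def route_output(confidence, threshold=0.9):
    """Return 'auto' when the model is confident enough, else escalate."""
    return "auto" if confidence >= threshold else "human_review"

print(route_output(0.95))  # auto
print(route_output(0.60))  # human_review
```

In practice such a gate would sit alongside the other controls named above, such as continuous monitoring and robust testing protocols, with escalation decisions logged for audit.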

In this process, collaboration between industry participants, regulators, and governmental bodies will be key. While the path forward involves navigating complex regulatory and ethical landscapes, the collective commitment to responsible innovation and adherence to robust model risk management practices will be pivotal in realizing the full potential of gen AI in financial services and beyond.
