US financial services need comprehensive AI regulation and sector-specific guidance
by Prakash Bade, Associate Director, Data Science - Model Risk, CRISIL Global Research & Risk Solutions
Last week, the Consumer Financial Protection Bureau released a report on the use of artificial intelligence (AI)-driven chatbots by financial institutions (FIs) in the United States (US), spotlighting several instances of customer frustration, reduced trust, and even violations of the law.
This is but one instance of what FIs will increasingly face as AI use cases multiply while regulation lags - in this case, reputational risk.
To be sure, AI, machine learning (ML) and large language models - by virtue of their ability to handle a wide array of data types, including audio, video, and text, and their potential to enhance efficiency and accuracy - are invaluable tools for the financial sector.
But these benefits come with concomitant risks: limited model explainability, algorithmic bias and unfairness, data privacy, legal and compliance exposure, accountability, AI ethics, and information security, to name a few.
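To make one of these risks concrete, below is a minimal sketch of an algorithmic-bias check of the kind a model risk team might run on a credit model's approval decisions. The data, group labels, and the four-fifths threshold are illustrative assumptions, not a regulator's prescribed test.

```python
# Hypothetical bias audit: compare approval rates across a protected
# attribute using the disparate impact ratio.
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """Each group's approval rate divided by the highest group's rate.

    decisions: list of (group_label, approved) tuples.
    """
    approvals, totals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

# Illustrative audit sample: (group, model approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
ratios = disparate_impact_ratio(sample)
# The "four-fifths rule" often cited in US fair-lending analysis flags
# ratios below 0.8 for further review.
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios, flagged)
```

On this toy sample, group B's approval rate is half of group A's, so it falls below the 0.8 threshold and would be flagged for review - exactly the kind of finding a regulator would expect an FI to detect before deployment, not after.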
That inevitably calls for strong federal and local regulation with which FIs must comply.
However, AI regulation in the US has been slow to take shape. The Blueprint for an AI Bill of Rights, released in October 2022, offers some guidelines but is non-binding.
The Office of the Comptroller of the Currency (OCC) has also set out supervisory expectations for managing AI risks, and the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework 1.0 in January this year.
Still, these measures are piecemeal and do not adequately address the unique challenges posed by AI use, particularly around algorithmic biases and model explainability.
US Senator Chuck Schumer's push for a nationwide AI regulation highlights the need for more comprehensive standards1.
However, timelines for implementation remain unclear.
In contrast, the European Union (EU) has shown great agility in addressing AI-related risks and their management, and has drafted the world’s first comprehensive regulation - the AI Act - which is close to becoming law. It takes a risk-based approach to regulating AI and emphasizes transparency, accountability, fairness, privacy, robustness, and non-discrimination.
The US could thus take a leaf out of the EU's book and incorporate similar requirements in its own AI regulations.
Meanwhile, AI use cases in finance are proliferating. They span fraud detection, credit scoring, risk management, cost reduction, customer satisfaction and other areas aimed at boosting efficiency.
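As an illustration of the fraud-detection use case, the sketch below flags anomalous transactions with scikit-learn's IsolationForest on synthetic data. The features, contamination rate, and thresholds are assumptions for demonstration, not a production fraud model.

```python
# Illustrative anomaly detection on synthetic transaction data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic features: [amount_usd, hour_of_day]
normal = np.column_stack([rng.normal(60, 20, 500),
                          rng.integers(8, 22, 500)])
fraud = np.array([[4800, 3], [5200, 4]])  # large, late-night transactions
X = np.vstack([normal, fraud])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # -1 = flagged as anomalous, 1 = normal
print(X[labels == -1])     # the injected outliers should be among these
```

Even a simple model like this raises the governance questions discussed above: how the flags are explained to customers, how false positives are handled, and who is accountable for the model's decisions.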
According to McKinsey's global research report on AI adoption2, the share of businesses that have adopted AI in at least one area is 2.5 times higher than in 2017 - a sign that AI adoption will only grow further.
A proactive approach to mitigating AI/ML model risks at US FIs
As AI/ML models continue to gain traction with US FIs, these institutions must also establish robust, AI-dedicated risk management practices of their own.
This calls for sector-specific AI model guidelines to manage potential risks. A robust risk management framework that FIs can proactively develop should include enterprise-level AI governance defining comprehensive standards, procedures, and policies for the full AI model lifecycle, from development and validation through deployment and ongoing monitoring, as sketched below.
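As a hedged sketch of what such enterprise-level governance might look like in code, the example below models a model-inventory record that enforces sign-off gates before a model advances through lifecycle stages. The stage names and required sign-offs are illustrative assumptions, not a prescribed standard.

```python
# Illustrative model-inventory record with lifecycle stage gates.
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    DEVELOPMENT = 1
    VALIDATION = 2
    DEPLOYMENT = 3
    MONITORING = 4

@dataclass
class ModelRecord:
    name: str
    owner: str
    stage: Stage = Stage.DEVELOPMENT
    approvals: set = field(default_factory=set)

    # Hypothetical gates: each stage requires a sign-off before entry.
    REQUIRED = {
        Stage.VALIDATION: "independent_validation",
        Stage.DEPLOYMENT: "model_risk_committee",
        Stage.MONITORING: "production_signoff",
    }

    def advance(self, signoff: str) -> None:
        next_stage = Stage(self.stage.value + 1)
        required = self.REQUIRED[next_stage]
        if signoff != required:
            raise PermissionError(
                f"{self.name}: '{required}' sign-off needed to enter "
                f"{next_stage.name}")
        self.approvals.add(signoff)
        self.stage = next_stage

record = ModelRecord(name="credit_scoring_v2", owner="model-risk-team")
record.advance("independent_validation")  # DEVELOPMENT -> VALIDATION
record.advance("model_risk_committee")    # VALIDATION -> DEPLOYMENT
print(record.stage, record.approvals)
```

The design point is that governance controls - independent validation, committee approval, production sign-off - become auditable preconditions rather than informal conventions, which is precisely what a supervisory review would look for.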
In conclusion, the absence of comprehensive, sector-specific AI regulations for US FIs is a pressing concern that requires immediate action. Such regulations are essential to promote responsible AI adoption, effective risk management, and alignment with ethical standards.
By actively engaging in regulatory discussions, US FIs can help shape the future of AI regulation not only in the US but also worldwide, thereby promoting a secure, compliant, and responsible AI ecosystem.
References: 1, 2. "Scoop: Schumer lays groundwork for Congress to regulate AI", April 13, 2023