How Can Financial Institutions Prepare for AI Risks?

Artificial intelligence (AI) technologies hold big promise for the financial services industry, but they also bring risks that must be addressed with the right governance approaches, according to a white paper by a group of academics and executives from the financial services and technology industries, published by Wharton AI for Business.

The white paper details the opportunities and challenges financial firms face in implementing AI strategies and how they can identify, categorize and mitigate potential risks by designing appropriate governance frameworks.

“Professionals from across the industry and academia are bullish on the potential benefits of AI when its governance and risks are managed responsibly,” said Yogesh Mudgal, AIRS founder and lead author of the white paper. The standardization of AI risk categories proposed in the paper and an AI governance framework “would go a long way to enable responsible adoption of AI in the industry,” he added.

Potential Gains from AI

Financial institutions are increasingly adopting AI “as technological barriers have fallen and its benefits and potential risks have become clearer.” The paper cited a report by the Financial Stability Board, an international body that monitors and makes recommendations about the global financial system, which highlighted four areas where AI could impact banking.

The first covers customer-facing uses that could expand access to credit and other financial services by using machine learning algorithms to assess credit quality, price insurance policies and advance financial inclusion. Tools such as AI chatbots “provide help and even financial advice to consumers, saving them time they might otherwise waste while waiting to speak with a live operator,” the paper noted.

The second area for using AI is in strengthening back-office operations, including developing advanced models for capital optimization, model risk management, stress testing and market impact analysis.

The third area relates to trading and investment strategies. The fourth covers AI advancements in compliance and risk mitigation by banks. AI solutions are already being used for fraud detection, capital optimization and portfolio management.

Identifying and Containing Risks

For AI to improve “business and societal outcomes,” its risks must be “managed responsibly,” the authors write in their paper. AIRS research is focused on self-governance of AI risks for the financial services industry, and not AI regulation as such, said Kartik Hosanagar, Wharton professor of operations, information and decisions and a co-author of the paper.

In exploring the potential risks of AI, the paper provided “a standardized practical categorization” of risks related to data; AI and machine learning attacks; testing; trust; and compliance. Robust governance frameworks must focus on definitions, policies and standards, inventory and controls. Those governance approaches must also address the potential for AI to present privacy issues and potentially discriminatory or unfair outcomes “if not implemented with appropriate care.”

In designing their AI governance mechanisms, financial institutions must begin by identifying the settings where AI cannot replace humans. “Unlike humans, AI systems lack the judgment and context for many of the environments in which they are deployed,” the paper stated. “In most cases, it is not possible to train the AI system on all possible scenarios and data.” Hurdles such as the “lack of context, judgment, and overall learning limitations” would inform approaches to risk mitigation.

Poor data quality and the potential for machine learning/AI attacks are other risks financial institutions must factor in. In data privacy attacks, an attacker could infer sensitive information from the data set used to train AI systems. The paper identified two major types of attacks on data privacy: “membership inference” and “model inversion” attacks. In a membership inference attack, an attacker could potentially determine whether a particular record or set of records was part of the data set used to train the AI system. In a model inversion attack, an attacker could potentially extract the training data directly from the model. Other attacks include “data poisoning,” which could be used to increase the error rate in AI/machine learning systems and distort learning processes and outcomes.
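
To make the membership inference idea concrete, the sketch below is a minimal illustration, not a method from the AIRS paper; the data set, model and confidence threshold are all hypothetical stand-ins. It shows the simplest form of the attack: guessing that records a model scores with unusually high confidence were probably part of its training data.

```python
# Minimal, hypothetical sketch of a confidence-threshold membership inference test.
# Not from the AIRS paper; data, model and threshold are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for a credit-scoring data set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def confidence(model, X, y):
    """Model's predicted probability for each record's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

threshold = 0.9  # attacker-chosen cutoff; in practice tuned, e.g., on shadow models
flagged_train = (confidence(model, X_train, y_train) > threshold).mean()
flagged_out = (confidence(model, X_out, y_out) > threshold).mean()

# A large gap suggests the model leaks information about which records it was trained on.
print(f"flagged as members: training {flagged_train:.2f} vs. held-out {flagged_out:.2f}")
```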

Making Sense of AI Systems

A lack of interpretability (the ability to present an AI system’s results in formats that humans can understand) and discrimination, which could result in unfairly biased outcomes, are also major risks in using AI/machine learning systems. Those risks could prove costly: “The use of an AI system which may cause potentially unfair biased outcomes may lead to regulatory non-compliance issues, potential lawsuits and reputational risk.”

Algorithms could potentially produce discriminatory outcomes because of their complexity and opacity. “Some machine learning algorithms create variable interactions and non-linear relationships that are too complex for humans to identify and review,” the paper noted.
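
As a rough illustration of how an opaque model can still be summarized for human review, the sketch below uses permutation importance, a common technique that is not named in the paper; the model and data are hypothetical stand-ins for something like a credit model.

```python
# Hypothetical sketch: permutation importance as one way to summarize an opaque
# model's behavior in human-readable terms. Not a technique cited in the AIRS paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```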

Other areas of AI risk include how accurately humans can interpret and explain AI processes and outcomes. Testing mechanisms, too, have shortcomings, as some AI/machine learning systems are “inherently dynamic and apt to change over time.” Furthermore, testing for “all scenarios, permutations and combinations” of data may not be possible, leading to gaps in coverage.

Unfamiliarity with AI technology could also give rise to trust issues with AI systems. “There is a perception, for example, that AI systems are a ‘black box’ and therefore cannot be explained,” the authors wrote. “It is difficult to thoroughly assess systems that cannot easily be understood.” In a survey AIRS conducted among its members, 40% of respondents had “an agreed definition of AI/ML,” while only a tenth had a separate AI/ML policy in place in their organizations.

The potential for discrimination is a particularly difficult risk to control. Interestingly, some recent algorithms helped “minimize class-control disparities while maintaining the system’s predictive quality,” the authors noted. “Mitigation algorithms find the ‘optimal’ system for a given level of quality and discrimination measure in order to minimize these disparities.” 
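
The sketch below is a loose, hypothetical illustration of that quality-versus-disparity trade-off, not the paper’s mitigation algorithms: it post-processes a model’s scores with group-specific decision thresholds and searches for the pair that minimizes the selection-rate gap between two groups while tracking the accuracy cost. The data, group labels and search grid are all assumptions made for the example.

```python
# Hypothetical sketch of a simple post-processing mitigation: search group-specific
# thresholds to shrink the selection-rate gap while tracking accuracy. Not the
# AIRS paper's method; data and protected attribute are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
group = rng.integers(0, 2, size=len(y))  # hypothetical protected attribute

scores = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

def evaluate(t0, t1):
    """Accuracy and selection-rate gap when group 0 uses threshold t0 and group 1 uses t1."""
    pred = np.where(group == 0, scores >= t0, scores >= t1)
    acc = (pred == y).mean()
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return acc, gap

# Grid-search thresholds; prefer the smallest disparity, then the best accuracy.
best = max(
    ((t0, t1, *evaluate(t0, t1))
     for t0 in np.linspace(0.2, 0.8, 25)
     for t1 in np.linspace(0.2, 0.8, 25)),
    key=lambda r: (-r[3], r[2]),
)
print(f"thresholds=({best[0]:.2f}, {best[1]:.2f}) accuracy={best[2]:.3f} gap={best[3]:.3f}")
```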

A Human-centric Approach

To be sure, AI cannot replace humans in all settings, especially when it comes to ensuring a fair approach. “Fair AI may require a human-centric approach,” the paper noted. “It is unlikely that an automated process could fully replace the generalized knowledge and experience of a well-trained and diverse group reviewing AI systems for potential discrimination bias. Thus, the first line of defense against discriminatory AI typically could include some degree of manual review.”

“It starts with education of users,” said Hosanagar. “We should all be aware of when algorithms are making decisions for us and about us. We should understand how this might affect the decisions being made. Beyond that, companies should incorporate some key principles when designing and deploying people-facing AI.”

Hosanagar has listed those principles in a “bill of rights”:

  • A right to a description of the data used to train the algorithms and details as to how that data was collected,
  • A right to an explanation regarding the procedures used by the algorithms expressed in terms simple enough for the average person to easily understand and interpret, and
  • Some level of control over the way algorithms work that should always include a feedback loop between the user and the algorithm.

Those principles would make it much easier for individuals to flag problematic algorithmic decisions and would give government ways to act, Hosanagar said. “We need a national algorithmic safety board that would operate much like the Federal Reserve, staffed by experts and charged with monitoring and controlling the use of algorithms by corporations and other large organizations, including the government itself.”

Building accurate AI models, creating centers of AI excellence, and oversight and monitoring with audits are critical pieces in guarding against negative outcomes. Drawing from the survey’s findings, the AIRS paper concluded that the financial services industry is in the early stages of adopting AI and would benefit from a common set of definitions and more collaboration in developing risk categorizations and taxonomies.

This piece was originally published on Knowledge@Wharton.

Kartik Hosanagar

Professor of Operations, Information and Decisions at The Wharton School @KHosanagar

Kartik Hosanagar is a professor of operations, information and decisions at The Wharton School of The University of Pennsylvania. He is the author of A Human’s Guide to Machine Intelligence.

Yogesh Mudgal

Director and Head of Emerging Technology Risk & Risk Analytics at Citi

Yogesh Mudgal is director and head of Emerging Technology Risk & Risk Analytics at Citi. The goal of the program is to enable responsible innovation. He leads the program globally, which includes identifying risks, evangelizing the risks associated with emerging technologies, influencing the building of guardrails and frameworks, and conducting risk assessments of emerging technologies.
