AI Has Alarming Power to Spread Gender Bias. Here Are Four Ways to Combat It.

“Did you mean Stephen Williams?” In 2016, a Seattle Times article found that searching for a female contact on LinkedIn — in this case, Stephanie Williams — yielded a prompt that asked if you meant to search for a similar-sounding man’s name, Stephen Williams.

In 2018, Amazon had to shut down an AI recruiting tool designed to screen applicant resumes because it was discriminating against women. A 2018 study by MIT and Microsoft researchers on gender and race discrimination in machine learning algorithms found that prominent facial recognition software has higher error rates when presented with pictures of women, and error rates are higher still when the subject is a woman with darker skin. There are numerous similar use cases, such as voice and speech recognition, where AI applications have performed worse for women.

The Dangers of Learning from Humans

AI refers to algorithms that learn to make connections between input data and outputs. For example, if you feed a company’s historical recruiting data (i.e., the applications submitted by candidates) and the corresponding outputs (i.e., the job offer decision the company made for each application) through an AI algorithm, the AI can deduce what factors (conscious or not) led to job offers. The AI can then use these learned connections and factors to make decisions when a new application is presented in the future.

Through this process, AI algorithms, in essence, learn from the historical behavior and decisions of humans, in which biases, stereotypes and assumptions are ingrained. 

For example, if a company historically hired significantly more men than women, the AI will likely learn to associate strong candidates with attributes found in male applicants’ resumes, or to reject applications containing attributes associated with female applicants. This is what happened with Amazon’s recruiting tool. AI bias is generally a manifestation of humans’ historical behavior rather than a purely technical issue.
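
As a rough illustration of this mechanism, consider the sketch below. The dataset, feature names and model are all synthetic and hypothetical (this is not Amazon’s actual system); it simply shows how a model trained on discriminatory hiring history can learn to penalize an attribute that merely proxies for gender, even when gender itself is excluded from the inputs.

```python
# Synthetic illustration: a model trained on biased hiring history
# learns to penalize a feature that merely proxies for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: years of experience, plus a proxy for gender
# (e.g., membership in a women's professional association).
experience = rng.normal(5, 2, n)
is_woman = rng.integers(0, 2, n)
womens_assoc = (is_woman & (rng.random(n) < 0.7)).astype(int)

# Historical offer decisions: driven by experience, but with a
# discriminatory penalty applied to women by past human reviewers.
score = 0.8 * experience - 2.0 * is_woman + rng.normal(0, 1, n)
offered = (score > 3.5).astype(int)

# Train only on the "legitimate" features: gender itself is excluded,
# yet the proxy feature absorbs the historical discrimination.
X = np.column_stack([experience, womens_assoc])
model = LogisticRegression().fit(X, offered)

print(dict(zip(["experience", "womens_assoc"], model.coef_[0])))
# The womens_assoc coefficient comes out strongly negative: the model
# has learned to downgrade applicants who resemble past rejected women.
```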

Why Are So Many Virtual Assistants Female?

Gender bias in AI does not only reflect the gender stereotypes and biases that exist in society (and in all humans) — it also reinforces them through design and marketing decisions. 

Today, almost all of the AI-powered virtual assistants in our lives — Alexa, Siri, Cortana — have female-sounding voices. A whole generation of children is growing up shouting commands at women in digital boxes. AI-based meeting assistants that handle tasks such as taking meeting notes also generally have female names. For example, Sonia is an “AI-based assistant that joins meetings to help take notes, summarize and handle follow ups.”

On the other hand, one of the most publicly visible supercomputers is male — Watson — and it “helps you unlock the value of your data in entirely new, profound ways.” This stark contrast is a representation of the gender bias that has been present in the corporate workplace for many decades. 

Ingraining Gender Bias in Algorithms

Humans are filled with bias and prejudice — all decisions and actions we take are subject to our imperfect view of the world we live in. As a result, bias in corporate decisions, product development and marketing is nothing new. 

So why is gender bias in AI such a great concern? There are three key reasons:

  • While we generally recognize that humans are flawed and biased, there is a commonly held assumption that machines are impartial and rational. Because of this assumption, we may not question the decision-making logic of an AI application the way we would challenge a human.
  • The logic of many AI applications is difficult to understand — “explainability” is a real challenge, especially with more complex algorithms. This can make AI bias more deeply ingrained and difficult to identify compared to the bias present in humans, which manifests itself in daily behavior and interactions that are more readily observable by other humans. 
  • A human’s bias can only spread and impact others at the speed and scale at which that human can operate, make decisions and take actions. Since AI is a digital phenomenon, its bias can spread instantly and create damage on a scale and at a speed that would be unthinkable in human terms.

As a result of these factors, gender bias in AI risks causing significant harm to customers, employees and society before the issue is even identified and resolved.

What Can Companies Do to Avoid It?

AI has already profoundly transformed society, as many of the key moments in our daily lives are enabled by it: the virtual assistants that simplify our routines, the search engines that help us find what we are looking for, the movie, product and other content recommendations that influence our consumption behavior, the way we unlock and access our devices and accounts, how we save and invest, the way we find our way home; the list goes on. AI’s influence on society and our lives will only increase as innovation and technological breakthroughs continue.

As a result, it is crucial that companies develop and deploy AI applications in a responsible manner that proactively seeks to identify and eliminate existing societal biases so they are not encoded and amplified in the digital world.  

Toward this goal, we recommend companies take action now in four categories:

  • Data testing: Put in place AI development standards, testing procedures, controls and other technical governance elements designed to make sure the data used to train AI applications are thoroughly vetted for bias before the application goes into production (a minimal sketch of such a check follows this list).
  • Output testing: Establish testing requirements and controls around the outputs produced or decisions made by the AI. Review and challenge these outputs and decisions from a bias perspective to make sure they represent fair and positive outcomes that are in line with expectations and do not adversely and unfairly impact any group of people (see the second sketch below).
  • Development teams: Make sure AI design and development teams are diverse and include female data scientists, programmers, designers and other key team members who influence how an AI application is developed. Set targets and put in place training, recruiting and rotation programs to meet them.
  • AI I&D: Similar to existing inclusion and diversity (I&D) teams, whose mandate is to maintain a psychologically safe, inclusive and diverse workplace, create AI I&D teams whose mandate is to independently review and challenge AI applications from a gender-bias perspective. These teams would supplement the development teams that conduct technical testing.
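
To make the first recommendation concrete, here is a minimal data-testing sketch. The column names (gender, offer) and thresholds are illustrative assumptions, not industry standards; the idea is simply to flag training data in which a group is underrepresented or in which historical outcome rates diverge sharply between groups.

```python
# Hypothetical pre-training check on a hiring dataset: flag groups that
# are underrepresented or whose historical offer rates diverge sharply.
import pandas as pd

def vet_training_data(df, group_col, label_col,
                      min_share=0.3, max_rate_gap=0.1):
    shares = df[group_col].value_counts(normalize=True)
    rates = df.groupby(group_col)[label_col].mean()
    issues = []
    for group, share in shares.items():
        if share < min_share:
            issues.append(f"{group}: only {share:.0%} of training rows")
    gap = rates.max() - rates.min()
    if gap > max_rate_gap:
        issues.append(f"historical offer-rate gap of {gap:.0%} across groups")
    return issues

# Example: a dataset in which past offers skewed heavily toward men.
history = pd.DataFrame({
    "gender": ["M"] * 800 + ["F"] * 200,
    "offer":  [1] * 400 + [0] * 400 + [1] * 40 + [0] * 160,
})
print(vet_training_data(history, "gender", "offer"))
# ['F: only 20% of training rows',
#  'historical offer-rate gap of 30% across groups']
```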
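
Output testing can be sketched in a similarly simple way. The example below applies the “four-fifths rule,” a common benchmark in U.S. employment practice, to a model’s decisions; the decisions and group labels here are invented for illustration.

```python
# Compare the model's selection rates across groups. A ratio of the
# lowest to the highest rate below 0.8 (the "four-fifths rule") is a
# red flag that warrants investigation.
from collections import defaultdict

def selection_rates(decisions, groups):
    totals, selected = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        selected[group] += decision
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, groups, threshold=0.8):
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values()) >= threshold

# Example: the model selects 50% of men but only about 17% of women.
decisions = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
groups    = ["M", "M", "M", "M", "F", "F", "F", "F", "F", "F"]
print(passes_four_fifths(decisions, groups))  # False -> investigate
```

Checks like these are a starting point rather than a guarantee of fairness; the appropriate metrics and thresholds should be set deliberately for each application.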

It is important to note that, even though the focus of this article is gender bias, AI applications can and often do suffer from different types of societal biases, for example, around race, ethnicity and religion. As a result, companies should expand the above efforts and measures to make sure the AI applications they put in place do not have an adverse impact on any group of people. 

The concept of “inclusive AI” should be a guiding principle for any company developing AI applications. 

Ege Gürdeniz

Principal in Oliver Wyman’s Digital, Technology and Analytics practice

Ege Gürdeniz is a principal in Oliver Wyman’s Digital, Technology and Analytics practice with a focus on financial services.

Elizabeth St-Onge

Partner at Oliver Wyman

Elizabeth St-Onge is a partner in Oliver Wyman’s Financial Services practice group. She leads the firm’s work with financial institutions in North America to understand, evaluate, measure and refine their culture and conduct.
