

How To Build Trust in the World of Automation

As the adoption of applications that leverage complex machine learning grows, so do concerns about our ability to understand and explain the decisions made and actions taken by machines. The concern is particularly pronounced in areas where a lack of understanding can have a tangible negative impact on customers. Prominent examples include the unfair treatment of loan applicants in financial services and the misdiagnosis of patients in health care.

Various terms, such as artificial intelligence explainability, transparency and interpretability, have been used by different groups and organizations to articulate this challenge. However, the fundamental issue boils down to our ability to trust the output produced by machines: to make a significant decision that impacts others based on a machine's output, we must sufficiently trust that output, which means knowing it is accurate and understanding how and why it was produced.

The Challenge

Machine learning algorithms can take many shapes and forms and vary in complexity. As a result, our ability to understand and trust the output produced by a machine depends on the specifics of the learning algorithm.

For example, a simple regression is much easier to understand and explain than a multilayer neural network. However, while simpler models are generally easier to explain, they also tend to perform worse (e.g., produce less accurate predictions).
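To make the contrast concrete, the sketch below fits both kinds of model to the same synthetic data. It is a minimal illustration, not a prescribed implementation: the use of scikit-learn, the toy dataset and the specific layer sizes are all assumptions made for the example.

```python
# Minimal sketch (assumes scikit-learn): contrasting an interpretable
# linear model with a harder-to-explain neural network on the same data.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=4, noise=10.0, random_state=0)

# A linear regression is transparent: each coefficient states directly
# how much the prediction moves per unit change in that input.
linear = LinearRegression().fit(X, y)
print("Linear coefficients:", linear.coef_.round(2))

# A multilayer network may fit better, but its thousands of weights
# interact nonlinearly, so no single number explains a prediction.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X, y)
n_weights = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("MLP parameters:", n_weights)  # far more than 4 coefficients
```

The linear model's handful of coefficients can be read off directly; the network's weights resist any such reading, which is exactly the trust gap the article describes.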

Therefore, as companies attempt to solve increasingly complex problems with increasing accuracy, they will need to use complex approaches. One such approach may include deep neural networks with myriad hidden layers and thousands or millions of parameters with nonlinear interactions, which humans cannot intuitively or immediately understand. With added complexity, trusting the machines will become more difficult.

The Machine Learning Center of Trust

The “machine-human ecosystem” comprises various groups of people with different levels and types of interactions with a machine, and each group may have a different level of need and ability to understand the output produced by a machine.


Given the large number of machines and impacted parties involved, as well as the need for a consistent methodology, the challenge of building trust and understanding in the machine-human ecosystem is best addressed centrally. We therefore recommend designating the existing machine development function (e.g., head of data science, head of AI or head of analytics) as the “machine learning center of trust,” responsible for executing the appropriate tasks and developing the artifacts needed to help impacted groups understand and trust the machine.

A department like the machine learning center of trust would have the following key responsibilities with respect to explaining the machine:

  • Testing: Running a host of quantitative tests to assess input significance and how inputs impact the output (a minimal sketch of one such test follows this list).
  • Data review: Walking through the data sourcing methodology and tracing back the data used to train the model in order to identify and remediate any potential areas of bias.
  • Documentation: Creating user-friendly documentation that synthesizes the results of quantitative tests and any other qualitative assessments that were made (e.g., contrastive explanations of the output) and providing a nontechnical and intuitive explanation of the drivers behind the model output.
  • Procedures: Defining and implementing standards and procedures to make sure all machines are developed in a transparent and consistent way and to ensure that outputs are replicable by independent third parties.
  • Monitoring and reporting: Monitoring model inputs and outputs on an ongoing or regular basis at the appropriate frequency (depending on the tier of the machine) and reporting the results to the relevant parties (e.g., management); a simple drift check is sketched after this list.
  • Training: Designing and executing targeted training programs, workshops and communications—either internal or external. For example, the rollout of a high-importance machine could be accompanied by an appropriate workshop to educate all relevant parties on the new machine.
  • Customer support: Providing customer and employee support for machine-related inquiries. For example, a customer may ask why their credit application was rejected by a robot, or a salesperson may ask why the machine recommends selling a particular product to a client.
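The testing responsibility can lean on standard model-inspection techniques. Below is a minimal sketch using permutation importance, which estimates how much each input drives the output by shuffling that input and measuring the drop in model performance. The model, data and feature names are hypothetical stand-ins, not a specific credit system.

```python
# Sketch of the "testing" responsibility: permutation importance.
# (Assumes scikit-learn; the feature names and model are hypothetical.)
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "debt_ratio", "age", "tenure"]  # illustrative inputs
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 20 times on held-out data and record the score drop;
# a large drop means the model relies heavily on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name:>12}: {mean:.3f} +/- {std:.3f}")
```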
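The monitoring responsibility can similarly be grounded in simple statistics. The sketch below computes a population stability index (PSI), a common drift measure that compares the data a model was trained on with the inputs it sees in production. The feature, the distributions and the 0.25 threshold are illustrative assumptions, not fixed standards.

```python
# Sketch of the "monitoring" responsibility: a PSI drift check in NumPy.
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index between two samples of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty buckets to a small floor to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_income = rng.normal(50_000, 10_000, 5_000)  # training distribution
live_income = rng.normal(55_000, 12_000, 5_000)   # drifted live inputs

score = psi(train_income, live_income)
# A common rule of thumb: PSI above 0.25 signals a shift worth escalating.
print(f"PSI = {score:.3f}", "-> investigate" if score > 0.25 else "-> stable")
```

In practice, a check like this would run at the frequency appropriate to the machine's tier, with results reported to management as described above.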

Conclusion

The potential benefits of successfully using machine learning at scale are numerous and well-covered by industry publications, academic papers and mainstream media alike. New use cases, applications and experiments appear daily, further adding to the excitement and optimism around what machine learning can deliver for companies and consumers.

However, the absence of trust in the machine-human ecosystem will likely inhibit the large-scale adoption of machine learning: the risk of unintended negative consequences will be too great, and organizations may not have the appetite to face the potential regulatory, legal, ethical or financial fallout. To avoid this roadblock to adoption, institutions should designate their own version of the machine learning center of trust and begin rolling out the associated guidelines and requirements now.

Chris DeBrusk

Partner, Digital Practice of Oliver Wyman

Chris has more than 20 years of strategy, operations and technology consulting experience. He works extensively at the intersection of business and technology, helping clients develop and execute digital and customer service strategies across the financial services and retail industries.

Ege Gürdeniz

Principal in Oliver Wyman’s Digital, Technology and Analytics practice

Ege Gürdeniz is a principal in Oliver Wyman’s Digital, Technology and Analytics practice with a focus on financial services.

Shri Santhanam

Partner at Oliver Wyman Labs

Shri Santhanam is a partner with Oliver Wyman Labs, where he focuses on commercial effectiveness topics like pricing, growth, marketing, and sales force effectiveness.

Til Schuermann

Partner and Co-Head of Oliver Wyman’s Risk & Public Policy Practice

Til Schuermann is a partner and co-head of Oliver Wyman’s Risk & Public Policy practice in the Americas.
