

How to Implement AI Ethics in a Company, Part 1

An interview with Reid Blackman, Founder and CEO of Virtue

AI has become a business necessity. And AI ethics is rapidly becoming a key risk requirement. No company can afford the reputational damage that comes from bias in algorithms or discriminatory behavior. 

Yet most companies have still not fully understood what AI ethics requires, according to Reid Blackman, a former philosophy and ethics professor and the author of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI.

The second part of the interview can be found here.

BLACKMAN: We love AI because it does things super fast and at scale, but that means the ethical and reputational risks of AI scale fast as well. When you’re talking about discriminatory AI, you’re not talking about how you might discriminate against this one person or that other person, you’re talking about discriminating against a huge swath of people. 

Companies are going to do what they need to do with AI to improve their bottom line, but along the way, they shouldn’t put their brand, let alone people, at risk. This is much more than just a single hiring manager being discriminatory against a single person.

BRINK: Where are most companies in their thinking about this, in your experience? 

BLACKMAN: The dominant strategy of companies at the moment, if you can call it that, is one of crossed fingers. They’re just hoping that bad things don’t happen. And when a company does do something, they focus on bias, which is just a subset of all the ethical and reputational risks.

There are multinationals that are absolutely facing scrutiny right now, being investigated by regulators and subject to fines, no question about that. But to be honest, there are also some organizations that will get away with it. Different organizations are going to have different risk appetites.


I’m an ethicist, and so I think you really ought to identify and mitigate these risks because people are getting hurt. But if you’re asking me a straightforward, empirical question about whether organizations can take the risk and maybe walk away unscathed, of course that’s possible. I wouldn’t say that constitutes being a responsible steward of your brand, but it’s possible. It’s a bet you’re making, and it seems to me a foolish bet.

Don’t Leave the Problem to Technologists

BRINK: Do you think that companies generally underestimate the risks of taking on AI because it’s a new field? 

BLACKMAN: There is an undervaluation of the risk, partly because they don’t understand what the risks are. One of the issues that we’ve got, quite frankly, is that talk about artificial intelligence, and more specifically machine learning, is intimidating to a lot of non-technologists. 

They think, “Oh, AI, AI risk, AI bias, that’s for the technical folk to figure out. That’s not what I do, I’m not a technologist, so I don’t deal with that.” The truth of the matter is that it’s the senior leaders who are ultimately responsible for the ethical and reputational behavior of the organization. 

And they undervalue the risks because they don’t believe that they can really come to understand them and because they are — again, to be perfectly frank — intellectually intimidated by phrases like machine learning and artificial intelligence.

The Three Big Risks

BRINK: You say the three big areas of risk are privacy, the black box issue, and bias. 

BLACKMAN: So those are the three big ones. And then the fourth one is just a big bucket that I would call “use case-specific ethical risks.” The reason that bias, explainability, and privacy come up again and again in talk of AI and machine learning ethics is that the probability of realizing those risks is greatly increased by the nature of the beast that is machine learning.

It’s the nature of the beast of machine learning that it recognizes very complex patterns in data, and so it may very well recognize discriminatory or biased patterns. It’s the nature of the beast of machine learning that it recognizes phenomenally complex mathematical patterns that are too complex for humans to understand, and so you have the problem of not being able to explain how the AI arrives at its outputs. And it’s the nature of the beast of machine learning that it requires a tremendous amount of data to get trained up, and so data scientists are incentivized to gather as much data as they can.
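Blackman’s point that a model “may very well recognize discriminatory or biased patterns” can be made concrete with one common bias check: comparing a model’s positive-outcome rates across demographic groups. The Python sketch below is purely illustrative; the toy data, the proxy feature, the groups, and the decision threshold are all assumptions for demonstration, not a method Blackman prescribes.

```python
# Illustrative sketch: a demographic parity check on a toy model's decisions.
# All data here is synthetic; in practice the scores would come from a real model.
import numpy as np

rng = np.random.default_rng(0)

# 1,000 hypothetical applicants in two groups. The model never sees the
# group label directly, but a correlated proxy feature leaks it into scores.
group = rng.choice(["A", "B"], size=1000)
scores = rng.normal(loc=np.where(group == "A", 0.55, 0.45), scale=0.1)

# The model's decision at a fixed (assumed) approval threshold.
approved = scores > 0.5

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"Approval rate, group A: {rate_a:.2%}")
print(f"Approval rate, group B: {rate_b:.2%}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2%}")
```

A check like this is only a starting point: which fairness metric is appropriate is itself one of the use case-specific judgments Blackman describes next.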

Then there are the use case-specific ethical risks. If, for instance, you are creating a self-driving car, the main ethical risks are not going to be bias, explainability, or privacy violations, but killing and maiming pedestrians. If you’re creating facial recognition software, then it’s less about the training data you collect potentially violating people’s privacy and more about the surveillance that you’re engaging in.

BRINK: You talk about structure and content — how does a company start to build some sort of structure to mitigate these risks?

BLACKMAN: The distinction between content and structure is really important. The content question is: What are the ethical risks we’re trying to mitigate? The structure question is: How do we identify and mitigate those risks?

A lot of organizations don’t know how to approach either question, and the main problem is that they’re not going sufficiently deep on the content side before tackling the structure side.

They’ve identified the content at an extremely high level, but it’s so general that it can’t be put into practice. So one of the things I recommend to clients is to think much more deeply about the ethical risks they’re trying to identify and mitigate, the ones that are industry-specific or specific to their organization.

Make sure that whenever you articulate what you take to be an ethical risk, you tie it to things that are just off the table for your organization. So if you value X, then that means you’ll never do Y. For example, because we value privacy, we will never sell anyone’s data to a third party. 

At some point, once you’ve gone deeper on the content side, you have to start building that structure: What does your governance look like? What are the policies? What are the KPIs for compliance with those policies? What are the procedures that your data scientists, engineers, and product owners have to engage in? Do you need an ethics committee? And so on.

Reid Blackman

Author of "Ethical Machines"

Reid Blackman, Ph.D., is the author of the book Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI (Harvard Business Review Press, July 2022) and Founder and CEO of Virtue, an AI ethical risk consultancy. He has also been a Senior Advisor to the Deloitte AI Institute, a Founding Member of Ernst & Young’s AI Advisory Board, and volunteers as the Chief Ethics Officer to the non-profit Government Blockchain Association.
