Technology

How to Implement AI Ethics in a Company, Part 2

An interview with Reid Blackman

AI has become a business necessity. And AI ethics is rapidly becoming a key risk requirement. No company can afford the reputational damage that comes from bias in algorithms or discriminatory behavior. 

In the second part of this interview, Reid Blackman, former philosophy and ethics professor and author of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, explains the major risks, starting with privacy breaches.

The first part of this interview can be found here.

BLACKMAN: Some people just think about privacy in terms of cybersecurity … so long as we make sure that only the people who should have access to the data do in fact have access, then we’re sufficiently respecting the privacy of those whose data we have. 

That’s a straightforward cybersecurity conception of privacy. Notice that the people the data is about, the data subjects, are passive in that picture, right? They are in a state of being protected by the organization, by virtue of the organization restricting access in the right kinds of ways. 

What Is Privacy for You?

Then there are others who think about privacy in terms of regulatory compliance — so long as we’re compliant with, say, GDPR and CCPA, then we are sufficiently respectful of people’s privacy. 

And then there’s a concept according to which privacy is respected so long as the data is sufficiently anonymized and cannot be de-anonymized. Again, that’s a passive conception of privacy, where data subjects’ privacy is respected on the condition that the organization anonymizes their data.

But there’s also a concept of privacy that is popular in ethics and legal circles, which is that privacy is a right that people can exercise; it’s an active capacity as opposed to a passive state. At the most extreme level of this, a data subject, an individual, would have control over who has access to their data, for how long, under what conditions, for what purposes, and so on. 

When organizations talk about respecting people’s privacy, they usually only think about it in terms of cybersecurity and regulatory compliance, and they don’t think about it from what you might call an ethical lens, which is: To what extent do we give people control over the data that’s about them?

The Black Box Problem

BRINK: What about the problem of explainability?

BLACKMAN: Explainability is the second big area of risk. Put simply, people find black box algorithms, models whose outputs can’t be explained, rather scary. I think that in some cases, a model being a black box is not particularly problematic. So for instance, if you’re just labeling pictures of your dog and it’s really accurate, but you can’t explain how [the algorithm] does it, you might not care so much because the stakes are so low. 

But in other cases, the stakes are really high: diagnosing whether someone has cancer, making a recommendation about how to treat someone for diabetes, whether or not they should get an interview or a mortgage or loan, et cetera. Then when the stakes are really high, you might think, “Okay, we need explanations here. It can’t just be a black box.”

Explainability comes in degrees. There are lots of true statements you can make about how this thing operates; how many do we need, and which are the important ones? How much explanation is enough explanation? That’s an important qualitative question to answer. 


There are technical tools, most famously LIME and SHAP, which give data scientists some understanding of how the black box is operating. But one problem is that these explanations are, of necessity, only approximations of how the black box is operating. 

So is this explanation good enough? Well, that’s going to depend upon your use case, and now you’re going to need an organizational capacity to assess, on a per-use-case basis, whether this approximate explanation is sufficient for your various purposes, e.g., talking to clients, consumers, regulators, and other stakeholders.
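To make the "approximation" point concrete, here is a minimal sketch of the core idea behind LIME, written from scratch rather than with the LIME library itself: perturb the input, query the black box, and fit a simple linear surrogate to its answers in that neighborhood. The model, data, and the `local_surrogate` helper are all hypothetical illustrations, not tools the interview endorses.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

# A stand-in "black box": a random forest trained on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def local_surrogate(model, x, n_samples=1000, scale=0.3, seed=0):
    """Fit a linear surrogate to the black box's predicted probabilities
    in a small neighborhood of x (the core idea behind LIME)."""
    rng = np.random.default_rng(seed)
    # Sample points near x and ask the black box about each of them.
    neighborhood = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    probs = model.predict_proba(neighborhood)[:, 1]
    # The linear fit is only an approximation of the black box's behavior.
    surrogate = LinearRegression().fit(neighborhood, probs)
    return surrogate.coef_  # one approximate local importance per feature

coefs = local_surrogate(black_box, X[0])
print(coefs.shape)
```

The surrogate's coefficients are exactly the kind of explanation Blackman describes: true-ish statements about the model that hold only locally and approximately, so someone still has to judge whether they are good enough for the use case at hand.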

So How Do You Explain Your AI to an End User?

Let’s take the example of the physician who is thinking about the recommendation that the AI gives about whether this person has cancer. Suppose the doctor doesn’t understand why the AI is predicting that this person has cancer. Should they ignore the AI? Should they defer to it because it’s proven so accurate in other conditions? It’s not obvious. Having an explanation, though, would help the physician in making a well-informed judgment. 

More generally, there are explanations for regulators, there are explanations for the layperson, and lots of others. And these explanations are going to have to be tailored to their audiences, given that different audiences have different knowledge sets, skill sets, purposes, and tasks; they speak different languages, have different educational levels, and so on. 

Each of these problems, not to mention the ethical use case problems, are always going to be complicated. They will never admit a simple technical fix. 

And in my view, the key question that every organization should ask itself is: What are the important qualitative, ethical, reputational, business decisions that need to be made; who within our organization should make those judgments; and where in the AI product lifecycle should they make them?

The Beginning of the Journey

BRINK: Do you think companies are going to get this right or are most going to ignore it or fudge it?

BLACKMAN: I’ve seen a large uptick in the last year of businesses taking the issue of AI ethics seriously, and it’s interesting where a lot of the attention is coming from: financial services, but also Fortune 500 companies more generally. They’re definitely taking it more and more seriously. They’re beginning to get their arms around it, but it’s still early days. 

Even the ones that are doing something, they’re still at the beginning of the journey. I think a lot of organizations won’t do anything until it’s required by regulations. Some others will pay the price and some won’t.

Reid Blackman

Author of "Ethical Machines"

Reid Blackman, Ph.D., is the author of the book Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI (Harvard Business Review Press, July 2022) and Founder and CEO of Virtue, an AI ethical risk consultancy. He has also been a Senior Advisor to the Deloitte AI Institute, a Founding Member of Ernst & Young’s AI Advisory Board, and volunteers as the Chief Ethics Officer to the non-profit Government Blockchain Association.
