Marsh & McLennan Advantage Insights

Technology

How to Stop Being Scared of AI and Learn to Love It

Opinion polls show that people are growing increasingly uneasy about artificial intelligence, and yet the technology is expanding into almost every walk of life and becoming more and more embedded in the way we live. 

So how do we bring greater transparency to how AI works and help allay people's fears? BRINK spoke with Peter Scott, founder of the Next Wave Institute and author of Artificial Intelligence and You.

SCOTT: Fear might be justified, but it's not productive. If a tiger rushes at you, you're justified in feeling fear. But if that fear leaves you paralyzed, or keeps you from doing whatever you're supposed to do when a tiger charges, then it doesn't help.

What I find is that the fear people have of an AI future is proportional to how little agency they perceive they have. They feel like they're being taken for a ride by technology companies that are driving at breakneck speed down the highway without knowing where they're going, while everyone else sits terrified in the backseat. When I talk to technologists, I try to make them aware that their natural enthusiasm can create this response.

It Looks Like Magic

Arthur C. Clarke's third law states that any sufficiently advanced technology is indistinguishable from magic. And to people looking at what AI is doing today, with things like large language models or the amazing image synthesis tools, it does look magical. But when we see AI doing one kind of magic, we assume it can do all kinds of magic, because how are we supposed to know where the boundaries are?

So we instantly jump to the idea of AI being a threat, or becoming conscious, or running amok. And all of those things are relatively unlikely. Mostly what AI is doing at the moment is exposing the fact that a lot of the things that we do cognitively as humans can be achieved by narrow artificial intelligence using its capability for finding patterns.

And the more familiar you get with it, the more you can understand that what’s going on is this pattern-finding. There’s this cognitive mismatch between what we’ve been told to expect any day now and what’s realistic. Most roboticists say that AI is not going to become generally intelligent until it understands the real world as well as we do.

AI Will Magnify Your Mistakes

BRINK: One challenge is what you call the control paradox — how do you control something if you don’t truly understand its limitations or how it works?

SCOTT: That’s being dealt with by many philosophers and leading computer scientists who are looking at how we will control AI when it finally becomes a lot more capable than it is right now. But you can also look at it in terms of the challenges for today’s executives incorporating AI in their organizations. Because AI basically allows today’s executives to make the same mistakes that they do right now, only much faster and at scale. 

You have to have an understanding of where your weaknesses and your strengths are as an organization and how you currently have bias in your proprietary data. It might be that the bias is currently only realized on a very small scale, but AI will magnify that. If you truly understand those issues, you know what to control.

BRINK: One specific area in pattern-finding is AI’s ability to match huge amounts of data about individuals from video surveillance, facial recognition, social media, etc. How should companies handle these privacy issues?

SCOTT: It’s a very pressing question, and it’s a good place for people to focus their attention because AI magnifies the ethical outlook of a business enormously. 

One example is what a company called Clearview AI did: It mined huge amounts of social media information to identify people from photographs and tell you pretty much everything that could be known about them instantly. That was great for law enforcement, but the company attracted a lot of negative publicity for it.

But these kinds of technologies are now within reach of organizations with fewer resources and less money as AI is commoditized. So there is huge potential for abuse here that a lot of regulatory bodies are not really sure what to do about. If, as a government, you are too restrictive, you run the risk of shutting down innovation within your country, and then your country's businesses lose the AI race, and no one wants that.

Companies Must Think Deeply About Their Ethics

When you focus your attention on this issue, it helps you understand who you are, as a person and as a business, in relation to ethics. AI magnifies these questions, so this thought process has to happen at a much deeper level than it did before. That's why so many businesses are now springing up to provide this ESG function for companies with respect to their AI footprint.

If you take the viewpoint that, oh, AI is going to be the magic bullet that solves our problems, you risk avoiding responsibility, and it will get you into trouble, only much faster and at a bigger scale.

BRINK: Is AI an existential threat to humanity as Stephen Hawking suggested, or are you optimistic that the right controls will be brought in?

SCOTT: It’s an interesting question, and again, it goes back to the matter of agency. AI could very well be an existential threat. But what frustrates me when this kind of question is asked is that people tend to react in one of two ways.

If they hear that AI is likely to be a big threat, they panic, curl up in a ball and do nothing. If they hear that AI is unlikely to be a threat, then they go, oh good, I don’t need to do anything, and do nothing. And we’ve seen the impact of that in another existential crisis, climate change: people vacillating between “It’s not really a problem” and “I can’t do anything about it, we’re doomed.”

Neither of these responses is productive. The optimism/pessimism axis is one that I want to travel at right angles to, along the direction of “What can I do?” I want to tell people, here’s what you can do to assure us of a better outcome. Because I can see incredibly good outcomes, and I can also see incredibly bad ones.

Focus on the People, Not the Technology

BRINK: So what should executives focus on if they don't want to miss out on the AI revolution, but also don't want to do it badly?

SCOTT: First, be aware of what AI can and can't do; understand its essence. There are many ways of doing that. One is to look at the humor being generated around AI at the moment by basically pranking it, because then you get an idea of its edges: what it can't do and how it fails.

And then it comes down to the people. The technology will take care of itself; there are umpteen people around the world working on improving it, so we don't really need to put more attention there. It's going to happen regardless.

Focus on conversations with your people: Where do we want to be as a business in a future of intelligent machines? What is it about us, about what gets us out of bed in the morning, that we want to do more of? If we could have that optimistic future, what would it look like?

Engage people throughout your enterprise in those conversations — I think that’s an enormously productive direction to go in. It’s obviously something that people could have done at any time, but there was no urgency. Now, AI is providing an impetus. Maybe that fear of what could happen if we don’t wake up and pay attention and start asking these deeper questions of ourselves actually provides the incentive to act.

Peter Scott

Author of Artificial Intelligence and You

Peter Scott has worked in technology and human development for over 30 years, in roles ranging from NASA contractor to business coach. He now helps people and businesses understand artificial intelligence and leverage their human capital to enter a future of smart machines.
