
Developing the Perfect ‘Imperfect’ AI System

Humanity needs artificial intelligence systems that act more like companions than superheroes. In the future, AI systems will need humans just as much as humans need AI.

Despite the dystopian, Hollywood-infused notion of AI systems going rogue, science is on a path toward creating an environment in which humans and intelligent systems are virtually inseparable, bound in a continual give-and-take exchange of information that forms a relationship of symbiotic autonomy. In such a world, the AI systems are able to identify what they don’t know, what they can’t do, what they don’t understand, and ask humans for help. It’s a new way of thinking about human-AI interaction. And it’s already happening.

At Carnegie Mellon University, we have developed roving "CoBots" that autonomously escort visitors around campus and ask humans for help when needed. For example, the CoBots aren't equipped with arms, so when the guest they are escorting needs to ride an elevator to reach their destination, the CoBot will ask its human companion to please press the elevator buttons, the same way humans from time to time ask one another for help. CoBots assigned to delivery tasks will likewise ask any available human for help when they need it.

This is the perfect, "imperfect" AI, acting much as humans do when we ask for help because we don't know everything and can't do it all. I believe there has to be a symbiotic autonomy in the sense that part of an AI's algorithm has to be about deciding what lies outside the boundary of its capability and, once it determines that, raising an alert: "I don't know how to do this. I don't understand what you said. I can't find what you told me to find. I don't see that object. You asked me, 'Are my keys in my office?' I can't see any keys here. Maybe they are, but I can't see anything with my sensors that has a high probability of being a set of keys."
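As a rough illustration of that capability boundary, here is a minimal sketch in Python. The Detection type, the confidence threshold, and the find_object helper are hypothetical stand-ins for illustration, not the CoBots' actual software:

```python
# Minimal sketch of "symbiotic autonomy": the agent checks whether a query
# falls inside its capability boundary and, if not, asks a human for help.
# All names here (Detection, find_object) are illustrative, not a real API.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # below this, the agent admits it isn't sure


@dataclass
class Detection:
    label: str
    confidence: float


def find_object(query: str, detections: list[Detection]) -> str:
    """Report a match, or explicitly report the limits of what the sensors saw."""
    candidates = [d for d in detections if d.label == query]
    if not candidates:
        return f"I can't see anything my sensors identify as '{query}'."
    best = max(candidates, key=lambda d: d.confidence)
    if best.confidence < CONFIDENCE_THRESHOLD:
        # The key move: surface uncertainty and ask for help instead of guessing.
        return (f"I may see a '{query}', but only with {best.confidence:.0%} "
                f"confidence. Can you take a look?")
    return f"Found '{query}' with {best.confidence:.0%} confidence."


# Example: the "are my keys in my office?" query from the text.
scene = [Detection("mug", 0.95), Detection("keys", 0.41)]
print(find_object("keys", scene))
```

Run on this toy scene, the agent reports low confidence about the keys and asks the human to look, rather than guessing.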

This symbiotic autonomy begins to lay the foundation for a future in which humans coexist with AI systems that are of service to humanity. These AI systems will include software that handles the digital world as well as machines that move through physical space, like robots and autonomous cars, helping humans make individual decisions. As time goes by, these AI systems may take on broader problems in society, such as managing traffic, making complex predictions about climate, and shaping risk mitigation strategies, all in the service of humans grappling with the big decisions of the day.

Why a Panda and Not a Porsche?

To turn over decisions, large or small, to AI systems, humans have to learn that the recommendations those systems make are trustworthy, that they are made in a human's best interest and comply with the instructions given. For that to happen, machines have to be able to explain the decisions they make, or to walk back through how they arrived at a particular recommendation, so that humans can either correct or confirm them.

For example, suppose you give an AI system the overnight task of evaluating a series of car choices you’re thinking about and then ask it to give you the best option suited to your needs. When you wake up in the morning you see the AI system has recommended you buy a panda. What went so wrong that the AI is recommending you buy a lovable animal instead of a 3,200-pound vehicle? At that point we need to be able to ask: “Why did you recommend a panda and not a Porsche?”

The ability to query AI systems is vital for building trust in AI. These systems will probably never be able to do it all, and they should tell you when they can't do something or can't figure something out. The option to query is vital for determining the best outcomes as well. Imagine a medical researcher working on a difficult case diagnosis alongside an AI system. The researcher sets the AI system loose on a particular task to scour all the world's information in hopes of finding "the answer," only to be disenchanted when the AI system seems stymied. What if we could ask the AI system, "What are you missing?" and the system replies, "If you can just tell me more about the interaction of the treatment with this chemical and this case, I may be able to further investigate a diagnosis and treatment." If science can't provide such an answer immediately, imagine how whole new fields of research might open up because such interaction with AI systems is possible.
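To make the idea concrete, here is a minimal sketch of a queryable recommender. The catalog, features, and weights are all invented for illustration and reflect no real system; the point is that every score carries its reasons and a list of what the system could not see:

```python
# Sketch of a queryable recommender: every score comes with a "why" and a
# "what am I missing" answer. Catalog, features, and weights are invented.

CATALOG = {
    "Porsche 911": {"is_vehicle": 1, "price": 120_000},
    "panda":       {"is_vehicle": 0},  # bad data: no price on record
}
WEIGHTS = {"is_vehicle": 10.0, "price": -1e-4}  # mis-scaled price weight: a bug


def score(item: dict) -> tuple[float, list[str], list[str]]:
    """Return a score plus the reasons behind it and the gaps in the data."""
    total, reasons, missing = 0.0, [], []
    for feature, weight in WEIGHTS.items():
        if feature not in item:
            missing.append(feature)  # admit what the system doesn't know
            continue
        contribution = weight * item[feature]
        total += contribution
        reasons.append(f"{feature}={item[feature]} contributed {contribution:+.1f}")
    return total, reasons, missing


# Rank the catalog, then answer "why?" and "what are you missing?" for each item.
for name, item in sorted(CATALOG.items(),
                         key=lambda kv: score(kv[1])[0], reverse=True):
    total, reasons, missing = score(item)
    print(f"{name}: score {total:.1f}")
    print("  why:", "; ".join(reasons) or "no usable features")
    if missing:
        print("  missing:", ", ".join(missing))
```

In this toy run, the panda outranks the Porsche only because a mis-scaled price weight punishes the car while the panda's unknown price is silently skipped. The query output makes both flaws visible; an unexplained "buy a panda" recommendation would hide them.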

No Magic AI Beans

None of this happens overnight. AI has been on a steady, evolving, incremental path, but we still have a long way to go. This is not something that will happen in one shot, where magically we wake up one day and “AI is here.” Yet, in many ways, AI is here, incrementally, every day. Every day, there are more algorithms and more apps and more programs that are capable of processing information intelligently.

There is a lot of research being done on understanding transfer learning: How do we build algorithms that, having learned to address one task, can also learn to do something else? We are not done with understanding AI. We don't know how to do many things. We are in the infancy of AI in terms of algorithms and techniques, methods of making generalizations, methods of providing explanations; we're still waving our hands about a lot of these things.
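One common form of transfer learning today can be sketched in a few lines of PyTorch: reuse a network pretrained on one task as the starting point for another. The 10-class target task and the dummy batch below are placeholders, not a real experiment:

```python
# Minimal transfer-learning sketch: reuse a network trained on one task
# (ImageNet classification) as the starting point for a new, smaller task.

import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained features: knowledge from the old task carries over.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer so the network can learn the new 10-class task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head is trained; the rest is transferred knowledge.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)   # stand-in for real images
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss on dummy batch: {loss.item():.3f}")
```

The design choice worth noticing is that only the small new head is trained; the frozen backbone is the part of the old task's knowledge being transferred.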

It All Comes Down to Humans

Symbiotic autonomy is an optimistic vision. Humans created computers and programming languages, and it is people who program these AI systems. Therefore, it's critically important to invest in education to ensure that people create good machines, beneficial machines, tools that are intelligent and serve the good of humanity.

Manuela Veloso

Head of Machine Learning at Carnegie Mellon University

Manuela M. Veloso is the Herbert A. Simon University Professor in the School of Computer Science at Carnegie Mellon University. She is the head of the Machine Learning Department, and her research is in artificial intelligence. Veloso is a fellow of the ACM, IEEE, AAAS, and AAAI. With her students, she works with a variety of autonomous robots, including mobile service robots and soccer robots. See www.cs.cmu.edu/~mmv for further information, including publications.
