Director, Risk Innovation Lab at Arizona State University
A gaggle of Google self-driving cars.
Take an advanced technology. Add a twist of fantasy. Stir well, and watch the action unfold.
It’s the perfect recipe for a Hollywood tech-disaster blockbuster. And clichéd as it is, it’s the scenario that we too often imagine for emerging technologies. Think superintelligent machines, lab-bred humans, the ability to redesign whole species—you get the picture.
Enter the World Economic Forum's annual list of top 10 emerging technologies. The list is aimed at raising awareness of potentially transformative technologies so that investors, businesses, regulators and others know what's coming down the pike. It's also an opportunity for us to think through what might go wrong as the technologies mature.
Admittedly, some of these technologies would stretch the imagination of the most creative of apocalyptic screenwriters—it’ll be a while, I suspect, before “Graphene Apocalypse” or “Day of the Perovskite Cell” hits the silver screen. But others show considerable potential for a summer scare-flick, including “brain-controlling” optogenetics and the mysterious-sounding “Internet of Nano Things.”
Putting Hollywood fantasies aside, though, it’s hard to predict the plausible downsides of emerging technologies. Yet this is exactly what is needed if we’re to ensure they’re developed responsibly in the long run.
Tech Problems, Tech Solutions
It’s tempting to ask what concrete harm technologies such as those in this year’s top 10 could cause, then simply figure out how to “fix” the problems. For instance, how do we ensure that “logical” self-driving cars safely share the road with less “logical” humans? Or how do we prevent bacteria that are genetically programmed to produce commercial chemicals from polluting the environment? These are risks that lend themselves to technological solutions.
But focusing on such questions can mask much more subtle dangers inherent in emerging technologies, threats that aren’t as amenable to technological fixes and that we all too easily overlook. For example, being infused with internet-connected nanosensors that reveal your most intimate biological details to the world could present social and psychological risks that can’t be solved by technology alone.
Similar concerns arise around “open artificial intelligence (AI) ecosystems,” the next step up from systems like Amazon’s Echo, Apple’s Siri and Microsoft’s Cortana. By combining “listening” devices, cloud computing and the Internet of Things, these machines increasingly pair the capacity to understand everyday conversation with the ability to act on what they hear.
This is a truly transformative technology platform. But what happens when these AI ecosystems begin to listen in on private conversations and share them with others? Or independently decide what’s best for you? These possibilities raise ethical and moral concerns that aren’t easily addressed solely by tech solutions.
Expanding Our Conception of What We Value
One way to tease out the subtler possible impacts of emerging technologies is to think of risk as a threat to something of value—an idea that’s embedded in the relatively new concept of risk innovation. What counts as “value” depends on what’s important to different individuals, communities and organizations.
Health, wealth and a sustainable environment are clearly important “things of value” in this context, as are livelihood, food, water and shelter. Threats to any of these align with more conventional approaches to risk; a health risk, for instance, can be understood as something that threatens to make you sick, and an environmental risk as something that threatens the integrity of the environment.
But we can also extend the idea of a threat to something we value to less conventional types of risk: threats to self-worth, for instance, or culture, sense of security, equity, even deeply held beliefs.
These touch on things that define us as individuals and communities and get to the heart of what gives us a sense of purpose and belonging. In this way, relevant threats might include inequity or an eroded sense of self-worth from new tech taking away your job. Or anxiety over who knows what about you and how they might use it. Or fear of becoming socially marginalized by the use of new technologies. Or even dread over sacrosanct beliefs—such as the sanctity of life or the right to free choice—being challenged by emerging technological capabilities.
Threats like these aren’t easy to capture. Yet they have a profound impact on people—and as a consequence, on how new technologies are developed and used. Thinking more broadly about risk as a threat to value is especially helpful to understanding the possible undesired consequences of tech innovation and how they might be avoided.
Risks of Missing Out on New Technologies
This approach to risk also opens the door to considering the potential risks of not developing a technology. Beyond existing value, future value is also important to most people and organizations.
For instance, autonomous vehicles could eventually prevent tens of thousands of road deaths; optogenetics—using genetic engineering and light to manipulate brain cell activity—could help cure or manage debilitating neurological diseases; and materials such as graphene could ensure more people than ever have access to cheap clean water. Not developing these technologies potentially threatens things that many people hold to be extremely valuable.
Of course, on the flip side, these technologies may also threaten what is important to some. Self-driving cars might undermine human responsibility, not to mention the enjoyment of driving. Optogenetics raises the possibility of involuntary neurological control. And graphene might be harmful to some ecosystems if released into the environment in sufficient quantities.
By considering how emerging technologies potentially interact with what we consider to be important, it becomes easier to weigh the possible downsides of developing them—or at least developing them without due consideration—against those of either impeding their development or not developing them at all.
The Greatest Risk of All
What emerges when risk is approached as a threat to value is a much richer way of thinking about how emerging technologies might affect people, communities and organizations, and how they can be developed responsibly. It’s an approach that forces us to realize that the consequences of developing new technologies are complex and touch people in different ways—not all of them for the better. It’s not necessarily a comfortable reconceptualization, but looking at risk from this new angle does pave the way for technologies that benefit many people and disadvantage few, rather than the other way around.
In reality, unlike the simplicity of Hollywood blockbusters, the risks associated with emerging technologies are rarely clear-cut and almost never straightforward. Yet they nevertheless exist. Every one of this year’s World Economic Forum top 10 emerging technologies has the potential to threaten something of value to some person or organization, whether undermining an established technology or business model, jeopardizing jobs or influencing health and well-being.
These dangers are context-specific, often intertwined with each other, sometimes conflicting and often balanced by the risks of not developing the technology. Yet understanding and addressing them is essential to realizing the long-term benefits that these technologies offer.
Here, perhaps, is the greatest risk: that either in our enthusiasm for developing these technologies or our Hollywood-inspired fears of potential consequences, we lose sight of the value of developing new technologies that make our world a better place, not just a different one.
Andrew Maynard is director of the Risk Innovation Lab and professor in the School for the Future of Innovation in Society, Arizona State University. He is a leading expert on the responsible development of emerging technologies, and is vice-chair of the World Economic Forum Global Agenda Council on Nanotechnology.