
Robot Law: Preventing Serious—and Subtle—Threats

An interview with A. Michael Froomkin

Robots aren’t people. So whom do we blame—and how do we react—when they spy on, injure or even kill us? And how could robots undermine even our most trusted professionals?

University of Miami law professor A. Michael Froomkin has been writing about the intersection of technology and the law for 20 years. He’s the founder of “We Robot,” a conference on legal and policy issues related to robotics, now in its fifth year.

He’s also co-editor of Robot Law, a new book scheduled for release this month. BRINK spoke to him on the eve of the book’s release. Some answers have been condensed.

Why are you releasing this book now?

It goes back to my experience with the Internet. When I started writing about Internet law and policies in the mid-’90s, the basic standards were already in place. The engineers made a number of choices—about privacy, security, domain names and the like—which had hideous legal and practical consequences that they could never have imagined.

Now we have signs that robots will become a widespread and transformative technology, much as the Internet was before them. But the standards are not yet baked. My co-editors and I thought, wouldn’t it be nice to get lawyers and policymakers involved in the conversation? We could design around as many of the problems as we can identify and save people lots of time and money.

How are robot capabilities starting to push the boundaries of the law?

There are a couple of distinctions we make in the community. Some people say that a robot must be a physical thing. We say a robot could be a piece of software—like an automated stock-trading program—that detects stimuli and responds in a way that affects the real world. The other distinction has to do with autonomy. Everyone agrees that a robot with no autonomy—a machine on an assembly line, for example—is still a robot. But where robots get interesting legally is when they have a degree of autonomy.

Currently, the degree of autonomy for robots varies dramatically. Drones are mostly under the direct control of a user, but it’s getting to the point where you can program a destination into a drone and it will go there and back. Others are programmed to come home autonomously if they lose a signal. Collision avoidance is a kind of autonomy. The question is, what happens when a robot in autonomous mode causes harm, like it runs into something?

If you don’t make rules now, it will be a very target-rich environment for litigators. It’s better to sort out as many issues as you can in advance so you know who’s responsible. Then the responsible parties can buy insurance and make sure their engineers design as carefully as they can. Somebody needs to own the problem.

Drones are already creating legal issues. I’ve read two recent cases involving people who shot down their neighbors’ drones when they hovered over their property. In one case, the man was indicted on felony charges; in the other, the judge dismissed the case, saying he had the right to shoot down the drone.

Those cases were in different states, and a lot of these questions are decided by criminal law, which is a complicated patchwork of state laws and even municipal rules. In any case, if you’re in an inhabited area, it’s really dangerous to fire a gun in the air. In our paper on this question, we approach it from a tort law perspective: If you suspect a drone is spying on you, can you disable it, and whom do you sue?

The law is pretty clear that your property rights go beyond the walls of your house—typically to your sidewalk. The legal term is curtilage. But drones can go vertically, and the concept of vertical curtilage is not so well-developed. It’s probably trespassing up there as well. But it can be hard to tell what a drone is doing up there—does it have a camera, is it snooping on your Wi-Fi, who owns it—and it’s hard to do something to stop it. The new FAA rules require a drone to be traceable in case it crashes, but there’s very little in the rules about what to do if it’s in flight and then goes home again.

So, if you suspect a drone is spying on you, can you disable it? Now we get into the weeds of trespass law. The fundamental rule is reasonableness. The less threatening the drone is, the less right to self-defense you have. But how can you tell if it’s threatening? One of the worst things a drone can do right now is invade your privacy. Here, the law has a problem. We haven’t really cracked the problem of how to value privacy invasion, especially suspected privacy invasion. You see it all the time when there’s a data breach. Until you’ve proven the hacker has used your information, any damages are speculative, and you have very little claim against the card company.

The right to self-defense is actually a privilege. In this instance, it’s a defense against what would otherwise be a claim against you for damaging someone’s property. The law asks you to make a split-second decision—to the best of a normal person’s ability—about the value of the drone compared to the possible harm. Only if the value of the drone is in the neighborhood of the value of the harm can you damage it. Which means the fancier the drone looks, the less right you have to damage it!

In a recent paper with my colleague Zak Colangelo (Self-Defense Against Robots and Drones), we suggest some ways to reduce uncertainties about robots, including forbidding weaponized robots, requiring lights or markings that announce their capabilities and mandating RFID chips or serial numbers that identify the robot’s owner.

What about a self-driving car? What if someone hacks a self-driving car and there’s a crash? Who’s responsible then?

That’s easy—the hacker. It’s just a question of proof.

But what if Google failed to secure the car from hackers?

If you make a system really easy to break into, it could be seen as a product defect. On the other hand, the law does not go out of its way to blame people for the bad actions of others. When someone’s car is stolen, you don’t see a ton of suits against car manufacturers saying the locks weren’t good enough. There are things you can do with encryption and signatures that greatly reduce the threat of hacking.

OK, let’s say there’s no hack, but a self-driving car still crashes. What’s the driver’s responsibility?

On the self-driving cars that are being tested right now, the carmakers want the driver paying attention—and right now, drivers aren’t doing it. They’re playing cards. That’s a little scary. And even if the driver is well-intentioned, sitting there alert, it’s hard to stay alert for a long drive if you have nothing to do. This is the problem of “unintentional inattention,” and it goes well beyond cars.

Imagine you’ve got robot mall cops. There are eight of them running around the mall and one guy in a room someplace looking at eight TV screens. He’s bored out of his mind, falling asleep. Then something happens, and it’s his fault because he’s asleep at the switch. Sometimes this is called the “human in the loop” problem. Autonomy seems dangerous, so you put a human in the loop. But that person’s job is very passive. It’s tough to do, and it may be a low-wage, low-status job. Then you blame the person who fails to monitor, even though the system sets them up for the fall. That’s not a good design, but I don’t know what the answer is.

And then there’s the related problem of the atrophy of human skills. If nearly all flights are on autopilot, won’t pilots eventually forget how to fly?

Absolutely. So far, pilots seem to be doing OK. But we’ll be having this conversation about medicine in 15 years, because we predict robots will get good at diagnosis. When they do, patients and hospitals will want the best diagnosis, the best track record, so they’ll go with the robot. Now you start deskilling the medical profession, and that would have terrible consequences.

So, if a robot doctor makes a dumb error, whom do you blame? The robot maker?

Robot doctors will be something like IBM’s Watson—they will get new data in real time, and be searching on a constantly improving database. So auditing its decisions will be next to impossible. When robots have a better track record than doctors, and a robot and a doctor disagree over the right course of action, which do you trust? How do we keep human skills sharp if humans are only used as a backup? Would you be willing to train and pay as many humans as you used to, and how would you train them?

These are really hard problems, and we’re trying to worry about them now before they become real problems.

A. Michael Froomkin

Laurie Silvers and Mitchell Rubenstein Distinguished Professor of Law at the University of Miami Law School

A. Michael Froomkin is the Laurie Silvers and Mitchell Rubenstein Distinguished Professor of Law at the University of Miami Law School. He joined the University of Miami faculty after working in the London office of the Washington, D.C., firm of Wilmer, Cutler & Pickering.

He is the founder and editor-in-chief of JOTWELL: The Journal of Things We Like (Lots), an online law journal that publishes reviews of the best new scholarship relating to the law. He is also the founder of the We Robot conference on legal and policy issues relating to robotics. He is an Affiliated Fellow of the Yale Information Society Project (Yale ISP) and serves on the Advisory Boards of the Electronic Privacy Information Center (EPIC), the Electronic Frontier Foundation (EFF), and the Future of Privacy Forum.
