If a Self-Driving Car Gets in an Accident, Who Is Liable?

Photo: A Google self-driving car on a test drive near the Computer History Museum in Mountain View, Calif., May 14, 2014.


On first contact, the idea that robots should be extended legal personhood sounds crazy.

Robots aren't people!

And that is true.

But the concept of legal personhood is less about what is or is not a flesh-and-blood person and more about who or what can be hauled into court.

And if we want to have robots do more things for us, like drive us around or deliver us things, we might need to assign them a role in the law, says lawyer John Frank Weaver, author of the book Robots Are People, Too, in a post at Slate.

"If we are dealing with robots like they are real people, the law should recognize that those interactions are like our interactions with real people," Weaver writes. "In some cases, that will require recognizing that the robots are insurable entities like real people or corporations and that a robot’s liability is self-contained."

Here's the problem: If we don't define robots as entities with certain legal rights and obligations, we will have a very difficult time using them effectively. And the tool that we have for assigning those things is legal personhood.

Right now, companies like Google, which operate self-driving cars, are in a funny place. Let's say Google were to sell a self-driving car to you. And then it got into an accident. Who should be responsible for the damages—you or Google? The algorithm that drives the car, not to mention the sensors and all the control systems, is Google's product. Even the company's own people have argued that tickets should be given not to any occupant of the car, but to Google itself.

But in a real-world situation, a self-driving car might require particular kinds of maintenance or be restricted to operating only in certain zones. So it could be that the software is not responsible, but the owner is.

Or take this difficult scenario that Weaver presented to me. Say that a robotic car swerves to avoid a deer, but in doing so, it crashes into another car. If the car did what a good human driver would have done, should Google (or whichever company made the self-driving car) be responsible for damages in this situation?

Weaver's argument is that the answer is no. The robot itself should be deemed liable. In his preferred legal world, "the car becomes a separate insurable being that potentially provides a faster insurance payout to victims while protecting the owners from frivolous lawsuits."

If this seems absurd, imagine the alternative scenario. If Google were to sell 100,000 cars, should it really be legally responsible for every ticket or accident those vehicles get in? What company would ever take on that level of legal liability?

But design consultant Brian Sherwood Jones countered Weaver. He said that the idea that an "'accident' is [a] 'robot's fault' is nonsense." And he contended that if we don't "assign liability to people," there will be "mass evasion of responsibility."

What's interesting, though, is that the liability for an autonomous car on the road today already lies with a non-human person in the form of a corporation.

Perhaps, Weaver argues, making robots separate legal entities can help us clarify their role in our lives for a situation like this. So, in the case of a self-driving car doing the right thing, but getting in an accident anyway, that car — as a legal person — would carry its own insurance. That is to say, damages would be paid by the legal entity of the car.

Another option is that companies like Google might develop business and operational models that allow them both to reduce massive risk and to take it on themselves, capturing the accompanying rewards. So, instead of selling anyone a self-driving car, Google itself would operate a fleet of ultrasafe vehicles. Certainly the design of the Google car — tiny, light, and speed-limited — seems to indicate that Google is preparing for a world where no serious accidents can occur on its watch.

Or one could argue — and this is beyond my legal expertise — that the example of robot "people" points out that the way our legal system deals with "personhood" is struggling to keep up with the complexity of modern systems, corporate or robotic. Perhaps it's not that we need to extend personhood to robots, but to reform the entire notion of personhood for non-human entities.

But as Wendy Kaminer warned on our site, limiting personhood to "natural people" would have a host of unintended consequences. That is to say, pulling personhood back may be impossible, so instead, the most sensible thing may be to keep extending it... to robots.

This article was originally published at The Atlantic.