By Cory Bilton.
Cars that can drive themselves are the wave of the future. Google has been developing a driverless car for years now. In the Washington DC area, local politicians and newspaper reporters are being entertained by autonomous car researchers from Carnegie Mellon University. The idea of removing the human from actively controlling and maneuvering the car is appealing. After all, most accidents are the result of human inattention, aggression, or error. The promise is that autonomous vehicles will greatly reduce, or maybe even eliminate, the effects of those human failings.
But this utopia is not guaranteed, and the path from all vehicles being driven by humans to having all vehicles driven by computers might be a little rough. Here are three issues where autonomous vehicles pose a significant challenge to the legal system:
Whose Fault is It?
The engineers creating autonomous vehicles promise that, in the future at least, autonomous vehicles will drive much more safely than humans. But in reality, collisions might still occur, even with a computer behind the wheel. When humans drive, the driver who caused the collision is financially liable for the damages arising from it. But a computer cannot be legally responsible for a collision.
There are three options for handling this situation. The law may choose to hold the manufacturer of the autonomous vehicle system liable for any collisions. The scale of this kind of liability for accidents could quickly bankrupt a manufacturer, so I suspect that manufacturers will ensure that the new laws are written to protect them from such liability (after all, lawmaking is a political process). The law may choose to hold the human “passenger” (the person sitting in the driver’s seat, but not operating the vehicle) liable. To many autonomous vehicle owners, though, this will seem a little unfair, since one of the promises of the autonomous vehicle is that it drives itself. The third option is that the law may attempt to do away with the notion of “fault” altogether and instead provide insurance for damage from collisions, irrespective of who caused the damage.
No matter which option is chosen, our notions of fairness and responsibility will have to change in significant ways. Deeply ingrained in us is the idea that a person should be responsible for his behavior and its foreseeable consequences. While autonomous vehicles engage in complex human behavior (e.g. driving), they aren’t operated by a human being whom we can readily hold responsible. And while we could do away with the notion of fault altogether, this may encourage egregious or fraudulent behavior (I’m sure someone will figure out how to jailbreak an autonomous vehicle).
What is the Driver’s Legal Duty?
Even if new laws provide that a human “passenger” is not responsible for collisions when the computer is controlling the car, does this person have no responsibility while riding in the car? Every autonomous vehicle demonstration I’ve read about so far involves some method for the human to quickly wrest control away from the computer in emergencies. The Carnegie Mellon University car has a big red button on the dashboard that allows the human to retake control. So even engineers acknowledge there are situations that the car cannot handle by itself. In these situations, is it the responsibility of the human “passenger” to act? If the answer is yes, this partially defeats the purpose of autonomous vehicles, because the human “passenger” will have to remain attentive for the entire ride. If the human driver has no duty to monitor or correct the car, but instead is free to read the newspaper on the ride to work, then the law may be encouraging preventable harms.
Will States Agree on the Legal Treatment of Autonomous Vehicles?
Traffic regulations, insurance laws, and vehicle licensing and registration are historically the domain of state law. Laws that permit autonomous vehicles to operate on the road may differ from one state to another. Policy determinations about fault and automobile insurance are also likely to differ from one state to the next. This will cause enormous confusion in places like the Washington, DC metropolitan area. As I frequently highlight on this blog, the differences among the laws of Virginia, Maryland, DC, and federal property can be quite significant. For autonomous vehicles to be viable in our area, the laws need to be roughly similar. For example, imagine the confusion that would be caused if Virginia holds autonomous vehicle “passengers” liable for collisions while DC stops using fault to determine liability. Under such uneven treatment, different legal duties and liabilities would apply during different portions of your morning commute. This is a recipe for confusion and a broadly held belief that the laws are unfair.
These three issues are public policy considerations that will need further study, debate, and thought. I expect that the process will grow increasingly contentious as autonomous vehicles start hitting the roads. The engineers and inventors of these systems have every reason to be cheerleaders for autonomous vehicles. But the promise of greater safety from autonomous vehicles may not come true. Lawmakers will rely heavily on engineering promises they don’t really understand in trying to craft laws that are workable and fair. It’s too early to tell what the final result will be. But given the political, social, and legal implications, there is a bumpy road ahead for autonomous vehicles.
Please review my disclaimer.