Hands-Off Driving? Examining Driverless Cars and the Law

Nov 6, 2018

By now you’ve probably heard of the latest great innovation in America: driverless cars. Driverless cars – or fully autonomous cars – are vehicles controlled entirely by onboard and off-board computers and satellite technology, making it possible to get from point A to point B, in theory at least.

The technology is almost there; Google has a fleet of self-driving cars on its campus, and municipalities have experimented with driverless cars here and there. There’s still a ways to go before the technology and infrastructure are anywhere close to full-scale adoption, but there’s another hurdle: the law.

Are driverless cars actually legal? What needs to happen to get them on the road? What are the legal implications? And what safety measures need to be put into place before autonomous cars are given the final greenlight across the nation?

Legal Status of Driverless Cars

Note that here, we’re talking about fully autonomous cars in which the driving has been taken over by a computer. We’re not talking about computer-assisted driving, which is admittedly the solution more likely to be adopted, and probably the first to be put into place before completely driverless cars arrive.

We’re early enough in the development of autonomous vehicles that there isn’t a lot of case law on the subject. The National Highway Traffic Safety Administration (NHTSA), the federal agency that regulates motor vehicle safety, doesn’t explicitly prohibit autonomous vehicles – probably because the issue hasn’t yet been considered. (The Department of Transportation did release guidance in 2016 in which it acknowledged that it doesn’t currently have the regulatory tools to completely regulate automated driving.) Furthermore, state vehicle codes don’t typically prohibit automated driving – although New York does have a law that requires the driver to keep one hand on the wheel at all times, which a computer obviously can’t do.

That doesn’t mean there aren’t laws on the books that impact or restrict driverless driving. States do have laws against following too closely, which would complicate one way driverless cars could be managed on the road: in tightly packed platoons. Additionally, the laws on the books in every state that govern driving assume there is a licensed human driver at the wheel. Each computer system operating in a given state would, in effect, have to be “licensed” by that state in order to operate a motor vehicle – and there is currently no process in place to do that.

All of this is to say that currently, driverless cars are more than likely legal, which brings us to the gap between the idea and its implementation.

Addressing the Technology Gap

Implementation of driverless cars, outside of a few isolated and controlled experimental situations, isn’t yet a reality and has some ways to go.

The software to manage automation on a mass scale has been developed in theory, but it has never been tested on a nationwide network (or even a series of localized networks). The software will have to integrate control over the entire nation via a series of increasingly local networks. For example, a small city may have its own network that will have to be integrated into a bigger metropolitan network, which will be part of a state network linked to a regional network that, in the end, will have to communicate with other regional networks. No publicly released software has been created to manage this problem at a sufficient scale.
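
To make the layered-network idea above a bit more concrete, here is a minimal, purely illustrative sketch in Python. The Network class, the layer names, and the chain-of-coordination structure are hypothetical assumptions for illustration, not a description of any real system.

```python
# Purely illustrative: a small city network reporting up through
# metropolitan, state, and regional layers, as described above.

class Network:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # the larger network this one reports into

    def chain(self):
        """Return network names from this layer up to the top level."""
        node, path = self, []
        while node is not None:
            path.append(node.name)
            node = node.parent
        return path

regional = Network("regional")
state = Network("state", parent=regional)
metro = Network("metropolitan", parent=state)
city = Network("small city", parent=metro)

print(" -> ".join(city.chain()))
# Output: small city -> metropolitan -> state -> regional
```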

The hardware for driverless car networks isn’t there yet, either. These networks will first need cars on the road that are equipped for automation, and the vast majority of private vehicles aren’t capable of automated driving. There will either need to be widespread retrofitting or a much larger market share for automation-ready vehicles. Additionally, there will likely need to be either a large network of sensors in the roadways or an array of long-distance communication devices – similar to cell towers – that communicate with each other and with satellites to coordinate traffic, and there may not be enough satellite capacity in orbit right now to handle a network of hundreds of millions of vehicles.

All of the required technology, by and large, is still in the theoretical stage, though parts of the problem are being addressed in experiments. Waymo, Google’s sister company, has opened a fleet of driverless cars to the public – for free – in Phoenix in one of the first public, mass-scale experiments with the technology. Waymo has logged over 2.5 million miles on public roads without active human assistance, by far the most of any self-driving company in the world. (What’s more, there have been only 14 collisions during that time, an average of just one collision every 178,571 miles.)
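
The collision rate cited above is straightforward arithmetic. As a quick check, using only the figures in this article:

```python
# Back-of-the-envelope check of the figures cited above.
miles_driven = 2_500_000   # miles logged without active human assistance
collisions = 14            # collisions reported over that span

miles_per_collision = miles_driven / collisions
print(f"About one collision every {miles_per_collision:,.0f} miles")
# Output: About one collision every 178,571 miles
```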

With enough motivation, though, these technological problems will be overcome. Automated driving is well within the envelope of possibility, and both public and private organizations have plenty of incentive and resources to develop the technology.

Legal Questions with Driverless Cars

There are quite a few legal questions about driverless cars that need to be worked out before we see wide-scale adoption.

Perhaps the biggest involves liability. If a driverless car gets into an accident, who’s responsible? Is it the owner of the vehicle, or the company controlling the software? And if the software is controlled not by a company but by a local, state, or federal entity, is that entity liable?

Moreover, who pays for the accident? Right now, every driver is required to have insurance. What does insurance look like when no one is driving? If the automation company – or agency – is responsible for accidents, it would essentially have to carry insurance itself. But no single insurer would be likely to underwrite such a massive risk.

There will have to be a serious reworking of auto accident liability and insurance before any self-driving scheme can take hold. Even if self-driving cars are many times safer than our current system, no system is 100 percent safe. There will be accidents, like the 2016 crash that killed the owner of a Tesla operating on Autopilot. Someone will need to be held liable and pay for damages caused by accidents when – not if – they occur.

To help solve the problem, carmakers are using advanced technology to improve safety. Technologies in use today include radar-, laser-, and camera-based collision avoidance systems; enhanced cruise control that keeps a set distance between your car and the car in front; self-parking that uses cameras and sonar; and counter-steering that helps prevent or correct lane drift.
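
As a rough illustration of the “keep a set distance” behavior behind enhanced (adaptive) cruise control, here is a tiny, hypothetical Python sketch of a proportional speed adjustment. The function name, gain, and distances are made-up assumptions and do not reflect any manufacturer’s actual system.

```python
# Hypothetical sketch: nudge speed down when the gap to the car ahead
# shrinks below a target, and back up when the gap opens.

def adjust_speed(current_speed_mph, gap_ft, target_gap_ft=150, gain=0.05):
    """Return an adjusted speed based on how far the gap is from the target."""
    error = gap_ft - target_gap_ft   # negative when following too closely
    return max(0.0, current_speed_mph + gain * error)

print(adjust_speed(65, gap_ft=100))  # too close -> 62.5 mph (ease off)
print(adjust_speed(65, gap_ft=200))  # wide gap  -> 67.5 mph (speed back up)
```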

Additionally, developers are working on multiple fail-safe systems that work even in the event of a power outage, so that traffic can continue unimpeded and vehicles don’t suddenly lose control.

Of course, concerns also include vulnerability to hacking, whether from rogue hackers, state-sponsored hacking teams, or terrorists. Accidents caused by hackers would raise legal liability questions of their own, further complicating matters.

Driverless cars are intriguing and will probably become a part of our daily lives at some point in the not-too-distant future. Industry experts suggest that the first commercial self-driving cars could be sold and on the road as early as 2020, with full-scale adoption by 2030. That doesn’t give legislators and regulators a lot of time to hammer out the nuances of the law while developers simultaneously advance the technology.

It’s likely most of us alive today will encounter automated driving at some point, on some level, during our lives. It’s surely an exciting time, even given the legal challenges that exist and will develop as the technology matures.