Bryant Walker Smith is a fellow at the Center for Internet and Society at Stanford Law School and the Center for Automotive Research at Stanford (CARS) whose current research focuses on the law and policy of self-driving vehicles. Recently, Bryant hosted a workshop at Stanford entitled “How an Autonomous Driving Bill Becomes Law,” with a panel that included Marilyn Dondero Loop (Nevada Assembly Transportation Committee), Bruce Breslow (Nevada Department of Motor Vehicles), Troy Dillard (Nevada Department of Motor Vehicles), David Estrada (Google), and David Goldwater (Nevada lobbyist). A video of the panel is below:

After the panel, I had the opportunity to pick Bryant’s brain on a host of topics related to the future of autonomy. The interview is below, with my questions in bold and Bryant’s answers following.

Autonomy requires, by its very nature, a collaboration between manufacturers, governments, and researchers. What does this mean for the future of how autonomy moves forward?

One of the big questions in this field is, "How much do these emerging technologies depend on changes to our existing infrastructure?" Infrastructure in this sense certainly means the physical -- for example, will we need new traffic signals? Special lanes? But infrastructure is broader than the mere physical. There's legal infrastructure -- will we need new laws or legal theories? And digital infrastructure -- is there a role for shared data? Or for cooperative mapping? Manufacturers, governments, and universities have potentially different resources, risk tolerances, and motivations -- and the perfect combination and coordination of all of these will probably remain elusive. It's like playing with Legos without following the instructions.

The Arizona bill that failed was a bit of a teaching moment for the self-driving world. What about that didn't work and why?

The sequence of bills has been fascinating. Google successfully lobbied for a bill in Nevada -- and it caught a lot of people in both industry and government off guard. Individual legislators in several other states then followed up with their own bills, but they didn't find the sustained outside support that Nevada's had enjoyed. In fact, a major automotive trade group opposed Arizona's bill, and a majority of legislators on the relevant committee saw legislation in this area as too soon (in terms of where the technology was) and too much (in terms of the resources that the state would have to devote to implementation). In contrast, Florida's legislation emerged in part through negotiations between this trade group and Google -- and it looks different from both Nevada's, which came before, and California's, which came after.

Photo: Stanford Center for Internet and Society

Bruce Breslow raises an important point: states that push these bills forward don't want to be responsible for regulating safety. What is the current line of thinking about who is ultimately responsible for the actions of a self-driving car?

Responsibility has technical, moral, and legal meanings. Within the law, it can refer to obligation (who must do something), criminal liability (who gets fined or jailed), and tort liability (who gets successfully sued).

Governments certainly do have the authority, and in some cases the responsibility, to advance safety. State governments regulate products and behaviors under what's known as their "police power." And federal agencies -- including the National Highway Traffic Safety Administration (NHTSA) -- implement various statutory schemes directed at safety. One of the looming questions is whether states or NHTSA will take the lead on automated vehicles. Here’s why that question is particularly interesting: Simplistically speaking, states generally regulate driver performance and NHTSA generally regulates vehicle performance, so who regulates the performance of a vehicle that drives itself?

“How?” matters as much as “who?” States, for example, regulate in several ways. Legislators pass statutes. Agencies enact regulations and make specific contextual decisions -- like whether to register a vehicle or issue a driver's license. These are the obvious ways. But lawsuits are also a form of regulation: Judges and juries decide whether a product was defective and whether a company or individual was negligent -- in essence what is reasonably safe and what isn't.

Who might end up a defendant in a lawsuit involving an automated vehicle? A key point here is that tort liability is not an either/or proposition. Automated vehicle operators, owners, data and service providers, manufacturers, suppliers, dealers, insurers, and others could all end up in court -- and some or all of them could end up on the hook for damages.

The claim that state governments will end up liable for defects in vehicle design is probably, and perhaps strategically, overblown. States generally enjoy what's called sovereign immunity, which often means that they’re not liable in court for the consequences of their policy decisions. Certainly, state agencies would prefer to avoid court altogether, even if the suits are ultimately dismissed. And at an even more basic level, everyone wants to prevent the injuries and fatalities that actually give rise to lawsuits.

Marilyn Dondero Loop brings up the difficulty of pushing a bill forward without knowing all the unintended consequences, including implications for driver's licenses. What are some of those consequences that you and the community have identified?

Great question. Given the early stage of these technologies, we’re really talking about “unknown unknowns.” My recent white paper identifies numerous laws that are in some tension with these potential technologies. For example, who is the operator of an automated vehicle? What must she do, in both the legal and technical senses? And what is she likely to actually do, regardless of what the law or the technology requires? Texting while driving is a pretty good sign of where some of these problems could be heading.

What do the next 24 months look like for autonomous driving legislation? Are there bellwether bills or proposals that you are keeping your eye on?

The biggest question is what NHTSA will do; the agency is currently ramping up its research into this area, and I’ll be interested to see how strongly it signals its future regulatory course. California’s nascent rulemaking process is also critically important. DC is getting a lot of attention for the legislation that is currently pending there. And finally, I’m keeping my eye on a few big states that might act through legislation or simply through regulation in the next year. As these activities develop, it’ll be interesting to see the postures of the major political players, particularly Google and the automakers. A lot of people will say that law is the biggest obstacle to the commercialization of automated vehicle technologies, but I disagree. The technologies themselves are the real key -- and will, I think, be the true (metaphoric) driver over the next 24 months and beyond.
