November 8, 2017

Self-Driving Cars Will Make Most Auto Safety Regulations Unnecessary

Filed under: Business, Economics, Government, Technology — Nicholas @ 04:00

Published on 6 Nov 2017

Cars are becoming computers on wheels, meaning software, not hardware, will soon be paramount for safety. This will eliminate the need for most federal vehicular safety regulations.

Federal auto safety regulations fill nearly 900 pages with standards that determine everything from rear-view mirror and steering wheel placement to the shape of vehicles and the exact placement of seats. Many of the rules don’t make sense in the coming era of self-driving cars. Autonomous vehicles don’t need rear-view mirrors, or (eventually) steering wheels. Their ideal physical form is still a work in progress.

But an even bigger rethink is in order. As motor vehicles become essentially computers on wheels, software, not hardware, will soon be paramount for safety. This will make most government regulation unnecessary, and, to the extent that it slows innovation, could even cost lives on the highway.

“Basically, the entire vehicle code can be boiled down to be safe and don’t unfairly get in the way of other people,” says Brad Templeton, an entrepreneur and software architect, who has worked as a consultant with Google on its self-driving car project. (He also blogs regularly on the topic.)

One difference between self-driving cars and traditional automobiles is that companies will have every incentive to fix safety problems immediately. With today’s cars, that hasn’t always been the case. Templeton cites General Motors’ 2014 recall of 800,000 cars with faulty ignition switches. The company had known about the safety flaw for over a decade but didn’t act on the information because recalls are so costly. The company’s inaction had dire consequences: 124 deaths were linked to the ignition defect.

But the safety problems of the future will primarily be bugs in software, not hardware, so they’ll be fixed by sending ones and zeros over the internet, with no need for customers to return hundreds of thousands of vehicles to the manufacturer. “Replacing software is free,” Templeton says, “so there’s no reason to hold back on fixing something.”

Another difference is that when hardware was all that mattered for safety, regulators could inspect a car and determine if it met safety standards. With software, scrutiny of this sort may be impossible because the leading self-driving car companies (including Waymo and Tesla) are developing their systems through a process called machine learning that “doesn’t mesh in with traditional methods of regulation,” Templeton says.

Machine learning systems develop organically, so humans have only a limited understanding of how they actually work. And that makes governments nervous. Regulations passed by the European Union last year ban so-called unknowable artificial intelligence. Templeton fears that our desire to understand and control the underlying system could lead regulators to prohibit the use of machine learning technologies.
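The regulatory difficulty here can be illustrated with a toy sketch. A hand-written safety rule can be audited by reading it; a learned model encodes the same decision in numeric weights that explain nothing on their own. All function names, thresholds, and weights below are hypothetical, invented purely for illustration:

```python
def rule_based_should_brake(distance_m, speed_mps):
    # Inspectable: a regulator can read the stopping-distance rule directly.
    stopping_distance = speed_mps * 2.0  # assumes a 2-second headway rule
    return distance_m < stopping_distance

# A "learned" policy: the same kind of decision encoded as coefficients a
# training run might emit. Nothing in the numbers says *why* the car brakes.
weights = [-0.8, 1.6]  # illustrative stand-ins for trained coefficients
bias = 0.5

def learned_should_brake(distance_m, speed_mps):
    score = weights[0] * distance_m + weights[1] * speed_mps + bias
    return score > 0.0

# Both may agree on a given input...
print(rule_based_should_brake(20.0, 15.0))  # True: 20 m < 30 m stopping distance
print(learned_should_brake(20.0, 15.0))     # True: -16.0 + 24.0 + 0.5 > 0
# ...but only the first can be certified by inspection; the second can
# only be judged by its behavior, which is Templeton's point.
```

The two functions behave alike on this input, yet only the first fits the inspect-the-mechanism model that traditional safety regulation assumes.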

“If it turns out that [machine learning systems] do a better job [on safety] but we don’t know why,” says Templeton, “we’ll be in a situation of deliberately deploying the thing that’s worse because we feel a little more comfortable that we understand it.”

For full text and links, go to: https://reason.com/archives/2017/11/06/self-driving-autonomous-regulation

Shot, written, edited, and produced by Jim Epstein. Filmed at the 2017 Automated Vehicles Symposium.
