So says a headline on this month's cover of Wired magazine. As the article notes, Google's self-driving cars have already traversed 200,000 miles:
As the news of Google’s self-driving car has spread, company Kremlinologists and auto industry wags alike have debated—with varying shades of anticipation or dread—whether it is a fun experiment or a serious challenge to the auto industry. Was the company just looking for an oxygen hit of innovation via its stockholder-straining techno-fantasist skunkworks, Google X? Is it just expressing the aw-shucks altruism the company is known for? The fanboy enthusiasms of higher management? Or does Google, as one report had it, have designs on building its own cars?
When I ask these questions around Google, the answers I get are polite, if bordering on exasperated. “Our clear statement,” [explains Google's Chris Urmson], “is we want to improve people’s lives by transforming mobility.” I get the sense that asking about business models is a bit rude. “Like a lot of things at Google, we want to figure out something big and important,” he continues, “and we’ll figure out the rest later.”
If Urmson’s team is taking a Googly approach to the auto industry, it’s also taking a Googly approach to driving. The company, Urmson notes, “is really all about processing big data,” and the road is just another data set to be mined. So Google isn’t teaching its computers how to drive. It’s collecting data—its cars have driven 200,000 miles in total, recording everything they see—and letting its algorithms figure out the rules on their own.
“If you read the DMV handbook on four-way stop signs, it’s easy,” Urmson says. “Whoever gets there first gets to go. If there are simultaneous arrivals, priority goes to the vehicle on the right.” But it rarely works that way. “People optimize stop signs,” he says. A polite robot vehicle, playing by the official driving rules, could be lost in a sea of aggressive humans. Instead, it needs to learn how people really drive. “This is the data-driven viewpoint,” says Sebastian Thrun, the Stanford roboticist who heads the self-driving project. “The data can make better rules. It’s very deep in the roots of almost everything Google does.” Urmson describes it as an attempt to “hack driving.”
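The handbook rule Urmson quotes is simple enough to state in a few lines of code. Here is a minimal sketch of that official rule (a hypothetical illustration only, not Google's actual logic; the direction labels, `Vehicle` structure, and the `eps` window used to treat near-simultaneous arrivals as ties are all assumptions):

```python
from dataclasses import dataclass

DIRECTIONS = ["N", "E", "S", "W"]  # approach directions in clockwise order

@dataclass
class Vehicle:
    approach: str   # direction the vehicle arrives from
    arrival: float  # time (seconds) it reached the stop line

def right_of(approach):
    """Approach direction immediately to the right of a car arriving from `approach`.

    A car arriving from the north faces south, so the car on its right
    approaches from the west: one step counterclockwise in DIRECTIONS.
    """
    i = DIRECTIONS.index(approach)
    return DIRECTIONS[(i - 1) % 4]

def next_to_go(vehicles, eps=0.5):
    """Pick which waiting vehicle proceeds first under the handbook rule."""
    earliest = min(v.arrival for v in vehicles)
    # Arrivals within `eps` seconds of the earliest count as simultaneous.
    tied = [v for v in vehicles if v.arrival - earliest <= eps]
    if len(tied) == 1:
        return tied[0]  # whoever gets there first gets to go
    # Simultaneous arrivals: priority goes to the vehicle on the right,
    # i.e. pick a tied vehicle with no tied vehicle approaching from its right.
    tied_dirs = {v.approach for v in tied}
    for v in tied:
        if right_of(v.approach) not in tied_dirs:
            return v
    # All four directions tied at once: the handbook rule is ambiguous.
    return tied[0]
```

For example, a car from the north arriving at the same moment as a car from the west yields to the west car (the one on its right); a car that clearly arrives first goes first regardless of position. The article's point, of course, is that real drivers "optimize" stop signs, so a rule this clean is exactly what the data-driven approach has to move beyond.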
As the article notes, for years our cars have had self-driving features, like cruise control and anti-lock brakes. Cars of the future will contain more and more self-driving features, but human drivers will still retain much control:
As I drive the Mercedes through Palo Alto, I am reminded of a horseback outing in South America a few weeks earlier. A novice, I was put on an experienced and quite tame horse. It knew the route we were on by heart, accelerating when it could smell the comfort of its own stable, and I had to make only occasional corrections. “Driving an automated car is very much like riding a horse,” says Donald Norman, author of The Design of Future Things and a consultant for BMW, among other automakers. “You can ride a horse with tight reins or loose reins. Loose reins means the horse is in control—but even when you’re in control, the horse is still doing the low-level guidance, stepping safely to avoid holes and obstacles.”
While the technology of self-driving cars races forward, the most formidable challenges may be legal and political ones:
“There are places where technology outpaces the law,” Google’s Levandowski says. “This is one area where it outpaces it by a lot.” In California, there is no law concerning self-driving cars. In 2011, Google helped Nevada draft the first legislation to allow autonomous cars to be driven legally on state highways. It’s the only time a motor vehicle department has had to deal with the issue.
Beyond bureaucracy, there are deeper legal questions. Ryan Calo, director for privacy and robotics at Stanford Law School’s Center for Internet and Society, which is studying the legal framework for quasi-autonomous vehicles, notes how active the liability landscape already is when it comes to cars’ safety features. “People sue over all kinds of stuff. People sue because some feature that was supposed to protect them didn’t. People sue because their car didn’t have a blind-spot warning when other cars at the same price point did.” Imagine the complexity we’ll have when cars drive themselves. Who will be responsible for their operation—the car companies or the drivers? What happens, for example, when a highway patrol officer pulls over a self-driving car? Who gets the ticket?
As a RAND report observed, even as automakers create more semiautonomous technologies, they “will want to preserve the social norm that crashes are primarily the moral and legal responsibility of the driver, both to minimize their own liability and to ensure safety.” Consider what happened to the remote-parking assistant BMW developed a few years ago for getting into narrow spots. “You push a button and the car goes in and parks itself” while the driver waits outside, says Donald Norman, the Design of Future Things author. When he asked BMW executives why he didn’t see it on the market, Norman says he was told, “The legal team wouldn’t let them go forward.”