July 28, 2014 | 05:44 AM Pacific

The Risky Human In The Algorithmic Car


MIT researcher Karl Iagnemma believes one of the biggest risks of self-driving cars is the human-machine interface problem.

By Intel Free Press

Depending on where you get your news, the phrase "autonomous vehicle" can mean anything from a benign self-driving Prius to a sentient and rampaging four-wheeled Terminator. But the latter image may be rooted more in science fiction than in science fact.

As with any nascent technology, the risks and benefits of autonomous vehicles are at once poorly understood and overblown. To be sure, there are real dangers and challenges presented by self-driving cars, according to experts. But these concerns are not the stuff of action thrillers.

According to Karl Iagnemma, a principal research scientist in MIT's Robotic Mobility Group, the benefits of the technology are very real, but limiting its potential to that of a robotic chauffeur is a failure of imagination.

The Robotic Mobility Group at MIT is a think tank of sorts for research into autonomous vehicles. As the group's head, Iagnemma is uniquely positioned to separate the real debates over driverless cars from the hyperbole.

In May, for example, Wired ran an opinion piece arguing that the algorithms being developed to mitigate injury and damage in the event of an autonomous car accident represent something sinister and insidious. What happens, the author asked, when a crash is inevitable and the car must decide between swerving into either a light sedan or a heavy SUV?

"Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems," the writer argues. "And this takes the robot-car industry down legally and morally dangerous paths... [Autonomous cars] can make split-second choices to optimize crashes-that is, to minimize harm. But software needs to be programmed, and it is unclear how to do that for the hard cases."

Iagnemma said this "targeting" discussion is off point. "In a practical sense, it's probably difficult, if not impossible, to code to every possible contingency," he said. "It's not inconceivable to program something like this in a crude way, but that's not on the technical horizon. That's not the realistic goal of anyone who's serious."
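To make the Wired scenario concrete, here is a deliberately toy sketch of what such "crash optimization" would look like if reduced to code: pick the maneuver with the lowest weighted harm estimate. The maneuvers, risk numbers, and harm weighting are all invented for illustration; as Iagnemma notes, nothing like this is on any serious developer's technical horizon.

```python
# Toy illustration only: the kind of "crash optimization" the Wired piece
# imagines, reduced to choosing the option with the lowest estimated harm.
# All values and the weighting scheme are hypothetical.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    injury_risk: float   # estimated probability of serious injury, 0..1
    damage_cost: float   # estimated property damage in dollars

def least_harm(options: list[Maneuver], injury_weight: float = 1e6) -> Maneuver:
    """Pick the maneuver minimizing a weighted harm estimate.

    The weight itself encodes the ethical judgment the article calls
    "unclear how to do that for the hard cases."
    """
    return min(options, key=lambda m: injury_weight * m.injury_risk + m.damage_cost)

options = [
    Maneuver("swerve toward light sedan", injury_risk=0.30, damage_cost=8_000),
    Maneuver("swerve toward heavy SUV", injury_risk=0.20, damage_cost=15_000),
]
print(least_harm(options).name)  # -> "swerve toward heavy SUV"
```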

The real dangers posed by robotic vehicles, Iagnemma said, have a lot more to do with the human element than the machine.

Here's a real question that bedevils designers of autonomous vehicles: What happens when an autonomous vehicle needs to cede control back to the driver? This sounds simple enough. But how does the vehicle determine when a human occupant is "ready" to take control?

"The primary challenge is the time that it requires for an operator to regain situational awareness. This is a significant unknown," Iagnemma said. "Ensuring safety and failure conditions are the kinds of things aircraft and spacecraft designers have had to worry about. In the automotive world, you've always been able to fall back on the human operator."

One approach to this problem has been to design autonomous systems so that the driver never needs to take control at all.

"Instead of ever requiring human intervention, you would have an architecture designed so you'd always have the possibility to come to a safe stop," he said.

According to Iagnemma, most of the components needed to move autonomous vehicles out of the beta phase already exist and simply need to be improved. Indeed, Google's fleet of robotic cars has already clocked some 300,000 miles without a single ticket. There was an accident with one of Google's driverless cars in 2011, but the company said the car was being driven manually at the time.

Iagnemma said that while perception algorithms still need to become more robust, the challenge now lies chiefly in bringing down the cost of hardware like LiDAR sensors by several orders of magnitude. This will enable cars to improve their sensing capabilities and overall performance without becoming significantly more expensive.

With Google leading the way, other technology companies are accelerating into the fast lane on autonomous driving, trying to increase the pace of innovation while driving down costs. Intel, whose processors powered Stanford's winning entry in the 2005 DARPA Grand Challenge and are rumored to power Google's self-driving cars, is pushing an open-platform approach with an eye toward interoperability rather than competing sets of proprietary standards.

Intel is also investing in companies like ZMP Inc., a Japanese company developing robotic platforms, sensor systems, connected car informatics and other autonomous technologies.

Developers are working toward an autonomous car that requires zero human intervention, what the industry calls a Level 4 vehicle. (Google's driverless cars are considered Level 3, meaning a driver must be ready to take the wheel if needed.)
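For reference, the levels the article invokes come from NHTSA's 2013 classification, paraphrased here as a simple enum; the comments, not the code, carry the content.

```python
# NHTSA's 2013 vehicle-automation levels, paraphrased.

from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0         # driver does everything
    FUNCTION_SPECIFIC = 1     # one automated function, e.g. cruise control
    COMBINED_FUNCTION = 2     # two or more functions operate together
    LIMITED_SELF_DRIVING = 3  # car drives itself; driver must stay ready to take over
    FULL_SELF_DRIVING = 4     # no human intervention required
```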

"The good news is that the physics of the sensors doesn't require them to be expensive," he said. "We're at the point now where we have reasonably good solutions to all the building blocks, but there's sort of a fallacy in expectations that once a Level 4 vehicle is deployed, that technology remains fixed."

Once they prove out a Level 4 automobile, though, what can we do with it? Iagnemma believes the answer is a lot more intriguing than being able to nap undisturbed while we're shuttled between points A and B.

"Most people drive a car for one hour total each day. The rest of the time, it's parked taking up space. I'd rather own 0.3 of a car," he said. "Autonomous vehicles will enable fundamentally new modes of transportation. Car sharing. I think a lot of people are going to want that type of option either as an additional car or as the baseline form of transportation."