Google has just posted an update on its self-driving car program, which we've been watching closely for the past several years. The cars have surpassed 700,000 autonomous accident-free miles (around 1.13 million kilometers), and they're learning how to safely navigate through the complex urban jungle of city streets. Soon enough, they'll be better at it than we are. Much better.
As Google puts it in the update: "We’ve improved our software so it can detect hundreds of distinct objects simultaneously—pedestrians, buses, a stop sign held up by a crossing guard, or a cyclist making gestures that indicate a possible turn. A self-driving vehicle can pay attention to all of these things in a way that a human physically can’t—and it never gets tired or distracted."
This is why we're so excited about a future full of autonomous cars. Yes, driving sucks, especially in traffic, and we'd all love to just take a nap while our cars autonomously take us wherever we want to go. But the most important fact is that humans are just terrible at driving. We get tired and distracted, but that's just scratching the surface. We're terrible at dealing with unexpected situations, our reaction times are abysmally slow, and we generally have zero experience with active accident avoidance if it involves anything besides stomping on the brakes and swerving wildly, which sometimes only make things worse.
An autonomous car, on the other hand, is capable of ingesting massive amounts of data in a very short amount of time, exploring multiple scenarios, and perhaps even running simulations before it makes a decision designed to be as safe as possible. And that decision might (eventually) be one that only the most skilled human driver would be comfortable with, because the car will know how to safely drive itself up to (but not beyond) its own physical limitations. This is a concept that Stanford University was exploring before most of that team moved over to Google's car program along with Sebastian Thrun.
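To make that a little more concrete, here's a deliberately toy sketch of the idea, my own illustration rather than anything resembling Google's actual planner: the car scores a few candidate maneuvers by fast-forwarding a short simulation of each one, then picks whichever leaves the most room between itself and everything it's tracking, throwing out any option that exceeds its physical limits.

```python
# A toy illustration (my sketch, not Google's software) of "explore several
# scenarios, then pick the safest one the car can physically execute".
# Everything here is 2D, constant-velocity, and wildly simplified.

import math

MAX_DECEL = 8.0       # m/s^2 -- assumed braking limit of the car
DT, STEPS = 0.1, 30   # simulate 3 seconds ahead in 0.1 s ticks

def rollout(car_speed, decel, steer_rate, obstacles):
    """Predict the closest approach to any obstacle for one candidate maneuver."""
    x = y = heading = 0.0
    speed = car_speed
    obs = [list(o) for o in obstacles]  # each obstacle: (x, y, vx, vy)
    closest = float("inf")
    for _ in range(STEPS):
        speed = max(0.0, speed - decel * DT)
        heading += steer_rate * DT
        x += speed * math.cos(heading) * DT
        y += speed * math.sin(heading) * DT
        for o in obs:
            o[0] += o[2] * DT
            o[1] += o[3] * DT
            closest = min(closest, math.hypot(o[0] - x, o[1] - y))
    return closest

def choose_maneuver(car_speed, obstacles):
    """Score a handful of candidate maneuvers and keep the one with the most clearance."""
    candidates = [
        ("hold course",   0.0,        0.0),
        ("brake hard",    MAX_DECEL,  0.0),
        ("brake + left",  4.0,        0.3),
        ("brake + right", 4.0,       -0.3),
    ]
    best = max(candidates, key=lambda c: rollout(car_speed, c[1], c[2], obstacles))
    return best[0]

# Example: a cyclist 20 m ahead, drifting toward our lane at 1 m/s.
print(choose_maneuver(15.0, [(20.0, 0.5, 0.0, -1.0)]))
```

The real thing obviously chews through vastly more data with far better models, but the decision structure is the part worth noticing: simulate, score, choose.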
Now, I may be making something out of nothing here, but if we compare the car in the image that Google provided with its latest blog post to an earlier Google car from 2012 (or even the Google car in the video), you'll notice that there's an extra piece of hardware mounted directly underneath the Velodyne LIDAR sensor: a polygonal black box (see close-up, right). I have no idea what's in that box, but were I to wildly speculate, my guess would be some sort of camera system with a 360-degree field of view.
The Velodyne LIDAR is great at detecting obstacles, but what Google is working on now is teaching their cars to understand what's going on in their environment, and for that, you need vision. The cars have always had cameras up front to look for road signs and traffic lights, but detecting something like a cyclist making a hand signal as they blow past you from behind seems like it would require fairly robust vision hardware along with some fast and powerful image analysis software.
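For a rough sense of what that per-frame analysis involves, here's a minimal sketch using OpenCV's stock pedestrian detector. It's nothing like what reading a cyclist's hand signal would actually take, but the basic loop is the same: grab a frame, run a detector over it, and hand whatever it finds to the planner, dozens of times per second.

```python
# Not Google's system -- just a minimal OpenCV loop showing frame-by-frame
# detection with the library's built-in HOG pedestrian detector. Real
# hand-signal recognition would need far more sophisticated models and a
# 360-degree view, but the grab/detect/report structure is the same.

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # any video source; 0 = default camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 360))  # smaller frames scan faster
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```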
Or, it could be radar. Or more lasers. We're not sure, except to say that it's new(ish), and that vision is presumably becoming more important for Google as they ask their cars to deal with more complex situations with more variables.