I used to live in the Bay Area, where I dreaded driving on Highway 101. Beyond the incessant congestion, my real peeve was the idiosyncrasies of other drivers.
The classic 101 maneuver is a last-second, 70-mph three-lane fly-over into an exit. And then, there are the drivers who pay more attention to their mobile phones than to the traffic all around them. Once, I spotted a 101-bound driver eating a bowl of cereal, with both hands, apparently steering with his knees.
Fast forward to 2015.
Car OEMs and Tier Ones are increasingly pushing new automobiles equipped with advanced driver assistance system (ADAS) features. Chip vendors such as Nvidia are pitching “deep learning” as the Holy Grail of automotive autonomy. Robotic cars are out there on 101 learning, identifying objects that pop up on the road in front of them.
The year 2015, in short, has made many of us almost believe that the high-tech building blocks necessary for autonomous cars are coming together for automotive safety. I’m not disputing the safety possibilities.
I mean, anything to mitigate the clear and present danger of that idiot with his bowl of Cap’n Crunch.
But here’s the thing.
Although our cars may have learned much more about objects on streets, they still appear to be behind the curve on the bad behavior of human drivers.
In the era of big data, nobody seems to have collected data big enough to predict what that teenage driver whose face reflects the eerie blue glow of her iPhone is going to do next.
As I noted in my last blog (“Automotive Fatality: Is Connectivity Killing Us?”), the number of deaths from traffic accidents in the United States jumped 8.1 percent in the first half of 2015, compared with the same period a year earlier. Most alarming in the National Highway Traffic Safety Administration’s (NHTSA’s) announcement is the fact that distracted driving accounted for 10 percent of all crash fatalities, killing 3,179 people in 2014. In a press briefing last month, Mark Rosekind, who heads the NHTSA, said, “The increase in smartphones in our hands is so significant, there's no question that has to play some role. But we don't have enough information yet to determine how big a role.”
Further, in reality, no consumer will benefit from these great ADAS/semi-autonomous features unless we already own a Tesla, which can regularly add the latest autonomous car functions via over-the-air software upgrades. The rest of us are stuck with old cars that remain as stubbornly dumb as doornails. Car OEMs are, of course, counting on us to eventually buy new cars that are smarter than we are.
Deep learning outside and inside cars
This is where Nauto, a Palo Alto, Calif.-based automotive startup, comes in. Scheduled to unveil its technology and products at CES 2016 next month, Nauto is deploying cameras coupled with smart computer-vision algorithms to retrofit cars with driver-assistance technology that sets the stage for full autonomy.
In a telephone interview with EE Times, CEO Stefan Heck explained that Nauto’s computer vision system, a small box designed to attach between the rear-view mirror and the windshield, not only learns what’s going on out on the streets but also “learns the human drivers’ behavior.” Heck said, “We think we are breaking new ground.”
Further, “with this little box, we can learn more about what causes accidents,” he said. The Nauto system is set up to learn not just from car crashes but from “near misses” that usually go unreported.
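To make the “near miss” idea concrete, here is a deliberately simplified sketch of one common way such an event could be flagged: estimating time-to-collision (TTC) from a vehicle’s distance to the car ahead and the closing speed. Nauto has not published its algorithms; the function names, the per-frame inputs, and the 2-second TTC threshold below are illustrative assumptions, not details of the actual product.

```python
# Hypothetical near-miss flagging from per-frame range estimates.
# The 2.0 s time-to-collision threshold is an illustrative assumption.

def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until impact if closing speed holds; None if the gap is opening."""
    if closing_speed_mps <= 0:
        return None
    return distance_m / closing_speed_mps

def flag_near_misses(frames, ttc_threshold_s=2.0):
    """frames: list of (distance_m, closing_speed_mps) samples over time.
    Returns indices of frames where TTC drops below the threshold."""
    events = []
    for i, (dist, speed) in enumerate(frames):
        ttc = time_to_collision(dist, speed)
        if ttc is not None and ttc < ttc_threshold_s:
            events.append(i)
    return events

# Example: a braking lead car closes the gap, then the gap opens again.
samples = [(30.0, 5.0), (15.0, 10.0), (8.0, 9.0), (25.0, -2.0)]
print(flag_near_misses(samples))  # -> [1, 2]
```

The point of logging such frames, rather than only crashes, is exactly what Heck describes: near misses are far more frequent than collisions, so they yield much more training data about what precedes an accident.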
Heck, an expert in the clean-tech, energy and automotive industries, once ran the global semiconductor industry practice at McKinsey and Company and is a Stanford University consulting professor. With a Ph.D. in deep learning/neural networks, he understands what the robotic car can do in the future.
To read the rest of this article, visit EBN sister site EE Times.