MADISON, Wis.—Movidius, an ultra-low-power computer vision-processor startup best known for its partnership with Google on Project Tango, has extended its relationship with Google. This time, the collaboration is focused on neural network technology, with plans to accelerate the adoption of deep learning in mobile devices.
In an interview with EE Times, Remi El-Ouazzane, CEO of Movidius, called the agreement “a new chapter” in the partnership.
In Project Tango, Google used a Movidius chip in a platform that uses computer vision for positioning and motion tracking. The project’s mission was to let app developers create user experiences built on indoor navigation, 3D mapping, physical-space measurement, augmented reality, and recognition of known environments.
The new agreement with Google is all about machine learning. It is intended to bring highly capable models, trained via deep learning in Google’s data centers, over to mobile and wearable devices.
El-Ouazzane said Google will purchase Movidius computer vision SoCs and license the entire Movidius software development environment, including tools and libraries.
Google will deploy its advanced neural computation engine on a Movidius computer vision platform.
Movidius’ vision processor will then “detect, identify, classify and recognize objects, and generate highly accurate data, even when objects are in occlusion,” El-Ouazzane explained. “All of this is done locally without Internet connection,” he added.
What's in it for Google?
The public endorsement from Google will boost the startup, said Jeff Bier, a founder of the Embedded Vision Alliance. The announcement is also “interesting,” he added, because it shows “Google has a serious interest in [the use of deep learning for] mobile and embedded devices.” It demonstrates that Google’s commercial interest in artificial neural networks isn’t limited to their use in data centers.
Different teams within Google, including its machine intelligence group (Seattle), are involved in this agreement with Movidius. Google will be developing commercial applications for deep learning. Movidius is “likely to get more input from Google, and get opportunities—over time—to optimize its SoC for Google’s evolving software,” Bier speculated.
Movidius’ agreement with Google is unique. “Not everyone has access to Google’s well-trained neural networks,” said El-Ouazzane, let alone the opportunity to collaborate on computer vision with the world’s most prominent developer of machine intelligence.
Asked if the work with Google involves the development of embedded vision chips for autonomous cars (i.e., Google Cars), Movidius CEO El-Ouazzane said, “Google intends to launch a series of new products [based on the technology]. I can’t speak on their behalf. But the underlying technology, high-quality, ultra-low-power embedded vision computing, is very similar” whether applied to cars or mobile devices.
For now, however, Movidius’ priority is getting its chip into mobile and wearable devices. El-Ouazzane said, “Our [embedded vision SoCs] are to the IoT space as Mobileye’s chips are to the automotive market.” Mobileye today has the lion’s share of the vision chip market for advanced driver assistance systems (ADAS).
To read the rest of this article, visit EBN sister site EE Times.