How a machine distinguishes a man from a tree seems like a pretty mundane, theoretical question. How a machine differentiates an enemy combatant from a friend, or from any other object on the battlefield, however, is definitely the sort of question the Defense Advanced Research Projects Agency (DARPA) should be asking. Similarly, making sure a car identifies a pedestrian on the street — under any and all weather conditions — is a significant challenge in the development of advanced driver assistance systems.
Despite recent advances in embedded vision, vision algorithms remain an experimental field studded with stubborn problems. In recent years, DARPA has focused on weeding through the growing number of vision algorithms and easing the time-consuming task of generating the test content needed to build better ones.
Under a program called Visual Media Reasoning (VMR), DARPA contracted two private companies — SRI International and Next Century Corporation — and completed the development of two general-purpose vision system development tools. One offers the automated evaluation of vision algorithm performance over a massive parameter space. The other enables generation of synthetic image content for use in training and testing detection and recognition algorithms.
Mike Geertsen, DARPA program manager, will present an overview of the VMR program and these enabling tools at the Embedded Vision Summit scheduled for Wednesday, Oct. 2, in Boston.
The most remarkable thing about the latest development is that “these tools will be released as an adjunct to the OpenCV open-source computer vision software library in late 2013 or early 2014,” Jeff Bier, a founder of the Embedded Vision Alliance, told us. “You rarely hear 'DARPA' and 'open-source' in the same sentence.”
Although there are many shades of open-source, Bier said that DARPA's idea is to make these general-purpose tools publicly available in source code form.
DARPA's open-source move underscores the reality that computer vision is still a developing field, with lots of people trying different ideas and technologies.
If you are in search of the best vision algorithms, your challenge is not only assessing “a whole bunch of different algorithms,” but also exploring each algorithm's “10 different knobs” to turn, Bier explained. In other words, “You could end up running through 100 algorithms with 1,000 parameter settings.”
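To make the combinatorics concrete, here is a minimal sketch of how quickly that parameter space grows. The algorithm names and parameter grids below are purely illustrative assumptions, not part of DARPA's actual tools:

```python
from itertools import product

# Hypothetical algorithms, each with its own tunable "knobs".
# Names and value grids are illustrative only.
algorithms = {
    "edge_detector_a": {"low_threshold": [50, 100, 150],
                        "high_threshold": [150, 200, 250]},
    "blob_detector_b": {"min_area": [10, 50, 100],
                        "circularity": [0.5, 0.7, 0.9]},
}

def parameter_settings(grid):
    """Yield every combination of parameter values in the grid."""
    names = sorted(grid)
    for values in product(*(grid[n] for n in names)):
        yield dict(zip(names, values))

# Enumerate every (algorithm, setting) pair an evaluator would have to score.
runs = [(algo, setting)
        for algo, grid in algorithms.items()
        for setting in parameter_settings(grid)]

print(len(runs))  # 2 algorithms x 9 settings each = 18 runs
```

With just two algorithms and two three-valued knobs apiece, there are already 18 distinct runs to score; at the scale Bier describes, automated characterization becomes the only practical option.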
True, computer vision is no longer an academic theory. There has been significant growth in vision algorithms, application developers, and their communities. But that very growth has resulted in “a cluttered landscape of algorithms,” according to Bier.
Under DARPA's VMR program, SRI has developed automated performance characterization tools, providing an efficient means of assessing the performance of different algorithms across imaging domains. The tools identify how well an algorithm will perform for a given image at a chosen parameter setting, and which parameter settings work best for a particular algorithm and image.
Meanwhile, Next Century Corp., under the VMR program, has developed tools to generate synthetic images that can be used for the development of vision algorithms.
Why synthetic images?
Noting that the 3D synthetic images used in today's video games are strikingly realistic, Bier said that until now, vision algorithm developers have had to collect vast numbers of images and annotate them manually. To develop vision algorithms, such annotated data has to cover the full range of object variations, poses, and environmental conditions, so that a computer vision system can perform reliably in operational situations.
“By generating these images synthetically — where we know a priori what they are — we can supplement the vast data needed for vision algorithm development.”
DARPA going open-source with these vision tools is certainly a novelty to a lot of people. More importantly, Bier predicts there will be a lot of eager developers lining up to get their hands on them.
This blog was originally posted to EE Times.