It's ironic that the issues that most deserve industry cooperation tend to be the most contentious, and few have been more contentious than performance benchmarking. We are slowly moving in the right direction, but more work is needed.
Benchmarks are widely used to evaluate anything electronic. To earn the best scores, silicon and system vendors aggressively "optimize" for their target benchmarks; sometimes these optimizations are closer to manipulations. The technical press is littered with stories of unfair benchmarking practices, and what gets reported is only a small fraction of what actually goes on.
Benchmarks face other limitations, too. The rapid pace of innovation makes it difficult to test all the functions of a system in ways that reflect real user experiences across a wide variety of platforms. Image capture and editing, for example, may be handled by different chips and APIs on different devices, frustrating efforts to make meaningful comparisons across Android, iOS, and Windows phones.
For the full story, see EBN sister site EE Times.
— Jim McGregor is the founder and a principal analyst at Tirias Research, and a contributor to EE Times.