All measured data comes from testing under specific conditions. Vary those conditions, and the specs can turn out either much better or much worse.
Why can that be a problem? It's not practical for manufacturers to provide performance data for every conceivable set of test conditions. Thus manufacturers often provide typical performance characteristics along with maximum or worst-case performance characteristics. The typical conditions are usually chosen to be those that the user is most likely to experience when using the device.
For instance, a typical spec might be for operating the device with a signal that is 6 dB below full scale. A set of worst-case conditions may include the results of operating the device with one or more operating parameters set to an extreme. For data-acquisition and digitizer products, these may include sample rate, input signal level, multiplexing rate, temperature, and more.
For example, running fast Fourier transforms (FFTs) on data-acquisition products gives meaningful information on noise, harmonics, distortion, and more. Running the systems at full scale and at maximum throughput is the toughest test; it yields slightly worse specs than running them at -6 dB (half scale) and at 1 kHz (versus maximum throughput of over 100 kHz). The best representation would show the entire spectrum across the minimum and maximum ranges at full scale and full throughput.
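The kind of FFT test described above can be sketched in a few lines of Python. This is a minimal simulation, not vendor test code: the sample rate, record length, tone level (-6 dBFS), and noise floor are all assumed values chosen to mirror the conditions mentioned in the text.

```python
import numpy as np

# Hypothetical capture: a -6 dBFS tone near 1 kHz, sampled at 100 kS/s.
fs = 100_000                                  # sample rate (S/s), assumed
n = 8192                                      # record length (power of two)
f_tone = fs * round(1000 / fs * n) / n        # bin-centered near 1 kHz to avoid leakage
t = np.arange(n) / fs

full_scale = 1.0
signal = (full_scale / 2) * np.sin(2 * np.pi * f_tone * t)   # half scale = -6 dBFS
noise = np.random.normal(0, 50e-6, n)                        # assumed noise floor
capture = signal + noise

# Windowed FFT magnitude, scaled to dB relative to full scale (dBFS)
window = np.hanning(n)
spectrum = np.fft.rfft(capture * window)
mag_dbfs = 20 * np.log10(np.abs(spectrum) / (np.sum(window) * full_scale / 2) + 1e-30)

peak_bin = int(np.argmax(mag_dbfs))
print(f"carrier at {peak_bin * fs / n:.0f} Hz, {mag_dbfs[peak_bin]:.1f} dBFS")
```

Re-running this with the tone at full scale, or with the sample rate pushed to the maximum, is exactly the kind of condition sweep the article describes: the carrier stays put, but the noise floor and harmonic content in the rest of the spectrum shift.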
OK, problem solved. Wait… Is that true?
I am reminded of a customer in Europe who used one of our imaging boards a few years ago. With our competitor's board, they saw a very smooth waveform from their detector; our board showed jagged edges with higher-frequency components. The customer preferred the competitor's board because of the smooth output. We pointed out that the competitor's board didn't have enough bandwidth and was essentially filtering the “real” output of the detector. Because this was a medical application, they were not seeing the actual output of their own detector. On paper, the specs were the same, but the other product's bandwidth was clearly not meeting spec. They quickly switched to our product.
Another area in data acquisition is analog-to-digital converter (ADC) resolution. Today, 24-bit sigma-delta ADCs are widely used to provide high-resolution measurements. They also provide filtering to prevent aliasing, in which high-frequency input components erroneously appear as lower frequencies after sampling.
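Aliasing is easy to demonstrate numerically. In this sketch (sample rate and tone frequency are illustrative values, not from the article), a 900 Hz tone sampled at 1 kS/s, well above the 500 Hz Nyquist limit, folds back and is indistinguishable from a 100 Hz tone:

```python
import numpy as np

fs = 1000            # sample rate (S/s), assumed for illustration
f_in = 900           # input tone above Nyquist (fs/2 = 500 Hz)
n = 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_in * t)

# The spectral peak lands at fs - f_in = 100 Hz, not at 900 Hz
spectrum = np.abs(np.fft.rfft(x))
alias_hz = np.argmax(spectrum) * fs / n
print(alias_hz)
```

This is why the anti-alias filtering built into sigma-delta converters matters: without it, out-of-band energy shows up as believable-looking in-band data.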
A 24-bit ADC outputs a 24-bit word, but its effective resolution may be much less. Low-frequency sigma-delta ADCs used in weigh scales may deliver an effective resolution of 22 bits, while higher-bandwidth 24-bit ADCs used for audio applications may provide the equivalent accuracy of only 18 bits.
Theoretically, a 24-bit ADC can resolve 1 part in roughly 16.8 million (2^24). For many casual users, that resolution is the spec. But that's the resolution of the ADC alone. How accurate is each of those resolved steps? How does it behave over sampling frequency? What is its stability, and what is its stability over the operating temperature range?
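The arithmetic behind "1 part in 16 million" is worth making concrete. Assuming a hypothetical ±10 V input range (the article does not specify one), one code step of an ideal 24-bit converter works out to about a microvolt:

```python
# Code size (1 LSB) of an ideal N-bit ADC over an assumed ±10 V input range
n_bits = 24
v_span = 20.0                      # volts, full-scale span (assumed, -10 V to +10 V)
codes = 2 ** n_bits                # 16,777,216 distinct output codes
lsb = v_span / codes               # ~1.19 microvolts per code
print(f"{codes:,} codes, 1 LSB = {lsb * 1e6:.2f} uV")
```

Resolving signals at that level in a real system means every microvolt of front-end noise, drift, or interference eats directly into the usable bits, which is exactly why the questions above matter.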
These are specs for just the ADC. Now using it in a system brings up many other questions that require specifying the whole “front end.” Most systems have input filters, gain amplifiers, ESD protection, impedance matching, switching mechanisms, etc.
So the bevy of components between the input connector and the ADC lowers the resolution and accuracy considerably. To adequately specify the “front end,” including the ADC, an FFT run under dynamic conditions gives an accurate portrait of the overall performance. Our favorite metric is ENOB (effective number of bits), which is derived from FFTs run under varying conditions and shows the overall performance of the whole measurement front end.
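ENOB is computed from the SINAD (signal-to-noise-and-distortion ratio) measured in such an FFT, using the standard relation ENOB = (SINAD - 1.76 dB) / 6.02 dB per bit. A minimal sketch, with the 110 dB figure an assumed example rather than a measured spec:

```python
def enob(sinad_db, input_level_dbfs=0.0):
    """Effective number of bits from a measured SINAD (in dB).

    The optional correction refers a below-full-scale test tone
    (e.g. -6 dBFS) back to full scale before converting to bits.
    """
    return (sinad_db - input_level_dbfs - 1.76) / 6.02

# An ideal 24-bit ADC would show SINAD = 6.02 * 24 + 1.76 = 146.2 dB.
# A realistic front end measuring ~110 dB delivers about 18 effective bits,
# matching the gap between marketed and effective resolution discussed above.
bits = enob(110.0)
print(round(bits, 1))
```

This is how a 24-bit converter ends up specified at 18 effective bits: the FFT captures everything the front end adds, not just the converter's ideal behavior.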
To read the rest of this article, visit EBN sister site EETimes.