Defining Accuracy in Monitors & Sensors
The term "accuracy" is routinely used to describe a wide array of performance specifications when talking about measuring instruments. The designer, manufacturer and user of a particular instrument may use the same word but can mean entirely different things with associated varying expectations.
If someone indicates that they want an ozone monitor with a range up to 100 parts per million (ppm) with an accuracy of 1.0 ppm, what do they really want?
- Does the 1.0 ppm refer to the total deviation (i.e., ±0.5 ppm), or do they want to reach their target within ±1.0 ppm (a total band of 2.0 ppm)? (See the sketch after this list.)
- Do they require consistency, i.e., repeated readings within ±1.0 ppm of each other?
- Is the user looking for resolution of 1.0 ppm?
- How linear must the output be?
- Will output variations with changes in temperature be of concern?
- What effect will EMI and RFI have on the signal?
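To make the first question concrete, here is a minimal sketch of how the same stated "1.0 ppm accuracy" passes or fails a reading depending on interpretation. The numbers and function names are illustrative, not from any particular datasheet:

```python
def within_total_band(reading: float, target: float, band: float) -> bool:
    """Interpretation 1: the spec is the total band, i.e. 1.0 ppm means +/-0.5 ppm."""
    return abs(reading - target) <= band / 2

def within_plus_minus(reading: float, target: float, tol: float) -> bool:
    """Interpretation 2: within 1.0 ppm of target, i.e. +/-1.0 ppm (a 2.0 ppm band)."""
    return abs(reading - target) <= tol

reading, target = 50.7, 50.0
print(within_total_band(reading, target, 1.0))  # False: deviation 0.7 > 0.5
print(within_plus_minus(reading, target, 1.0))  # True:  deviation 0.7 <= 1.0
```

The same instrument and the same reading are "accurate" under one interpretation and not under the other, which is exactly why the spec needs to be pinned down.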
Accuracy is often used as a catch-all for many of the following terms:
Resolution
The smallest distinguishable, discrete unit. If the resolution of a sensing system is coarser than the increments of the readout or indicator, then "accuracy" has no relevant meaning. If, on the other hand, the resolution is too fine, the user may be paying for something they do not need, and may pay a price in response time or stability. Resolution is generally specified as a percentage of full scale (FS), usually with the qualifier "less than or equal to" (<=).
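As a concrete illustration, here is a minimal sketch of resolution expressed as a percentage of FS; the 12-bit readout and 0-100 ppm range are illustrative assumptions, not from any particular instrument:

```python
# Resolution of a hypothetical digitized ozone monitor as % of full scale.
full_scale_ppm = 100.0
adc_bits = 12
counts = 2 ** adc_bits                      # 4096 discrete steps

resolution_ppm = full_scale_ppm / counts    # smallest distinguishable unit
resolution_pct_fs = 100.0 * resolution_ppm / full_scale_ppm

print(f"resolution <= {resolution_ppm:.3f} ppm ({resolution_pct_fs:.3f}% FS)")
# resolution <= 0.024 ppm (0.024% FS)
```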
Repeatability
The figure describing an instrument's ability to achieve the same result in repeated tests, from the same direction, under identical conditions. Specifications state the tolerance within which the device will give the same output signal over repetitive cycles.
Without this information, resolution loses its practical meaning. What would be the purpose of excellent resolution if the tolerance for repeating the output signal were, for example, greater than the resolution? Repeatability is generally specified as a percentage of full scale (FS), with ± understood.
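A minimal sketch of how a repeatability figure might be derived from repeated same-direction readings; the sample data and 100 ppm full scale are illustrative:

```python
# Repeatability as +/- % FS from repeated approaches to the same point,
# from the same direction, under identical conditions.
full_scale_ppm = 100.0
readings_ppm = [50.2, 49.9, 50.1, 50.0, 49.8]    # same target each cycle

spread = max(readings_ppm) - min(readings_ppm)   # total band of variation
repeatability_pct_fs = 100.0 * (spread / 2) / full_scale_ppm  # +/- half the band

print(f"repeatability: +/-{repeatability_pct_fs:.2f}% FS")  # +/-0.20% FS
```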
Non-Linearity
The deviation of the output from a straight line versus a linear input. With most gas-sensing devices the output is advertised as "linear" or "linearized." With electrochemical sensors, output vs. concentration is very close to linear. With solid-state and catalytic sensors, the output is far from linear but may be linearized (the output is modified to compensate for the sensor's response curve). Non-linearity is generally specified as a percentage of FS.
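One common way to quantify this is the worst-case deviation from a straight line drawn through the endpoints, expressed as % FS. A minimal sketch with illustrative calibration points follows; note that the endpoint-line method is an assumption here, as some manufacturers use a best-fit line instead:

```python
# Non-linearity as % FS: worst-case deviation of measured output from a
# straight line through the endpoint calibration points.
concentrations = [0.0, 25.0, 50.0, 75.0, 100.0]   # applied gas, ppm
outputs        = [0.00, 0.27, 0.52, 0.77, 1.00]   # sensor output, normalized

full_scale_out = outputs[-1] - outputs[0]
slope = full_scale_out / (concentrations[-1] - concentrations[0])

deviations = [
    abs(out - (outputs[0] + slope * (c - concentrations[0])))
    for c, out in zip(concentrations, outputs)
]
nonlinearity_pct_fs = 100.0 * max(deviations) / full_scale_out
print(f"non-linearity: {nonlinearity_pct_fs:.1f}% FS")  # 2.0% FS
```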
Temperature Drift
The variation in output readings as a function of temperature changes. Temperature drift is one of the simpler "accuracy" parameters, except that there is no uniformity in the way sensor and transducer manufacturers specify it. Typically it is specified as ±XX% of full scale (or ppm) per degree F or degree C. This figure can have a great effect on final readings and should therefore be taken carefully into consideration.
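A minimal sketch of applying such a spec; the ±0.1% FS per °C coefficient, 25 °C calibration temperature, and 0-100 ppm range are hypothetical:

```python
# Worst-case reading error attributable to temperature drift alone.
full_scale_ppm = 100.0
drift_pct_fs_per_degc = 0.1     # hypothetical spec: +/-0.1% FS per deg C
calibration_temp_c = 25.0

def worst_case_drift_ppm(ambient_temp_c: float) -> float:
    """Worst-case drift error in ppm at a given ambient temperature."""
    delta_t = abs(ambient_temp_c - calibration_temp_c)
    return full_scale_ppm * (drift_pct_fs_per_degc / 100.0) * delta_t

print(worst_case_drift_ppm(40.0))  # 1.5 ppm at 15 deg C above calibration
```

At 15 °C from the calibration point, this hypothetical spec already allows 1.5 ppm of error, which would swamp the 1.0 ppm "accuracy" figure from the opening example.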
Noise
The variation superimposed on the output signal, resulting either from outside influences such as RFI, EMI, ground-loop feedback, power-source variations, etc., or from inherent eccentricities of the device itself. Because of its nature, noise cannot be tightly specified; the general rule is to identify the source of the noise and minimize it. In general terms, noise becomes more of an issue as resolution becomes finer.
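Where the noise source cannot be eliminated, one common software mitigation (not described above, and a trade-off rather than a cure) is a moving-average filter, which buys a quieter reading at the cost of response time; a minimal sketch:

```python
# Moving-average filter: smooths noise superimposed on the output signal,
# at the cost of slower response to real changes in concentration.
from collections import deque

def moving_average(samples, window: int = 8):
    """Yield the running mean of the last `window` samples."""
    buf = deque(maxlen=window)
    for s in samples:
        buf.append(s)
        yield sum(buf) / len(buf)

noisy = [50.0, 50.4, 49.7, 50.2, 49.9, 50.3, 49.8, 50.1]
print([round(v, 2) for v in moving_average(noisy, window=4)])
```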
The information above shows that accuracy is a term that can mean many things to different people. As such, it should be used sparingly when discussing sensing devices.