Accuracy vs Repeatability Revisited:

Sometimes people question accuracy, and with good reason: I’ve seen some accuracy statements that really mystified me as to what accuracy actually is.  Consider this statement for a magnetic flowmeter:

Accuracy is ±0.2% of reading, or ±0.003 ft/sec (0.001 m/s) (whichever is greater) up to a maximum velocity of >49 ft/sec.

I can understand this pretty easily.  If I show a flow rate of 100 gpm, my actual flow is somewhere between 99.8 and 100.2 gpm.  There’s one gotcha, and that’s the minimum flow rate.  After working through the ±0.003 ft/sec (0.001 m/s) part of the statement, we find that this meter will perform at 0.2% accuracy anywhere above a flow velocity of about 1.64 feet per second (0.5 m/s).  That is phenomenal accuracy, but it is fairly easy to achieve in mags these days.
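To see where that 1.64 ft/sec figure comes from, here is a quick sketch of the arithmetic (my own illustration of the spec above, using its metric floor of 0.001 m/s; nothing here comes from the vendor):

```python
# Sketch of the first spec: +/-0.2% of reading or +/-0.001 m/s (0.003 ft/sec),
# whichever is greater.  My own illustration, not vendor code.

PCT_OF_READING = 0.002      # 0.2% of reading, as a fraction
FLOOR_M_PER_S = 0.001       # absolute floor, m/s
FT_PER_M = 3.28084

def error_band_m_s(velocity_m_s: float) -> float:
    """Worst-case velocity error (m/s) allowed by the spec at a given velocity."""
    return max(PCT_OF_READING * velocity_m_s, FLOOR_M_PER_S)

# Velocity where the percentage term overtakes the absolute floor:
crossover_m_s = FLOOR_M_PER_S / PCT_OF_READING          # 0.5 m/s
print(f"0.2% of reading governs above {crossover_m_s * FT_PER_M:.2f} ft/sec")
print(f"at 3.0 m/s the band is +/-{error_band_m_s(3.0):.4f} m/s")
```

Below that crossover the absolute floor dominates and the 0.2% figure no longer applies.  Here’s another statement for a magnetic flowmeter: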

Accuracy = ± accuracy of rate (%) ± ( zero point value (mm/sec) × 100% / flow velocity (mm/sec) )

Hmmm . . .  this one’s harder and requires more research to determine the zero point value and the accuracy of rate.  But after wiggling through the calculations, we find that this meter is pretty comparable to the first.
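Here is the same kind of back-of-envelope sketch for the second statement.  The rate accuracy and zero point value below are placeholders I made up, not numbers from any particular data sheet:

```python
# Second spec: accuracy = +/- rate accuracy (%) +/- (zero point value / velocity) * 100%
# The two inputs below are placeholders; pull the real ones from the data sheet.

RATE_ACCURACY_PCT = 0.2         # placeholder rate accuracy, % of reading
ZERO_POINT_MM_S = 0.5           # placeholder zero point value, mm/s

def total_accuracy_pct(velocity_mm_s: float) -> float:
    """Combined worst-case accuracy (% of reading) at a given flow velocity."""
    return RATE_ACCURACY_PCT + (ZERO_POINT_MM_S / velocity_mm_s) * 100.0

for v in (100.0, 500.0, 1000.0, 3000.0):    # mm/s
    print(f"{v:7.0f} mm/s -> +/-{total_accuracy_pct(v):.3f}%")
```

Both of these make me wonder: what calibration standard are they using to get this level of accuracy in a relatively cheap meter?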

The National Institute of Standards and Technology states that it can measure flow to “standard uncertainties on the order of 0.015 %” by “dynamic” weighing.  If this is true, then an accuracy of 0.2% is not far off the NIST standard.  Old-school calibration practice held that you could never rate an accuracy better than four times the uncertainty of the standard you were compared against.
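Putting rough numbers to that four-times rule (my own arithmetic, applied to the 0.2% claim):

```python
# Back-of-envelope check on the old 4:1 rule of thumb: the rig behind a 0.2%
# accuracy claim needs to be about four times better than the meter it certifies.

METER_CLAIM_PCT = 0.2        # claimed meter accuracy, % of reading
NIST_BEST_PCT = 0.015        # NIST dynamic-weighing standard uncertainty, %

rig_needed_pct = METER_CLAIM_PCT / 4.0            # 0.05%
headroom = rig_needed_pct / NIST_BEST_PCT         # ~3.3x above NIST's best
print(f"calibration rig must be {rig_needed_pct}% or better "
      f"(only {headroom:.1f}x NIST's best uncertainty)")
```

That puts the factory rig within shouting distance of the national standard.  And is your installation really as good as the laboratory installation?  Regardless, we have nothing to pin our beliefs to other than the factory statement.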

But how could you tell if your meter is really that accurate?  No user in the real world has the facilities to test it, and there are very few labs you can send a meter to that meet these requirements.  But I’m getting off point here.  This post is meant to discuss the terms used in describing accuracy as I understand them.

So in a nutshell, here is my understanding of the terms:

Accuracy:  The ability of a meter to measure a specified amount of a substance and match a known quantity that is guaranteed accurate by some recognized agency.  Accuracy is a function of calibration; hence an ideally linear and repeatable meter may not be accurate as delivered if the manufacturer’s calibration gear was not up to spec.

Repeatability:  The worst-case ability of a meter to repeat the same reading when returned to the same flow rate any number of times.  This usually ends up being the defining factor in meter turndown, as repeatability falls off into a horn-shaped region at the low end of the flow range for velocity-based measurements.  The only meter type that is not velocity-based is a drum-type gravity meter that counts discrete chamber fills.  These meters have an accuracy statement with no discernible low flow limit, but accuracy falls off at flow rates higher than the rated capacity of the measuring element.  As far as I know, gravity metering is the only type with no specific turndown; it measures a drip as well as a maximum flow.

Linearity:  The sag or gain in measurement at some point between maximum and minimum flow, usually specified as the worst case, regardless of where that worst case falls in the flow range.  Linearity is usually measured at multiple points, ending up in a flow curve.
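To put very rough numbers on these three terms, here is how I would sketch them from a set of bench readings.  The readings are made up and the definitions are my own informal ones, not anything out of a standard:

```python
# Rough sketch: repeatability, per-point accuracy error, and linearity from
# repeated readings at a few reference flow rates.  All numbers are made up.

from statistics import mean

# reference flow rate -> repeated meter readings at that rate
readings = {
    10.0:  [10.06, 10.02, 10.09],
    50.0:  [50.03, 50.05, 50.04],
    100.0: [100.1, 100.0, 100.2],
}

def repeatability_pct(vals):
    """Worst-case spread of repeated readings, as % of their mean."""
    return (max(vals) - min(vals)) / mean(vals) * 100.0

def error_pct(reference, vals):
    """Average error versus the reference, as % of reference (accuracy at that point)."""
    return (mean(vals) - reference) / reference * 100.0

point_errors = {ref: error_pct(ref, vals) for ref, vals in readings.items()}
linearity_pct = max(point_errors.values()) - min(point_errors.values())

for ref, vals in readings.items():
    print(f"{ref:6.1f}: repeatability {repeatability_pct(vals):.2f}%, "
          f"error {point_errors[ref]:+.2f}%")
print(f"linearity (spread of point errors across the range): {linearity_pct:.2f}%")
```

The spread at a single rate is the repeatability; how the average error wanders as you move across the range is the linearity; and whether those average errors sit near zero at all depends on the calibration.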

Therefore:

  • You cannot calibrate to a given accuracy if you do not have repeatability.
  • Linearity is not needed if you only need to be accurate at a single flow rate.
  • Accuracy over the entire flow range requires both linearity and repeatability.
  • Repeatability and linearity have nothing to do with actual accuracy, but they give you the ability to repeat accuracy at a single point, or over the entire flow range.
  • Only the calibration rig can guarantee accuracy, and the accuracy is only as good as the calibration rig.
  • Even a broken clock is right exactly twice a day, but without another clock, nobody can tell you when that is.  Stated in metering terms, a horribly non-repeatable or non-linear meter may be accurate at some flow rate; we just don’t know which flow rate it is.  Therefore, a meter spot-tested at a single point may not be accurate anywhere else.

Dave