Part 2. Principles of Instrumentation Measurement – Error, Accuracy and Resolution

Overview: This new five-part series has been written to explain the hardware, software and instrumentation used in the testing of soil and rock. It is aimed at people interested in gaining a better understanding of geotechnical laboratory equipment.



INTRODUCTION

Part 2 is written as an aid to help earth scientists, civil engineers and technicians to understand and use instrumentation and transducers in the soil mechanics and rock mechanics laboratories. It is not a full technical description of how such instrumentation and transducers are designed, or indeed of how they work in detail. We hope it will enable the end user to make intelligent use of the electronic “black boxes” and the many transducers that now abound in the soil mechanics laboratory without having to do a PhD in electronics!

What is Error?

Often people say “accuracy” when they mean “resolution” or even “error”. For example, a common question asked of a transducer is “how accurate is it?” The answer of course depends on the calibration: calibration is the means whereby accuracy (how “true” the transducer is with respect to some standard) is imparted to the transducer, and by which the relationship is established between the electronic output of the transducer and an engineering quantity such as force, pressure or displacement. In the mind of the questioner, however, is probably the wish to know the smallest amount that the transducer can measure to. This is another misconception. Many transducers, such as load cells, pressure transducers and displacement transducers, have infinite resolution, i.e. their analogue output is step-less and continuous. It is the instrumentation used to take the measurement that determines the smallest amount the transducer can be read (or “resolved”) to. This is the “resolution”. Alternatively, what lies behind the question “how accurate is it?” may really be “what is the error of the readings?” The accuracy of the measuring means and the resolution of the measuring system combine to give any given measurement or “reading” its “error”. This can be expressed in the form of an equation as

Accuracy + Resolution = Potential Error (1.1)

Consider the one-metre rules shown in Fig. 2. Each has a “range” of 1m. It is important to specify range because accuracy and resolution are defined as a percentage of range or, in instrumentation jargon, “Full Range Output” (FRO) or “Full Scale Output” (FSO). Rule “A” in Fig. 2 is actually 1.001m long and is graduated in 1cm (10mm) intervals. Expressed as a percentage of the full range of the rule (1m), it has reasonably good accuracy (0.1%) but poor resolution (1% if read to the nearest division). The total error for a given reading is the sum of the accuracy and the resolution, i.e. 1.1%.


Now consider rule “B” in Fig. 2, which is actually 1.01m long and is graduated in 1mm intervals. The accuracy is poor (1%) but the resolution is good (0.1%). The error of any given measurement using this rule is again the sum of the accuracy and the resolution, i.e. 1.1%. Clearly it is pointless to have good accuracy but poor resolution (or the other way round), because the error of the readings will be dominated by the poorer of the two; in other words, it makes sense for accuracy and resolution to be compatible. But consider the ability to detect a change in length of 100mm with either rule. With A it can be done to 10.1% (the 10mm graduation alone is 10% of the change, plus the 0.1% scale error), with B to 2% (1% from the graduation plus the 1% scale error). In such cases high resolution is important even if it is not compatible with accuracy. This is particularly the case for stiffness and creep measurements.
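The arithmetic above can be laid out as a short calculation. The sketch below (Python) simply re-computes the percentage errors for the two rules in Fig. 2, assuming that the inaccuracy of each rule acts as a proportional scale error; the variable names are ours, introduced only for illustration.

```python
# Potential error of a reading = accuracy + resolution (equation 1.1),
# both expressed as a percentage of full range output (FRO).

def percent(value, full_range):
    """Express an absolute quantity as a percentage of a given range."""
    return 100.0 * value / full_range

full_range_mm = 1000.0  # both rules are nominally 1 m

# Rule A: actually 1.001 m long, graduated every 10 mm
accuracy_A   = percent(1.0, full_range_mm)    # 0.1 % FRO
resolution_A = percent(10.0, full_range_mm)   # 1.0 % FRO

# Rule B: actually 1.01 m long, graduated every 1 mm
accuracy_B   = percent(10.0, full_range_mm)   # 1.0 % FRO
resolution_B = percent(1.0, full_range_mm)    # 0.1 % FRO

print(accuracy_A + resolution_A)  # 1.1 % potential error for a reading with A
print(accuracy_B + resolution_B)  # 1.1 % potential error for a reading with B

# Detecting a 100 mm change: the graduation interval now counts against the
# 100 mm being measured, while the scale error stays proportional.
change_mm = 100.0
error_A = percent(10.0, change_mm) + accuracy_A   # 10 % + 0.1 % = 10.1 %
error_B = percent(1.0, change_mm) + accuracy_B    #  1 % + 1 %   = 2 %
print(error_A, error_B)
```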




Fig. 2 Schematic diagram representing accuracy and resolution for theoretical one-metre rules.

Accuracy

The accuracy of a measurement is how “true” it is with respect to a high-accuracy standard. Usually the standard will itself have a known accuracy. The process of calibration allows the accuracy of the standard to be imparted, or transferred, to the measuring means (see “Calibration” later). The measuring means is usually a transducer energised by a power source (which will be stabilised to some extent). The output of the transducer is read by some metering system (in instrumentation jargon this is called the “signal conditioning system”). The relationship between increments in the standard and the corresponding measurements of the transducer output constitutes the transducer calibration, from which the sensitivity can be ascertained. Ideally the relationship will be nearly linear, so that the sensitivity of the transducer can be expressed as a single factor in engineering units per millivolt (e.g. kPa/mV for a pressure transducer). Usually the manufacturer of the transducer will specify the linearity (i.e. the maximum amount a reading could deviate from the standard), e.g. 0.05% FRO is common for pressure transducers.
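As an illustration only, the sketch below fits a straight line to a set of hypothetical calibration points for a pressure transducer (the pressures and millivolt readings are invented for the example, not taken from any real calibration). The slope of the fit is the sensitivity in kPa/mV, and the largest deviation of the points from the line, expressed as a percentage of FRO, gives a simple measure of linearity.

```python
import numpy as np

# Hypothetical calibration of a 2000 kPa pressure transducer with 100 mV FRO.
# Applied pressures come from the calibration standard; the mV outputs are
# invented transducer readings used only to illustrate the arithmetic.
pressure_kpa = np.array([0, 250, 500, 750, 1000, 1250, 1500, 1750, 2000])
output_mv    = np.array([0.02, 12.49, 25.03, 37.52, 49.98, 62.51, 75.04, 87.48, 99.97])

# Least-squares straight line: pressure = sensitivity * output + offset
sensitivity, offset = np.polyfit(output_mv, pressure_kpa, 1)
print(f"sensitivity = {sensitivity:.3f} kPa/mV, offset = {offset:.3f} kPa")

# Linearity: worst deviation of the fitted line from the standard,
# expressed as a percentage of the 2000 kPa full range.
predicted = sensitivity * output_mv + offset
worst_deviation = np.max(np.abs(predicted - pressure_kpa))
print(f"linearity = {100 * worst_deviation / 2000:.3f} % FRO")
```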

Resolution: Analogue to Digital conversion

Transducers commonly used in soil and rock mechanics laboratories, such as load cells, pressure transducers and various kinds of displacement transducers, are almost always analogue devices. This means that they are supplied with a low-voltage direct-current input, typically 2 to 15V dc, and their output is normally in mV dc. The accuracy of the transducer, as we have already seen, is expressed as a percentage of the full range output (FRO) of the transducer, e.g. 0.05% FRO. The FRO itself is also part of the transducer specification, e.g. 100mV FRO is common for a pressure transducer. Clearly, range is a very important factor and, as we will see later, is vitally important to applying transducers in a sensible way.
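To see why range matters, the short sketch below converts an accuracy figure of 0.05% FRO into an absolute error for a transducer ranged to 2000kPa (the same example figures used later in this Part). The point it illustrates is that an accuracy quoted as a percentage of FRO fixes the absolute uncertainty whatever the reading actually is; the pressures chosen are arbitrary examples.

```python
# Accuracy quoted as a percentage of full range output (FRO) gives a fixed
# absolute uncertainty, regardless of the actual reading.
full_range_kpa = 2000.0
accuracy_fro   = 0.05 / 100.0          # 0.05 % FRO, typical for a pressure transducer

absolute_error_kpa = accuracy_fro * full_range_kpa   # 1 kPa
for reading in (50.0, 500.0, 2000.0):
    print(f"reading {reading:6.0f} kPa -> accuracy error "
          f"{absolute_error_kpa:.1f} kPa ({100 * absolute_error_kpa / reading:.1f} % of reading)")
```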

It is necessary to convert the analogue signal into a digital one so that the output of the analogue transducers can be recorded and manipulated by digital loggers and computers. This process is called “analogue to digital conversion” and is often abbreviated as “A/D conversion”.

A/D conversion is carried out, unsurprisingly, by an A/D converter within the signal conditioning/measuring system that is metering the test. A/D converters are integrated circuits (ICs) or “microchips”, specified as 12 bit or 16 bit, say. This tells us how many bits (figuratively speaking, in terms of small steps) and bits (literally speaking, in terms of binary digits) the analogue signal can be broken down into. A 12 bit A/D converter set to record over the 100mV range will break the device’s voltage-sensing range down into 2¹² steps (4096 steps or “counts”). Consider a pressure transducer with an FRO of 100mV ranged over 2000kPa.


A measuring (or signal conditioning) system having a 12 bit A/D converter will therefore be able to discriminate, or resolve, to a resolution of 2000/4096, or very nearly 0.5kPa (Fig. 3). A 16 bit A/D converter will resolve to 1 in 2¹⁶, or 65,536, so its resolution is 16 times finer. The pressure transducer ranged to 2000kPa will now have a resolution of 2000/65,536, or about 0.03kPa. Because of other considerations, however, such as environmental noise (cell phones in particular introduce radio-frequency noise) as well as noise within the signal conditioning system, this theoretical resolution is rarely attainable and a figure of about 0.1kPa is more realistic for a 16 bit A/D conversion.
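The resolution figures above follow directly from the range and the number of bits. The minimal sketch below reproduces them using the 100mV / 2000kPa example from the text; the quantise helper is ours, included only to show the step size, and is not intended to represent any particular logger’s conversion.

```python
# Theoretical resolution of an A/D converter: range divided by 2**bits steps.
def resolution(full_range, bits):
    return full_range / (2 ** bits)

range_kpa = 2000.0   # pressure transducer ranged to 2000 kPa over its 100 mV FRO

print(resolution(range_kpa, 12))   # ~0.49 kPa per count for a 12 bit converter
print(resolution(range_kpa, 16))   # ~0.03 kPa per count for a 16 bit converter

# Converting an analogue reading to counts and back shows the quantisation step:
def quantise(value, full_range, bits):
    counts = round(value / full_range * (2 ** bits - 1))
    return counts, counts * full_range / (2 ** bits - 1)

print(quantise(1234.5, range_kpa, 12))   # nearest representable pressure with 12 bits
```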


Fig. 3 Representations of plotted outputs for a pressure transducer for (a) analogue output in mV, and (b) digital output in bits for 12 bit analogue-digital conversion.