When a value is not known precisely, the amount of uncertainty is usually called “error.” This has given the whole business of uncertainty analysis a bad name, because in common usage “error” implies sloppiness, very likely caused by sinfulness. (She saw the error of her ways.)

In the language of technology, this section deals with the analysis of errors—how to judge their magnitude, how to describe them in conventional ways, and how to take them into account in calculating numerical values based on a number of individual measurements. As we shall emphasize, error represents uncertainty and has nothing to do with mistakes or sloppiness. Indeed, reducing the amount of error in a measurement beyond the immediate need is usually a mistake and a sign of poor judgment.

We categorize the reporting and handling of uncertainty in four stages, each involving a particular degree of precision. The first is concerned only with order of magnitude of a number. Consider this to be a zeroth approximation. Next are the conventions regulating the use of significant figures, a first approximation of limited usefulness. The second approximation deals with the maximum and minimum range of measured quantities. The rules of manipulation of such error limits are simple, and this system should be the one most often used by students of introductory science. Finally, there is a third approximation to error citation and analysis, involving rules derived from probability and statistics. This system is frequently misunderstood and misapplied. Its use is justified only when the primitive data fulfill certain requirements of quantity, distribution, and probability.

The exponential notation is just a means of indicating and keeping track of the decimal point position in a number. However, use of the method provides a powerful aid in doing arithmetic problems or setting them up for slide-rule solution. The notation is particularly important in dealing with very large or very small numbers. Because students often meet exponential notation for the first time in science classes, the system is sometimes called “scientific notation.” Fortunately, this terminology is unknown in the real world outside the classroom.

Note that the 2 in 10² can be thought of as either the number of zeros in 100 or the number of times that 10 is multiplied by itself—10 squared. Negative exponents indicate the reciprocal power of the number: 10⁻² = 1/10² = 1/100 = 0.01.

To multiply powers of 10, add the exponents: 100 × 1000 = 10² × 10³ = 100,000 = 10⁵, and 0.1 × 0.001 × 100 = 10⁻¹ × 10⁻³ × 10² = 0.01 = 10⁻². This property justifies the assignment 10⁰ = 1. We must have this equality since if 10 × 0.1 = 1, then 10¹ × 10⁻¹ = 10¹⁻¹ = 10⁰ = 1.
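The add-the-exponents rule is easy to verify numerically. Here is a short sketch of our own (not from the text), using exact rational arithmetic so that negative exponents introduce no rounding:

```python
# Exact powers of ten via rational arithmetic, so 10**-2 is exactly 1/100.
from fractions import Fraction

def power_of_ten(e):
    return Fraction(10) ** e  # works for negative exponents, too

# Multiplying powers of 10 adds the exponents:
assert power_of_ten(2) * power_of_ten(3) == power_of_ten(5)        # 100 x 1000
assert power_of_ten(-1) * power_of_ten(-3) * power_of_ten(2) == power_of_ten(-2)
# And 10 x 0.1 = 1 forces the assignment 10**0 = 1:
assert power_of_ten(1) * power_of_ten(-1) == power_of_ten(0) == 1
```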

Here are some examples of numbers written in exponential notation:

Notice the difference between writing 90.0 and 90. By convention the first number indicates that the value is known to be between 89.95 and 90.05. The extra zero is significant and must be retained. The form of the second number may be ambiguous. The value may be known to be between 89.5 and 90.5 or perhaps only between 85 and 95. The exponential notation provides a way of avoiding the ambiguity. To indicate the former value, we would write 9.0 × 10¹; to indicate the latter value, we would write 9 × 10¹. (For rules concerning significant figures, see p. 6.)

An example of how to set up a multiplication and division problem is given below. This procedure should be used, especially if the problem is going to be solved with a slide rule or logs.

Using this technique, a problem like this can quickly be solved to one or two significant figures by inspection, and the placement of the decimal point determined. A slide rule, logs, or pencil-and-paper calculation can then be used to add whatever other significant figures are needed and are justified. For instance, the third step above sets up the problem so that it can be solved approximately by looking at it. The numerator product must be about 30 (6 × 5); the denominator product must be about 20 (7 × 3). Therefore, the answer must be about 1.5 × 10⁻⁴. A slide-rule check yields 1.4. If the slide rule had not read close to 1.5, we would have suspected that a mistake had been made.

Notice that, in this example, no more significant figures are retained in any one of the numbers to be multiplied than belong to the number having the fewest significant figures. Since 32 and 0.051 have only two significant figures, 5832 is rounded off to 5.8 × 10³. When using a slide rule, however, it is usually just as easy to set each number to three significant figures.
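The check-by-inspection procedure can be sketched in code. This is our illustration, not the book's: round each factor to one significant figure, multiply the rounded mantissas, and compare with the full calculation. The factors below are hypothetical stand-ins with mantissas near 6 and 5 on top and 7 and 3 on the bottom.

```python
import math

def round_to_one_figure(x):
    """Round a positive number to one significant figure."""
    e = math.floor(math.log10(x))
    return round(x / 10 ** e) * 10 ** e

def inspection_estimate(numerator, denominator):
    """Estimate a product/quotient 'by looking': one figure per factor."""
    num = math.prod(round_to_one_figure(x) for x in numerator)
    den = math.prod(round_to_one_figure(x) for x in denominator)
    return num / den

# Hypothetical factors (the book's own worked numbers are not reproduced here):
estimate = inspection_estimate([5.8, 5.1], [7.2, 2.9])   # about 30/21
exact = (5.8 * 5.1) / (7.2 * 2.9)
# If estimate and exact disagreed wildly, we would suspect a mistake.
assert abs(estimate - exact) / exact < 0.5
```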

Raising a number to some power, or taking a root, is easy to do with exponential notation. For example, (10²)³ = 10² × 10² × 10² = 10⁶ = 10^(2 × 3). When raising to a power, multiply the exponents. When taking a root, divide the exponent by the root: √(10⁶) = 10^(6/2) = 10³. Here are some more complicated examples.

The symbol ≈ means “approximately equal.” The parenthetical value is the approximation obtained just by looking at the number.

(Arrange the exponent of 10 so that it is easily divisible by the root. For instance, it would not be sensible to write √0.47 = √(4.7 × 10⁻¹) = √4.7 × 10^(−1/2), etc.)

14,000/383 × 1.0% = (14 × 10³)/(3.83 × 10²) × 1.0 × 10⁻² = (14/3.83) × 10⁻¹ ≈ (3½ × 10⁻¹) = 3.7 × 10⁻¹

By leaving 14,000 as 14 × 10³ instead of 1.4 × 10⁴, we made the numerator larger than the denominator, and so ended with a quotient greater than 1—an unnecessary but convenient trick. The parenthetical value of 3½ × 10⁻¹ is the approximation obtained just by looking at the fraction.

(0.084 × 86)/(9.2 × 74) = (8.4 × 10⁻² × 8.6 × 10¹)/(9.2 × 7.4 × 10¹) ≈ (70/68) × 10⁻² = 1.1 × 10⁻²

(4623 × 98 × 41)/(0.0062 × 122)   (2.5 × 10⁷)

(18/6) × 10²³   (3 × 10²³)

(10,001 × 0.08)/(143 × 9.87)   (0.6)

Fermi questions were named after Enrico Fermi, the great physicist who contributed to both experiments and theory concerned with atomic nuclei and fundamental particles. (He was a master at computing approximations to answers when it seemed that no information was available.) The point of such questions is that reasonable assumptions linked with simple calculations can often narrow down the range of values within which an answer must lie. The order of magnitude refers to the power of 10 of the number that fits the value. To increase an order of magnitude means to increase by a factor of 10. Very often an order of magnitude calculation is all that the interest in a problem justifies. Even if more precision is needed, an order of magnitude calculation done first may indicate whether or not it is worthwhile to pursue the problem, and sometimes may indicate how the next approximation to the required value can best be obtained.

Here are some Fermi questions:

A. How many golf balls will fit in a suitcase? Assume that the suitcase is 30″ × 8″ × 24″. The volume is 30″ × 8″ × (100/4)″ ≈ 6 × 10³ in³. Assume that each golf ball is a sphere 1 in. in diameter. The volume of the ball is a little less than 1 in³. The order of magnitude of the number of golf balls that will fit in a suitcase is 10⁴.

This question could not be taken too seriously unless it were asked by a traveling golf ball salesman. Since the size of the suitcase is not specified, there is no point going to the closet to measure a real one. Imagine a reasonable size. Nor is there any point in worrying about whether or not the balls are close packed as nested spheres; the packing factor could not be greater than 1.5. Surely the diameter of a golf ball is closer to 1 in. than 2 in. Doubling the diameter would increase the ball volume by a factor of 8. That would reduce the number of balls in the suitcase by an order of magnitude. Consider the reasonableness of the order of magnitude answer. Surely the number of golf balls that would fit in a standard suitcase must be greater than 1000 and less than 100,000. Our answer is good within a factor of 10.
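The golf-ball estimate is short enough to script. A sketch of our own, using the text's assumed dimensions:

```python
import math

suitcase_volume = 30 * 8 * 24                 # cubic inches; roughly 6 x 10^3
ball_volume = (4 / 3) * math.pi * 0.5 ** 3    # 1-in.-diameter sphere, ~0.5 in^3
n_balls = suitcase_volume / ball_volume       # ignore packing details

order_of_magnitude = round(math.log10(n_balls))
print(f"about 10^{order_of_magnitude} golf balls")   # about 10^4
```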

B. How many piano tuners are there in New York City? Assume 10⁷ people in New York and 2 × 10⁶ families. Assume 1 piano for every 5 families; therefore, 4 × 10⁵ pianos. Assume each piano tuned once every 2 years; therefore, 2 × 10⁵ pianos tuned each year. Assume each tuner tunes 2 a day for 250 days a year. (At $10 per tuning, he barely makes a living—a factor of 2 could make a big difference to the tuner.) Therefore, (20 × 10⁴ tunings per year)/(500 tunings per year per tuner) = 400 tuners.

It is unlikely that New York City has fewer than 40 piano tuners or more than 4000. If you do not like the assumptions made, choose your own reasonable guesses and see if your answer is not of the same order of magnitude. Note that sometimes in these calculations one significant figure is carried along. The extent to which you do this depends on the problem and your style; rules would be cumbersome and probably useless. For instance, whether 400 is of order of magnitude 10² or 10³ is a silly question, because a reasonable answer depends on the meaning of the number and the way it is going to be used. When in doubt, carry an extra figure along. Note, incidentally, that a factor of 2 in one of the assumptions makes a big difference to the piano tuners but not to the final result of our order of magnitude calculation.
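Written as a chain of explicit assumptions (our sketch, not the book's), the piano-tuner estimate looks like this; changing any one guess by a factor of 2 changes the answer by the same factor, but not its order of magnitude:

```python
families = 2e6                    # assumed: 10^7 people, 2 x 10^6 families
pianos = families / 5             # 1 piano per 5 families -> 4 x 10^5
tunings_per_year = pianos / 2     # each piano tuned once every 2 years
tunings_per_tuner = 2 * 250       # 2 tunings a day, 250 working days

tuners = tunings_per_year / tunings_per_tuner
print(tuners)   # 400.0
```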

C. How many cells are there in a human body? Assume that the average cell diameter is 10 microns (µ) = 10⁻⁵ meter (m). Then, volume ≈ 10⁻¹⁵ m³. The order of magnitude of human volume is 10⁻¹ m³. Therefore, there are 10¹⁴ cells in a human body.

This question illustrates again how some information can be obtained out of very little definite knowledge. Living cells come in a great range of sizes. However, they can all be seen with an ordinary light microscope and therefore must have a diameter larger than the wavelength of light. They can scarcely be seen with the unaided eye and so must have a diameter smaller than 0.1 millimeter (mm). We assumed that the diameter was the geometric mean between these values. (The geometric mean of A and B is √(A × B). In this case, it is √((10⁻⁶)(10⁻⁴)) = 10⁻⁵. The arithmetic mean would be practically the same as 10⁻⁴.) Notice that for these calculations it makes no sense to differentiate between the volume of a sphere and that of a cube. The volume of the human body could be estimated by assuming a reasonable height, width, and thickness of a column that is human size. An alternative method requires knowing that 1 liter of water has a mass of 1 kilogram (kg) and a volume of 1 × 10⁻³ m³. A cubic meter of water (or flesh) would therefore have a mass of 1000 kg and would weigh about a ton (1 kg weighs 2.2 pounds; therefore 1000 kg weighs 2200 pounds or 1.1 tons). The assumed volume for the body was 1/10 m³.
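The geometric-mean step is worth seeing numerically; a sketch of our own comparing it with the arithmetic mean and finishing the cell count:

```python
import math

lower = 1e-6    # m: a cell must be larger than the wavelength of light
upper = 1e-4    # m: and smaller than what the unaided eye can see

geometric_mean = math.sqrt(lower * upper)   # 10^-5 m, the assumed diameter
arithmetic_mean = (lower + upper) / 2       # ~5 x 10^-5, nearly the upper bound

cells = 1e-1 / geometric_mean ** 3          # body volume / cell volume ~ 10^14
print(f"{cells:.0e} cells")
```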

How many hairs are on a human head? (Assume that the spacing between hairs is 1 mm. Then there are 10/centimeter (cm) along a line or 100/cm². We figure 2 × 10⁴ hairs on a human head. Do your assumptions lead to results of the same order of magnitude?)

How many individual frames of film are needed for a feature length film? (We get 1.5 × 10⁵.)

What is the ratio of spacing between gas molecules to molecular diameter in a gas at standard temperature and pressure? [A mole (6 × 10²³) of gas molecules at STP occupies 22.4 liters. A molecular diameter might be 2 × 10⁻⁸ cm. Compare available volume per molecule with the volume of a molecule. We get (spacing between molecules)/(molecular diameter) = 10.]

There are how many seconds in a year? [We get 3 × 10⁷ seconds (s)/year, or 10 mega-π s/year (π × 10⁷ s/year).]

If your life earnings were to be doled out to you at a rate of so much per hour for every hour of your life, how much is your time worth? [Perhaps $0.50/hour (hr)?]

What is the weight of the solid garbage thrown away by American families each year? (We figure about 10⁸ tons.)

How many molecules are in a standard classroom? (∼10²⁸)

The number of significant figures in a numerical value is a first approximation to showing the limits within which the value is known. There are more precise ways of indicating the error limits; the next approximation is shown on p. 8.

The generally accepted conventions for writing significant figures are summarized in A.

A.

  1. When we say that a quantity has the value 3, we mean—by convention—that the value could actually be anywhere between 2.5 and 3.5.
    However, if we say that the value is 3.0, then we mean that it lies between 2.95 and 3.05.
  2. Note the ambiguity of a number such as 300. Does it imply 250 < 300 < 350 or 299.5 < 300 < 300.5? A superior method of writing such a number is to use power of 10 notation. In that form the number is 3 × 10² or 3.00 × 10², depending on which precision is intended.

  3. 0.000,01 has one significant figure. (It could be written 1 × 10⁻⁵.)

    1.000 has four significant figures.

    1.000,01 has six significant figures.
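These counting conventions can be sketched mechanically. The helper below is our illustration (not from the text); it uses Python's decimal module, which preserves trailing zeros as written but drops leading ones, and it cannot resolve the ambiguity of a bare number like 300.

```python
from decimal import Decimal

def significant_figures(number_text):
    """Count significant figures of a number written as a string.

    Decimal keeps trailing zeros ("1.000" -> digits (1, 0, 0, 0)) and
    never stores leading zeros ("0.00001" -> digits (1,)).
    """
    return len(Decimal(number_text).as_tuple().digits)

assert significant_figures("0.00001") == 1   # could be written 1 x 10^-5
assert significant_figures("1.000") == 4
assert significant_figures("1.00001") == 6
```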

Rules for the proper use of significant figures in addition or subtraction are illustrated in B.

B.

In addition or subtraction, the sum or difference has significant figures only in the decimal places where both of the original numbers had significant figures. This does not mean that the sum cannot have more significant figures than one of the original numbers. In the examples in B, note that 0.001 has only one significant figure, but the sum properly has four. It is the decimal place of the significant figure that is important in addition and subtraction. In the final examples in B there is another example of the ambiguity of final zeros. If you estimate that there are 500 students in a lecture, implying a number between 450 and 550, your estimate is not changed if 4 people leave. On the other hand, if you draw out $500 from the bank and spend $4, you have $496 left.

Rules for the proper use of significant figures in multiplication or division are illustrated in C.

C.

In multiplication and division, illustrated in C, the product or quotient cannot have more significant figures than there are in the least accurately known of the original numbers. Consider the first example: the product might be as large as (5.25)(3.15) = 16.5375 or as small as (5.15)(3.05) = 15.7075. The rule for significant figures in multiplication is evidently justified in this case. Usually, during multiplication or division, an extra significant figure is carried along, and the final answer is then rounded off appropriately.
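The extreme-value check described above is easy to automate. A sketch of our own, using the same 5.2 × 3.1 example:

```python
def product_extremes(a, da, b, db):
    """Largest and smallest product when a and b can each be off by da, db."""
    corners = [(a + sa) * (b + sb)
               for sa in (-da, da) for sb in (-db, db)]
    return min(corners), max(corners)

low, high = product_extremes(5.2, 0.05, 3.1, 0.05)
print(low, high)   # about 15.7075 to 16.5375: only the two figures of 16 hold up
```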

  1. 2.000 + 0.01 = 2.01

  2. 2.000 × 0.01 = 0.02

Justify the answers in 1 and 2 by calculating the extremes in the sum and product which could be justified by the extremes in the original numbers.

(4832 × 0.165)/(15 × 264)   (0.20)

What is the volume of a piece of chalk that is 10.5 cm long and has a diameter of 1 cm? (volume = 8 cm³)

What is the volume of a rectangular box with length 3.025 cm, width 2.5 cm, and height 2 cm? (By taking extremes of the significant figures, show why it would be unreasonable to cite the answer with only one significant figure. A good answer would be volume = 15 cm³. This variation of the rule applies when the first digit is 1. The special rule and the indeterminacy point up the need for a more precise way of specifying and dealing with error limits on numbers.)

This second approximation to error statement and analysis is based on maximum pessimism. The absolute error, the ±2 cm in the first example in A, defines the maximum excursion of the measured value. The implication is that all the lengths measured and the true length fall between 95 and 99 cm.

A.

The rules for compounding measurements through addition or multiplication are based on the assumption that the worst possible coincidence of errors will occur. For instance, in adding 97 ± 2 cm to 12 ± 2 cm the sum could be as large as 113 or as small as 105 if the original values were off by the maximum amount in the same direction. Often there will be cancellation of errors in arriving at a compound quantity, because some of the original measurements will have values that are too high and others will have values that are too low. On p. 12 we show how this effect can be taken into account, but these second approximation rules are simpler and often satisfactory.

B. To find compound error in addition or subtraction, add the absolute errors.

C. To find compound error in multiplication or division, add the percentage errors.
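Rules B and C can be sketched as a tiny interval-style calculator (our illustration, not the book's): a measurement is a (value, absolute error) pair, and the rules combine the errors as pessimistically as possible.

```python
def add(m1, m2):
    """Rule B: in addition, the absolute errors add."""
    (v1, e1), (v2, e2) = m1, m2
    return (v1 + v2, e1 + e2)

def multiply(m1, m2):
    """Rule C: in multiplication, the percentage (fractional) errors add."""
    (v1, e1), (v2, e2) = m1, m2
    value = v1 * v2
    return (value, value * (e1 / v1 + e2 / v2))

length = (97, 2)                 # 97 +/- 2 cm
width = (12, 2)                  # 12 +/- 2 cm
print(add(length, width))        # (109, 4): the sum lies between 105 and 113
```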

The determination of the absolute error is a matter of judgment. How would one estimate that a length is 97 ± 2 cm? The error being assigned to this value has nothing to do with any mistake. There is no mathematical treatment that can automatically compensate for mistakes. The error limits of ±2 cm may be assigned as the result of the observation of numerical data. Several people may have measured the length or one person may have done it several times, and all the values were found to be between 95 cm and 99 cm, with 97 as the average. Perhaps only one measurement was made with a meter stick marked every 10 cm, and the experimenter estimated that the length was about 97 cm and could not be more than 2 cm different. In this case, the instrument itself was the limiting factor. Perhaps the measurement was done with a tape that had markings down to millimeter size, but the experimenter did not trust the shrinking or stretching of the tape closer than ±2 cm. Perhaps the measuring instrument was precise and trustworthy, but the object being measured was moving and difficult to measure, or irregular so that a closer measurement was not justified. In all these cases, the assignment of reasonable error limits depends on the judgment of the experimenter based on the instrument, the object, and the need for precision. Note that it is not generally true that the absolute error is equal to some fraction of the smallest scale division of the measuring instrument. For instance, you could use a vernier caliper to measure the diameter of a piece of chalk, but because of the unevenness of the chalk the error would be closer to 2 mm than to 0.1 mm.

Note from the examples in A that the same absolute error in two measurements may yield radically different percentage errors. Note also that if only one significant figure is given in the absolute error, only one significant figure is justified in the percentage error. The third example in B illustrates the same sort of principle in the addition of errors. If there is only one significant figure in one of the errors, it seldom makes sense to end up with a compound error that contains two significant figures. As pointed out on p. 7, an exception to this rule is justified if the first digit of the final number is 1. Thus (62 ± 10) + (21 ± 5) = 83 ± 15.

Although the numbers used as examples in this section displayed symmetrical errors, there is no necessity for the given value to lie halfway between the extremes. For example, suppose you were sure that a mass that was determined to be 102 grams (g) could not be less than 100 g but might be as large as 108 g. Its mass should then be given as 102 (+6, −2) g.

Justification for the rule for compounding errors in multiplication can be obtained algebraically, graphically, and by the trial of extreme values. Suppose that an area with true length L and true width W is measured to have values L + l and W + w, where l and w are the absolute errors.

Algebraic:

(L + l)(W + w) = LW + lW + wL + lw = LW(1 + l/L + w/W + lw/LW)

If l/L and w/W are ≪ 1, lw/LW is small compared with l/L or w/W.

Graphical:

Numerical trial: If L = 100 ± 10 cm and W = 50 ± 10 cm, then L = 100 ± 10 per cent and W = 50 ± 20 per cent. The rule gives: Area = 5000 cm2 ± 30 per cent or ± 1500 cm2. If L = 110 cm and W = 60 cm, A = 6600 cm2. If L = 90 cm and W = 40 cm, A = 3600 cm2.

What is the volume of a sphere that has a diameter of 6.2 ± 0.2 cm? (Volume = (4/3)πr³ = (1/6)πd³ = 1.2 × 10² cm³ ± 10%.)

Carry out the algebraic demonstration for the other extreme case in which the measured values are L − l and W − w. Does the rule still hold?

How long did an event last if it started at 3.4 ± 0.2 s and ended at 5.0 ± 0.2 s? What is the percentage accuracy? (1.6 ± 0.4 s or ±30%.)

What is the volume of a cylinder with length 6.24 ± 0.01 cm and diameter 2.1 ± 0.1 cm? (22 ± 2 cm³ or ±10%)

What is the average velocity if a bullet travels 100.0 ± 0.1 m in 0.15 ± 0.01 s? (670 ± 40 m/s or ±7%)

The use of this third approximation to error analysis is justified only when certain experimental conditions and demands are met. If the formalism is applied blindly, as it often is, sophisticated precision may be claimed when it does not exist at all. The mathematical techniques are derived from and are concerned with statistical laws of probability. The actual measurements are assumed to cluster around an average value and to vary from that average in a way specified by the bell-shaped Gaussian distribution.

In Sections 5.1 and 4.6 we describe the Gaussian function and its properties. This particular distribution function occurs under very general conditions of chance determination. The major requirement is that if the probability of getting a particular value i is p_i and the probability of getting j is p_j, then the probability of getting the values i and j in sequence is the product p_i p_j. It is also necessary that the probability be maximum for one particular value. These conditions are frequently met by data subject to ordinary errors of measurement.

There are many reasons why the measurements may not fit the Gaussian function; there may be some cut-off or prejudice against low readings as opposed to high readings, producing a skewed distribution; there may be mistakes in readings or in instruments; the quantity being measured or the instruments may have periodic variations; the data sample may be so small that statistical fluctuations cause a warped distribution. Even when the precision (the scatter of data), determined by the statistical analysis of random errors, is correctly stated, the accuracy may be poor because of systematic errors or other types of mistakes.

If elaborate analysis of experimental error is necessary, it may be helpful to consult the following references:

Introduction to the Theory of Error, Yardley Beers (Addison-Wesley, Reading, Mass., 1958).

Experimentation and Measurement, W. J. Youden (Scholastic Book Service, New York, 1962).

Experimental Measurements: Precision, Error and Truth, N. C. Barford (Addison-Wesley, Reading, Mass., 1967).

An Introduction to Error Analysis, John R. Taylor (University Science Books, Mill Valley, California, 1982).

If repeated measurements of the same quantity produce a distribution curve like the one shown in the diagram, it is misleading to use the maximum range as the error limits. The precision is really better than that. A reasonable criterion for the width of the uncertainty is σ, the standard deviation of the Gaussian function. Two thirds of the data points fall within the range between x̄ + σ and x̄ − σ. The mean, x̄, is the arithmetic average of the readings.

The standard deviation is sometimes called the root-mean-square deviation. It can be calculated from the data in a straightforward, though sometimes laborious, way. An example of such a calculation is given on p. 91.
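The root-mean-square recipe can be written out directly. The sketch below is our own, with made-up readings; it checks the hand formula against Python's statistics.pstdev, which computes the same population value.

```python
import math
import statistics

def standard_deviation(readings):
    """Root-mean-square deviation of the readings from their mean."""
    mean = sum(readings) / len(readings)
    return math.sqrt(sum((x - mean) ** 2 for x in readings) / len(readings))

readings = [96, 97, 95, 99, 98, 97]      # hypothetical length readings in cm
sigma = standard_deviation(readings)
assert math.isclose(sigma, statistics.pstdev(readings))
print(round(sum(readings) / len(readings), 1), round(sigma, 2))   # 97.0 1.29
```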

In the second approximation to error analysis we made the most pessimistic assumption about the way that errors in separate measurements might combine to yield a total error. If x ± Δx is to be added to y ± Δy, it is possible that the true value could be as large as x + y + Δx + Δy or as small as x + yΔxΔy. In some cases, however, it is possible that the errors will cancel to some extent so that the range of final error will not be ±(Δx + Δy). If there are sufficient data to justify the use of the standard deviation to express the error probabilities in the measurement of the separate variables, then the rules in this section can be used in calculating compound errors. One additional requirement is that the variables and their measurement be independent of each other. (Note Sample 3.)

A. The plausibility that there will be cancellation in combining errors can be seen by considering what happens if two data distribution curves are added to each other. The Gaussian peaks will overlap. A revealing way to think of the error combination is in terms of right-angle geometry, which the formula below suggests. This representation emphasizes the way in which a larger error dominates a smaller one.

If s = x ± y, then σ_s = √(σ_x² + σ_y²).

B. The usefulness of repeated measurements of a quantity is that the standard deviation of the mean of n comparable measurements is only σ/√n, where σ is the standard deviation of a single measurement. Note that this relationship is valid only if each of the measurements contains sufficient data to justify the use of the standard deviation. (This relationship is actually just a special case of the rule given in A.)

C. The rule for finding errors of products is just an extension of the rule used in the second approximation, but with the probability of cancellation of errors taken into account. The derivations of these particular rules plus descriptions of how to deal with more complicated situations (such as when the variables are not completely independent) are given in the references cited on p. 11.

If p = xy or x/y, then σ_p/p = √((σ_x/x)² + (σ_y/y)²).

By using the rule in A, show that the standard deviation of the average of n measurements, each with about the same standard deviation σ, is given by the rule in B. [Calculate the sum of the values and the standard deviation of that sum (√n σ). Then divide by n to get the mean and its standard deviation.]
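A short simulation (ours, not the book's) makes the quadrature rule in A concrete: combine two Gaussian errors of 3 and 4 units and the spread of the sum comes out near 5, not 7.

```python
import math
import random

random.seed(1)                      # reproducible made-up data
n = 100_000
sums = [random.gauss(50, 3.0) + random.gauss(20, 4.0) for _ in range(n)]

mean = sum(sums) / n
sigma_s = math.sqrt(sum((s - mean) ** 2 for s in sums) / n)
print(round(sigma_s, 2))            # close to sqrt(3**2 + 4**2) = 5
```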

The number of photons per second coming through a filter is 1.06 × 10⁴ ± 1 × 10². If a detector is gated with an on-time of 10.0 ± 0.1 milliseconds (ms), how many photons will it detect? (106 ± 1.4 photons/gate. How could there be an uncertainty of 1.4 photons when the number of photons must be an integer? What is the significance of the mathematical result?)

The diameter of a sphere is measured to be 1.000 ± 0.002 cm. What is the volume? (Volume = 0.524 ± 0.003 cm3.)

The total reaction time of a relay is 38 ± 2 ms. The time for the contact arm to leave the base position is 20 ± 2 ms. How long is the part of the reaction when the arm is moving? (18 ± 3 ms.)

The measured masses of five blocks are 390 ± 10 milligrams (mg), 460 ± 10 mg, 270 ± 10 mg, 540 ± 10 mg, and 420 ± 10 mg. What is the total mass? (2080 ± 22 mg).

A. Does an average of 10 readings, each with n significant figures, yield a value with n + 1 significant figures?

It is commonly assumed that this is the case. The standard argument is that since the first figure after the decimal is significant for each reading, the first figure after the decimal should be significant for the sum (in this case, 187.5). Dividing by 10 does not change this significance, and therefore the average has an additional significant figure.

This situation illustrates the difficulty of laying down hard and fast rules about the analysis of data. In the example given, it may be that the experimenter could judge each reading within ±0.05, but for one reason or another the average value did not always lie within that range. The spread of readings indicates that the figure after the decimal point was not significant in the original technical sense. Circumstances such as this are common in actual experiments. In general, there is no justification for citing an additional significant figure for an average value.

B. The error in a student experiment is not the difference between the student’s experimental value and the textbook value. Error limits of individual measurements should be determined and cited by the experimenter. The compound error due to errors in the individual variables should then be calculated. If the error limits of the experimental value do not overlap the textbook value, there is evident reason to reassess the original judgments of possible error limits or to look for a mistake. An experimental value of 84 ± 20 is not in disagreement with a textbook value of 100. (Of course, if the assignment was to determine the value within 5 per cent, the experimenter has other problems.)

C. For a compound value made up of several individual values, do not seek more precision for any one value than is justified by the precision with which you know the others. If one value has a 10 per cent error, there is usually no point in obtaining another value to within 1 per cent. Notice especially how unimportant small errors become if they express standard deviations, linked with others through the square root of the sum of their squares. (In this case, the product of two values, one with a 10 per cent error and the other with a 3 per cent error, has an error of √(10² + 3²) = 10.4 per cent, and not 13 per cent.)

If the diameter of a cylinder is measured to within 2 per cent and the height to within 1 per cent, then the volume is known to within 5 per cent, assuming that the data were not sufficient to justify statistical procedures. The use of three significant figures for π (3.14) yields 1/16 per cent error margins, which do not add appreciably to the overall error margins for the volume. The use of four significant figures would not be justified. There would also be no point in improving the precision of the measurement of the height, since the major contribution to the error is provided by the diameter measurement.

D. You have no doubt heard that ancient saw, “If a thing is worth doing, it is worth doing well.” That, of course, is nonsense. If a thing is worth doing, it is worth doing well enough for the purpose at hand. To do it any better than that is surely silly and probably wrong. Do not think, however, that this realistic view makes life easier. The purpose at hand may require years of devoted and meticulous work. Furthermore, the individual is faced with the awful responsibility of using his head to determine the requirements of the problem. No rules exist.

Precision is expensive. In 1954 a particular cross-section value for a high-energy particle reaction was measured during the course of an afternoon to an accuracy of 15 or 20 per cent. During the following two years, three scientists and numerous technicians spent a considerable fraction of their time and over $100,000 to obtain that cross section to a ±4.4 per cent error. There was good reason for obtaining that precision, and the probable cost and difficulty were carefully considered in advance. The very first response that anyone should make when faced with a task is: “For what purpose is this required?” Your procedure will often depend strongly on the answer.

What is the area of your front yard? Your choice of measuring instruments depends on the purpose for which the information is required. If you want to know the area so that you can buy lime in 80-lb bags at the store, then you can measure the yard by looking at it. Lime is cheap, you cannot buy a fractional bag, and the exact dosage is unimportant. If you want to know the area in order to buy grass seed at $1.00/lb, then you should pace out the yard and perhaps even use pencil and paper to check your arithmetic. Whether the yard is exactly rectangular, or whether your pace is 5 feet (ft) or 5 ft 3 in, is unimportant. You certainly would not use a meter stick. (Where is the seed salesman who can advise you in terms of lb/m²?) If you want to measure the area of your yard for legal purposes, the assessor will probably insist on knowing the area to within 0.01 acre. A surveyor's transit is the appropriate measuring instrument.

See Appendix 11 on p. 256.
