Difference between uncertainty and precision?

In summary: instrument error is cumulative and biased. The goal is to limit instrument error to one percent, and to achieve this the instrument must be ten times as precise as the part tolerance. The Gage Maker's "Rule of Ten," introduced in the 1930s, was developed to reduce measurement errors and is still used in some industries today.
  • #1
terryds
Is precision the smallest division in measurement instruments?
Is uncertainty one-tenth or a half of the smallest division in measurement instruments? I'm confused; some say one-tenth, some say a half...
Thanks in advance
 
  • #2
terryds said:
Is precision the smallest division in measurement instruments?
It's much more involved than that: if a weighing scale is badly calibrated, the accuracy suffers. If the zero of the scale is off, the accuracy suffers.
terryds said:
Is uncertainty one-tenth or a half of the smallest division in measurement instruments?
Same answer. I take it you refer to instruments with needles (for most digital readouts the one-tenth doesn't apply, right?). Take a simple ruler, preferably one with a reasonable mm division: can you distinguish between 33.6 and 33.7 mm? The 1/10 is probably too optimistic and the 1/2 seems a bit too coarse.
 
  • #3
BvU said:
It's much more involved than that: if a weighing scale is badly calibrated, the accuracy suffers. If the zero of the scale is off, the accuracy suffers.
Same answer. I take it you refer to instruments with needles (for most digital readouts the one-tenth doesn't apply, right?). Take a simple ruler, preferably one with a reasonable mm division: can you distinguish between 33.6 and 33.7 mm? The 1/10 is probably too optimistic and the 1/2 seems a bit too coarse.

Is there a standard formula (convention) for this?
 
  • #4
In engineering, the permitted range of deviation from an ideal value is usually specified as a tolerance band.

For instance: 35 mm ± 0.1 mm.
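
As a minimal sketch (function name and values are illustrative, not from any standard), a tolerance-band check in Python looks like this:

```python
# Minimal sketch: checking a reading against a tolerance band
# such as 35 mm +/- 0.1 mm.

def in_tolerance(measured_mm: float, nominal_mm: float, tol_mm: float) -> bool:
    """Return True if the measured value lies within nominal +/- tol."""
    return abs(measured_mm - nominal_mm) <= tol_mm

print(in_tolerance(35.08, 35.0, 0.1))  # True: inside the band
print(in_tolerance(35.12, 35.0, 0.1))  # False: outside the band
```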
 
  • #5
terryds said:
Is there a standard formula (convention) for this??
I looked it up in G. L. Squires, Practical Physics (mine is a Dutch translation from 1968 :smile:), and there it says a ruler has a precision of 0.2 mm, BUT you have to avoid parallax error (by looking really vertically) AND you have to take two measurements: one at the left end, one at the right end. So there you have a factor ##\sqrt 2## (if the measurements are independent) already. And sure enough you end up smack halfway between the 0.5 and the 0.1 mm!

There are no strict standards; common sense is our best tool, followed by experience. A good dose of paranoia when experimenting comes in handy.
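
To make the quadrature step above concrete, here is a minimal Python sketch (the 0.2 mm figure is Squires'; treating the two readings as independent is the stated assumption):

```python
import math

# A length from a ruler needs two readings (left end and right end).
# If each reading has an uncertainty of 0.2 mm and the readings are
# independent, the uncertainties add in quadrature.

sigma_reading = 0.2                                   # mm, per reading
sigma_length = math.sqrt(sigma_reading**2 + sigma_reading**2)

print(f"length uncertainty = {sigma_length:.2f} mm")  # ~0.28 mm
# 0.28 mm lands between the 1/2-division (0.5 mm) and the
# 1/10-division (0.1 mm) rules of thumb, as noted above.
```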
 
  • #6
Precision is the fineness of measurement.
The Gage Maker's "Rule of Ten" was introduced in the 1930s. Before the introduction of the Coordinate Measuring Machine (CMM) and its associated software, common methods of measurement were fraught with errors. These may be categorized into four sources of measurement error: inherent instrument error, observational error, manipulative error, and bias. To dramatically reduce these errors, the "Ten-to-One Rule" was developed.

The rule: The instrument must be capable of dividing the tolerance into ten parts.

The purpose: To eliminate 99% of the instrumentation error of previous steps in measurement.

When applied: At every step in the measurement sequence until the limit of the available instrument is reached.

The results: Fewer bad parts accepted and fewer good parts rejected.



This whole concept was to reduce the zone of uncertainty and achieve reliable measurement. Before the digital age, this rule was applied to vernier metrology instruments such as micrometers, scales, and calipers. Let us return to the four sources of measurement error discussed above.

Observational error – This error is all but eliminated with the introduction of inexpensive digital readout (DRO) metrology instruments. Errors caused by misreading the lines on a micrometer have been eliminated (within reason) by the DRO device.

Manipulative error – Operator influence during measurement is dramatically reduced when an electrical touch probe is used to acquire data. It is still possible to "crank down" a micrometer barrel when measuring a part; however, this act becomes an intentional event and not an error. When the operator chooses to completely ignore the ratchet screw attached to the micrometer barrel during the measurement process, the error is not random.

Bias – Unconscious influence during measurement used to be a problem. Most precision measurements required several readings, which were averaged; the obviously wrong ones were thrown out. This was an open invitation to bias. Again, the introduction of low-cost digital devices has largely eliminated these errors.

Instrument error – Common practice says "the measuring instrument should be ten times as accurate as the part." What does this really mean? What is the goal? When you finally narrow the requirement, the goal is to limit the instrument error that can creep into the measurement to one percent. Restated, the rule is simply: the instrument should divide the tolerance into ten parts. Note that the rule applies only to the precision of the instrument, not the accuracy, because accuracy is derived from the standard. If the standard is not accurate, all is lost. The fineness of measurement described above relates to the precision of measurement and, more directly, to the resolution of the instrument.
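
A minimal sketch of the rule as stated (function names and the example values are mine, purely illustrative):

```python
# The "Rule of Ten" as stated above: the instrument must divide the
# tolerance into ten parts.

def required_resolution(tolerance_band: float) -> float:
    """Finest increment the instrument must resolve for a given total
    tolerance band (e.g. +/- 0.1 mm -> a band of 0.2 mm)."""
    return tolerance_band / 10.0

def instrument_ok(resolution: float, tolerance_band: float) -> bool:
    return resolution <= required_resolution(tolerance_band)

band = 0.2                        # mm, from a 35 mm +/- 0.1 mm spec
print(required_resolution(band))  # 0.02 mm
print(instrument_ok(0.01, band))  # True: a 0.01 mm micrometer qualifies
print(instrument_ok(0.5, band))   # False: a 0.5 mm ruler does not
```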



Why the ten-to-one ratio? This rule was established in an attempt to ensure that all instrument readings would fall within the zone of uncertainty of the instrument: over 99% of all measurement readings would show up on the meter. This was statistically determined from the 3-sigma confidence level. All repeat errors, or dispersion of readings, would appear on the meter or readout device. In the old days, inspectors wanted the dial indicator or gage meter to be able to display both the part error and the instrument error, so that all possible problems were covered. I am sure you can understand the main problem with this as tolerances tighten up: you may be discarding "bad" parts based upon the error of the instrument rather than the error of the part.
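
The "over 99%" figure matches the coverage of a normal distribution at 3 sigma, which is quick to verify:

```python
import math

# For a normal distribution, the fraction of readings within k standard
# deviations of the mean is erf(k / sqrt(2)).

k = 3.0
coverage = math.erf(k / math.sqrt(2.0))
print(f"within {k:.0f} sigma: {coverage:.4%}")  # ~99.73%
```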



CMM and the "Rule of Ten"

If one were to carry the 10-to-1 rule to its extreme, a typical CMM with a volumetric accuracy of 0.00025" would only be able to measure parts with a 0.0025" tolerance. If the part tolerance is 0.0005", then a CMM with an accuracy of 0.00005" would be required, along with a $250,000 price tag. No one that I know of adheres to the "Rule of Ten" when discussing CMMs. So how do we measure a CMM's accuracy and repeatability?

CMM Accuracy



From its introduction until the implementation of error mapping, the accuracy of a CMM was measured using a laser interferometer. The laser beam was "bucked" (made parallel) to the X axis and readings over the entire length were observed; the laser reflector was usually positioned mid-axis on the Y and Z axes. Typical accuracy was stated as linear accuracy of ±0.0002" over the X axis. The procedure was repeated on the Y and Z axes. With the adoption of the ANSI B89 standard, the ball bar was added to measure volumetric accuracy. A ball bar looks like a dumbbell: two datum balls attached by a length of bar. One ball sits on a magnetic mount that permits rotation; the other is locked at various positions on the CMM, and numerous measurements of the ball are made. This method of measurement had several flaws. VDI/VDE 2617 was added in an attempt to quantify the measuring uncertainty of the CMM. This standard provides for a 95% confidence interval and evaluates linear accuracy, specified as U1, and volumetric accuracy, specified as U3. In each case the specification provides 95% confidence, which means 5% of the observations can be excluded from the test data. "Flyers" are tossed, which effectively allowed machine accuracy statements to report better accuracy than could actually be achieved.
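
To see how tossing flyers flatters an accuracy statement, here is an illustration with made-up numbers (20 readings, one flyer; nothing here comes from VDI/VDE 2617 itself):

```python
# One "flyer" dominates the full-data result but vanishes when the
# worst 5% of observations is dropped.

deviations = [0.0001, -0.0002, 0.0002, -0.0001, 0.0001, 0.0003,
              -0.0003, 0.0002, -0.0002, 0.0001, -0.0001, 0.0002,
              -0.0002, 0.0001, 0.0003, -0.0001, 0.0002, -0.0003,
              0.0001, 0.0012]                 # inches; last one is a flyer

worst_full = max(abs(d) for d in deviations)
trimmed = sorted(deviations, key=abs)[:int(len(deviations) * 0.95)]
worst_95 = max(abs(d) for d in trimmed)

print(f"worst of all 20 readings : {worst_full:.4f} in")  # 0.0012
print(f"worst after tossing 5%   : {worst_95:.4f} in")    # 0.0003
```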

Understanding the basics of ISO 10360-2

The ISO 10360-2 specification changed all of this, recognizing that in practical everyday measurement the CMM user does not have the luxury of excluding "flyers"; generally, users do not even know when flyers occur. As such, the ISO specification requires 100% of all observations to be included in the evaluation of the CMM.

Under ISO 10360-2, the machine is evaluated in at least three areas. MPE_E is volumetric length measurement using calibrated gage blocks. Ball bars are not used, as their length is arbitrary and only the spheres can be calibrated. Moreover, the measurement of a sphere employs many points to resolve the center, which does not represent practical measurement. In practical measurement, points are taken as required for the respective part feature, with the expectation that each point is accurate; there is no luxury of measuring multiple points and resolving them into one point for the part feature calculation. Under ISO 10360-2, discrete single points are measured bi-directionally to evaluate length, and the MPE_E value reports the range of measurements from seven different positions with five different gage lengths, repeated three times. All 105 measurements, with their deviations from the certified lengths, are considered and reported as an uncertainty of measured length.
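
A sketch of that acceptance logic (the data layout and function name are illustrative; only the 7 × 5 × 3 structure and the pass condition come from the description above):

```python
# 7 positions x 5 gage lengths x 3 repeats = 105 length measurements,
# every one of which must deviate from its certified length by no more
# than MPE_E.

def mpe_e_pass(deviations: list[float], mpe_e: float) -> bool:
    """deviations: measured minus certified length, one per measurement."""
    assert len(deviations) == 7 * 5 * 3, "expected 105 measurements"
    return all(abs(d) <= mpe_e for d in deviations)

# Example: 105 small deviations (inches) against MPE_E = 0.0003"
devs = [0.0001 if i % 2 else -0.0002 for i in range(105)]
print(mpe_e_pass(devs, 0.0003))  # True
```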



The next ISO 10360-2 evaluation is MPE_P, which is probing uncertainty. The machine measures 25 discrete points on a calibrated sphere, and these are evaluated as 25 individual radii. The range of radii variation, min to max, is the MPE_P value.
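
A sketch of that radii-range computation (the sphere center is assumed known here; the actual test first fits a least-squares sphere to the 25 points):

```python
import math

# 25 probed points are reduced to 25 radii, and the probing error is the
# min-to-max range of those radii.

def probing_error(points: list[tuple[float, float, float]],
                  center: tuple[float, float, float]) -> float:
    radii = [math.dist(p, center) for p in points]
    return max(radii) - min(radii)

# Example: points nominally 12.5 mm from the origin, with small errors
pts = [(12.5 + 0.0005 * (i % 5 - 2), 0.0, 0.0) for i in range(25)]
print(f"range of radii: {probing_error(pts, (0.0, 0.0, 0.0)):.4f} mm")
```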
The next metric in the new Wenzel LH brochure is MPE_THP, which is similar to MPE_E; however, it is achieved by full-contact scanning of four lines, of which only one can be a full 360-degree equatorial scan. This evaluation comes from another standard, ISO 10360-4, and is specific to full-contact scanning. As above, the points within each scan are treated as radii, and the range of radii deviation, min to max, is reported.
Sources



Practical Metrology, K. J. Hume and G. H. Sharpe, London: Macdonald & Co. Ltd.
Quality Control Handbook, J. M. Juran, New York: McGraw-Hill Book Co. Inc.
Fundamentals of Dimensional Metrology, Ted Busch, Delmar Publishers Inc.
Inspection and Gaging, Kennedy, Hoffman, and Bond, New York: Industrial Press Inc.
 

Related to Difference between uncertainty and precision?

1. What is the difference between uncertainty and precision?

Uncertainty is a measure of how much a measured value may differ from the true value. Precision, on the other hand, refers to the level of detail or exactness with which a measurement is taken. In other words, uncertainty is about the potential for error, while precision is about the level of detail in a measurement.

2. How do uncertainty and precision affect scientific measurements?

Uncertainty and precision are important factors in scientific measurements because they can affect the accuracy of the results. Inaccurate or imprecise measurements can lead to incorrect conclusions and hinder the progress of scientific research.

3. Can a measurement be precise but not accurate?

Yes, it is possible for a measurement to be precise but not accurate. Precision refers to the consistency or reproducibility of a measurement, while accuracy refers to how close the measurement is to the true value. A measurement can be precise but still have a consistent error that makes it inaccurate.

4. How can uncertainty and precision be reduced in scientific measurements?

Uncertainty and precision can be reduced by using more precise instruments, taking multiple measurements and averaging the results, and reducing sources of error. It is also important to clearly define and understand the limitations of the measurement and the instrument being used.
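
As a small illustration of the averaging point (the readings are made-up), the uncertainty of the mean of n independent readings shrinks as 1/√n:

```python
import math
import statistics

# The standard error of the mean of n independent readings is the
# single-reading standard deviation divided by sqrt(n).

readings = [35.02, 34.98, 35.01, 34.99, 35.03, 34.97, 35.00, 35.02]  # mm
n = len(readings)
mean = statistics.mean(readings)
stdev = statistics.stdev(readings)  # spread of single readings
sem = stdev / math.sqrt(n)          # uncertainty of the mean

print(f"mean = {mean:.3f} mm, single-reading sd = {stdev:.3f} mm")
print(f"standard error of the mean = {sem:.3f} mm")
```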

5. Is it better to have high uncertainty or high precision in scientific measurements?

Ideally, scientific measurements should have both low uncertainty and high precision. However, if one must choose between the two, it is generally better to have high precision. This is because precise measurements can help identify patterns and trends, while measurements with high uncertainty may be too unreliable to draw meaningful conclusions from.
