==Quantification and applications==
{{See also|False precision}}
In industrial instrumentation, accuracy is the measurement tolerance, or transmission, of the instrument and defines the limits of the errors made when the instrument is used in normal operating conditions.<ref>Creus, Antonio. ''Instrumentación Industrial''{{citation needed|date=February 2015}}</ref>

Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the true value. The accuracy and precision of a measurement process are usually established by repeatedly measuring some [[traceability|traceable]] reference [[Technical standard|standard]]. Such standards are defined in the [[SI|International System of Units]] (abbreviated SI from French: ''Système international d'unités'') and maintained by national [[standards organization]]s such as the [[National Institute of Standards and Technology]] in the United States.

This also applies when measurements are repeated and averaged. In that case, the term [[standard error (statistics)|standard error]] is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, the [[central limit theorem]] shows that the [[probability distribution]] of the averaged measurements will be closer to a normal distribution than that of the individual measurements.

With regard to accuracy we can distinguish:
* the difference between the [[mean]] of the measurements and the reference value, the [[bias of an estimator|bias]]. Establishing and correcting for bias is necessary for [[calibration]];
* the combined effect of bias and precision.

A common convention in science and engineering is to express accuracy and/or precision implicitly by means of [[significant figures]]. Where not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 843 m would imply a margin of error of 0.5 m (the last significant digits are the units).

A reading of 8,000 m, with trailing zeros and no decimal point, is ambiguous; the trailing zeros may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 10<sup>3</sup> m indicates that the first zero is significant (hence a margin of 50 m), while 8.000 × 10<sup>3</sup> m indicates that all three zeros are significant, giving a margin of 0.5 m. Similarly, one can use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 10<sup>3</sup> m and indicates a margin of 0.05 km (50 m). However, reliance on this convention can lead to [[false precision]] errors when accepting data from sources that do not obey it. For example, a source reporting a number such as 153,753 with a precision of ±5,000 appears, under the convention, to claim a precision of ±0.5; to follow the convention, the number would have been rounded to 150,000.

Alternatively, in a scientific context, if it is desired to indicate the margin of error with more precision, one can use a notation such as 7.54398(23) × 10<sup>−10</sup> m, meaning a range of between 7.54375 × 10<sup>−10</sup> m and 7.54421 × 10<sup>−10</sup> m.
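The quantities above can be illustrated numerically. The following is a minimal sketch in Python, using invented measurement values and an assumed reference value of 100.00 m for a hypothetical traceable standard, showing how bias, precision (standard deviation), and the standard error of the mean are estimated from repeated measurements.

<syntaxhighlight lang="python">
# Minimal sketch (illustrative values only): estimating bias, precision and
# the standard error of the mean from repeated measurements of a reference
# standard whose accepted value is assumed to be 100.00 m.
import math
import statistics

reference_value = 100.00  # accepted value of the traceable standard (assumed)
measurements = [100.13, 99.98, 100.07, 100.11, 99.95, 100.04, 100.09, 100.02]

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)                   # spread of individual measurements (precision)
bias = mean - reference_value                            # systematic offset from the reference value
standard_error = stdev / math.sqrt(len(measurements))    # precision of the average: stdev / sqrt(n)

print(f"mean                        = {mean:.3f} m")
print(f"bias                        = {bias:+.3f} m")
print(f"standard deviation          = {stdev:.3f} m")
print(f"standard error of the mean  = {standard_error:.3f} m")
</syntaxhighlight>

The standard error shrinks as more measurements are averaged, which is why averaging improves the precision of the reported value even though it cannot remove the bias.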
Precision includes:
* ''repeatability'': the variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating measurements over a short time period; and
* ''reproducibility'': the variation arising when the same measurement process is used with different instruments and operators, and over longer time periods.

In engineering, precision is often taken as three times the standard deviation of the measurements taken, representing the range within which 99.73% of measurements are expected to fall.<ref>{{Cite book|last=Black|first=J. Temple|url=http://worldcat.org/oclc/1246529321|title=DeGarmo's materials and processes in manufacturing.|date=21 July 2020|publisher=John Wiley & Sons|isbn=978-1-119-72329-5|oclc=1246529321}}</ref> For example, an ergonomist measuring the human body can be confident that 99.73% of their extracted measurements fall within ±0.7 cm when using the GRYPHON processing system, or within ±13 cm when using unprocessed data.<ref>{{Cite journal|last1=Parker|first1=Christopher J.|last2=Gill|first2=Simeon|last3=Harwood|first3=Adrian|last4=Hayes|first4=Steven G.|last5=Ahmed|first5=Maryam|date=2021-05-19|title=A Method for Increasing 3D Body Scanning's Precision: Gryphon and Consecutive Scanning|journal=Ergonomics|volume=65|issue=1|language=en|pages=39–59|doi=10.1080/00140139.2021.1931473|pmid=34006206|issn=0014-0139|doi-access=free}}</ref>
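As a rough illustration of the three-sigma convention, the sketch below uses invented body-measurement values (not the data from the cited study) to compute the ±3σ interval that is expected to contain about 99.73% of measurements from a normally distributed process.

<syntaxhighlight lang="python">
# Minimal sketch (invented data): the engineering convention of quoting
# precision as three standard deviations of the measured values.
import statistics

body_measurements_cm = [42.1, 41.8, 42.3, 42.0, 41.9, 42.2, 42.1, 41.7, 42.4, 42.0]

mean = statistics.mean(body_measurements_cm)
stdev = statistics.stdev(body_measurements_cm)
precision = 3 * stdev  # half-width of the interval expected to cover ~99.73% of measurements

print(f"mean               = {mean:.2f} cm")
print(f"precision (3 sigma) = ±{precision:.2f} cm")
</syntaxhighlight>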