{{Short description|Statistical parameter}}
{{distinguish|Coefficient of determination}}
{{Use American English|date=January 2019}}
{{Use dmy dates|date=October 2017}}
In [[probability theory]] and [[statistics]], the '''coefficient of variation''' ('''CV'''), also known as normalized [[Root-mean-square deviation|root-mean-square deviation]] (NRMSD), '''percent RMS''', and '''relative standard deviation''' ('''RSD'''), is a [[Standardized (statistics)|standardized]] measure of [[statistical dispersion|dispersion]] of a [[probability distribution]] or [[frequency distribution]]. It is defined as the ratio of the [[standard deviation]] <math> \sigma </math> to the [[mean]] <math> \mu </math> (or its [[absolute value]], {{nowrap|<math>| \mu |</math>)}}, and often expressed as a percentage ("%RSD"). The CV or RSD is widely used in [[analytical chemistry]] to express the precision and repeatability of an [[assay]]. It is also commonly used in fields such as [[engineering]] or [[physics]] when doing quality assurance studies and [[ANOVA gauge R&R]],{{citation needed|date=September 2016}} by economists and investors in [[economic model]]s, in [[epidemiology]], and in [[psychology]]/[[neuroscience]].

==Definition==
The coefficient of variation (CV) is defined as the ratio of the standard deviation <math>\sigma</math> to the mean <math>\mu</math>, <math>CV = \frac{\sigma}{\mu}.</math><ref>{{cite book|url=https://archive.org/details/cambridgediction00ever_0|title=The Cambridge Dictionary of Statistics|last=Everitt|first=Brian|publisher=Cambridge University Press|year=1998|isbn=978-0521593465|location=Cambridge, UK New York|url-access=registration}}</ref> It shows the extent of variability in relation to the mean of the population.

The coefficient of variation should be computed only for data measured on scales that have a meaningful zero ([[ratio scale]]) and hence allow relative comparison of two measurements (i.e., division of one measurement by the other). The coefficient of variation may not have any meaning for data on an [[interval scale]].<ref>{{cite web | url=http://www.graphpad.com/faq/viewfaq.cfm?faq=1089 | title=What is the difference between ordinal, interval and ratio variables? Why should I care? | access-date=22 February 2008 | publisher=GraphPad Software Inc | url-status=live | archive-url=https://web.archive.org/web/20081215175508/http://graphpad.com/faq/viewfaq.cfm?faq=1089 | archive-date=15 December 2008 | df=dmy-all }}</ref> For example, most temperature scales (e.g., Celsius, Fahrenheit etc.) are interval scales with arbitrary zeros, so the computed coefficient of variation would be different depending on the scale used. On the other hand, [[Kelvin]] temperature has a meaningful zero, the complete absence of thermal energy, and thus is a ratio scale. In plain language, it is meaningful to say that 20 Kelvin is twice as hot as 10 Kelvin, but only in this scale with a true absolute zero. While a standard deviation (SD) can be measured in Kelvin, Celsius, or Fahrenheit, the value computed is only applicable to that scale. Only the Kelvin scale can be used to compute a valid coefficient of variation.

Measurements that are [[log-normal]]ly distributed exhibit stationary CV; in contrast, SD varies depending upon the expected value of measurements.

A more robust possibility is the [[quartile coefficient of dispersion]], half the [[interquartile range]] <math> {(Q_3 - Q_1)/2} </math> divided by the average of the quartiles (the [[midhinge]]), <math> {(Q_1 + Q_3)/2} </math>.
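As a minimal illustration of these definitions, the following sketch (Python with NumPy; the function names are illustrative, not part of any standard library) computes the coefficient of variation and the quartile coefficient of dispersion for a small data set:

<syntaxhighlight lang="python">
import numpy as np

def coefficient_of_variation(x):
    """CV = sigma / |mu|, here using the population standard deviation (ddof=0)."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=0) / abs(x.mean())

def quartile_coefficient_of_dispersion(x):
    """Half the interquartile range divided by the midhinge: (Q3 - Q1) / (Q3 + Q1)."""
    q1, q3 = np.percentile(x, [25, 75])
    return (q3 - q1) / (q3 + q1)

data = [90, 100, 110]
print(coefficient_of_variation(data))            # ~0.0816 (population CV)
print(quartile_coefficient_of_dispersion(data))  # 0.05 with NumPy's default interpolation
</syntaxhighlight>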
In most cases, a CV is computed for a single independent variable (e.g., a single factory product) with numerous, repeated measures of a dependent variable (e.g., error in the production process). However, data that are linear or even logarithmically non-linear and include a continuous range for the independent variable with sparse measurements across each value (e.g., scatter-plot) may be amenable to single CV calculation using a [[Maximum likelihood estimation|maximum-likelihood estimation]] approach.<ref>{{Cite journal|last1=Odic|first1=Darko|last2=Im|first2=Hee Yeon|last3=Eisinger|first3=Robert|last4=Ly|first4=Ryan|last5=Halberda|first5=Justin|date=June 2016|title=PsiMLE: A maximum-likelihood estimation approach to estimating psychophysical scaling and variability more reliably, efficiently, and flexibly|journal=Behavior Research Methods|volume=48|issue=2|pages=445–462|doi=10.3758/s13428-015-0600-5|issn=1554-3528|pmid=25987306|doi-access=free}}</ref>

== Examples ==
In the examples below, we will take the values given as '''randomly chosen from a larger population of values'''.
* The data set [100, 100, 100] has constant values. Its [[standard deviation]] is 0 and average is 100, giving the coefficient of variation as 0 / 100 = 0
* The data set [90, 100, 110] has more variability. Its standard deviation is 10 and its average is 100, giving the coefficient of variation as 10 / 100 = 0.1
* The data set [1, 5, 6, 8, 10, 40, 65, 88] has still more variability. Its standard deviation is 32.9 and its average is 27.9, giving a coefficient of variation of 32.9 / 27.9 = 1.18

In these examples, we will take the values given as '''the entire population of values'''.
* The data set [100, 100, 100] has a [[population standard deviation]] of 0 and a coefficient of variation of 0 / 100 = 0
* The data set [90, 100, 110] has a population standard deviation of 8.16 and a coefficient of variation of 8.16 / 100 = 0.0816
* The data set [1, 5, 6, 8, 10, 40, 65, 88] has a population standard deviation of 30.8 and a coefficient of variation of 30.8 / 27.9 = 1.10

==Estimation==
When only a sample of data from a population is available, the population CV can be estimated using the ratio of the [[Standard deviation#Estimation|sample standard deviation]] <math>s \,</math> to the sample mean <math>\bar{x}</math>:
:<math>\widehat{c_{\rm v}} = \frac{s}{\bar{x}}</math>
But this estimator, when applied to a small or moderately sized sample, tends to be too low: it is a [[biased estimator]]. For [[normally distributed]] data, an unbiased estimator<ref>Sokal RR & Rohlf FJ. ''Biometry'' (3rd Ed). New York: Freeman, 1995. p. 58. {{ISBN|0-7167-2411-1}}</ref> for a sample of size n is:
:<math>\widehat{c_{\rm v}}^*=\bigg(1+\frac{1}{4n}\bigg)\widehat{c_{\rm v}}</math>

===Log-normal data===
Many datasets follow an approximately log-normal distribution.<ref>{{cite journal |doi=10.1641/0006-3568(2001)051[0341:LNDATS]2.0.CO;2 |title=Log-normal Distributions across the Sciences: Keys and Clues |year=2001 |last1=Limpert |first1=Eckhard |last2=Stahel |first2=Werner A. |last3=Abbt |first3=Markus |journal=BioScience |volume=51 |issue=5 |pages=341–352|doi-access=free }}</ref> In such cases, a more accurate estimate, derived from the properties of the [[log-normal distribution]],<ref>{{cite journal |doi=10.1093/biomet/51.1-2.25 |title=Confidence intervals for the coefficient of variation for the normal and log normal distributions |year=1964 |last1=Koopmans |first1=L. H. |last2=Owen |first2=D. B. |last3=Rosenblatt |first3=J. I. |journal=Biometrika |volume=51 |issue=1–2 |pages=25–32}}</ref><ref>{{cite journal |pmid=1601532 |year=1992 |last1=Diletti |first1=E |last2=Hauschke |first2=D |last3=Steinijans |first3=VW |title=Sample size determination for bioequivalence assessment by means of confidence intervals |volume=30 |pages=S51–8 |journal=International Journal of Clinical Pharmacology, Therapy, and Toxicology|issue=Suppl 1 }}</ref><ref>{{cite journal |doi=10.1081/BIP-100101013 |title=Why Are Pharmacokinetic Data Summarized by Arithmetic Means? |year=2000 |last1=Julious |first1=Steven A. |last2=Debarnot |first2=Camille A. M. |journal=Journal of Biopharmaceutical Statistics |volume=10 |pages=55–71 |pmid=10709801 |issue=1|s2cid=2805094 }}</ref> is defined as:
:<math>\widehat{cv}_{\rm raw} = \sqrt{\mathrm{e}^{s_{\ln}^2}-1}</math>
where <math>{s_{\ln}} \,</math> is the sample standard deviation of the data after a [[natural log]] transformation. (In the event that measurements are recorded using any other logarithmic base, b, their standard deviation <math>s_b \,</math> is converted to base e using <math>s_{\ln} = s_b \ln(b) \,</math>, and the formula for <math>\widehat{cv}_{\rm raw} \,</math> remains the same.<ref>{{cite journal | last1 = Reed | first1 = JF | last2 = Lynn | first2 = F | last3 = Meade | first3 = BD | year = 2002 | title = Use of Coefficient of Variation in Assessing Variability of Quantitative Assays | journal = Clin Diagn Lab Immunol | volume = 9 | issue = 6| pages = 1235–1239 | doi = 10.1128/CDLI.9.6.1235-1239.2002 | pmid = 12414755 | pmc = 130103 }}</ref>)

This estimate is sometimes referred to as the "geometric CV" (GCV)<ref>Sawant, S.; Mohan, N. (2011) [http://pharmasug.org/proceedings/2011/PO/PharmaSUG-2011-PO08.pdf "FAQ: Issues with Efficacy Analysis of Clinical Trial Data Using SAS"] {{webarchive|url=https://web.archive.org/web/20110824094357/http://pharmasug.org/proceedings/2011/PO/PharmaSUG-2011-PO08.pdf |date=24 August 2011 }}, ''PharmaSUG2011'', Paper PO08</ref><ref>{{cite journal | last1 = Schiff | first1 = MH | display-authors = etal | year = 2014 | title = Head-to-head, randomised, crossover study of oral versus subcutaneous methotrexate in patients with rheumatoid arthritis: drug-exposure limitations of oral methotrexate at doses >=15 mg may be overcome with subcutaneous administration| journal = Ann Rheum Dis | volume = 73| issue = 8| pages = 1–3 | doi = 10.1136/annrheumdis-2014-205228 | pmid = 24728329 | pmc = 4112421}}</ref> in order to distinguish it from the simple estimate above. However, "geometric coefficient of variation" has also been defined by Kirkwood<ref>{{cite journal |last1=Kirkwood |first1=TBL |title=Geometric means and measures of dispersion |journal=Biometrics |year=1979 |volume=35 |issue=4 |pages=908–9 |jstor=2530139 }}</ref> as:
:<math>\mathrm{GCV_K} = {\mathrm{e}^{s_{\ln}}\!\!-1}</math>
This term was intended to be ''analogous'' to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of <math>c_{\rm v} \,</math> itself.

For many practical purposes (such as [[sample size determination]] and calculation of [[confidence intervals]]) it is <math>s_{ln} \,</math> which is of most use in the context of log-normally distributed data. If necessary, this can be derived from an estimate of <math>c_{\rm v} \,</math> or GCV by inverting the corresponding formula.
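The estimators above can be illustrated with a short sketch (Python with NumPy; the function names are illustrative). It computes the naive sample CV, the small-sample correction for normally distributed data, and the log-normal-based estimate:

<syntaxhighlight lang="python">
import numpy as np

def cv_naive(x):
    """Sample CV: sample standard deviation (ddof=1) divided by the sample mean."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

def cv_unbiased_normal(x):
    """Bias-corrected CV for normally distributed data: (1 + 1/(4n)) * naive CV."""
    n = len(x)
    return (1 + 1.0 / (4 * n)) * cv_naive(x)

def cv_lognormal(x):
    """CV estimate for log-normally distributed data: sqrt(exp(s_ln^2) - 1),
    where s_ln is the sample SD of the natural-log-transformed data."""
    s_ln = np.std(np.log(x), ddof=1)
    return np.sqrt(np.exp(s_ln ** 2) - 1)

data = [1, 5, 6, 8, 10, 40, 65, 88]
print(cv_naive(data))            # ~1.18, as in the examples above
print(cv_unbiased_normal(data))  # slightly larger, correcting the small-sample bias
print(cv_lognormal(data))        # appropriate only if the data are roughly log-normal
</syntaxhighlight>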
==Comparison to standard deviation==

===Advantages===
The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. In contrast, the actual value of the CV is independent of the unit in which the measurement has been taken, so it is a [[dimensionless number]]. For comparison between data sets with different units or widely different means, one should use the coefficient of variation instead of the standard deviation.

===Disadvantages===
* When the mean value is close to zero, the coefficient of variation will approach infinity and is therefore sensitive to small changes in the mean. This is often the case if the values do not originate from a ratio scale.
* Unlike the standard deviation, it cannot be used directly to construct [[confidence interval]]s for the mean.

==Applications==
The coefficient of variation is also common in applied probability fields such as [[renewal theory]], [[queueing theory]], and [[reliability theory]]. In these fields, the [[exponential distribution]] is often more important than the [[normal distribution]]. The standard deviation of an [[exponential distribution]] is equal to its mean, so its coefficient of variation is equal to 1. Distributions with CV < 1 (such as an [[Erlang distribution]]) are considered low-variance, while those with CV > 1 (such as a [[hyper-exponential distribution]]) are considered high-variance{{Citation needed|date=June 2019}}. Some formulas in these fields are expressed using the '''squared coefficient of variation''', often abbreviated SCV. In modeling, a variation of the CV is the CV(RMSD). Essentially the CV(RMSD) replaces the standard deviation term with the [[RMSD|Root Mean Square Deviation (RMSD)]].

While many natural processes indeed show a correlation between the average value and the amount of variation around it, accurate sensor devices need to be designed in such a way that the coefficient of variation is close to zero, i.e., yielding a constant [[absolute error]] over their working range.

In [[actuarial science]], the CV is known as '''unitized risk'''.<ref>{{cite book|last1=Broverman|first1=Samuel A.|title=Actex study manual, Course 1, Examination of the Society of Actuaries, Exam 1 of the Casualty Actuarial Society|date=2001|publisher=Actex Publications|location=Winsted, CT|isbn=9781566983969|page=104|edition=2001|url=https://books.google.com/books?id=qd4PAQAAMAAJ&q=%22unitized+risk%22|access-date=7 June 2014}}</ref>

In industrial solids processing, CV is particularly important to measure the degree of homogeneity of a powder mixture. Comparing the calculated CV to a specification makes it possible to determine whether a sufficient degree of mixing has been reached.<ref>{{cite web|url=https://www.powderprocess.net/Measuring_Degree_Mixing.html|title=Measuring Degree of Mixing – Homogeneity of powder mix - Mixture quality - PowderProcess.net|website=www.powderprocess.net|access-date=2 May 2018|url-status=live|archive-url=https://web.archive.org/web/20171114145327/https://www.powderprocess.net/Measuring_Degree_Mixing.html|archive-date=14 November 2017|df=dmy-all}}</ref>

In [[fluid dynamics]], the '''CV''', also referred to as '''Percent RMS''', '''%RMS''', '''%RMS Uniformity''', or '''Velocity RMS''', is a useful determination of flow uniformity for industrial processes.
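In this usage the %RMS is simply the coefficient of variation of a set of local velocity measurements, expressed as a percentage. A minimal sketch of such a calculation follows (Python with NumPy; the velocity values are invented for illustration, and whether the population or sample standard deviation is used varies by convention):

<syntaxhighlight lang="python">
import numpy as np

def percent_rms(velocities, ddof=0):
    """Flow uniformity as %RMS: the RMS deviation from the mean velocity,
    divided by the mean and expressed as a percentage. The choice between
    population (ddof=0) and sample (ddof=1) deviation is left as a parameter."""
    v = np.asarray(velocities, dtype=float)
    return 100.0 * v.std(ddof=ddof) / v.mean()

# Hypothetical point velocities (m/s) measured across a duct cross-section.
profile = [11.8, 12.1, 12.4, 11.6, 12.0, 12.3, 11.9, 12.2]
print(f"{percent_rms(profile):.1f} %RMS")  # lower values indicate more uniform flow
</syntaxhighlight>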
The term is used widely in the design of pollution control equipment, such as electrostatic precipitators (ESPs),<ref>{{cite web |url=http://www.airflowsciences.com/sites/default/files/docs/201810Int-ESP-Improved-Methodology-for-Accurate-CFD-and-Physical-Modeling-of-ESPs.pdf| title = Improved Methodology for Accurate CFD and Physical Modeling of ESPs | last1 = Banka | first1 = A | last2 = Dumont | first2 = B | last3 = Franklin | first3 = J | last4 = Klemm | first4 = G | last5 = Mudry | first5 = R | date = 2018 | publisher = International Society of Electrostatic Precipitation (ISESP) Conference 2018 }}</ref> selective catalytic reduction (SCR), scrubbers, and similar devices. The Institute of Clean Air Companies (ICAC) references RMS deviation of velocity in the design of fabric filters (ICAC document F-7).<ref>{{cite web |url=https://www.icac.com/resource/resmgr/ICAC_F-7_FF_Flow_Model_Studi.pdf| title = F7 - Fabric Filter Gas Flow Model Studies | date = 1996 | publisher = Institute of Clean Air Companies (ICAC) }}</ref> The guiding principle is that many of these pollution control devices require "uniform flow" entering and through the control zone. This can be related to uniformity of velocity profile, temperature distribution, gas species (such as ammonia for an SCR, or activated carbon injection for mercury absorption), and other flow-related parameters. The '''Percent RMS''' is also used to assess flow uniformity in combustion systems, HVAC systems, ductwork, inlets to fans and filters, air handling units, etc., where performance of the equipment is influenced by the incoming flow distribution.

===Laboratory measures of intra-assay and inter-assay CVs===
CV measures are often used as quality controls for quantitative laboratory [[assay]]s. While intra-assay and inter-assay CVs might be assumed to be calculated by simply averaging CV values across multiple samples within one assay or by averaging multiple inter-assay CV estimates, it has been suggested that these practices are incorrect and that a more complex computational process is required.<ref>{{cite journal|last=Rodbard|first=D|title=Statistical quality control and routine data processing for radioimmunoassays and immunoradiometric assays.|journal=Clinical Chemistry|date=October 1974|volume=20|issue=10|pages=1255–70|doi=10.1093/clinchem/20.10.1255|pmid=4370388|doi-access=free}}</ref> It has also been noted that CV values are not an ideal index of the certainty of a measurement when the number of replicates varies across samples – in this case standard error in percent is suggested to be superior.<ref name="ReferenceA">{{cite journal|last1=Eisenberg|first1=Dan|title=Improving qPCR telomere length assays: Controlling for well position effects increases statistical power|journal=American Journal of Human Biology|date=2015|doi=10.1002/ajhb.22690|pmid=25757675|volume=27|issue=4|pages=570–5|pmc=4478151}}</ref> If measurements do not have a natural zero point then the CV is not a valid measurement and alternative measures such as the [[intraclass correlation]] coefficient are recommended.<ref name="Eisenberg-CV-ICC">{{cite journal|last1=Eisenberg|first1=Dan T. A.|title=Telomere length measurement validity: the coefficient of variation is invalid and cannot be used to compare quantitative polymerase chain reaction and Southern blot telomere length measurement technique|journal=International Journal of Epidemiology|volume=45|issue=4|date=30 August 2016|pages=1295–1298|doi=10.1093/ije/dyw191|pmid=27581804|issn=0300-5771|doi-access=free}}</ref>

=== As a measure of economic inequality ===
The coefficient of variation fulfills the [[Income inequality metrics|requirements for a measure of economic inequality]].<ref name="Champ1999">{{cite book |last1=Champernowne |first1=D. G.|last2=Cowell |first2=F. A.|date=1999 |title=Economic Inequality and Income Distribution |publisher=Cambridge University Press }}</ref><ref name="Campano2006">{{cite book |last1=Campano |first1=F. |last2=Salvatore |first2=D. |date=2006 |title=Income distribution |publisher=Oxford University Press }}</ref><ref name="Bellu2006">{{cite web |url=http://www.fao.org/docs/up/easypol/448/simple_inequality_mesures_080en.pdf |title=Policy Impacts on Inequality – Simple Inequality Measures |last1=Bellu |first1=Lorenzo Giovanni |last2=Liberati |first2=Paolo |date=2006 |website=EASYPol, Analytical tools |publisher=Policy Support Service, Policy Assistance Division, FAO |access-date=13 June 2016 |url-status=live |archive-url=https://web.archive.org/web/20160805101141/http://www.fao.org/docs/up/easypol/448/simple_inequality_mesures_080en.pdf |archive-date=5 August 2016 |df=dmy-all }}</ref> If '''x''' (with entries ''x''<sub>''i''</sub>) is a list of the values of an economic indicator (e.g. wealth), with ''x''<sub>''i''</sub> being the wealth of agent ''i'', then the following requirements are met:
* Anonymity – ''c''<sub>''v''</sub> is independent of the ordering of the list '''x'''. This follows from the fact that the variance and mean are independent of the ordering of '''x'''.
* Scale invariance: ''c''<sub>v</sub>('''x''') = ''c''<sub>v</sub>(α'''x''') where ''α'' is a real number.<ref name="Bellu2006" />
* Population independence – If {'''x''','''x'''} is the list '''x''' appended to itself, then ''c''<sub>''v''</sub>({'''x''','''x'''}) = ''c''<sub>''v''</sub>('''x'''). This follows from the fact that the variance and mean both obey this principle.
* Pigou–Dalton transfer principle: when wealth is transferred from a wealthier agent ''i'' to a poorer agent ''j'' (i.e. ''x''<sub>''i''</sub> > ''x''<sub>''j''</sub>) without altering their rank, then ''c''<sub>''v''</sub> decreases and vice versa.<ref name="Bellu2006" />

''c''<sub>''v''</sub> assumes its minimum value of zero for complete equality (all ''x''<sub>''i''</sub> are equal).<ref name="Bellu2006" /> Its most notable drawback is that it is not bounded from above, so it cannot be normalized to be within a fixed range (e.g. like the [[Gini coefficient]] which is constrained to be between 0 and 1).<ref name="Bellu2006" /> It is, however, more mathematically tractable than the Gini coefficient.

=== As a measure of standardisation of archaeological artefacts ===
Archaeologists often use CV values to compare the degree of standardisation of ancient artefacts.<ref>{{cite journal |last1=Eerkens |first1=Jelmer W. |last2=Bettinger |first2=Robert L. |title=Techniques for Assessing Standardization in Artifact Assemblages: Can We Scale Material Variability? |journal=American Antiquity |date=July 2001 |volume=66 |issue=3 |pages=493–504 |doi=10.2307/2694247|jstor=2694247 |s2cid=163507589 }}</ref><ref>{{cite journal |last1=Roux |first1=Valentine |title=Ceramic Standardization and Intensity of Production: Quantifying Degrees of Specialization |journal=American Antiquity |date=2003 |volume=68 |issue=4 |pages=768–782 |doi=10.2307/3557072 |jstor=3557072 |s2cid=147444325 |url=http://doi.org/10.2307/3557072 |language=en |issn=0002-7316}}</ref> Variation in CVs has been interpreted to indicate different cultural transmission contexts for the adoption of new technologies.<ref>{{cite journal |last1=Bettinger |first1=Robert L. |last2=Eerkens |first2=Jelmer |title=Point Typologies, Cultural Transmission, and the Spread of Bow-and-Arrow Technology in the Prehistoric Great Basin |journal=American Antiquity |date=April 1999 |volume=64 |issue=2 |pages=231–242 |doi=10.2307/2694276|jstor=2694276 |s2cid=163198451 }}</ref> Coefficients of variation have also been used to investigate pottery standardisation relating to changes in social organisation.<ref>{{cite journal |last1=Wang |first1=Li-Ying |last2=Marwick |first2=Ben |title=Standardization of ceramic shape: A case study of Iron Age pottery from northeastern Taiwan |journal=Journal of Archaeological Science: Reports |date=October 2020 |volume=33 |pages=102554 |doi=10.1016/j.jasrep.2020.102554|bibcode=2020JArSR..33j2554W |s2cid=224904703 |url=http://osf.io/q8hn9/ }}</ref> Archaeologists also use several methods for comparing CV values, for example the modified signed-likelihood ratio (MSLR) test for equality of CVs.<ref>{{cite journal |last1=Krishnamoorthy |first1=K. |last2=Lee |first2=Meesook |title=Improved tests for the equality of normal coefficients of variation |journal=Computational Statistics |date=February 2014 |volume=29 |issue=1–2 |pages=215–232 |doi=10.1007/s00180-013-0445-2|s2cid=120898013 }}</ref><ref>{{cite book |last1=Marwick |first1=Ben |last2=Krishnamoorthy |first2=K |title=cvequality: Tests for the equality of coefficients of variation from multiple groups |date=2019 |publisher=R package version 0.2.0. |url=https://cran.r-project.org/package=cvequality}}</ref>

== Examples of misuse ==
Comparing coefficients of variation between parameters using relative units can result in differences that may not be real. If we compare the same set of temperatures in [[Celsius]] and [[Fahrenheit]] (both relative units, where [[kelvin]] and [[Rankine scale]] are their associated absolute values):

Celsius: [0, 10, 20, 30, 40]

Fahrenheit: [32, 50, 68, 86, 104]

The [[Standard deviation#Sample standard deviation|sample standard deviations]] are 15.81 and 28.46, respectively. The CV of the first set is 15.81/20 = 79%. For the second set (which are the same temperatures) it is 28.46/68 = 42%. If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute.

Comparing the same data set, now in absolute units:

Kelvin: [273.15, 283.15, 293.15, 303.15, 313.15]

Rankine: [491.67, 509.67, 527.67, 545.67, 563.67]

The [[Standard deviation#Sample standard deviation|sample standard deviations]] are still 15.81 and 28.46, respectively, because the standard deviation is not affected by a constant offset.
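These figures, and the coefficients of variation reported next, can be reproduced with a short calculation (a Python/NumPy sketch; the variable names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def sample_cv(x):
    """Sample standard deviation (ddof=1) divided by the sample mean."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

celsius    = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
fahrenheit = celsius * 9 / 5 + 32        # [32, 50, 68, 86, 104]
kelvin     = celsius + 273.15            # absolute scale
rankine    = fahrenheit + 459.67         # absolute scale

for name, t in [("Celsius", celsius), ("Fahrenheit", fahrenheit),
                ("Kelvin", kelvin), ("Rankine", rankine)]:
    print(f"{name:10s} SD = {t.std(ddof=1):6.2f}  CV = {100 * sample_cv(t):5.2f}%")
# Celsius and Fahrenheit give about 79% and 42%; Kelvin and Rankine both give about 5.39%.
</syntaxhighlight>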
The coefficients of variation, however, are now both equal to 5.39%.

Mathematically speaking, the coefficient of variation is not entirely linear. That is, for a random variable <math>X</math>, the coefficient of variation of <math>aX+b</math> is equal to the coefficient of variation of <math>X</math> only when <math>b = 0</math>. In the above example, Celsius can only be converted to Fahrenheit through a linear transformation of the form <math>ax+b</math> with <math>b \neq 0</math>, whereas Kelvins can be converted to Rankines through a transformation of the form <math>ax</math>.

==Distribution==
Provided that negative and small positive values of the sample mean occur with negligible frequency, the [[probability distribution]] of the coefficient of variation for a sample of size <math>n</math> of i.i.d. normal random variables has been shown by Hendricks and Robey to be<ref>{{cite journal| last1=Hendricks|first1=Walter A.| last2=Robey|first2=Kate W.| year=1936| title=The Sampling Distribution of the Coefficient of Variation| journal=The Annals of Mathematical Statistics|volume=7| issue=3|pages=129–32| doi=10.1214/aoms/1177732503| jstor=2957564|doi-access=free}}</ref>

<math display="block"> \mathrm{d}F_{c_{\rm v}} = \frac{2}{\pi^{1/2} \Gamma {\left(\frac{n-1}{2}\right)}} \exp \left( -\frac{n}{2\left(\frac{\sigma}{\mu}\right)^2} \cdot \frac{{c_{\rm v}}^2}{1+{c_{\rm v}}^2} \right) \frac{{c_{\rm v}}^{n-2}}{(1+{c_{\rm v}}^2)^{n/2}}\sideset{}{^\prime}\sum_{i=0}^{n-1}\frac{(n-1)! \, \Gamma \left(\frac{n-i}{2}\right)}{(n-1-i)! \, i! \,} \cdot \frac{n^{i/2}}{2^{i/2} \cdot \left(\frac{\sigma}{\mu}\right)^i} \cdot \frac{1}{(1+{c_{\rm v}}^2)^{i/2}} \, \mathrm{d}c_{\rm v} ,</math>

where the symbol <math display="inline">\sideset{}{^\prime}\sum</math> indicates that the summation is over only even values of <math>n - 1 - i</math>, i.e., if <math>n</math> is odd, sum over even values of <math>i</math> and if <math>n</math> is even, sum only over odd values of <math>i</math>.

This is useful, for instance, in the construction of [[hypothesis test]]s or [[confidence interval]]s. Statistical inference for the coefficient of variation in normally distributed data is often based on [[McKay's approximation for the coefficient of variation|McKay's chi-square approximation]] for the coefficient of variation.<ref>{{cite journal |jstor=1267363 |title=Comparisons of approximations to the percentage points of the sample coefficient of variation |year=1970 |last1=Iglevicz |first1=Boris |last2=Myers |first2=Raymond |journal=Technometrics |volume=12 |issue=1 |pages=166–169|doi=10.2307/1267363 }}</ref><ref>{{cite book | last1 = Bennett | first1 = B. M. | title = Contribution to Applied Statistics | chapter = On an Approximate Test for Homogeneity of Coefficients of Variation | year = 1976 | volume = 22 | pages = 169–171 | doi = 10.1007/978-3-0348-5513-6_16 | series = Experientia Supplementum | isbn = 978-3-0348-5515-0 }}</ref><ref>{{cite journal |jstor=2685039 |title=Confidence intervals for a normal coefficient of variation |year=1996 |last1=Vangel |first1=Mark G. |journal=The American Statistician |volume=50 |issue=1 |pages=21–26 |doi=10.1080/00031305.1996.10473537}}.</ref><ref>{{cite journal |doi=10.1002/(SICI)1097-0258(19960330)15:6<647::AID-SIM184>3.0.CO;2-P |title=An asymptotic test for the equality of coefficients of variation from k populations |journal=Statistics in Medicine |volume=15 |issue=6 |pages=647 |year=1996 |last1=Feltz |first1=Carol J |last2=Miller |first2=G. Edward |pmid=8731006 }}</ref><ref>{{Cite journal | url=http://pub.epsilon.slu.se/4489/1/forkman_j_110214.pdf | title=Estimator and tests for common coefficients of variation in normal distributions | access-date=23 September 2013 | last1=Forkman | first1=Johannes | journal=Communications in Statistics – Theory and Methods | volume=38 | issue=2 | pages=21–26 | doi=10.1080/03610920802187448 | year=2009 | s2cid=29168286 | url-status=live | archive-url=https://web.archive.org/web/20131206021229/http://pub.epsilon.slu.se/4489/1/forkman_j_110214.pdf | archive-date=6 December 2013 | df=dmy-all }}</ref><ref>{{cite journal |doi=10.1007/s00180-013-0445-2 |title=Improved tests for the equality of normal coefficients of variation |journal=Computational Statistics |volume=29 |issue=1–2 |pages=215–232 |year=2013 |last1=Krishnamoorthy |first1=K |last2=Lee |first2=Meesook |s2cid=120898013 }}</ref>

===Alternative===
Liu (2012) reviews methods for the construction of a confidence interval for the coefficient of variation.<ref>{{Cite thesis |last=Liu |first=Shuang |title=Confidence Interval Estimation for Coefficient of Variation |publisher=Georgia State University |url=https://scholarworks.gsu.edu/math_theses/124 |access-date=2014-02-25 |url-status=live |archive-url=https://web.archive.org/web/20140301102042/http://scholarworks.gsu.edu/cgi/viewcontent.cgi?article=1116&context=math_theses |archive-date=1 March 2014 |df=dmy-all |at=p.3 |year=2012}}</ref> Notably, Lehmann (1986) derived the sampling distribution for the coefficient of variation using a [[non-central t-distribution]] to give an exact method for the construction of the CI.<ref>Lehmann, E. L. (1986). ''Testing Statistical Hypotheses.'' 2nd ed. New York: Wiley.</ref>

== Similar ratios ==
[[Standardized moment]]s are similar ratios, <math>{\mu_k}/{\sigma^k}</math> where <math>\mu_k</math> is the ''k''<sup>th</sup> moment about the mean, which are also dimensionless and scale invariant. The [[variance-to-mean ratio]], <math>\sigma^2/\mu</math>, is another similar ratio, but is not dimensionless, and hence not scale invariant. See [[Normalization (statistics)]] for further ratios.

In [[signal processing]], particularly [[image processing]], the [[Multiplicative inverse|reciprocal]] ratio <math>\mu/\sigma</math> (or its square) is referred to as the [[signal-to-noise ratio]] in general and [[signal-to-noise ratio (imaging)]] in particular.

Other related ratios include:
* [[Efficiency (statistics)#Estimators of u.i.d. Variables|Efficiency]], <math>\sigma^2 / \mu^2</math>
* [[Standardized moment]], <math>\mu_k/\sigma^k</math>
* [[Variance-to-mean ratio]] (or relative variance), <math>\sigma^2/\mu</math>
* [[Fano factor]], <math>\sigma^2_W/\mu_W</math> (windowed VMR)

==See also==
* [[Standard score]]
* [[Information ratio]]
* [[Omega ratio]]
* [[Sampling (statistics)]]
* [[Variance function]]

==References==
{{Reflist}}

==External links==
*[https://cran.r-project.org/package=cvequality cvequality]: [[R (programming language)|R]] package to test for significant differences between multiple coefficients of variation

{{Statistics|descriptive}}

{{DEFAULTSORT:Coefficient Of Variation}}
[[Category:Statistical deviation and dispersion]]
[[Category:Statistical ratios]]
[[Category:Income inequality metrics]]