{{short description|Middle quantile of a data set or probability distribution}} {{About|the statistical concept}} [[File:Finding the median.png|thumb|Calculating the median in data sets of odd (above) and even (below) observations]] The '''median''' of a set of numbers is the value separating the higher half from the lower half of a [[Sample (statistics)|data sample]], a [[statistical population|population]], or a [[probability distribution]]. For a [[data set]], it may be thought of as the "middle" value. The basic feature of the median in describing data compared to the [[Arithmetic mean|mean]] (often simply described as the "average") is that it is not [[Skewness|skewed]] by a small proportion of extremely large or small values, and therefore provides a better representation of the center. [[Median income]], for example, may be a better way to describe the center of the income distribution because increases in the largest incomes alone have no effect on the median. For this reason, the median is of central importance in [[robust statistics]]. The median is a 2-[[quantile]]; it is the value that partitions a set into two equal parts. ==Finite set of numbers== The median of a finite list of numbers is the "middle" number when those numbers are listed in order from smallest to greatest. If the data set has an odd number of observations, the middle one is selected (after arranging the observations in ascending order). For example, the following list of seven numbers, {{block indent | em = 1.5 | text = 1, 3, 3, '''6''', 7, 8, 9}} has a median of ''6'', which is the fourth value. If the data set has an even number of observations, there is no distinct middle value and the median is usually defined to be the [[arithmetic mean]] of the two middle values.<ref name="StatisticalMedian">{{MathWorld |urlname=StatisticalMedian |title=Statistical Median }}</ref><ref>Simon, Laura J.; [http://www.stat.psu.edu/old_resources/ClassNotes/ljs_07/sld008.htm "Descriptive statistics"] {{webarchive|url=https://web.archive.org/web/20100730032416/http://www.stat.psu.edu/old_resources/ClassNotes/ljs_07/sld008.htm |date=2010-07-30 }}, ''Statistical Education Resource Kit'', Pennsylvania State Department of Statistics</ref> For example, this data set of 8 numbers {{block indent | em = 1.5 | text = 1, 2, 3, '''4, 5''', 6, 8, 9}} has a median value of ''4.5'', that is <math>(4 + 5)/2</math>. (In more technical terms, this interprets the median as the fully [[trimmed estimator|trimmed]] [[mid-range]].) In general, with this convention, the median can be defined as follows: For a data set <math>x</math> of <math>n</math> elements, ordered from smallest to greatest, {{block indent | em = 1.5 | text = if <math>n</math> is odd, <math>\operatorname{med}(x) = x_{(n + 1)/ 2} </math>}} {{block indent | em = 1.5 | text = if <math>n</math> is even, <math>\operatorname{med}(x) = \frac{x_{(n/2)} + x_{((n/2)+1)}}{2} </math>}}
{| class="wikitable"
|+ Comparison of common [[average]]s of values [ 1, 2, 2, 3, 4, 7, 9 ]
! Type !! Description !! Example !! Result
|-
| align="center" | [[Mid-range|Midrange]]
| Midway point between the minimum and the maximum of a data set
| align="center" | '''1''', 2, 2, 3, 4, 7, '''9'''
| align="center" | '''5'''
|-
| align="center" | [[Arithmetic mean]]
| Sum of values of a data set divided by number of values: <math display="inline">\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i</math>
| align="center" | {{nowrap|(1 + 2 + 2 + 3 + 4 + 7 + 9) / 7}}
| align="center" | '''4'''
|-
| align="center" | Median
| Middle value separating the greater and lesser halves of a data set
| align="center" | 1, 2, 2, '''3''', 4, 7, 9
| align="center" | '''3'''
|-
| align="center" | [[Mode (statistics)|Mode]]
| Most frequent value in a data set
| align="center" | 1, '''2''', '''2''', 3, 4, 7, 9
| align="center" | '''2'''
|}
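The odd/even rule above translates directly into code. The following is a minimal Python sketch of that convention (illustrative only; in practice the standard library's <code>statistics.median</code> implements the same behaviour):

<syntaxhighlight lang="python">
def median(values):
    """Median of a non-empty sequence, averaging the two middle values when the count is even."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]                    # odd count: the single middle value
    return (s[mid - 1] + s[mid]) / 2     # even count: mean of the two middle values

print(median([1, 3, 3, 6, 7, 8, 9]))     # 6
print(median([1, 2, 3, 4, 5, 6, 8, 9]))  # 4.5
</syntaxhighlight>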
==Definition and notation== Formally, a median of a [[Population (statistics)|population]] is any value such that at least half of the population is less than or equal to the proposed median and at least half is greater than or equal to the proposed median. As seen above, medians may not be unique. If the set of values less than or equal to the proposed median and the set of values greater than or equal to it each contain more than half the population, then some of the population is exactly equal to the unique median. The median is well-defined for any [[Weak ordering|ordered]] (one-dimensional) data and is independent of any [[distance metric]]. The median can thus be applied to ranked but non-numerical classes (e.g. working out a median grade when student test scores are graded from F to A), although the result might be halfway between classes if there is an even number of classes. (For an odd number of classes, one specific class is determined as the median.) A [[geometric median]], on the other hand, is defined in any number of dimensions. A related concept, in which the outcome is forced to correspond to a member of the sample, is the [[medoid]]. There is no widely accepted standard notation for the median, but some authors represent the median of a variable ''x'' as med(''x''), ''x͂'',<ref name="Bissell1994">{{cite book|author=Derek Bissell | title=Statistical Methods for Spc and Tqm | url=https://books.google.com/books?id=cTwwtyBX7PAC&pg=PA26 |access-date=25 February 2013|year=1994 |publisher=CRC Press |isbn=978-0-412-39440-9 |pages=26–}}</ref> as ''μ''<sub>1/2</sub>,<ref name="StatisticalMedian" /> or as ''M''.<ref name="Bissell1994"/><ref name="Sheskin2003">{{cite book| author=David J. Sheskin|title=Handbook of Parametric and Nonparametric Statistical Procedures | edition = Third |url=https://books.google.com/books?id=bmwhcJqq01cC&pg=PA7 |access-date=25 February 2013|date=27 August 2003 |publisher=CRC Press|isbn=978-1-4200-3626-8 |page=7}}</ref> In any of these cases, the use of these or other symbols for the median needs to be explicitly defined when they are introduced. The median is a special case of other [[location parameter|ways of summarizing the typical values associated with a statistical distribution]]: it is the 2nd [[quartile]], 5th [[decile]], and 50th [[percentile]]. ==Uses== The median can be used as a measure of [[location parameter|location]] when one attaches reduced importance to extreme values, typically because a distribution is [[skewness|skewed]], extreme values are not known, or [[outlier]]s are untrustworthy, i.e., may be measurement or transcription errors. For example, consider the [[multiset]] {{block indent | em = 1.5 | text = 1, 2, 2, 2, 3, 14.
}} The median is 2 in this case, as is the [[mode (statistics)|mode]], and it might be seen as a better indication of the [[central tendency|center]] than the [[arithmetic mean]] of 4, which is larger than all but one of the values. However, the widely cited empirical relationship that the mean is shifted "further into the tail" of a distribution than the median is not generally true. At most, one can say that the two statistics cannot be "too far" apart; see {{slink||Inequality relating means and medians}} below.<ref>{{cite journal |url=http://www.amstat.org/publications/jse/v13n2/vonhippel.html |title=Mean, Median, and Skew: Correcting a Textbook Rule |journal=Journal of Statistics Education |volume=13 |issue=2 |author=Paul T. von Hippel |year=2005 |access-date=2015-06-18 |archive-date=2008-10-14 |archive-url=https://web.archive.org/web/20081014045349/http://www.amstat.org/publications/jse/v13n2/vonhippel.html |url-status=dead }}</ref> As a median is based on the middle data in a set, it is not necessary to know the value of extreme results in order to calculate it. For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time, a median can still be calculated.<ref name="Robson">{{cite book | last1=Robson|first1=Colin | title=Experiment, Design and Statistics in Psychology |date=1994|publisher=Penguin |isbn=0-14-017648-9|pages=42–45}}</ref> Because the median is simple to understand and easy to calculate, while also being a robust approximation to the [[mean]], the median is a popular [[summary statistic]] in [[descriptive statistics]]. In this context, there are several choices for a measure of [[variability (statistics)|variability]]: the [[Range (statistics)|range]], the [[interquartile range]], the [[mean absolute deviation]], and the [[median absolute deviation]]. For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the [[Efficiency (statistics)|efficiency]] of candidate estimators shows that the sample mean is more statistically efficient [[if and only if|when—and only when—]] data is uncontaminated by data from heavy-tailed distributions or from mixtures of distributions.{{citation needed|date=February 2020}} Even then, the median has a 64% efficiency compared to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be ~57% greater than the variance of the mean.<ref name="Williams 2001 165">{{cite book |last=Williams |first=D. |year=2001 |title=Weighing the Odds |url=https://archive.org/details/weighingoddscour00will_530 |url-access=limited |publisher=Cambridge University Press |isbn=052100618X |page=[https://archive.org/details/weighingoddscour00will_530/page/n184 165]}}</ref><ref>{{Cite book | last1=Maindonald|first1=John| url=https://books.google.com/books?id=8bMj8m-4RDQC&pg=PA104| title=Data Analysis and Graphics Using R: An Example-Based Approach| last2=Braun|first2=W.
John |date=2010-05-06|publisher=Cambridge University Press|isbn=978-1-139-48667-5| pages=104|language=en}}</ref> ==Probability distributions== For any [[real number|real]]-valued [[probability distribution]] with [[cumulative distribution function]] ''F'', a median is defined as any real number ''m'' that satisfies the inequalities <math display="block"> \lim_{x\to m-} F(x) \leq \frac{1}{2} \leq F(m) </math> (cf. the [[Expected value#Uhl2023Bild1|drawing]] in the [[Expected value#Arbitrary real-valued random variables|definition of expected value for arbitrary real-valued random variables]]). An equivalent phrasing uses a random variable ''X'' distributed according to ''F'': <math display="block"> \operatorname{P}(X\leq m) \geq \frac{1}{2}\text{ and } \operatorname{P}(X\geq m) \geq \frac{1}{2}\,. </math> [[File:visualisation mode median mean.svg|thumb|upright|[[Mode (statistics)|Mode]], median and mean ([[expected value]]) of a probability density function<ref>{{cite web|title=AP Statistics Review - Density Curves and the Normal Distributions| url=http://apstatsreview.tumblr.com/post/50058615236/density-curves-and-the-normal-distributions|access-date=16 March 2015| archive-url=https://web.archive.org/web/20150408230922/https://apstatsreview.tumblr.com/post/50058615236/density-curves-and-the-normal-distributions |archive-date=8 April 2015}}</ref>]] Note that this definition does not require ''X'' to have an [[absolute continuity|absolutely continuous distribution]] (which has a [[probability density function]] ''f''), nor does it require a [[discrete distribution|discrete one]]. In the former case, the inequalities can be upgraded to equality: a median satisfies <math display="block">\operatorname{P}(X \leq m) = \int_{-\infty}^m{f(x)\, dx} = \frac{1}{2}</math> and <math display="block">\operatorname{P}(X \geq m) = \int_m^{\infty}{f(x)\, dx} = \frac{1}{2}\,.</math> Any [[probability distribution]] on the real number set <math>\R</math> has at least one median, but in pathological cases there may be more than one median: if ''F'' is constant 1/2 on an interval (so that ''f'' = 0 there), then any value of that interval is a median. ===Medians of particular distributions=== The medians of certain types of distributions can be easily calculated from their parameters; furthermore, they exist even for some distributions lacking a well-defined mean, such as the [[Cauchy distribution]]: * The median of a symmetric [[unimodal distribution]] coincides with the mode. * The median of a [[symmetric distribution]] which possesses a mean ''μ'' also takes the value ''μ''. ** The median of a [[normal distribution]] with mean ''μ'' and variance ''σ''<sup>2</sup> is μ. In fact, for a normal distribution, mean = median = mode. ** The median of a [[uniform distribution (continuous)|uniform distribution]] in the interval [''a'', ''b''] is (''a'' + ''b'') / 2, which is also the mean. * The median of a [[Cauchy distribution]] with location parameter ''x''<sub>0</sub> and scale parameter ''γ'' is ''x''<sub>0</sub>, the location parameter. * The median of a [[Power law|power law distribution]] ''x''<sup>−''a''</sup>, with exponent ''a'' > 1 is 2<sup>1/(''a'' − 1)</sup>''x''<sub>min</sub>, where ''x''<sub>min</sub> is the minimum value for which the power law holds<ref>{{cite journal | arxiv=cond-mat/0412004 | doi=10.1080/00107510500052444 | title=Power laws, Pareto distributions and Zipf's law | year=2005 | last1=Newman | first1=M. E. J. | journal=Contemporary Physics | volume=46 | issue=5 | pages=323–351 | bibcode=2005ConPh..46..323N | s2cid=2871747 }}</ref> * The median of an [[exponential distribution]] with [[rate parameter]] ''λ'' is the [[natural logarithm]] of 2 divided by the rate parameter: ''λ''<sup>−1</sup>ln 2. * The median of a [[Weibull distribution]] with shape parameter ''k'' and scale parameter ''λ'' is ''λ''(ln 2)<sup>1/''k''</sup>.
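These closed forms are straightforward to check numerically. A minimal Python sketch using SciPy's distribution objects (the parameter values below are arbitrary, chosen only for illustration):

<syntaxhighlight lang="python">
import math
from scipy import stats

lam, k = 1.5, 2.0        # illustrative rate (exponential) and shape (Weibull) parameters
x0, gamma = 3.0, 0.5     # illustrative Cauchy location and scale

# Exponential: median = ln(2) / lambda
print(stats.expon(scale=1/lam).median(), math.log(2) / lam)

# Weibull: median = lambda * (ln 2)^(1/k)
print(stats.weibull_min(c=k, scale=lam).median(), lam * math.log(2) ** (1 / k))

# Cauchy: the median is the location parameter, even though no mean exists
print(stats.cauchy(loc=x0, scale=gamma).median(), x0)
</syntaxhighlight>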
==Properties== ===Optimality property=== The ''[[mean absolute error]]'' of a real variable ''c'' with respect to the [[random variable]] ''X'' is <math display="block">\operatorname{E}\left[\left|X-c\right|\right]</math> Provided that the probability distribution of ''X'' is such that the above expectation exists, then ''m'' is a median of ''X'' if and only if ''m'' is a minimizer of the mean absolute error with respect to ''X''.<ref>{{cite book |last=Stroock |first=Daniel |title=Probability Theory |url=https://archive.org/details/probabilitytheor00stro |url-access=limited |year=2011 |publisher=Cambridge University Press |isbn=978-0-521-13250-3 |pages=[https://archive.org/details/probabilitytheor00stro/page/n66 43] }}</ref> In particular, if ''m'' is a sample median, then it minimizes the arithmetic mean of the absolute deviations.<ref>{{cite book | last = DeGroot | first = Morris H. | mr = 0356303 | page = 232 | publisher = McGraw-Hill Book Co., New York-London-Sydney | title = Optimal Statistical Decisions | url = https://books.google.com/books?id=7rDY2_r4bmEC&pg=PA232 | year = 1970 | isbn = 9780471680291 }}</ref> Note, however, that in cases where the sample contains an even number of elements, this minimizer is not unique. More generally, a median is defined as a minimizer (over ''c'') of <math display="block">\operatorname{E}\left[\left|X - c\right| - \left|X\right|\right],</math> as discussed below in the section on [[multivariate median]]s (specifically, the [[spatial median]]). This optimization-based definition of the median is useful in statistical data analysis, for example, in [[k-medians clustering|''k''-medians clustering]]. ===Inequality relating means and medians=== [[File:Comparison mean median mode.svg|thumb|upright=1.35|Comparison of [[mean]], median and [[mode (statistics)|mode]] of two [[log-normal distribution]]s with different [[skewness]]]] If the distribution has finite variance, then the distance between the median <math>\tilde{X}</math> and the mean <math>\bar{X}</math> is bounded by one [[standard deviation]]. This bound was proved by Book and Sher in 1979 for discrete samples,<ref>{{cite journal |author1=Stephen A. Book |author2=Lawrence Sher |title=How close are the mean and the median? |journal=The Two-Year College Mathematics Journal |date=1979 |volume=10 |issue=3 |pages=202–204 |doi=10.2307/3026748 |jstor=3026748 |url=https://www.jstor.org/stable/3026748 |access-date=12 March 2022}}</ref> and more generally by Page and Murty in 1982.<ref>{{cite journal |author1=Warren Page |author2=Vedula N.
Murty |title=Nearness Relations Among Measures of Central Tendency and Dispersion: Part 1 |journal=The Two-Year College Mathematics Journal |date=1982 |volume=13 |issue=5 |pages=315–327 |doi=10.1080/00494925.1982.11972639 |doi-broken-date=1 November 2024 |url=https://www.tandfonline.com/doi/abs/10.1080/00494925.1982.11972639?journalCode=ucmj19 |access-date=12 March 2022}}</ref> In a comment on a subsequent proof by O'Cinneide,<ref>{{cite journal |last1=O'Cinneide |first1=Colm Art |title=The mean is within one standard deviation of any median |journal=The American Statistician |date=1990 |volume=44 |issue=4 |pages=292–293 |doi=10.1080/00031305.1990.10475743 |url=https://www.tandfonline.com/doi/abs/10.1080/00031305.1990.10475743?journalCode=utas20 |access-date=12 March 2022}}</ref> Mallows in 1991 presented a compact proof that uses [[Jensen's inequality]] twice,<ref>{{cite journal |last=Mallows |first=Colin |title=Another comment on O'Cinneide |journal=The American Statistician |date=August 1991 |volume=45 |issue=3 |pages=257 | doi = 10.1080/00031305.1991.10475815}}</ref> as follows. Using |·| for the [[absolute value]], we have <math display="block">\begin{align} \left|\mu - m\right| = \left|\operatorname{E}(X - m)\right| & \leq \operatorname{E}\left(\left|X - m \right|\right) \\[2ex] & \leq \operatorname{E}\left(\left|X - \mu\right|\right) \\[1ex] & \leq \sqrt{\operatorname{E}\left({\left(X - \mu\right)}^2\right)} = \sigma. \end{align}</math> The first and third inequalities come from Jensen's inequality applied to the absolute-value function and the square function, which are each convex. The second inequality comes from the fact that a median minimizes the [[absolute deviation]] function <math>a \mapsto \operatorname{E}[|X-a|]</math>. Mallows's proof can be generalized to obtain a multivariate version of the inequality<ref name=PicheRandomVectorsSequences>{{cite book|last=Piché|first=Robert|title=Random Vectors and Random Sequences|year=2012|publisher=Lambert Academic Publishing|isbn=978-3659211966}}</ref> simply by replacing the absolute value with a [[norm (mathematics)|norm]]: <math display="block">\left\|\mu - m\right\| \leq \sqrt{ \operatorname{E}\left({\left\|X - \mu\right\|}^2\right) } = \sqrt{ \operatorname{trace}\left(\operatorname{var}(X)\right) }</math> where ''m'' is a [[spatial median]], that is, a minimizer of the function <math>a \mapsto \operatorname{E}(\|X-a\|).\,</math> The spatial median is unique when the data-set's dimension is two or more.<ref name="Kemperman">{{cite journal |first=Johannes H. B. |last=Kemperman |title=The median of a finite measure on a Banach space: Statistical data analysis based on the L1-norm and related methods |journal=Papers from the First International Conference Held at Neuchâtel, August 31–September 4, 1987 |editor-first=Yadolah |editor-last=Dodge |publisher=North-Holland Publishing Co. |location=Amsterdam |pages=217–230 |mr=949228 |year=1987 }}</ref><ref name="MilasevicDucharme">{{cite journal |first1=Philip |last1=Milasevic |first2=Gilles R. |last2=Ducharme |title=Uniqueness of the spatial median |journal=[[Annals of Statistics]] |volume=15 |year=1987 |number=3 |pages=1332–1333 |mr=902264 |doi=10.1214/aos/1176350511|doi-access=free }}</ref> An alternative proof uses the one-sided Chebyshev inequality; it appears in [[An inequality on location and scale parameters#An application - distance between the mean and the median|an inequality on location and scale parameters]]. 
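Both properties (the median as a minimizer of the mean absolute deviation, and the one-standard-deviation bound on its distance from the mean) are easy to illustrate numerically. A minimal NumPy sketch, reusing the skewed sample from {{slink||Uses}} above:

<syntaxhighlight lang="python">
import numpy as np

x = np.array([1, 2, 2, 2, 3, 14], dtype=float)
mean, med, sigma = x.mean(), np.median(x), x.std()

# The median minimizes the mean absolute deviation c -> E|X - c|.
grid = np.linspace(x.min(), x.max(), 2001)
mad = np.abs(x[:, None] - grid[None, :]).mean(axis=0)
print(grid[mad.argmin()], med)      # both approximately 2

# |mean - median| is at most one standard deviation.
print(abs(mean - med), sigma)       # 2.0 <= 4.51...
</syntaxhighlight>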
The bound also follows directly from [[Cantelli's inequality]].<ref>[http://www.montefiore.ulg.ac.be/~kvansteen/MATH0008-2/ac20112012/Class3/Chapter2_ac1112_vfinalPartII.pdf K.Van Steen ''Notes on probability and statistics'' ]</ref> ====Unimodal distributions==== For the case of [[Unimodality|unimodal]] distributions, one can achieve a sharper bound on the distance between the median and the mean:<ref name="unimodal">{{Cite journal|title=The Mean, Median, and Mode of Unimodal Distributions:A Characterization|journal=Theory of Probability and Its Applications|volume=41|issue=2|pages=210–223|doi=10.1137/S0040585X97975447|year = 1997|last1 = Basu|first1 = S.|last2=Dasgupta|first2=A.|s2cid=54593178}}</ref> <math display="block">\left|\tilde{X} - \bar{X}\right| \le \left(\frac{3}{5}\right)^{1/2} \sigma \approx 0.7746\sigma.</math> A similar relation holds between the median and the mode: <math display="block">\left|\tilde{X} - \mathrm{mode}\right| \le 3^{1/2} \sigma \approx 1.732\sigma.</math> [[File:Proof without words- The mean is greater than the median for monotonic distributions.svg|thumb|The mean is greater than the median for monotonic distributions.]] === Mean, median, and skew === A typical heuristic is that positively skewed distributions have mean > median. This is true for all members of the [[Pearson distribution|Pearson distribution family]]. However, this is not always true. For example, the [[Weibull distribution|Weibull distribution family]] has members with positive skew, but mean < median. Violations of the rule are particularly common for discrete distributions. For example, any Poisson distribution has positive skew, but its mean < median whenever <math>\mu \bmod 1>\ln 2</math>.<ref>{{Cite journal |last=von Hippel |first=Paul T. |date=January 2005 |title=Mean, Median, and Skew: Correcting a Textbook Rule |journal=Journal of Statistics Education |language=en |volume=13 |issue=2 |doi=10.1080/10691898.2005.11910556 |issn=1069-1898|doi-access=free }}</ref> See Groeneveld and Meeden (1977)<ref>{{Cite journal |last1=Groeneveld |first1=Richard A. |last2=Meeden |first2=Glen |date=August 1977 |title=The Mode, Median, and Mean Inequality |url=http://www.tandfonline.com/doi/abs/10.1080/00031305.1977.10479215 |journal=The American Statistician |language=en |volume=31 |issue=3 |pages=120–121 |doi=10.1080/00031305.1977.10479215 |issn=0003-1305}}</ref> for a proof sketch. When the distribution has a monotonically decreasing probability density, then the median is less than the mean, as shown in the figure. ==Jensen's inequality for medians== Jensen's inequality states that for any random variable ''X'' with a finite expectation ''E''[''X''] and for any convex function ''f'', <math display="block"> f(\operatorname{E}[X]) \le \operatorname{E}[ f(X) ] </math> This inequality generalizes to the median as well. We say a function {{math|''f'': '''R''' → '''R'''}} is a '''C function''' if, for any ''t'', <math display="block"> f^{-1}\left( \,(-\infty, t]\, \right) = \{ x \in \Reals \mid f(x) \le t \} </math> is a [[closed interval]] (allowing the degenerate cases of a [[singleton (mathematics)|single point]] or an [[empty set]]). Every convex function is a C function, but the converse does not hold. If ''f'' is a C function, then <math display="block"> f(\operatorname{med}[X]) \le \operatorname{med}[ f(X)] </math> If the medians are not unique, the statement holds for the corresponding suprema.<ref name=Merkle2005>{{cite journal |last=Merkle |first=M.
|year=2005 |title=Jensen's inequality for medians |journal=Statistics & Probability Letters |volume=71 |issue=3 |pages=277–281 |doi=10.1016/j.spl.2004.11.010 }}</ref> ==Medians for samples== {{hatnote|This section discusses the theory of estimating a population median from a sample. To calculate the median of a sample "by hand," see {{slink||Finite set of numbers}} above.}} ==={{anchor|Ninther}} {{anchor|Remedian}} Efficient computation of the sample median=== Even though [[sorting algorithm|comparison-sorting]] ''n'' items requires {{math|[[Big O notation|Ω]](''n'' log ''n'')}} operations, [[selection algorithm]]s can compute the [[order statistic|{{mvar|k}}th-smallest of {{mvar|n}} items]] with only {{math|Θ(''n'')}} operations. This includes the median, which is the {{math|{{sfrac|''n''|2}}}}th order statistic (or for an even number of samples, the [[arithmetic mean]] of the two middle order statistics).<ref>{{cite book | isbn=0-201-00029-6 | author=Alfred V. Aho and John E. Hopcroft and Jeffrey D. Ullman | title=The Design and Analysis of Computer Algorithms | location=Reading/MA | publisher=Addison-Wesley | year=1974 | url-access=registration | url=https://archive.org/details/designanalysisof00ahoarich }} Here: Section 3.6 "Order Statistics", p.97-99, in particular Algorithm 3.6 and Theorem 3.9.</ref> Selection algorithms still have the downside of requiring {{math|Ω(''n'')}} memory, that is, they need to have the full sample (or a linear-sized portion of it) in memory. Because this, as well as the linear time requirement, can be prohibitive, several estimation procedures for the median have been developed. A simple one is the median-of-three rule, which estimates the median as the median of a three-element subsample; this is commonly used as a subroutine in the [[quicksort]] sorting algorithm, which uses an estimate of its input's median as its pivot element. A more [[robust estimator]] is [[John Tukey|Tukey]]'s ''ninther'', which is the median-of-three rule applied with limited recursion:<ref>{{cite journal |first1=Jon L. |last1=Bentley |first2=M. Douglas |last2=McIlroy |title=Engineering a sort function |journal=Software: Practice and Experience |volume=23 |issue=11 |pages=1249–1265 |year=1993 |url=http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162 |doi=10.1002/spe.4380231105|s2cid=8822797 }}</ref> if {{mvar|A}} is the sample laid out as an [[array (data structure)|array]], and {{block indent | em = 1.5 | text = {{math|1=med3(''A'') = med(''A''[1], ''A''[{{sfrac|''n''|2}}], ''A''[''n''])}},}} then {{block indent | em = 1.5 | text = {{math|1=ninther(''A'') = med3(med3(''A''[1 ... {{sfrac|1|3}}''n'']), med3(''A''[{{sfrac|1|3}}''n'' ... {{sfrac|2|3}}''n'']), med3(''A''[{{sfrac|2|3}}''n'' ... ''n'']))}}}} The ''remedian'' is an estimator for the median that requires linear time but sub-linear memory, operating in a single pass over the sample.<ref>{{cite journal |last1=Rousseeuw |first1=Peter J. |last2=Bassett |first2=Gilbert W. Jr.|title=The remedian: a robust averaging method for large data sets |journal=J. Amer. Statist. Assoc. |volume=85 |issue=409 |year=1990 |url=http://wis.kuleuven.be/stat/robust/papers/publications-1990/rousseeuwbassett-remedian-jasa-1990.pdf |pages=97–104 |doi=10.1080/01621459.1990.10475311}}</ref>
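As a concrete illustration, here is a short Python sketch of the ninther (the index arithmetic follows the formulas above up to rounding; a production routine would use a full selection algorithm when the exact median is required):

<syntaxhighlight lang="python">
def med3(a, b, c):
    """Median of three values without a full sort."""
    return max(min(a, b), min(max(a, b), c))

def ninther(A):
    """Tukey's ninther: the median-of-three rule applied to thirds of the array."""
    t = len(A) // 3
    def m3(chunk):
        return med3(chunk[0], chunk[len(chunk) // 2], chunk[-1])
    return med3(m3(A[:t]), m3(A[t:2 * t]), m3(A[2 * t:]))

data = [7, 1, 9, 3, 8, 2, 6, 4, 5]
print(ninther(data))   # 5, a cheap estimate (here equal to the true median)
</syntaxhighlight>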
===Sampling distribution=== The distributions of both the sample mean and the sample median were determined by [[Pierre-Simon Laplace|Laplace]].<ref name=Stigler1973>{{cite journal | last = Stigler | first = Stephen | author-link = Stephen Stigler |date= December 1973 | title = Studies in the History of Probability and Statistics. XXXII: Laplace, Fisher and the Discovery of the Concept of Sufficiency | journal = Biometrika | volume = 60 | issue = 3 | pages = 439–445 | doi = 10.1093/biomet/60.3.439 | mr = 0326872 | jstor = 2334992 }}</ref> The distribution of the sample median from a population with a density function <math>f(x)</math> is asymptotically normal with mean <math>m</math> and variance<ref name="Rider1960">{{cite journal |last=Rider |first=Paul R. |year=1960 |title=Variance of the median of small samples from several special populations |journal=[[Journal of the American Statistical Association|J. Amer. Statist. Assoc.]] |volume=55 |issue=289 |pages=148–150 |doi=10.1080/01621459.1960.10482056 }}</ref> <math display="block"> \frac{ 1 }{ 4n f( m )^2 }</math> where <math>m</math> is the median of <math>f(x)</math> and <math>n</math> is the sample size: <math display="block">\text{Sample median} \sim \mathcal{N}{\left(\mu{=}m, \, \sigma^2{=}\frac{1}{ 4n f(m)^2}\right)}</math> A modern proof follows below. Laplace's result is now understood as a special case of [[Quantile#Estimating quantiles from a sample|the asymptotic distribution of arbitrary quantiles]]. For normal samples, the density is <math>f(m) = 1 / \sqrt{2\pi\sigma^2}</math>, thus for large samples the variance of the median equals <math>({\pi}/{2}) \cdot(\sigma^2/n).</math><ref name="Williams 2001 165"/> (See also section [[#Efficiency]] below.) ==== Derivation of the asymptotic distribution ==== {{unsourced section|date=November 2023}} We take the sample size to be an odd number <math> N = 2n + 1 </math> and assume our variable to be continuous; the formula for the case of discrete variables is given below in {{slink||Empirical local density}}. The sample can be summarized as "below median", "at median", and "above median", which corresponds to a trinomial distribution with probabilities <math> F(v) </math>, <math> f(v) </math> and <math> 1 - F(v) </math>. For a continuous variable, the probability of multiple sample values being exactly equal to the median is 0, so one can calculate the density of the sample median at the point <math> v </math> directly from the trinomial distribution: <math display="block"> \Pr[\operatorname{med}=v] \, dv = \frac{(2n+1)!}{n!n!} F(v)^n (1 - F(v))^n f(v)\, dv.</math> Now we introduce the beta function. For integer arguments <math> \alpha </math> and <math> \beta </math>, this can be expressed as <math> \Beta(\alpha,\beta) = \frac{(\alpha - 1)! (\beta - 1)!}{(\alpha + \beta - 1)!} </math>. Also, recall that <math> f(v)\,dv = dF(v) </math>. Using these relationships and setting both <math> \alpha </math> and <math> \beta </math> equal to <math>n+1</math> allows the last expression to be written as <math display="block"> \frac{F(v)^n(1 - F(v))^n}{\Beta(n+1,n+1)} \, dF(v) </math> Hence the density function of the median is a symmetric beta distribution [[pushforward measure|pushed forward]] by the quantile function <math>F^{-1}</math>. Its mean, as we would expect, is 0.5 and its variance is <math> 1/(4(N+2)) </math>.
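Equivalently, <math>F(\operatorname{med})</math> should behave like a Beta(<math>n+1, n+1</math>) variate, which can be checked by simulation before converting back to the original scale. A minimal sketch, assuming standard normal data:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 7
N, reps = 2 * n + 1, 100_000          # N = 15 observations per sample

medians = np.median(rng.standard_normal((reps, N)), axis=1)
u = stats.norm.cdf(medians)           # push each sample median through F

print(u.mean())                       # ~0.5
print(u.var(), 1 / (4 * (N + 2)))     # both ~0.0147
</syntaxhighlight>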
By the [[chain rule]], the corresponding variance of the sample median is <math display="block">\frac{ 1 }{ 4(N + 2) f(m)^2 }.</math> The additional 2 is negligible [[limit (mathematics)|in the limit]]. =====Empirical local density===== In practice, the functions <math> f </math> and <math> F </math> above are often not known or assumed. However, they can be estimated from an observed frequency distribution. In this section, we give an example. Consider the following table, representing a sample of 3,800 (discrete-valued) observations:
{| class="wikitable" style="margin-left:auto;margin-right:auto;text-align:center;"
! {{mvar|v}} !! 0 !! 0.5 !! 1 !! 1.5 !! 2 !! 2.5 !! 3 !! 3.5 !! 4 !! 4.5 !! 5
|-
! {{math|''f''(''v'')}}
| 0.000 || 0.008 || 0.010 || 0.013 || 0.083 || 0.108 || 0.328 || 0.220 || 0.202 || 0.023 || 0.005
|-
! {{math|''F''(''v'')}}
| 0.000 || 0.008 || 0.018 || 0.031 || 0.114 || 0.222 || 0.550 || 0.770 || 0.972 || 0.995 || 1.000
|}
Because the observations are discrete-valued, constructing the exact distribution of the median is not an immediate translation of the above expression for <math> \Pr(\operatorname{med} = v) </math>; one may (and typically does) have multiple instances of the median in one's sample. So we must sum over all these possibilities: <math display="block"> \Pr(\operatorname{med} = v) = \sum_{i=0}^n \sum_{k=0}^n \frac{N!}{i!(N-i-k)!k!} \left(F(v) - f(v)\right)^i(1 - F(v))^k f(v)^{N-i-k} </math> Here, ''i'' is the number of points strictly less than the median and ''k'' the number strictly greater, so that <math>F(v) - f(v)</math> is the probability of an observation falling strictly below <math>v</math>. Using these preliminaries, it is possible to investigate the effect of sample size on the standard errors of the mean and median. The observed mean is 3.16, the observed raw median is 3 and the observed interpolated median is 3.174. The following table gives some comparison statistics.
{| class="wikitable" style="margin-left:auto;margin-right:auto;text-align:center;"
! {{Diagonal split header|Statistic|Sample size}} !! 3 !! 9 !! 15 !! 21
|-
! Expected value of median
| 3.198 || 3.191 || 3.174 || 3.161
|-
! Standard error of median (above formula)
| 0.482 || 0.305 || 0.257 || 0.239
|-
! Standard error of median (asymptotic approximation)
| 0.879 || 0.508 || 0.393 || 0.332
|-
! Standard error of mean
| 0.421 || 0.243 || 0.188 || 0.159
|}
The expected value of the median falls slightly as sample size increases while, as would be expected, the standard errors of both the median and the mean are proportional to the inverse square root of the sample size. The asymptotic approximation errs on the side of caution by overestimating the standard error. === Estimation of variance from sample data === The value of <math>(2 f(\nu))^{-2}</math>—the asymptotic variance of <math>n^{1/2} (m - \nu)</math>, where <math>m</math> is the sample median and <math>\nu</math> is the population median—has been studied by several authors. The standard "delete one" [[Resampling (statistics)#Jackknife|jackknife]] method produces [[consistent estimator|inconsistent]] results.<ref name=Efron1982>{{cite book |last=Efron |first=B. |year=1982 |title=The Jackknife, the Bootstrap and other Resampling Plans |publisher=SIAM |location=Philadelphia |isbn=0898711797 }}</ref> An alternative—the "delete k" method—where <math>k</math> grows with the sample size has been shown to be asymptotically consistent.<ref name=Shao1989>{{cite journal |last1=Shao |first1=J. |last2=Wu |first2=C. F. |year=1989 |title=A General Theory for Jackknife Variance Estimation |journal=[[Annals of Statistics|Ann.
Stat.]] |volume=17 |issue=3 |pages=1176–1197 |jstor=2241717 |doi=10.1214/aos/1176347263|doi-access=free }}</ref> This method may be computationally expensive for large data sets. A bootstrap estimate is known to be consistent,<ref name=Efron1979>{{cite journal |last=Efron |first=B. |year=1979 |title=Bootstrap Methods: Another Look at the Jackknife |journal=[[Annals of Statistics|Ann. Stat.]] |volume=7 |issue=1 |pages=1–26 |jstor=2958830 |doi=10.1214/aos/1176344552|doi-access=free }}</ref> but converges very slowly ([[computational complexity theory|order]] of <math>n^{-\frac{1}{4}}</math>).<ref name=Hall1988>{{cite journal |last1=Hall |first1=P. |last2=Martin |first2=M. A. |s2cid=119701556 |year=1988 |title=Exact Convergence Rate of Bootstrap Quantile Variance Estimator |journal=Probab Theory Related Fields |volume=80 |issue=2 |pages=261–268 |doi=10.1007/BF00356105 |doi-access=free }}</ref> Other methods have been proposed but their behavior may differ between large and small samples.<ref name=Jimenez-Gamero2004>{{cite journal |last1=Jiménez-Gamero |first1=M. D. |last2=Munoz-García |first2=J. |first3=R. |last3=Pino-Mejías |year=2004 |title=Reduced bootstrap for the median |journal=Statistica Sinica |volume=14 |issue=4 |pages=1179–1198 |url=http://www3.stat.sinica.edu.tw/statistica/password.asp?vol=14&num=4&art=11 }}</ref> ===Efficiency{{anchor|Efficiency}}=== The [[Efficiency (statistics)|efficiency]] of the sample median, measured as the ratio of the variance of the mean to the variance of the median, depends on the sample size and on the underlying population distribution. For a sample of size <math>N = 2n + 1</math> from the [[normal distribution]], the efficiency for large <math>N</math> is <math display="block"> \frac{2}{\pi} \frac{N+2}{N} </math> The efficiency tends to <math> \frac{2}{\pi} </math> as <math>N</math> tends to infinity. In other words, the relative variance of the median will be <math>\pi/2 \approx 1.57</math>, or 57% greater than the variance of the mean – the relative [[standard error]] of the median will be <math>(\pi/2)^\frac{1}{2} \approx 1.25</math>, or 25% greater than the [[standard error of the mean]], <math>\sigma/\sqrt{n}</math> (see also section [[#Sampling distribution]] above).<ref>{{Cite book | url=https://books.google.com/books?id=8bMj8m-4RDQC&q=standard%20error%20of%20the%20median&pg=PA104 |title = Data Analysis and Graphics Using R: An Example-Based Approach|isbn = 9781139486675|last1 = Maindonald|first1 = John|last2 = John Braun|first2 = W.|date = 2010-05-06| publisher=Cambridge University Press }}</ref> ===Other estimators=== For univariate distributions that are ''symmetric'' about one median, the [[Hodges–Lehmann estimator]] is a [[robust statistics|robust]] and highly [[Efficiency (statistics)|efficient estimator]] of the population median.<ref name="HM">{{cite book |last1=Hettmansperger |first1=Thomas P. |last2=McKean |first2=Joseph W. |title=Robust nonparametric statistical methods |series=Kendall's Library of Statistics |volume=5 |publisher=Edward Arnold |location=London |year=1998 |isbn=0-340-54937-8 |mr=1604954 }}</ref> If data is represented by a [[statistical model]] specifying a particular family of [[probability distribution]]s, then estimates of the median can be obtained by fitting that family of probability distributions to the data and calculating the theoretical median of the fitted distribution. [[Pareto interpolation]] is an application of this when the population is assumed to have a [[Pareto distribution]].
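The <math>2/\pi</math> asymptotic efficiency from {{slink||Efficiency}} above is easy to reproduce by simulation. A minimal sketch for normal samples:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N, reps = 501, 10_000
samples = rng.standard_normal((reps, N))

var_mean = samples.mean(axis=1).var()
var_median = np.median(samples, axis=1).var()

print(var_mean / var_median)   # ~2/pi = 0.637 (efficiency of the median)
print(var_median / var_mean)   # ~pi/2 = 1.571 (57% larger variance)
</syntaxhighlight>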
==Multivariate median== Previously, this article discussed the univariate median, when the sample or population is one-dimensional. When the dimension is two or higher, there are multiple concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one.<ref name="HM" /><ref>Small, Christopher G. "A survey of multidimensional medians." International Statistical Review/Revue Internationale de Statistique (1990): 263–277. {{doi|10.2307/1403809}} {{JSTOR|1403809}}</ref><ref>Niinimaa, A., and H. Oja. "Multivariate median." Encyclopedia of statistical sciences (1999).</ref><ref>Mosler, Karl. Multivariate Dispersion, Central Regions, and Depth: The Lift Zonoid Approach. Vol. 165. Springer Science & Business Media, 2012.</ref> ===Marginal median=== The marginal median is defined for vectors with respect to a fixed set of coordinates: it is the vector whose components are the univariate medians of the corresponding coordinates. The marginal median is easy to compute, and its properties were studied by Puri and Sen.<ref name="HM" /><ref>Puri, Madan L.; Sen, Pranab K.; ''Nonparametric Methods in Multivariate Analysis'', John Wiley & Sons, New York, NY, 1971. (Reprinted by Krieger Publishing)</ref> ===Geometric median=== The [[geometric median]] of a discrete set of sample points <math>x_1,\ldots, x_N</math> in a Euclidean space is the{{efn|The geometric median is unique unless the sample is collinear.<ref>{{cite journal | last1 = Vardi | first1 = Yehuda | last2 = Zhang | first2 = Cun-Hui | doi = 10.1073/pnas.97.4.1423 | issue = 4 | journal = Proceedings of the National Academy of Sciences of the United States of America | mr = 1740461 | pages = 1423–1426 (electronic) | title = The multivariate ''L''<sub>1</sub>-median and associated data depth | volume = 97 | year = 2000 | pmc = 26449 | bibcode = 2000PNAS...97.1423V | pmid=10677477 | doi-access = free }}</ref>}} point minimizing the sum of distances to the sample points: <math display="block">\hat\mu = \underset{\mu\in \mathbb{R}^m}{\operatorname{arg\,min}} \sum_{n=1}^{N} \left \| \mu-x_n \right \|_2</math> In contrast to the marginal median, the geometric median is [[equivariant]] with respect to Euclidean [[Similarity (geometry)|similarity transformations]] such as [[translation (geometry)|translations]] and [[rotation (mathematics)|rotations]]. ===Median in all directions=== If the marginal medians for all coordinate systems coincide, then their common location may be termed the "median in all directions".<ref>{{cite journal |first1=Otto A. |last1=Davis |first2=Morris H. |last2=DeGroot |first3=Melvin J. |last3=Hinich |title=Social Preference Orderings and Majority Rule |journal=Econometrica |volume=40 |issue=1 |date=January 1972 |pages=147–157 |doi=10.2307/1909727 |jstor=1909727 |url=https://www.cmu.edu/dietrich/sds/docs/davis/Social%20Preference%20Orderings%20and%20Majority%20Rule.pdf}} The authors, working in a topic in which uniqueness is assumed, actually use the expression "''unique'' median in all directions".</ref> This concept is relevant to voting theory on account of the [[median voter theorem]]. When it exists, the median in all directions coincides with the geometric median (at least for discrete distributions).
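The geometric median has no closed form in general, but it can be approximated iteratively. The sketch below implements the classic Weiszfeld scheme, a fixed-point iteration that repeatedly re-weights points by inverse distance (this simple version assumes the iterate never lands exactly on a data point):

<syntaxhighlight lang="python">
import numpy as np

def geometric_median(X, iters=200, eps=1e-9):
    """Weiszfeld iteration: mu <- weighted mean with weights 1/||x_i - mu||."""
    mu = X.mean(axis=0)                    # start from the centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - mu, axis=1), eps)
        w = 1.0 / d
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < eps:
            break
        mu = mu_new
    return mu

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(geometric_median(pts))   # pulled far less toward (5, 5) than the mean (1.5, 1.5)
</syntaxhighlight>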
===Centerpoint=== {{excerpt|Centerpoint (geometry)}} ==Conditional median== The conditional median occurs in the setting where we seek to estimate a random variable <math> X </math> from a random variable <math> Y </math>, which is a noisy version of <math> X </math>. The conditional median in this setting is given by <math display="block"> m(X|Y=y) = F^{-1}_{X|Y=y} \left(\frac{1}{2}\right)</math> where <math>t \mapsto F^{-1}_{X|Y=y}(t) </math> is the inverse of the conditional cdf <math>x \mapsto F_{X|Y}(x|y)</math> (i.e., the conditional quantile function). For example, a popular model is <math> Y = X+Z </math> where <math>Z</math> is standard normal independent of <math>X</math>. The conditional median is the optimal Bayesian <math>L_1</math> estimator: <math display="block"> m(X|Y=y) = \arg \min_f \operatorname{E} \left[ | X - f(Y) | \right]</math> It is known that for the model <math> Y = X+Z </math> where <math>Z</math> is standard normal independent of <math>X</math>, the estimator is linear if and only if <math>X</math> is Gaussian.<ref>{{Cite journal|last1=Barnes|first1=Leighton |last2=Dytso|first2=Alex J.|last3=Jingbo|first3=Liu |last4=Poor|first4=H.Vincent|date=2024-08-22|title=L1 Estimation: On the Optimality of Linear Estimators|journal=IEEE Transactions on Information Theory|volume=70 |issue=11 |pages=8026–8039 |doi=10.1109/TIT.2024.3440929}}</ref> ==Other median-related concepts== ===Interpolated median=== When dealing with a discrete variable, it is sometimes useful to regard the observed values as being midpoints of underlying continuous intervals. An example of this is a [[Likert scale]], on which opinions or preferences are expressed on a scale with a set number of possible responses. If the scale consists of the positive integers, an observation of 3 might be regarded as representing the interval from 2.50 to 3.50. It is possible to estimate the median of the underlying variable. If, say, 22% of the observations are of value 2 or below and 55% are of 3 or below (so 33% have the value 3), then the median <math> m </math> is 3 since the median is the smallest value of <math> x </math> for which <math> F(x) </math> is greater than a half. But the interpolated median is somewhere between 2.50 and 3.50. First we add half of the interval width <math> w </math> to the median to get the upper bound of the median interval. Then we subtract that proportion of the interval width which equals the proportion of the 33% which lies above the 50% mark. In other words, we split up the interval width pro rata to the numbers of observations. In this case, the 33% is split into 28% below the median and 5% above it so we subtract 5/33 of the interval width from the upper bound of 3.50 to give an interpolated median of 3.35. More formally, if the values <math> f(x) </math> are known, the interpolated median can be calculated from <math display="block"> m_\text{int} = m + w\left[\frac{1}{2} - \frac{F( m ) - \frac{1}{2} }{f( m )}\right]. </math> Alternatively, if in an observed sample there are <math> k </math> scores above the median category, <math> j </math> scores in it and <math> i </math> scores below it then the interpolated median is given by <math display="block"> m_\text{int} = m + \frac{w}{2} \left[\frac{k - i}{j}\right]. </math>
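The worked example above translates directly into code. A minimal Python sketch reproducing the 3.35 figure (the argument names simply mirror the symbols in the two formulas):

<syntaxhighlight lang="python">
def interpolated_median(m, w, F_m, f_m):
    """Interpolated median from the median category m, interval width w,
    cumulative proportion F(m), and proportion f(m) in the category."""
    return m + w * (0.5 - (F_m - 0.5) / f_m)

print(interpolated_median(m=3, w=1, F_m=0.55, f_m=0.33))          # 3.348...

def interpolated_median_counts(m, w, i, j, k):
    """Same quantity from counts: i below, j in, and k above the median category."""
    return m + (w / 2) * (k - i) / j

print(interpolated_median_counts(m=3, w=1, i=22, j=33, k=45))     # 3.348...
</syntaxhighlight>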
===Pseudo-median=== {{Main|Pseudomedian}} For univariate distributions that are ''symmetric'' about one median, the [[Hodges–Lehmann estimator]] is a robust and highly efficient estimator of the population median; for non-symmetric distributions, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population ''pseudo-median'', which is the median of a symmetrized distribution and which is close to the population median.<ref>{{Cite journal|last1=Pratt|first1=William K.|last2=Cooper|first2=Ted J.|last3=Kabir|first3=Ihtisham|s2cid=173183609|editor1-first=Francis J|editor1-last=Corbett|date=1985-07-11|title=Pseudomedian Filter|journal=Architectures and Algorithms for Digital Image Processing II|volume=0534|pages=34|doi=10.1117/12.946562|bibcode=1985SPIE..534...34P}}</ref> The Hodges–Lehmann estimator has been generalized to multivariate distributions.<ref name="Oja 2010 xiv+232">{{cite book|title=Multivariate nonparametric methods with ''R'': An approach based on spatial signs and ranks|last=Oja|first=Hannu|publisher=Springer|year=2010|isbn=978-1-4419-0467-6|series=Lecture Notes in Statistics|volume=199|location=New York, NY|pages=xiv+232|doi=10.1007/978-1-4419-0468-3|mr=2598854}}</ref> ===Variants of regression=== The [[Theil–Sen estimator]] is a method for [[robust statistics|robust]] [[linear regression]] based on finding medians of [[slope]]s.<ref>{{citation | last = Wilcox | first = Rand R. | contribution = Theil–Sen estimator | isbn = 978-0-387-95157-7 | pages = 207–210 | publisher = Springer-Verlag | title = Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy | url = https://books.google.com/books?id=YSFb4QX2UIoC&pg=PA207 | year = 2001}}.</ref> ===Median filter=== The [[median filter]] is an important tool of [[image processing]] that can effectively remove [[salt and pepper noise]] from [[grayscale]] images. ===Cluster analysis=== {{main|k-medians clustering}} In [[cluster analysis]], the [[k-medians clustering]] algorithm provides a way of defining clusters, in which the criterion of minimising the sum of squared distances from each point to its cluster mean, used in [[k-means clustering]], is replaced by minimising the sum of distances from each point to its cluster median. ===Median–median line=== This is a method of robust regression. The idea dates back to [[Abraham Wald|Wald]] in 1940, who suggested dividing a set of bivariate data into two halves depending on the value of the independent parameter <math>x</math>: a left half with values less than the median and a right half with values greater than the median.<ref name=Wald1940>{{cite journal |last=Wald |first=A. |year=1940 |title=The Fitting of Straight Lines if Both Variables are Subject to Error |journal=[[Annals of Mathematical Statistics]] |volume=11 |issue=3 |pages=282–300 |jstor=2235677 |doi=10.1214/aoms/1177731868 |url=http://dml.cz/bitstream/handle/10338.dmlcz/103573/AplMat_20-1975-2_3.pdf |doi-access=free }}</ref> He suggested taking the means of the dependent <math>y</math> and independent <math>x</math> variables of the left and the right halves and estimating the slope of the line joining these two points. The line could then be adjusted to fit the majority of the points in the data set. Nair and Shrivastava in 1942 suggested a similar idea but instead advocated dividing the sample into three equal parts before calculating the means of the subsamples.<ref name=Nair1942>{{cite journal |title=On a Simple Method of Curve Fitting |first1=K. R. |last1=Nair |first2=M. P.
|last2=Shrivastava |journal=Sankhyā: The Indian Journal of Statistics |volume=6 |issue=2 |year=1942 |pages=121–132 |jstor=25047749 }}</ref> Brown and Mood in 1951 proposed the idea of using the medians of two subsamples rather than the means.<ref name=Brown1951>{{cite book |last1=Brown |first1=G. W. |last2=Mood |first2=A. M. |year=1951 |chapter=On Median Tests for Linear Hypotheses |title=Proc Second Berkeley Symposium on Mathematical Statistics and Probability |location=Berkeley, CA |publisher=University of California Press |pages=159–166 |zbl=0045.08606 }}</ref> Tukey combined these ideas and recommended dividing the sample into three equal-size subsamples and estimating the line based on the medians of the subsamples.<ref name=Tukey1971>{{cite book |last=Tukey |first=J. W. |year=1977 |title=Exploratory Data Analysis |location=Reading, MA |publisher=Addison-Wesley |isbn=0201076160 |url=https://archive.org/details/exploratorydataa00tuke_0 }}</ref> ==Median-unbiased estimators== {{main|Bias of an estimator#Median-unbiased estimators}} Any [[Bias of an estimator|''mean''-unbiased estimator]] minimizes the [[risk]] ([[expected loss]]) with respect to the squared-error [[loss function]], as observed by [[Gauss]]. A [[Bias of an estimator#Median unbiased estimators, and bias with respect to other loss functions|''median''-unbiased estimator]] minimizes the risk with respect to the [[Absolute deviation|absolute-deviation]] loss function, as observed by [[Laplace]]. Other [[loss functions]] are used in [[statistical theory]], particularly in [[robust statistics]]. The theory of median-unbiased estimators was revived by George W. Brown in 1947:<ref name="Brown" /> {{Blockquote|An estimate of a one-dimensional parameter θ will be said to be median-unbiased if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation.|page 584}} Further properties of median-unbiased estimators have been reported.<ref name="Lehmann" /><ref name="Birnbaum" /><ref name="vdW" /><ref name="Pf" /> There are methods of constructing median-unbiased estimators that are optimal (in a sense analogous to the minimum-variance property for mean-unbiased estimators). Such constructions exist for probability distributions having [[monotone likelihood ratio|monotone likelihood-functions]].<ref>Pfanzagl, Johann. "On optimal median unbiased estimators in the presence of nuisance parameters." The Annals of Statistics (1979): 187–193.</ref><ref>{{cite journal | last1 = Brown | first1 = L. D. | last2 = Cohen | first2 = Arthur | last3 = Strawderman | first3 = W. E. | year = 1976 | title = A Complete Class Theorem for Strict Monotone Likelihood Ratio With Applications | url = http://projecteuclid.org/euclid.aos/1176343543 | journal = Ann. Statist. | volume = 4 | issue = 4| pages = 712–722 | doi = 10.1214/aos/1176343543 | doi-access = free }}</ref> One such procedure is an analogue of the [[Rao–Blackwell theorem|Rao–Blackwell procedure]] for mean-unbiased estimators: The procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure but for a larger class of [[loss function]]s.<ref>{{cite journal | last1 = Brown | first1 = L. D. | last2 = Cohen | first2 = Arthur | last3 = Strawderman | first3 = W. E.
| year = 1976 | title = A Complete Class Theorem for Strict Monotone Likelihood Ratio With Applications | url = http://projecteuclid.org/euclid.aos/1176343543 | journal = Ann. Statist. | volume = 4 | issue = 4| pages = 712–722 | doi = 10.1214/aos/1176343543 | doi-access = free }}</ref> ==History== Scientific researchers in the ancient near east appear not to have used summary statistics at all, instead choosing values that offered maximal consistency with a broader theory that integrated a wide variety of phenomena.<ref name=":0">{{Cite journal|last1=Bakker|first1=Arthur|last2=Gravemeijer|first2=Koeno P. E.|s2cid=143708116|date=2006-06-01|title=An Historical Phenomenology of Mean and Median|journal=Educational Studies in Mathematics|language=en|volume=62|issue=2|pages=149–168|doi=10.1007/s10649-006-7099-8|issn=1573-0816}}</ref> Within the Mediterranean (and, later, European) scholarly community, statistics like the mean are fundamentally a medieval and early modern development. (The history of the median outside Europe and its predecessors remains relatively unstudied.) The idea of the median appeared in the 6th century in the [[Talmud]], in order to fairly analyze divergent [[Economic appraisal|appraisals]].<ref>{{Cite web|url=http://danadler.com/blog/2014/12/31/talmud-and-modern-economics/|title=Talmud and Modern Economics|last=Adler|first=Dan|date=31 December 2014|website=Jewish American and Israeli Issues|url-status=dead|archive-url=https://web.archive.org/web/20151206134315/http://danadler.com/blog/2014/12/31/talmud-and-modern-economics/|archive-date=6 December 2015|access-date=22 February 2020}}</ref><ref>[http://www.wisdom.weizmann.ac.il/math/AABeyond12/presentations/Aumann.pdf Modern Economic Theory in the Talmud] by [[Yisrael Aumann]]</ref> However, the concept did not spread to the broader scientific community. Instead, the closest ancestor of the modern median is the [[mid-range]], invented by [[Al-Biruni]].<ref name="Eisenhart">{{Cite speech|last=Eisenhart|first=Churchill|author-link=Churchill Eisenhart|event=131st Annual Meeting of the American Statistical Association|location=Colorado State University|date=24 August 1971|url=https://www.stat.uchicago.edu/~stigler/eisenhart.pdf|title=The Development of the Concept of the Best Mean of a Set of Measurements from Antiquity to the Present Day|format=PDF}}</ref>{{Rp|31}}<ref name=":2">{{Cite web|url=http://priceonomics.com/how-the-average-triumphed-over-the-median/|title=How the Average Triumphed Over the Median|website=Priceonomics|date=5 April 2016|language=en|access-date=2020-02-23}}</ref> Transmission of his work to later scholars is unclear. He applied his technique to [[assay]]ing currency metals, but, after he published his work, most assayers still adopted the most unfavorable value from their results, lest they appear to [[Debasement|cheat]].<ref name="Eisenhart" />{{Rp|35–8}}<ref>{{Cite journal |last=Sangster |first=Alan |date=March 2021 |title=The Life and Works of Luca Pacioli (1446/7–1517), Humanist Educator |url=https://onlinelibrary.wiley.com/doi/10.1111/abac.12218 |journal=Abacus |language=en |volume=57 |issue=1 |pages=126–152 |doi=10.1111/abac.12218 |hdl=2164/16100 |s2cid=233917744 |issn=0001-3072|hdl-access=free }}</ref> However, increased navigation at sea during the [[Age of Discovery]] meant that ship's navigators increasingly had to attempt to determine latitude in unfavorable weather against hostile shores, leading to renewed interest in summary statistics.
Whether rediscovered or independently invented, the mid-range was recommended to nautical navigators in Harriot's "Instructions for Raleigh's Voyage to Guiana, 1595".<ref name="Eisenhart" />{{Rp|45–8}} The idea of the median may have first appeared in [[Edward Wright (mathematician)|Edward Wright]]'s 1599 book ''Certaine Errors in Navigation'', in a section about [[compass]] navigation.<ref>{{Cite journal |last1=Wright |first1=Edward |last2=Parsons |first2=E. J. S. |last3=Morris |first3=W. F. |date=1939 |title=Edward Wright and His Work |url=https://www.jstor.org/stable/1149920 |journal=Imago Mundi |volume=3 |issue=1 |pages=61–71 |doi=10.1080/03085693908591862 |jstor=1149920 |issn=0308-5694}}</ref> Wright was reluctant to discard measured values, and may have felt that the median — incorporating a greater proportion of the dataset than the [[mid-range]] — was more likely to be correct. However, Wright did not give examples of his technique's use, making it hard to verify that he described the modern notion of median.<ref name=":0" /><ref name=":2" />{{Efn|Subsequent scholars appear to concur with Eisenhart that Boroughs' 1580 figures, while suggestive of the median, in fact describe an arithmetic mean.<ref name="Eisenhart" />{{rp|62–3}} Boroughs is mentioned in no other work.}} The median (in the context of probability) certainly appeared in the correspondence of [[Christiaan Huygens]], but as an example of a statistic that was inappropriate for [[Actuarial science|actuarial practice]].<ref name=":0" /> The earliest recommendation of the median dates to 1757, when [[Roger Joseph Boscovich]] developed a regression method based on the [[L1 norm|''L''<sup>1</sup> norm]] and therefore implicitly on the median.<ref name=":0" /><ref name="Stigler1986">{{cite book|last=Stigler|first=S. M.|url=https://archive.org/details/historyofstatist00stig|title=The History of Statistics: The Measurement of Uncertainty Before 1900|publisher=Harvard University Press|year=1986|isbn=0674403401}}</ref> In 1774, [[Pierre-Simon Laplace|Laplace]] made this idea explicit: he suggested the median be used as the standard estimator of the value of a posterior [[Probability density function|PDF]]. The specific criterion was to minimize the expected magnitude of the error, <math>|\alpha - \alpha^{*}|</math>, where <math>\alpha^{*}</math> is the estimate and <math>\alpha</math> is the true value. To this end, Laplace determined the distributions of both the sample mean and the sample median in the early 1800s.<ref name="Stigler1973" /><ref name="Laplace1818">Laplace PS de (1818) ''Deuxième supplément à la Théorie Analytique des Probabilités'', Paris, Courcier</ref> However, a decade later, [[Carl Friedrich Gauss|Gauss]] and [[Adrien-Marie Legendre|Legendre]] developed the [[least squares]] method, which minimizes <math>(\alpha - \alpha^{*})^{2}</math> to obtain the mean; the strong justification of this estimator by reference to [[maximum likelihood estimation]] based on a [[normal distribution]] means it has mostly replaced Laplace's original suggestion.<ref>{{cite book|last1=Jaynes|first1=E.T.|title=Probability theory : the logic of science|date=2007|publisher=Cambridge Univ. Press|location=Cambridge [u.a.]|isbn=978-0-521-59271-0|page=172|edition=5.
[[Antoine Augustin Cournot]] in 1843 was the first<ref>{{Cite book|title=Dictionary of Mathematical Geosciences: With Historical Notes|last=Howarth|first=Richard|publisher=Springer|year=2017|pages=374}}</ref> to use the term ''median'' (''valeur médiane'') for the value that divides a probability distribution into two equal halves. [[Gustav Theodor Fechner]] popularized the median (''Centralwerth'') in the formal analysis of sociological and psychological data; it had previously been used by Laplace, and otherwise only in astronomy and related fields.<ref name="Keynes1921">Keynes, J.M. (1921) ''[[A Treatise on Probability]]''. Pt II Ch XVII §5 (p 201) (2006 reprint, Cosimo Classics, {{isbn|9781596055308}} : multiple other reprints)</ref> The median also appeared in a textbook by [[Francis Ysidro Edgeworth|F. Y. Edgeworth]].<ref>{{Cite book|last=Stigler|first=Stephen M.|url=https://books.google.com/books?id=qQusWukdPa4C&q=stigler+%22statistics+on+the+table%22|title=Statistics on the Table: The History of Statistical Concepts and Methods|date=2002|publisher=Harvard University Press|isbn=978-0-674-00979-0|pages=105–7|language=en}}</ref> [[Francis Galton]] used the term ''median'' in 1881,<ref name=Galton1881>Galton F (1881) "Report of the Anthropometric Committee" pp 245–260. [https://www.biodiversitylibrary.org/item/94448 ''Report of the 51st Meeting of the British Association for the Advancement of Science'']</ref><ref>{{Cite journal|last=David|first=H. A.|date=1995|title=First (?) Occurrence of Common Terms in Mathematical Statistics|journal=The American Statistician|volume=49|issue=2|pages=121–133|doi=10.2307/2684625|jstor=2684625|issn=0003-1305}}</ref> having earlier used the terms ''middle-most value'' in 1869 and ''medium'' in 1880.<ref>[https://www.encyclopediaofmath.org/index.php/Galton,_Francis ''encyclopediaofmath.org'']</ref><ref>[http://www.personal.psu.edu/users/e/c/ecb5/Courses/M475W/WeeklyReadings/Week%2013/DevelopmentOfModernStatistics.pdf ''personal.psu.edu'']</ref> <!-- this isn't why it replaced the median—the reason why is the central limit theorem and ubiquity of normally-distributed data --><!--Statisticians encouraged the use of medians intensely throughout the 19th century for its intuitive clarity. However, the notion of median does not lend itself to the theory of higher moments as well as the [[arithmetic mean]] does, and is much harder to compute.
As a result, the median was steadily supplanted as a notion of generic average by the arithmetic mean during the 20th century.<ref name=":0" /><ref name=":2" />--> <!--(Ironically, the same time period saw the rise of term "average" to describe any location statistic, not merely the arithmetic mean.)<ref name="Eisenhart" />{{Rp|7}} --> ==See also== {{Portal|Mathematics}} * {{Annotated link|Absolute deviation}} * {{Annotated link|Bias of an estimator}} * {{Annotated link|Central tendency}} * {{Annotated link|Concentration of measure}} for {{Annotated link|Lipschitz functions}} * {{Annotated link|Median graph}} * {{Annotated link|Median of medians}} – Algorithm to calculate the approximate median in linear time * {{Annotated link|Median search}} * {{Annotated link|Median slope}} * {{Annotated link|Median voter theory}} * {{Annotated link|Medoid}}s – Generalization of the median in higher dimensions * {{Annotated link|Moving average#Moving median}} * {{Annotated link|Median absolute deviation}} == Notes == {{notelist}} == References == {{Reflist |refs = <ref name="Brown">{{cite journal |last=Brown |first=George W. |year=1947 |title=On Small-Sample Estimation |journal=[[Annals of Mathematical Statistics]] |volume=18 |issue=4 |pages=582–585 |jstor=2236236 |doi=10.1214/aoms/1177730349 |doi-access=free }}</ref> <ref name="Lehmann">{{cite journal |author-link = Erich Leo Lehmann |last=Lehmann |first=Erich L. |year=1951 |title = A General Concept of Unbiasedness |journal=[[Annals of Mathematical Statistics]] |volume=22 |issue = 4 |pages=587–592 |jstor=2236928 |doi=10.1214/aoms/1177729549 |doi-access=free }}</ref> <ref name="Birnbaum">{{cite journal |author-link = Allan Birnbaum |last=Birnbaum |first=Allan |year=1961 |title = A Unified Theory of Estimation, I |journal=[[Annals of Mathematical Statistics]] |volume=32 |issue = 1 |pages=112–135 |jstor=2237612 |doi=10.1214/aoms/1177705145 |doi-access=free }}</ref> <ref name="vdW">{{cite journal |last=van der Vaart |first = H. Robert |year=1961 |title=Some Extensions of the Idea of Bias |journal=[[Annals of Mathematical Statistics]] |volume=32 |issue=2 |pages=436–447 |jstor=2237754 |doi=10.1214/aoms/1177705051 |mr=125674 |doi-access=free }}</ref> <ref name="Pf">{{cite book |title = Parametric Statistical Theory |last1=Pfanzagl |first1=Johann |author2 = with the assistance of R. Hamböker |year=1994 |publisher = Walter de Gruyter |isbn=3-11-013863-8 |mr=1291393 }}</ref> }} ==External links== * {{springer|title=Median (in statistics)|id=p/m063310}} * [http://www.accessecon.com/pubs/EB/2004/Volume3/EB-04C10011A.pdf Median as a weighted arithmetic mean of all Sample Observations] * [http://www.poorcity.richcity.org/cgi-bin/inequality.cgi On-line calculator] * [http://www.statcan.gc.ca/edu/power-pouvoir/ch11/median-mediane/5214872-eng.htm Calculating the median] * [http://mathschallenge.net/index.php?section=problems&show=true&titleid=average_problem A problem involving the mean, the median, and the mode.] * {{MathWorld | urlname= StatisticalMedian | title= Statistical Median}} * [http://www.poorcity.richcity.org/oei/#GiniHooverTheil Python script] for Median computations and [[income inequality metrics]] * [https://arxiv.org/abs/0806.3301 Fast Computation of the Median by Successive Binning] * [http://www.celiagreen.com/charlesmccreery/statistics/meanmedianmode.pdf 'Mean, median, mode and skewness'], A tutorial devised for first-year psychology students at Oxford University, based on a worked example. 
* [https://www.popularmechanics.com/science/math/a28614640/complex-sat-math-problem/ The Complex SAT Math Problem Even the College Board Got Wrong]: Andrew Daniels in ''[[Popular Mechanics]]'' {{PlanetMath attribution|id=5900|title=Median of a distribution}} {{Statistics|descriptive}} [[Category:Means]] [[Category:Robust statistics]]