{{Short description|Kth smallest value in a statistical sample}}
{{more footnotes needed|date=December 2010}}
[[File:Order Statistics Exponential PDF.svg|thumb|[[Probability density function]]s of the order statistics for a sample of size ''n'' = 5 from an [[exponential distribution]] with unit scale parameter]]
In [[statistics]], the ''k''th '''order statistic''' of a [[statistical sample]] is equal to its ''k''th-smallest value.<ref name=david>{{Cite book | last1 = David | first1 = H. A. | last2 = Nagaraja | first2 = H. N. | doi = 10.1002/0471722162 | title = Order Statistics | series = Wiley Series in Probability and Statistics | year = 2003 | isbn = 9780471722168 }}</ref> Together with [[Ranking (statistics)|rank statistics]], order statistics are among the most fundamental tools in [[non-parametric statistics]] and [[non-parametric inference|inference]].

Important special cases of the order statistics are the [[minimum]] and [[maximum]] value of a sample, and (with some qualifications discussed below) the [[sample median]] and other [[quantile|sample quantiles]].

When using [[probability theory]] to analyze order statistics of [[random sample]]s from a [[continuous probability distribution|continuous distribution]], the [[cumulative distribution function]] is used to reduce the analysis to the case of order statistics of the [[uniform distribution (continuous)|uniform distribution]].

== Notation and examples ==
For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are

:6, 9, 3, 7,

the order statistics would be denoted

:<math>x_{(1)}=3,\ \ x_{(2)}=6,\ \ x_{(3)}=7,\ \ x_{(4)}=9,\,</math>

where the subscript {{math|({{italics correction|''i''}})}} enclosed in parentheses indicates the {{math|{{italics correction|''i''}}}}th order statistic of the sample.

The '''first order statistic''' (or '''smallest order statistic''') is always the [[minimum]] of the sample, that is,

:<math>X_{(1)}=\min\{\,X_1,\ldots,X_n\,\}</math>

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size {{math|''n''}}, the {{math|{{italics correction|''n''}}}}th order statistic (or '''largest order statistic''') is the [[maximum]], that is,

:<math>X_{(n)}=\max\{\,X_1,\ldots,X_n\,\}.</math>

The [[sample range]] is the difference between the maximum and minimum. It is a function of the order statistics:

:<math>{\rm Range}\{\,X_1,\ldots,X_n\,\} = X_{(n)}-X_{(1)}.</math>

A similar important statistic in [[exploratory data analysis]] that is simply related to the order statistics is the sample [[interquartile range]].

The sample median may or may not be an order statistic, since there is a single middle value only when the number {{math|''n''}} of observations is [[Even and odd numbers|odd]]. More precisely, if {{math|1=''n'' = 2''m''+1}} for some integer {{math|''m''}}, then the sample median is <math>X_{(m+1)}</math> and so is an order statistic. On the other hand, when {{math|''n''}} is [[even and odd numbers|even]], {{math|1=''n'' = 2''m''}} and there are two middle values, <math>X_{(m)}</math> and <math>X_{(m+1)}</math>, and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
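In code, the order statistics of a concrete sample are obtained simply by sorting; a minimal Python sketch using the example sample above:

<syntaxhighlight lang="python">
# Order statistics of the example sample are obtained by sorting.
sample = [6, 9, 3, 7]
order_stats = sorted(sample)                    # [3, 6, 7, 9]: x_(1), ..., x_(4)
x_min, x_max = order_stats[0], order_stats[-1]  # first and last order statistics
sample_range = x_max - x_min                    # x_(4) - x_(1) = 6

# n = 4 is even, so the sample median averages the two middle order
# statistics and is therefore not itself an order statistic.
m = len(order_stats) // 2
median = (order_stats[m - 1] + order_stats[m]) / 2  # (6 + 7) / 2 = 6.5
</syntaxhighlight>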
== Probabilistic analysis ==
Given any random variables ''X''<sub>1</sub>, ''X''<sub>2</sub>, ..., ''X''<sub>''n''</sub>, the order statistics ''X''<sub>(1)</sub>, ''X''<sub>(2)</sub>, ..., ''X''<sub>(''n'')</sub> are also random variables, defined by sorting the values ([[realization (probability)|realizations]]) of ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub> in increasing order.

When the random variables ''X''<sub>1</sub>, ''X''<sub>2</sub>, ..., ''X''<sub>''n''</sub> form a [[sample (statistics)|sample]], they are [[independent and identically distributed]]. This is the case treated below. In general, the random variables ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub> can arise by sampling from more than one population. Then they are [[independent (statistics)|independent]], but not necessarily identically distributed, and their [[joint probability distribution]] is given by the [[Bapat–Beg theorem]].

From now on, we will assume that the random variables under consideration are [[continuous probability distribution|continuous]] and, where convenient, we will also assume that they have a [[probability density function]] (PDF), that is, they are [[absolute continuity|absolutely continuous]]. The peculiarities of the analysis of distributions assigning mass to points (in particular, [[discrete distribution]]s) are discussed at the end.

=== Cumulative distribution function of order statistics ===
For a random sample as above, with cumulative distribution <math>F_X(x)</math>, the order statistics for that sample have cumulative distributions as follows<ref>{{cite book |last1=Casella |first1=George |last2=Berger |first2=Roger |title=Statistical Inference |year=2002 |publisher=Cengage Learning |isbn=9788131503942 |page=229 |edition=2nd |url={{Google books|0x_vAAAAMAAJ|page=228|plainurl=yes}} }}</ref> (where ''r'' specifies which order statistic):

<math display="block"> F_{X_{(r)}}(x) = \sum_{j=r}^{n} \binom{n}{j} [ F_{X}(x) ]^{j} [ 1 - F_{X}(x) ]^{n-j} </math>

The proof of this formula is pure [[combinatorics]]: for the <math>r</math>th order statistic to be <math> \leq x </math>, the number of samples that are <math> > x </math> has to be between <math> 0 </math> and <math> n-r </math>. In the case that <math> X_{(j)} </math> is the largest order statistic <math> \leq x </math>, there have to be <math> j </math> samples <math> \leq x </math> (each with an independent probability of <math> F_X(x) </math>) and <math> n-j </math> samples <math> >x </math> (each with an independent probability of <math> 1 - F_X(x) </math>). Finally, there are <math> \textstyle \binom{n}{j} </math> different ways of choosing which of the <math> n </math> samples are of the <math> \leq x </math> kind.

The corresponding probability density function may be derived from this result, and is found to be

:<math>f_{X_{(r)}}(x) = \frac{n!}{(r-1)!(n-r)!} f_{X}(x) [ F_{X}(x) ]^{r-1} [ 1 - F_{X}(x) ]^{n-r}.</math>

Moreover, there are two special cases, which have CDFs that are easy to compute:

:<math>F_{X_{(n)}}(x) = \operatorname{Prob}(\max\{\,X_1,\ldots,X_n\,\} \leq x) = [ F_{X}(x) ]^n</math>
:<math>F_{X_{(1)}}(x) = \operatorname{Prob}(\min\{\,X_1,\ldots,X_n\,\} \leq x) = 1- [ 1 - F_{X}(x) ]^n</math>

Both follow directly from independence: the maximum is at most <math>x</math> exactly when every observation is, and the minimum exceeds <math>x</math> exactly when every observation does.
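The cumulative distribution formula above can be checked numerically. A minimal Monte Carlo sketch in Python (the exponential parent, sample size, rank, and evaluation point are arbitrary illustrative choices):

<syntaxhighlight lang="python">
import math
import random

# Check F_{X_(r)}(x) = sum_{j=r}^{n} C(n,j) F(x)^j (1-F(x))^(n-j)
# for a standard exponential parent, F(x) = 1 - exp(-x).
n, r, x = 5, 3, 1.0
F = 1.0 - math.exp(-x)

exact = sum(math.comb(n, j) * F**j * (1.0 - F)**(n - j) for j in range(r, n + 1))

trials = 200_000
hits = 0
for _ in range(trials):
    ordered = sorted(random.expovariate(1.0) for _ in range(n))
    if ordered[r - 1] <= x:          # r-th order statistic (1-indexed)
        hits += 1

print(exact, hits / trials)          # the two values should agree closely
</syntaxhighlight>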
=== Probability distributions of order statistics ===
==== Order statistics sampled from a uniform distribution ====
In this section we show that the order statistics of the [[uniform distribution (continuous)|uniform distribution]] on the [[unit interval]] have [[marginal distribution]]s belonging to the [[beta distribution]] family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the [[cumulative distribution function|cdf]].

We assume throughout this section that <math>X_1, X_2, \ldots, X_n</math> is a [[random sample]] drawn from a continuous distribution with cdf <math>F_X</math>. Denoting <math>U_i=F_X(X_i)</math> we obtain the corresponding random sample <math>U_1,\ldots,U_n</math> from the standard [[uniform distribution (continuous)|uniform distribution]]. Note that the order statistics also satisfy <math>U_{(i)}=F_X(X_{(i)})</math>.

The probability density function of the order statistic <math>U_{(k)}</math> is equal to<ref name="gentle">{{citation|title=Computational Statistics|first=James E.|last=Gentle|publisher=Springer|year=2009|isbn=9780387981444|page=63|url=https://books.google.com/books?id=mQ5KAAAAQBAJ&pg=PA63}}.</ref>

:<math>f_{U_{(k)}}(u)={n!\over (k-1)!(n-k)!}u^{k-1}(1-u)^{n-k}</math>

that is, the ''k''th order statistic of the uniform distribution is a [[beta distribution|beta-distributed]] random variable.<ref name="gentle"/><ref>{{citation|title=Kumaraswamy's distribution: A beta-type distribution with some tractability advantages|first=M. C.|last=Jones|journal=Statistical Methodology|volume=6|issue=1|year=2009|pages=70–81|doi=10.1016/j.stamet.2008.04.001|quote=As is well known, the beta distribution is the distribution of the ''m'' ’th order statistic from a random sample of size ''n'' from the uniform distribution (on (0,1)).}}</ref>

:<math>U_{(k)} \sim \operatorname{Beta}(k,n+1-k).</math>

The proof of these statements is as follows. For <math>U_{(k)}</math> to be between ''u'' and ''u'' + ''du'', it is necessary that exactly ''k'' − 1 elements of the sample are smaller than ''u'', and that at least one is between ''u'' and ''u'' + d''u''. The probability that more than one is in this latter interval is already <math>O(du^2)</math>, so we have to calculate the probability that exactly ''k'' − 1, 1, and ''n'' − ''k'' observations fall in the intervals <math>(0,u)</math>, <math>(u,u+du)</math> and <math>(u+du,1)</math> respectively. This equals (refer to [[multinomial distribution]] for details)

:<math>{n!\over (k-1)!(n-k)!}u^{k-1}\cdot du\cdot(1-u-du)^{n-k}</math>

and the result follows. The mean of this distribution is ''k'' / (''n'' + 1).

==== The joint distribution of the order statistics of the uniform distribution ====
Similarly, for ''i'' < ''j'', the [[joint probability distribution|joint probability density function]] of the two order statistics ''U''<sub>(''i'')</sub> < ''U''<sub>(''j'')</sub> can be shown to be

:<math>f_{U_{(i)},U_{(j)}}(u,v) = n!{u^{i-1}\over (i-1)!}{(v-u)^{j-i-1}\over(j-i-1)!}{(1-v)^{n-j}\over (n-j)!}</math>

which is (up to terms of higher order than <math>O(du\,dv)</math>) the probability that ''i'' − 1, 1, ''j'' − 1 − ''i'', 1 and ''n'' − ''j'' sample elements fall in the intervals <math>(0,u)</math>, <math>(u,u+du)</math>, <math>(u+du,v)</math>, <math>(v,v+dv)</math>, <math>(v+dv,1)</math> respectively.

One reasons in an entirely analogous way to derive the higher-order joint distributions.
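The beta law for uniform order statistics is straightforward to verify by simulation; a short sketch, assuming [[NumPy]] and [[SciPy]] are available (the sample size and rank are arbitrary illustrative choices):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Empirical check that U_(k) ~ Beta(k, n + 1 - k) for uniform samples.
rng = np.random.default_rng(0)
n, k = 5, 2
u_k = np.sort(rng.uniform(size=(100_000, n)), axis=1)[:, k - 1]  # k-th order statistic

print(u_k.mean(), k / (n + 1))   # empirical vs. exact mean k/(n+1)
# Kolmogorov–Smirnov test against the Beta(k, n+1-k) distribution:
print(stats.kstest(u_k, stats.beta(k, n + 1 - k).cdf))
</syntaxhighlight>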
Perhaps surprisingly, the joint density of the ''n'' order statistics turns out to be ''constant'':

:<math>f_{U_{(1)},U_{(2)},\ldots,U_{(n)}}(u_{1},u_{2},\ldots,u_{n}) = n!.</math>

One way to understand this is that the unordered sample does have constant density equal to 1, and that there are ''n''! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/''n''! is the volume of the region <math>0<u_1<\cdots<u_n<1</math>. It is also related to another property of order statistics of uniform random variables: it follows from the [[BRS-inequality]] that the maximum expected number of uniform U(0,1] random variables one can choose from a sample of size ''n'' whose sum does not exceed <math>0 < s < n/2</math> is bounded above by <math> \sqrt{2sn} </math>, and is thus invariant on the set of all <math> s, n </math> with constant product <math> s n </math>.

Using the above formulas, one can derive the distribution of the range of the order statistics, that is, the distribution of <math>U_{(n)}-U_{(1)}</math>, the maximum minus the minimum. More generally, for <math>n\geq k>j\geq 1</math>, <math>U_{(k)}-U_{(j)} </math> also has a beta distribution:

<math display="block">U_{(k)}-U_{(j)}\sim \operatorname{Beta}(k-j, n-(k-j)+1)</math>

From these formulas we can derive the covariance between two order statistics (for <math>j \le k</math>):

<math display="block">\operatorname{Cov}(U_{(k)},U_{(j)})=\frac{j(n-k+1)}{(n+1)^2(n+2)}</math>

The formula follows from noting that

<math display="block">\operatorname{Var}(U_{(k)}-U_{(j)})=\operatorname{Var}(U_{(k)}) + \operatorname{Var}(U_{(j)})-2\cdot \operatorname{Cov}(U_{(k)},U_{(j)}) =\frac{k(n-k+1)}{(n+1)^2(n+2)}+\frac{j(n-j+1)}{(n+1)^2(n+2)}-2\cdot \operatorname{Cov}(U_{(k)},U_{(j)})</math>

and comparing that with

<math display="block">\operatorname{Var}(U)=\frac{(k-j)(n-(k-j)+1)}{(n+1)^2(n+2)}</math>

where <math>U\sim \operatorname{Beta}(k-j,n-(k-j)+1)</math>, which is the actual distribution of the difference.

==== Order statistics sampled from an exponential distribution ====
For <math>X_1, X_2, \ldots, X_n</math>, a random sample of size ''n'' from an [[exponential distribution]] with parameter ''λ'', the order statistics ''X''<sub>(''i'')</sub> for ''i'' = 1, 2, ..., ''n'' each have distribution

:<math>X_{(i)} \stackrel{d}{=} \frac{1}{\lambda}\left( \sum_{j=1}^i \frac{Z_j}{n-j+1} \right)</math>

where the ''Z''<sub>''j''</sub> are i.i.d. standard exponential random variables (i.e. with rate parameter 1). This result was first published by [[Alfréd Rényi]].<ref>{{Citation | last1 = David | first1 = H. A. | last2 = Nagaraja | first2 = H. N. | title = Order Statistics | pages = 9 | year = 2003 | chapter = Chapter 2. Basic Distribution Theory | doi = 10.1002/0471722162.ch2 | series = Wiley Series in Probability and Statistics | isbn = 9780471722168 }}</ref><ref>{{cite journal |last = Rényi |first = Alfréd | author-link = Alfréd Rényi |title = On the theory of order statistics |journal = [[Acta Mathematica Hungarica]] |volume = 4 |issue = 3 |pages = 191–231 |date = 1953 |doi = 10.1007/BF02127580 | doi-access=free }}</ref>
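Rényi's representation gives a direct way to generate all ''n'' exponential order statistics without sorting, since the partial sums are increasing by construction; a minimal Python sketch:

<syntaxhighlight lang="python">
import random

# Rényi representation: X_(i) =d (1/lam) * sum_{j=1}^{i} Z_j / (n - j + 1),
# with Z_j i.i.d. standard exponential (rate 1).
def exponential_order_statistics(n, lam=1.0):
    order_stats, total = [], 0.0
    for j in range(1, n + 1):
        total += random.expovariate(1.0) / (n - j + 1)
        order_stats.append(total / lam)
    return order_stats   # already in increasing order

print(exponential_order_statistics(5, lam=2.0))
</syntaxhighlight>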
==== Order statistics sampled from an Erlang distribution ====
For random samples from an [[Erlang distribution]], the [[Laplace transform]]s of the order statistics can be obtained via a path-counting method.<ref>{{Cite journal | last1 = Hlynka | first1 = M. | last2 = Brill | first2 = P. H. | last3 = Horn | first3 = W. | title = A method for obtaining Laplace transforms of order statistics of Erlang random variables | doi = 10.1016/j.spl.2009.09.006 | journal = Statistics & Probability Letters | volume = 80 | pages = 9–18 | year = 2010 }}</ref>

==== The joint distribution of the order statistics of an absolutely continuous distribution ====
If ''F''<sub>''X''</sub> is [[absolute continuity|absolutely continuous]], it has a density such that <math>dF_X(x)=f_X(x)\,dx</math>, and we can use the substitutions

:<math>u=F_X(x)</math>

and

:<math>du=f_X(x)\,dx</math>

to derive the following probability density functions for the order statistics of a sample of size ''n'' drawn from the distribution of ''X'':

:<math>f_{X_{(k)}}(x) =\frac{n!}{(k-1)!(n-k)!}[F_X(x)]^{k-1}[1-F_X(x)]^{n-k} f_X(x)</math>
:<math>f_{X_{(j)},X_{(k)}}(x,y) = \frac{n!}{(j-1)!(k-j-1)!(n-k)!}[F_X(x)]^{j-1}[F_X(y)-F_X(x)]^{k-1-j}[1-F_X(y)]^{n-k}f_X(x)f_X(y)</math> where <math>x\le y</math>
:<math>f_{X_{(1)},\ldots,X_{(n)}}(x_1,\ldots,x_n)=n!f_X(x_1)\cdots f_X(x_n)</math> where <math>x_1\le x_2\le \dots \le x_n.</math>

== Application: confidence intervals for quantiles ==
An interesting question is how well the order statistics perform as estimators of the [[quantile]]s of the underlying distribution.

=== A small-sample-size example ===
The simplest case to consider is how well the sample median estimates the population median.

As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. This interval contains the population median <math>m</math> exactly when 3 of the 6 observations fall below <math>m</math>; since, for a continuous distribution, each observation falls below the median independently with probability 1/2, the probability of this event is

:<math>{6\choose 3}(1/2)^{6} = {5\over 16} \approx 31\%.</math>

Although the sample median is probably among the best distribution-independent [[point estimate]]s of the population median, this example illustrates that it is not a particularly good one in absolute terms. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability

:<math>\left[{6\choose 2}+{6\choose 3}+{6\choose 4}\right](1/2)^{6} = {25\over 32} \approx 78\%.</math>

With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median.
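These coverage probabilities all come from the binomial distribution, since each observation falls below the population median independently with probability 1/2; a short Python sketch reproducing the three values above:

<syntaxhighlight lang="python">
from math import comb

def median_ci_coverage(n, j, k):
    """P(X_(j) < m < X_(k)) for the population median m of a continuous
    distribution: exactly i of n observations fall below m for some
    j <= i <= k - 1, each independently with probability 1/2."""
    return sum(comb(n, i) for i in range(j, k)) / 2**n

print(median_ci_coverage(6, 3, 4))   # 0.3125  = 5/16  (3rd and 4th order statistics)
print(median_ci_coverage(6, 2, 5))   # 0.78125 = 25/32 (2nd and 5th)
print(median_ci_coverage(6, 1, 6))   # 0.96875 = 31/32 (minimum and maximum)
</syntaxhighlight>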
=== Large sample sizes ===
For the uniform distribution, as ''n'' tends to infinity, the ''p''th sample quantile is asymptotically [[normal distribution|normally distributed]], since it is approximated by

: <math>U_{(\lceil np \rceil)} \sim AN\left(p,\frac{p(1-p)}{n}\right).</math>

For a general distribution ''F'' with a continuous non-zero density at ''F''<sup> −1</sup>(''p''), a similar asymptotic normality applies:

: <math>X_{(\lceil np \rceil)} \sim AN\left(F^{-1}(p),\frac{p(1-p)}{n[f(F^{-1}(p))]^2}\right)</math>

where ''f'' is the [[density function]], and ''F''<sup> −1</sup> is the [[quantile function]] associated with ''F''. One of the first people to state and prove this result was [[Frederick Mosteller]], in his seminal 1946 paper.<ref name = "Mosteller">{{cite journal|last = Mosteller| first = Frederick| author-link = Frederick Mosteller| year = 1946| title = On Some Useful "Inefficient" Statistics| url = http://projecteuclid.org/euclid.aoms/1177730881| journal = [[Annals of Mathematical Statistics]]| volume = 17| issue = 4| pages = 377–408| doi = 10.1214/aoms/1177730881| access-date = February 26, 2015| doi-access = free}}</ref> Further research in the 1960s led to the [[Raghu Raj Bahadur|Bahadur]] representation, which provides information about the error bounds. The convergence to the normal distribution also holds in a stronger sense, such as convergence in [[Kullback–Leibler divergence|relative entropy or KL divergence]].<ref>{{cite journal |last1=Cardone |first1=M. |last2=Dytso |first2=A. |last3=Rush |first3=C. |title=Entropic Central Limit Theorem for Order Statistics |journal=IEEE Transactions on Information Theory |volume=69 |issue=4 |pages=2193–2205 |date=April 2023 |doi=10.1109/TIT.2022.3219344}}</ref>

An interesting observation can be made in the case where the distribution is symmetric, and the population median equals the population mean. In this case, the [[sample mean]], by the [[central limit theorem]], is also asymptotically normally distributed, but with variance σ<sup>2</sup>''/n'' instead. This asymptotic analysis suggests that the mean outperforms the median in cases of low [[kurtosis]], and vice versa. For example, the median achieves better confidence intervals for the [[Laplace distribution]], while the mean performs better for ''X'' that are normally distributed.

==== Proof ====
It can be shown that

: <math>B(k,n+1-k)\ \stackrel{\mathrm{d}}{=}\ \frac{X}{X + Y},</math>

where

: <math> X = \sum_{i=1}^{k} Z_i, \quad Y = \sum_{i=k+1}^{n+1} Z_i,</math>

with ''Z<sub>i</sub>'' being independent identically distributed [[exponential distribution|exponential]] random variables with rate 1. Since ''X''/''n'' and ''Y''/''n'' are asymptotically normally distributed by the CLT, our results follow by application of the [[delta method]].
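The asymptotic variance formula can be checked by simulation; a sketch, assuming NumPy and SciPy, for the median (''p'' = 1/2) of a standard normal parent:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Asymptotic variance of the p-th sample quantile: p(1-p) / (n * f(F^{-1}(p))^2).
rng = np.random.default_rng(1)
n, p = 500, 0.5
q = stats.norm.ppf(p)                         # F^{-1}(p) = 0 for the median
asymptotic_var = p * (1 - p) / (n * stats.norm.pdf(q) ** 2)

reps = 10_000
samples = rng.standard_normal((reps, n))
quantiles = np.sort(samples, axis=1)[:, int(np.ceil(n * p)) - 1]  # X_(ceil(np))
print(quantiles.var(), asymptotic_var)        # should be close for large n
</syntaxhighlight>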
== Mutual information of order statistics ==
The [[mutual information]] and [[f-divergence]] between order statistics have also been considered.<ref>{{cite conference |first1=A. |last1=Dytso |first2=M. |last2=Cardone |first3=C. |last3=Rush |title=Measuring Dependencies of Order Statistics: An Information Theoretic Perspective |book-title=2020 IEEE Information Theory Workshop |year=2021 |doi=10.1109/ITW46852.2021.9457617}}</ref> For example, if the parent distribution is continuous, then for all <math>1 \le r, m\le n</math>

: <math> I(X_{(r)}; X_{(m)}) = I(U_{(r)}; U_{(m)}). </math>

In other words, the mutual information does not depend on the parent distribution. For discrete random variables, the equality need not hold, and we only have

: <math> I(X_{(r)}; X_{(m)}) \le I(U_{(r)}; U_{(m)}). </math>

The mutual information between uniform order statistics is given by

: <math> I(U_{(r)}; U_{(m)}) = T_{m-1} + T_{n-r} - T_{m-r+1} - T_n </math>

where

: <math> T_k = \log(k!) - kH_k </math>

and <math>H_k</math> is the <math>k</math>-th [[harmonic number]].

== Application: Non-parametric density estimation ==
Moments of the distribution for the first order statistic can be used to develop a non-parametric density estimator.<ref>{{cite journal |last1= Garg|first1= Vikram V.|last2= Tenorio|first2= Luis|last3= Willcox|first3= Karen|author3-link=Karen Willcox| date= 2017|title= Minimum local distance density estimation.|journal= Communications in Statistics - Theory and Methods|volume= 46|issue= 1|pages= 148–164|doi= 10.1080/03610926.2014.988260|arxiv= 1412.2851|s2cid= 14334678}}</ref> Suppose we want to estimate the density <math>f_{X}</math> at the point <math>x^*</math>. Consider the random variables <math>Y_i = |X_i - x^*|</math>, which are i.i.d. with density function <math>g_Y(y) = f_X(y + x^*) + f_X(x^* - y)</math>. In particular, <math>f_X(x^*) = \frac{g_Y(0)}{2}</math>.

The expected value of the first order statistic <math>Y_{(1)}</math> from a sample of <math>N</math> total observations is

: <math> E(Y_{(1)}) = \frac{1}{(N+1) g_Y(0)} + \frac{1}{(N+1)(N+2)} \int_{0}^{1} Q''(z) \delta_{N+1}(z) \, dz</math>

where <math>Q</math> is the quantile function associated with the distribution <math>g_{Y}</math>, and <math>\delta_N(z) = (N+1)(1-z)^N</math>. This equation, in combination with a [[Jackknife resampling|jackknifing]] technique, becomes the basis for the following density estimation algorithm (a code transcription follows the end of this section):

 Input: A sample of <math>N</math> observations. <math>\{x_\ell\}_{\ell=1}^M</math> points of density evaluation. Tuning parameter <math>a \in (0,1)</math> (usually 1/3).
 Output: <math>\{\hat{f}_\ell\}_{\ell=1}^M</math> estimated density at the points of evaluation.
 1: Set <math>m_N = \operatorname{round}(N^{1-a})</math>
 2: Set <math>s_N = \frac{N}{m_N}</math>
 3: Create an <math>s_N \times m_N</math> matrix <math>M_{ij}</math> which holds <math>m_N</math> subsets with <math>s_N</math> observations each.
 4: Create a vector <math>\hat{f}</math> to hold the density evaluations.
 5: '''for''' <math>\ell = 1 \to M</math> '''do'''
 6:     '''for''' <math>k = 1 \to m_N</math> '''do'''
 7:         Find the nearest distance <math>d_{\ell k}</math> to the current point <math>x_\ell</math> within the <math>k</math>th subset
 8:     '''end for'''
 9:     Compute the subset average of distances to <math>x_\ell</math>: <math>d_\ell = \sum_{k=1}^{m_N} \frac{d_{\ell k}}{m_N}</math>
 10:    Compute the density estimate at <math>x_\ell</math>: <math>\hat{f}_\ell = \frac{1}{2 (1+ s_N) d_\ell}</math>
 11: '''end for'''
 12: '''return''' <math>\hat{f}</math>

In contrast to the bandwidth/length based tuning parameters for [[histogram]] and [[Kernel density estimation|kernel]] based approaches, the tuning parameter for the order statistic based density estimator is the size of sample subsets. Such an estimator is more robust than histogram and kernel based approaches; for example, densities like the Cauchy distribution, which lack finite moments, can be inferred without the need for specialized modifications such as [[Freedman-Diaconis rule|IQR based bandwidths]]. This is because the first moment of the order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true.<ref>{{Citation | last1 = David | first1 = H. A. | last2 = Nagaraja | first2 = H. N. | title = Order Statistics | pages = 34 | year = 2003 | chapter = Chapter 3. Expected Values and Moments | doi = 10.1002/0471722162.ch3 | series = Wiley Series in Probability and Statistics | isbn = 9780471722168 }}</ref>
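A direct Python transcription of the pseudocode above, assuming NumPy; the random shuffle before partitioning into subsets, and the integer division in step 2, are one reasonable reading of the pseudocode rather than details fixed by the source:

<syntaxhighlight lang="python">
import numpy as np

def order_statistic_density(observations, eval_points, a=1/3, seed=0):
    """Nearest-distance subset density estimator following the pseudocode above."""
    x = np.asarray(observations, dtype=float)
    N = x.size
    m_N = int(round(N ** (1 - a)))   # step 1: number of subsets
    s_N = N // m_N                   # step 2: observations per subset
    rng = np.random.default_rng(seed)
    # step 3: shuffle, then partition into m_N subsets of s_N observations each
    subsets = rng.permutation(x)[: s_N * m_N].reshape(m_N, s_N)

    estimates = []
    for x_l in eval_points:
        d_lk = np.abs(subsets - x_l).min(axis=1)   # steps 6-8: nearest distance per subset
        d_l = d_lk.mean()                          # step 9: subset average
        estimates.append(1.0 / (2.0 * (1.0 + s_N) * d_l))  # step 10
    return np.array(estimates)

# Example: estimate a standard Cauchy density; the true value at 0 is 1/pi ~ 0.318.
rng = np.random.default_rng(42)
data = rng.standard_cauchy(10_000)
print(order_statistic_density(data, [0.0, 1.0, 2.0]))
</syntaxhighlight>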
== Dealing with discrete variables ==
Suppose <math>X_1,X_2,\ldots,X_n</math> are i.i.d. random variables from a discrete distribution with cumulative distribution function <math>F(x)</math> and [[probability mass function]] <math>f(x)</math>. To find the probabilities of the <math>k^\text{th}</math> order statistic, three values are first needed, namely

:<math>p_1=P(X<x)=F(x)-f(x), \ p_2=P(X=x)=f(x),\text{ and }p_3=P(X>x)=1-F(x).</math>

The cumulative distribution function of the <math>k^\text{th}</math> order statistic can be computed by noting that

:<math> \begin{align} P(X_{(k)}\leq x)& =P(\text{there are at least }k\text{ observations less than or equal to }x) ,\\ & =P(\text{there are at most }n-k\text{ observations greater than }x) ,\\ & =\sum_{j=0}^{n-k}{n\choose j}p_3^j(p_1+p_2)^{n-j} . \end{align} </math>

Similarly, <math>P(X_{(k)}<x)</math> is given by

:<math> \begin{align} P(X_{(k)}< x)& =P(\text{there are at least }k\text{ observations less than }x) ,\\ & =P(\text{there are at most }n-k\text{ observations greater than or equal to }x) ,\\ & =\sum_{j=0}^{n-k}{n\choose j}(p_2+p_3)^j(p_1)^{n-j} . \end{align} </math>

Note that the probability mass function of <math>X_{(k)}</math> is just the difference of these values, that is to say

:<math> \begin{align} P(X_{(k)}=x)&=P(X_{(k)}\leq x)-P(X_{(k)}< x) ,\\ &=\sum_{j=0}^{n-k}{n\choose j}\left(p_3^j(p_1+p_2)^{n-j}-(p_2+p_3)^j(p_1)^{n-j}\right) ,\\ &=\sum_{j=0}^{n-k}{n\choose j}\left((1-F(x))^j(F(x))^{n-j}-(1-F(x)+f(x))^j(F(x)-f(x))^{n-j}\right). \end{align} </math>
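These formulas translate directly into code; a short Python sketch (the fair-die example is purely illustrative):

<syntaxhighlight lang="python">
from math import comb

def order_statistic_pmf(k, n, x, F, f):
    """P(X_(k) = x) for n i.i.d. discrete samples with cdf F and pmf f,
    computed as P(X_(k) <= x) - P(X_(k) < x) per the formulas above."""
    p1, p2, p3 = F(x) - f(x), f(x), 1 - F(x)   # P(X<x), P(X=x), P(X>x)
    cdf_at = sum(comb(n, j) * p3**j * (p1 + p2)**(n - j) for j in range(n - k + 1))
    cdf_below = sum(comb(n, j) * (p2 + p3)**j * p1**(n - j) for j in range(n - k + 1))
    return cdf_at - cdf_below

# Example: the minimum (k = 1) of three rolls of a fair six-sided die.
F = lambda x: x / 6   # cdf on {1, ..., 6}
f = lambda x: 1 / 6   # pmf
pmf = [order_statistic_pmf(1, 3, x, F, f) for x in range(1, 7)]
print(pmf, sum(pmf))  # P(min = 1) = 91/216; the pmf sums to 1
</syntaxhighlight>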
== Computing order statistics ==
{{main|Selection algorithm|Sampling in order}}
The problem of computing the ''k''th smallest (or largest) element of a list is called the selection problem, and is solved by a selection algorithm. Although the problem might appear to require sorting, sophisticated selection algorithms can solve it in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log ''n''). In many applications all order statistics are required, in which case a [[sorting algorithm]] can be used and the time taken is O(''n'' log ''n'').

== Applications ==
Order statistics have many applications in areas such as reliability theory, financial mathematics, survival analysis, epidemiology, sports, quality control, and actuarial risk, and there is an extensive literature on their use in these fields. For example, a recent application in actuarial risk constructs weighted premium principles in terms of record claims and ''k''th record claims.<ref>{{Cite journal |last1=Castaño-Martínez |first1=A. |last2=López-Blázquez |first2=F. |last3=Pigueiras |first3=G. |last4=Sordo |first4=M.A. |year=2020 |title=A method for constructing and interpreting some weighted premium principles |journal=ASTIN Bulletin |volume=50 |issue=3 |pages=1037–1064 |doi=10.1017/asb.2020.15}}</ref>

== See also ==
* [[Rankit]]
* [[Box plot]]
* [[BRS-inequality]]
* [[Concomitant (statistics)]]
* [[Fisher–Tippett distribution]]
* [[Bapat–Beg theorem]] for the order statistics of independent but not necessarily identically distributed random variables
* [[Bernstein polynomial]]
* [[L-estimator]] – linear combinations of order statistics
* [[Rank-size distribution]]
* [[Selection algorithm]]

=== Examples of order statistics ===
* [[Sample maximum and minimum]]
* [[Quantile]]
* [[Percentile]]
* [[Descriptive statistics|Decile]]
* [[Quartile]]
* [[Median]]
* [[Mean]]
* [[Sample mean and covariance]]

== References ==
{{Reflist}}

== External links ==
* {{PlanetMath | urlname=OrderStatistics | title=Order statistics}} Retrieved Feb 02, 2005
* {{MathWorld | urlname=OrderStatistic | title=Order Statistic}} Retrieved Feb 02, 2005
* C++ source [https://github.com/xtaci/algorithms/blob/master/include/dos_tree.h Dynamic Order Statistics]

{{Statistics|inference}}
{{Authority control}}

{{DEFAULTSORT:Order Statistic}}
[[Category:Nonparametric statistics]]
[[Category:Summary statistics]]
[[Category:Permutations]]