Quantile


File:Iqr with quantile.png
Probability density of a normal distribution, with the quartiles <math>Q_1</math>, <math>Q_2</math>, and <math>Q_3</math> shown. The area below the red curve is the same in the intervals <math>(-\infty, Q_1)</math>, <math>(Q_1, Q_2)</math>, <math>(Q_2, Q_3)</math>, and <math>(Q_3, +\infty)</math>.

In statistics and probability, quantiles are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities or dividing the observations in a sample in the same way. There is one fewer quantile than the number of groups created. Common quantiles have special names, such as quartiles (four groups), deciles (ten groups), and percentiles (100 groups). The groups created are termed halves, thirds, quarters, etc., though sometimes the terms for the quantile are used for the groups created, rather than for the cut points.

<math>q</math>-quantiles are values that partition a finite set of values into <math>q</math> subsets of (nearly) equal sizes. There are <math>q-1</math> partitions of the <math>q</math>-quantiles, one for each integer <math>k</math> satisfying <math>0 < k < q</math>. In some cases the value of a quantile may not be uniquely determined, as can be the case for the median (2-quantile) of a uniform probability distribution on a set of even size. Quantiles can also be applied to continuous distributions, providing a way to generalize rank statistics to continuous variables (see percentile rank). When the cumulative distribution function of a random variable is known, the <math>q</math>-quantiles are the application of the quantile function (the inverse function of the cumulative distribution function) to the values <math>\{1/q, 2/q, \ldots, (q-1)/q\}</math>.

Quantiles of a population

As in the computation of, for example, standard deviation, the estimation of a quantile depends upon whether one is operating with a statistical population or with a sample drawn from it. For a population of discrete values, or for a continuous population density, the <math>k</math>-th <math>q</math>-quantile is the data value where the cumulative distribution function crosses <math>k/q</math>. That is, <math>x</math> is a <math>k</math>-th <math>q</math>-quantile for a variable <math>X</math> if

<math>\Pr[X < x] \le k/q</math> or, equivalently, <math>\Pr[X \ge x] \ge 1 - k/q</math>

and

<math>\Pr[X \le x] \ge k/q</math>

where <math>\Pr</math> is the probability function. For a finite population of <math>N</math> equally probable values indexed <math>1, \ldots, N</math> from lowest to highest, the <math>k</math>-th <math>q</math>-quantile of this population can equivalently be computed via the index <math>Nk/q</math>. If <math>Nk/q</math> is not an integer, then round up to the next integer to get the appropriate index; the corresponding data value is the <math>k</math>-th <math>q</math>-quantile. On the other hand, if <math>Nk/q</math> is an integer then any number from the data value at that index to the data value of the next index can be taken as the quantile, and it is conventional (though arbitrary) to take the average of those two values (see Estimating quantiles from a sample).

If, instead of using integers <math>k</math> and <math>q</math>, the "<math>p</math>-quantile" is based on a real number <math>p</math> with <math>0 < p < 1</math>, then <math>p</math> replaces <math>k/q</math> in the above formulas. This broader terminology is used when quantiles are used to parameterize continuous probability distributions. Moreover, some software programs (including Microsoft Excel) regard the minimum and maximum as the 0th and 100th percentile, respectively. However, this broader terminology is an extension beyond traditional statistics definitions.
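
A minimal sketch of this computation for a continuous distribution (the function name is illustrative): the exponential distribution with rate <math>\lambda</math> is used here because its quantile function has the closed form <math>F^{-1}(p) = -\ln(1-p)/\lambda</math>.

<syntaxhighlight lang="python">
# Sketch: q-quantiles of a continuous distribution obtained by applying its
# quantile function (inverse CDF) to the values k/q for k = 1, ..., q - 1.
import math

def exponential_quantile(p: float, lam: float = 1.0) -> float:
    """Inverse CDF of the exponential distribution with rate `lam`."""
    return -math.log(1.0 - p) / lam

q = 4  # quartiles
quartiles = [exponential_quantile(k / q) for k in range(1, q)]
print(quartiles)  # approximately [0.288, 0.693, 1.386] for lam = 1
</syntaxhighlight>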

Examples

The following two examples use the Nearest Rank definition of quantile with rounding. For an explanation of this definition, see percentiles.

Even-sized population

Consider an ordered population of 10 data values [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]. What are the 4-quantiles (the "quartiles") of this dataset?

Quartile | Calculation | Result
Zeroth quartile | Although not universally accepted, one can also speak of the zeroth quartile. This is the minimum value of the set, so the zeroth quartile in this example would be 3. | 3
First quartile | The rank of the first quartile is 10×(1/4) = 2.5, which rounds up to 3, meaning that 3 is the rank in the population (from least to greatest values) at which approximately 1/4 of the values are less than the value of the first quartile. The third value in the population is 7. | 7
Second quartile | The rank of the second quartile (same as the median) is 10×(2/4) = 5, which is an integer, while the number of values (10) is an even number, so the average of both the fifth and sixth values is taken, that is (8+10)/2 = 9, though any value from 8 through to 10 could be taken to be the median. | 9
Third quartile | The rank of the third quartile is 10×(3/4) = 7.5, which rounds up to 8. The eighth value in the population is 15. | 15
Fourth quartile | Although not universally accepted, one can also speak of the fourth quartile. This is the maximum value of the set, so the fourth quartile in this example would be 20. Under the Nearest Rank definition of quantile, the rank of the fourth quartile is the rank of the biggest number, so the rank of the fourth quartile would be 10. | 20

So the first, second and third 4-quantiles (the "quartiles") of the dataset [3, 6, 7, 8, 8, 10, 13, 15, 16, 20] are [7, 9, 15]. If also required, the zeroth quartile is 3 and the fourth quartile is 20.

Odd-sized population

Consider an ordered population of 11 data values [3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20]. What are the 4-quantiles (the "quartiles") of this dataset?

Quartile | Calculation | Result
Zeroth quartile | Although not universally accepted, one can also speak of the zeroth quartile. This is the minimum value of the set, so the zeroth quartile in this example would be 3. | 3
First quartile | The first quartile is determined by 11×(1/4) = 2.75, which rounds up to 3, meaning that 3 is the rank in the population (from least to greatest values) at which approximately 1/4 of the values are less than the value of the first quartile. The third value in the population is 7. | 7
Second quartile | The second quartile value (same as the median) is determined by 11×(2/4) = 5.5, which rounds up to 6. Therefore, 6 is the rank in the population (from least to greatest values) at which approximately 2/4 of the values are less than the value of the second quartile (or median). The sixth value in the population is 9. | 9
Third quartile | The third quartile value is determined by 11×(3/4) = 8.25, which rounds up to 9. The ninth value in the population is 15. | 15
Fourth quartile | Although not universally accepted, one can also speak of the fourth quartile. This is the maximum value of the set, so the fourth quartile in this example would be 20. Under the Nearest Rank definition of quantile, the rank of the fourth quartile is the rank of the biggest number, so the rank of the fourth quartile would be 11. | 20

So the first, second and third 4-quantiles (the "quartiles") of the dataset [3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20] are [7, 9, 15]. If also required, the zeroth quartile is 3 and the fourth quartile is 20.
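
A minimal sketch of the rule used in both examples (the function name is illustrative): compute <math>Nk/q</math>, round up when it is not an integer, and otherwise average the value at that rank with the next one, as described under Quantiles of a population.

<syntaxhighlight lang="python">
# Sketch of the rounding rule used in the two worked examples above.
import math

def nearest_rank_quantile(sorted_values, k, q):
    """k-th q-quantile of an ordered population, for 0 < k < q."""
    n = len(sorted_values)
    rank = n * k / q
    if rank != int(rank):
        return sorted_values[math.ceil(rank) - 1]          # non-integer rank: round up
    i = int(rank)
    return (sorted_values[i - 1] + sorted_values[i]) / 2   # integer rank: average with next value

even = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]
odd = [3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20]
print([nearest_rank_quantile(even, k, 4) for k in (1, 2, 3)])  # [7, 9.0, 15]
print([nearest_rank_quantile(odd, k, 4) for k in (1, 2, 3)])   # [7, 9, 15]
</syntaxhighlight>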

Relationship to the mean

For any population probability distribution on finitely many values, and generally for any probability distribution with a mean and variance, it is the case that <math display="block">\mu - \sigma\cdot\sqrt{\frac{1-p}{p}} \le Q(p) \le \mu + \sigma\cdot\sqrt{\frac{p}{1-p}}\,,</math> where <math>Q(p)</math> is the value of the <math>p</math>-quantile for <math>0 < p < 1</math> (or equivalently is the <math>k</math>-th <math>q</math>-quantile for <math>p = k/q</math>), where <math>\mu</math> is the distribution's arithmetic mean, and where <math>\sigma</math> is the distribution's standard deviation.<ref>Template:Cite journal</ref> In particular, the median (<math>p = k/q = 1/2</math>) is never more than one standard deviation from the mean.

The above formula can be used to bound the value <math>\mu + z\sigma</math> in terms of quantiles. When <math>z \ge 0</math>, the value that is <math>z</math> [[standard score|standard deviations]] above the mean has a lower bound <math display="block">\mu + z \sigma \ge Q\left(\frac{z^2}{1+z^2}\right)\,,\mathrm{~for~} z \ge 0.</math> For example, the value that is <math>z = 1</math> standard deviation above the mean is always greater than or equal to <math>Q(0.5)</math>, the median, and the value that is <math>z = 2</math> standard deviations above the mean is always greater than or equal to <math>Q(0.8)</math>, the fourth quintile.

When <math>z \le 0</math>, there is instead an upper bound <math display="block">\mu + z \sigma \le Q\left(\frac{1}{1+z^2}\right)\,,\mathrm{~for~} z \le 0.</math> For example, the value <math>\mu + z\sigma</math> for <math>z = -3</math> will never exceed <math>Q(0.1)</math>, the first decile.
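
A numerical sanity check of these bounds on the ten-value population from the earlier example (an illustrative sketch; the tested probabilities and the default empirical quantile estimator are arbitrary choices):

<syntaxhighlight lang="python">
# Sketch: check mu - sigma*sqrt((1-p)/p) <= Q(p) <= mu + sigma*sqrt(p/(1-p))
# on a small finite population, using the population standard deviation.
import numpy as np

population = np.array([3, 6, 7, 8, 8, 10, 13, 15, 16, 20], dtype=float)
mu, sigma = population.mean(), population.std()   # population sigma (ddof = 0)

for p in (0.1, 0.25, 0.5, 0.75, 0.9):
    q_p = np.quantile(population, p)
    lower = mu - sigma * np.sqrt((1 - p) / p)
    upper = mu + sigma * np.sqrt(p / (1 - p))
    assert lower <= q_p <= upper
    print(f"p = {p:.2f}: {lower:6.2f} <= Q(p) = {q_p:5.2f} <= {upper:6.2f}")
</syntaxhighlight>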

Estimating quantiles from a sample

One problem which frequently arises is estimating a quantile of a (very large or infinite) population based on a finite sample of size <math>N</math>.

Modern statistical packages rely on a number of techniques to estimate the quantiles.

Hyndman and Fan compiled a taxonomy of nine algorithms<ref>Template:Cite journal</ref> used by various software packages. All methods compute <math>Q_p</math>, the estimate for the <math>p</math>-quantile (the <math>k</math>-th <math>q</math>-quantile, where <math>p = k/q</math>) from a sample of size <math>N</math> by computing a real-valued index <math>h</math>. When <math>h</math> is an integer, the <math>h</math>-th smallest of the <math>N</math> values, <math>x_h</math>, is the quantile estimate. Otherwise a rounding or interpolation scheme is used to compute the quantile estimate from <math>h</math>, <math>x_{\lfloor h \rfloor}</math>, and <math>x_{\lceil h \rceil}</math>. (For notation, see floor and ceiling functions.)

The first three are piecewise constant, changing abruptly at each data point, while the last six use linear interpolation between data points and differ only in how the index <math>h</math>, used to choose the point along the piecewise linear interpolation curve, is chosen.

Mathematica,<ref>Mathematica Documentation See 'Details' section</ref> Matlab,<ref>Template:Cite web</ref> R<ref>Template:Cite book</ref> and GNU Octave<ref name="Function Reference: quantile – Octave-Forge – SourceForge">Template:Cite web</ref> programming languages support all nine sample quantile methods. SAS includes five sample quantile methods, SciPy<ref>Template:Cite web</ref> and Maple<ref>Template:Cite web</ref> both include eight, EViews<ref>Template:Cite web</ref> and Julia<ref>Template:Cite web</ref> include the six piecewise linear functions, Stata<ref>Stata documentation for the pctile and xtile commands See 'Methods and formulas' section.</ref> includes two, Python<ref>Template:Cite web</ref> includes two, and Microsoft Excel includes two. Mathematica, SciPy and Julia support arbitrary parameters for methods, which allows for other, non-standard methods.

The estimate types and interpolation schemes used include:

Type | <math>h</math> | <math>Q_p</math> | Notes
R‑1, SAS‑3, Maple‑1 | <math>Np + 1/2</math> | <math>x_{\lceil h - 1/2 \rceil}</math> | Inverse of empirical distribution function.
R‑2, SAS‑5, Maple‑2, Stata | <math>Np + 1/2</math> | <math>\frac{1}{2}\left(x_{\lceil h - 1/2 \rceil} + x_{\lfloor h + 1/2 \rfloor}\right)</math> | The same as R-1, but with averaging at discontinuities.
R‑3, SAS‑2 | <math>Np - 1/2</math> | <math>x_{\lfloor h \rceil}</math> | The observation numbered closest to <math>Np</math>. Here, <math>\lfloor h \rceil</math> indicates rounding to the nearest integer, choosing the even integer in the case of a tie.
R‑4, SAS‑1, SciPy‑(0,1), Julia‑(0,1), Maple‑3 | <math>Np</math> | <math>x_{\lfloor h \rfloor} + (h - \lfloor h \rfloor)\left(x_{\lfloor h \rfloor + 1} - x_{\lfloor h \rfloor}\right)</math> | Linear interpolation of the inverse of the empirical distribution function.
R‑5, SciPy‑(1/2,1/2), Julia‑(1/2,1/2), Maple‑4 | <math>Np + 1/2</math> | as for R‑4 | Piecewise linear function where the knots are the values midway through the steps of the empirical distribution function.
R‑6, Excel, Python, SAS‑4, SciPy‑(0,0), Julia-(0,0), Maple‑5, Stata‑altdef | <math>(N + 1)p</math> | as for R‑4 | Linear interpolation of the expectations for the order statistics for the uniform distribution on [0,1]. That is, it is the linear interpolation between points <math>(p_h, x_h)</math>, where <math>p_h = h/(N+1)</math> is the probability that the last of <math>N + 1</math> randomly drawn values will not exceed the <math>h</math>-th smallest of the first <math>N</math> randomly drawn values.
R‑7, Excel, Python, SciPy‑(1,1), Julia-(1,1), Maple‑6, NumPy | <math>(N - 1)p + 1</math> | as for R‑4 | Linear interpolation of the modes for the order statistics for the uniform distribution on [0,1].
R‑8, SciPy‑(1/3,1/3), Julia‑(1/3,1/3), Maple‑7 | <math>(N + 1/3)p + 1/3</math> | as for R‑4 | Linear interpolation of the approximate medians for order statistics.
R‑9, SciPy‑(3/8,3/8), Julia‑(3/8,3/8), Maple‑8 | <math>(N + 1/4)p + 3/8</math> | as for R‑4 | The resulting quantile estimates are approximately unbiased for the expected order statistics if <math>x</math> is normally distributed.

Notes:

  • R‑1 through R‑3 are piecewise constant, with discontinuities.
  • R‑4 and following are piecewise linear, without discontinuities, but differ in how Template:Mvar is computed.
  • R‑3 and R‑4 are not symmetric in that they do not give <math>h = (N + 1)/2</math> when <math>p = 1/2</math>.
  • Excel's PERCENTILE.EXC and Python's default "exclusive" method are equivalent to R‑6.
  • Excel's PERCENTILE and PERCENTILE.INC and Python's optional "inclusive" method are equivalent to R‑7. This is R's and Julia's default method.
  • Packages differ in how they estimate quantiles beyond the lowest and highest values in the sample, i.e. when the computed index <math>h</math> falls below 1 or exceeds <math>N</math>. Choices include returning an error value, computing linear extrapolation, or assuming a constant value.

Of the techniques, Hyndman and Fan recommend R-8, but most statistical software packages have chosen R-6 or R-7 as the default.<ref>Template:Cite web</ref>
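
In recent versions of NumPy (1.22 or later), for instance, the method keyword of numpy.quantile follows the Hyndman and Fan taxonomy, so the nine estimators can be compared directly; the following sketch illustrates this on a small sample.

<syntaxhighlight lang="python">
# Sketch: comparing the nine Hyndman-Fan estimators as exposed by NumPy >= 1.22.
import numpy as np

sample = np.array([3, 6, 7, 8, 8, 10, 13, 15, 16, 20], dtype=float)
methods = [
    "inverted_cdf",               # R-1
    "averaged_inverted_cdf",      # R-2
    "closest_observation",        # R-3
    "interpolated_inverted_cdf",  # R-4
    "hazen",                      # R-5
    "weibull",                    # R-6
    "linear",                     # R-7 (NumPy's default)
    "median_unbiased",            # R-8 (recommended by Hyndman and Fan)
    "normal_unbiased",            # R-9
]
for m in methods:
    print(f"{m:>26s}: Q(0.25) = {np.quantile(sample, 0.25, method=m):.3f}")
</syntaxhighlight>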

The standard error of a quantile estimate can in general be estimated via the bootstrap. The Maritz–Jarrett method can also be used.<ref>Template:Cite book</ref>
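
A minimal bootstrap sketch for the standard error of a sample quantile (the simulated data and resample count are arbitrary illustrative choices): resample with replacement, recompute the quantile, and take the standard deviation of the bootstrap replicates.

<syntaxhighlight lang="python">
# Sketch: bootstrap standard error of a sample quantile.
import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=500)

def bootstrap_quantile_se(data, p, n_boot=2000):
    replicates = [
        np.quantile(rng.choice(data, size=len(data), replace=True), p)
        for _ in range(n_boot)
    ]
    return np.std(replicates, ddof=1)

print("sample median:", np.quantile(sample, 0.5))
print("bootstrap standard error of the median:", bootstrap_quantile_se(sample, 0.5))
</syntaxhighlight>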

The asymptotic distribution of the sample median

The sample median is the most examined of the quantiles, being an alternative for estimating a location parameter when the expected value of the distribution does not exist and hence the sample mean is not a meaningful estimator of a population characteristic. Moreover, the sample median is a more robust estimator than the sample mean.

One peculiarity of the sample median is its asymptotic distribution: when the sample comes from a continuous distribution, then the sample median has the anticipated Normal asymptotic distribution,

<math>\text{Sample median m} \sim \mathcal{N}\left(\mu=m, \sigma^2=\frac{1}{4Nf(m)^2}\right)</math>

This extends to the other quantiles,

<math>\text{Sample quantile p} \sim \mathcal{N}\left(\mu=x_p, \sigma^2=\frac{p( 1 - p )}{Nf(x_p)^2}\right)</math>

where <math>f(x_p)</math> is the value of the distribution density at the <math>p</math>-th population quantile (<math>x_p=F^{-1}(p)</math>).<ref name="Stuart1994">Template:Cite book</ref>
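
A Monte Carlo sketch of this result for the standard normal distribution, where <math>m = 0</math> and <math>f(m) = 1/\sqrt{2\pi}</math>, so the asymptotic variance of the sample median is <math>\pi/(2N)</math> (the sample size and number of replications are arbitrary illustrative choices):

<syntaxhighlight lang="python">
# Sketch: the variance of the sample median of N standard-normal observations
# should be close to the asymptotic value pi / (2 N).
import numpy as np

rng = np.random.default_rng(1)
N, n_rep = 1000, 5000
medians = np.median(rng.standard_normal((n_rep, N)), axis=1)

print("empirical variance of the sample median:", medians.var())
print("asymptotic value pi / (2N):             ", np.pi / (2 * N))
</syntaxhighlight>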

But when the distribution is discrete, the distribution of the sample median and the other quantiles fails to be Normal (see examples in https://stats.stackexchange.com/a/86638/28746).

A solution to this problem is to use an alternative definition of sample quantiles through the concept of the "mid-distribution" function, which is defined as

<math>F_\text{mid}(x) = P(X\le x) - \frac 12P(X=x)</math>

Defining sample quantiles through the mid-distribution function can be seen as a generalization that covers continuous distributions as a special case. For discrete distributions the sample median defined through this concept has an asymptotically Normal distribution; see Ma, Y., Genton, M. G., & Parzen, E. (2011). Asymptotic properties of sample quantiles of discrete distributions. Annals of the Institute of Statistical Mathematics, 63(2), 227–243.
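
A small sketch of the mid-distribution function evaluated empirically from a sample of a discrete variable (the data are an arbitrary illustrative choice):

<syntaxhighlight lang="python">
# Sketch: empirical mid-distribution function F_mid(x) = P(X <= x) - 0.5 * P(X = x).
import numpy as np

def mid_distribution(sample, x):
    sample = np.asarray(sample)
    return np.mean(sample <= x) - 0.5 * np.mean(sample == x)

data = [1, 1, 2, 2, 2, 3, 4, 4, 5, 5]
for x in sorted(set(data)):
    print(x, mid_distribution(data, x))
</syntaxhighlight>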

Approximate quantiles from a stream

Computing approximate quantiles from data arriving from a stream can be done efficiently using compressed data structures. The most popular methods are t-digest<ref name="Dunning2019">Template:Cite arXiv</ref> and KLL.<ref name="Karnin2016">Template:Cite arXiv</ref> These methods read a stream of values in a continuous fashion and can, at any time, be queried about the approximate value of a specified quantile.

Both algorithms are based on a similar idea: compressing the stream of values by summarizing identical or similar values with a weight. If the stream is made of a repetition of 100 times v1 and 100 times v2, there is no reason to keep a sorted list of 200 elements; it is enough to keep two elements and two counts to be able to recover the quantiles. With more values, these algorithms maintain a trade-off between the number of unique values stored and the precision of the resulting quantiles. Some values may be discarded from the stream and contribute to the weight of a nearby value without changing the quantile results too much. The t-digest maintains a data structure of bounded size using an approach motivated by k-means clustering to group similar values. The KLL algorithm uses a more sophisticated "compactor" method that leads to better control of the error bounds, at the cost of requiring an unbounded size if errors must be bounded relative to <math>p</math>.

Both methods belong to the family of data sketches, which are a subset of streaming algorithms with useful properties: t-digest or KLL sketches can be combined. Computing the sketch for a very large vector of values can be split into trivially parallel processes where sketches are computed for partitions of the vector in parallel and merged afterwards.
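
The following toy sketch (not t-digest or KLL themselves; all names are illustrative) captures the shared idea of the two paragraphs above: a bounded list of (value, weight) pairs that is compressed by merging neighbouring values, can be queried for approximate quantiles, and can be merged with another summary.

<syntaxhighlight lang="python">
# Toy sketch of a compressed (value, weight) summary for streaming quantiles.
import bisect
import random

class ToyQuantileSketch:
    def __init__(self, max_size=100):
        self.max_size = max_size
        self.points = []                      # sorted list of [value, weight]

    def update(self, x):
        """Add one observation from the stream."""
        i = bisect.bisect_left([v for v, _ in self.points], x)
        if i < len(self.points) and self.points[i][0] == x:
            self.points[i][1] += 1            # identical value: just add weight
        else:
            self.points.insert(i, [x, 1])
            if len(self.points) > self.max_size:
                self._compress()

    def merge(self, other):
        """Combine two summaries computed on different parts of the data."""
        for v, w in other.points:
            i = bisect.bisect_left([u for u, _ in self.points], v)
            if i < len(self.points) and self.points[i][0] == v:
                self.points[i][1] += w
            else:
                self.points.insert(i, [v, w])
        while len(self.points) > self.max_size:
            self._compress()
        return self

    def _compress(self):
        """Merge the two closest neighbouring values, preserving total weight."""
        i = min(range(len(self.points) - 1),
                key=lambda j: self.points[j + 1][0] - self.points[j][0])
        (v1, w1), (v2, w2) = self.points[i], self.points[i + 1]
        self.points[i:i + 2] = [[(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2]]

    def quantile(self, p):
        """Approximate p-quantile read off the weighted summary."""
        total = sum(w for _, w in self.points)
        running = 0.0
        for v, w in self.points:
            running += w
            if running >= p * total:
                return v
        return self.points[-1][0]

sketch = ToyQuantileSketch(max_size=50)
for _ in range(10_000):
    sketch.update(random.gauss(0.0, 1.0))
print(sketch.quantile(0.5))                   # roughly 0 for a standard normal stream
</syntaxhighlight>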

The algorithms described so far directly approximate the empirical quantiles without any particular assumptions on the data; in essence, the data are simply numbers or, more generally, a set of items that can be ordered. These algorithms are computer-science derived methods. Another class of algorithms exists which assumes that the data are realizations of a random process. These are statistics-derived methods, in particular sequential nonparametric estimation algorithms. There are a number of such algorithms, such as those based on stochastic approximation<ref name="tierney1983">Template:Cite journal</ref><ref name="chen2000">Template:Cite book</ref> or Hermite series estimators.<ref name="stephanou2017">Template:Cite journal</ref>

These statistics-based algorithms typically have constant update time and space complexity, but they have different error-bound guarantees than the computer-science methods and make more assumptions. They do, however, present certain advantages, particularly in the non-stationary streaming setting, i.e. with time-varying data. The algorithms of both classes, along with some of their respective advantages and disadvantages, have recently been surveyed.<ref name="StephanouHermiter2022">Template:Cite journal</ref>
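
A sketch of one such sequential, statistics-based estimator, in the spirit of a Robbins–Monro stochastic-approximation update for the <math>p</math>-quantile (a simplified illustration with an arbitrary gain constant, not the exact algorithms of the cited works): each observation nudges the current estimate up or down depending on whether it exceeds the estimate, giving constant update time and constant memory.

<syntaxhighlight lang="python">
# Sketch: stochastic-approximation (Robbins-Monro style) estimate of a p-quantile.
import numpy as np

def sa_quantile(stream, p, c=1.0):
    q = None
    for n, x in enumerate(stream, start=1):
        if q is None:
            q = float(x)                 # initialise with the first observation
            continue
        q += (c / n) * (p - (x <= q))    # move up if x > q, down otherwise
    return q

rng = np.random.default_rng(2)
stream = rng.standard_normal(100_000)
print("stochastic-approximation estimate of the 0.9 quantile:", sa_quantile(stream, 0.9, c=5.0))
print("0.9 quantile of the standard normal: about 1.2816")
</syntaxhighlight>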

Discussion

Standardized test results are commonly reported as a student scoring "in the 80th percentile", for example. This uses an alternative meaning of the word percentile as the interval between (in this case) the 80th and the 81st scalar percentile.<ref>Template:Cite journal</ref> This separate meaning of percentile is also used in peer-reviewed scientific research articles.<ref>Template:Cite journal</ref> The meaning used can be derived from its context.

If a distribution is symmetric, then the median is the mean (so long as the latter exists). But, in general, the median and the mean can differ. For instance, with a random variable that has an exponential distribution, any particular sample of this random variable will have roughly a 63% chance of being less than the mean. This is because the exponential distribution has a long tail for positive values but is zero for negative numbers.
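
Concretely, for an exponential distribution with rate <math>\lambda</math> the mean is <math>1/\lambda</math>, and <math display="block">\Pr\left[X < \tfrac{1}{\lambda}\right] = 1 - e^{-1} \approx 0.63\,.</math>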

Quantiles are useful measures because they are less susceptible than means to long-tailed distributions and outliers. Empirically, if the data being analyzed are not actually distributed according to an assumed distribution, or if there are other potential sources for outliers that are far removed from the mean, then quantiles may be more useful descriptive statistics than means and other moment-related statistics.

Closely related is the subject of least absolute deviations, a method of regression that is more robust to outliers than is least squares, in which the sum of the absolute values of the observed errors is used in place of the sum of squared errors. The connection is that the mean is the single estimate of a distribution that minimizes expected squared error while the median minimizes expected absolute error. Least absolute deviations shares the ability to be relatively insensitive to large deviations in outlying observations, although even better methods of robust regression are available.

The quantiles of a random variable are preserved under increasing transformations, in the sense that, for example, if Template:Mvar is the median of a random variable Template:Mvar, then Template:Math is the median of Template:Math, unless an arbitrary choice has been made from a range of values to specify a particular quantile. (See quantile estimation, above, for examples of such interpolation.) Quantiles can also be used in cases where only ordinal data are available.

Other quantifications

Values that divide sorted data into equal subsets other than four have different names.


References

Template:Reflist
