Unimodality
{{Short description|Property of having a unique mode or maximum value}}
{{redirect|Unimodal|the company that promotes personal rapid transit|SkyTran}}

In [[mathematics]], '''unimodality''' means possessing a unique [[mode (statistics)|mode]]. More generally, unimodality means there is only a single highest value, somehow defined, of some [[mathematical object]].<ref>{{MathWorld|urlname=Unimodal|title=Unimodal}}</ref>

== Unimodal probability distribution ==
[[File:Normal Distribution PDF.svg|thumb|'''Figure 1.''' [[Probability density function]] of normal distributions, an example of a unimodal distribution.]]
[[File:Bimodal.png|thumb|'''Figure 2.''' A simple bimodal distribution.]]
[[File:Bimodal geological.PNG|thumb|'''Figure 3.''' A bimodal distribution. Note that only the largest peak would correspond to a mode in the strict sense of the definition of mode.]]

In [[statistics]], a '''unimodal probability distribution''' or '''unimodal distribution''' is a [[probability distribution]] which has a single peak. The term "mode" in this context refers to any peak of the distribution, not just to the strict definition of [[mode (statistics)|mode]] which is usual in statistics. If there is a single mode, the distribution function is called "unimodal". If it has more than one mode, it is "bimodal" (2), "trimodal" (3), etc., or in general, "multimodal".<ref>{{MathWorld|urlname=Mode|title=Mode}}</ref> Figure 1 illustrates [[normal distribution]]s, which are unimodal. Other examples of unimodal distributions include the [[Cauchy distribution]], [[Student's t-distribution|Student's ''t''-distribution]], [[chi-squared distribution]] and [[exponential distribution]]. Among discrete distributions, the [[binomial distribution]] and [[Poisson distribution]] can be seen as unimodal, though for some parameters they can have two adjacent values with the same probability. Figure 2 and Figure 3 illustrate bimodal distributions.

===Other definitions===
Other definitions of unimodality in distribution functions also exist.

In continuous distributions, unimodality can be defined through the behavior of the [[cumulative distribution function]] (cdf).<ref name=Khinchin>{{cite journal|author=A.Ya. Khinchin|title=On unimodal distributions|journal=Trans. Res. Inst. Math. Mech.|publisher=University of Tomsk|volume=2|issue=2|year=1938|pages=1–7|language=ru}}</ref> If the cdf is [[convex function|convex]] for ''x'' < ''m'' and [[concave function|concave]] for ''x'' > ''m'', then the distribution is unimodal, ''m'' being the mode. Note that under this definition the [[uniform distribution (continuous)|uniform distribution]] is unimodal,<ref>{{Springer|title=Unimodal distribution|id=U/u095330|first=N.G.|last=Ushakov}}</ref> as is any other distribution in which the maximum value is achieved over a range of values, such as the trapezoidal distribution. This definition also allows for a discontinuity at the mode: usually, in a continuous distribution the probability of any single value is zero, while this definition allows for a non-zero probability, or an "atom of probability", at the mode.

Criteria for unimodality can also be defined through the [[characteristic function (probability theory)|characteristic function]] of the distribution<ref name=Khinchin/> or through its [[Laplace–Stieltjes transform]].<ref>{{cite book|title=Random summation: limit theorems and applications|author=Boris Vladimirovich Gnedenko and Victor Yu Korolev|isbn=0-8493-2875-6|publisher=CRC-Press|year=1996}} p. 31</ref>

Another way to define a unimodal discrete distribution is by the occurrence of sign changes in the sequence of differences of the probabilities.<ref>{{cite journal|title=On the unimodality of discrete distributions |journal=Periodica Mathematica Hungarica|first=P. |last=Medgyessy|volume= 2| issue = 1–4 |pages=245–257|date=March 1972|url=http://www.akademiai.com/content/j5012306777g764n/ |doi=10.1007/bf02018665|s2cid=119817256 }}</ref> A discrete distribution with a [[probability mass function]], <math>\{p_n : n = \dots, -1, 0, 1, \dots\}</math>, is called unimodal if the sequence <math>\dots, p_{-2} - p_{-1}, p_{-1} - p_0, p_0 - p_1, p_1 - p_2, \dots</math> has exactly one sign change (where zero differences do not count).
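For illustration, this discrete criterion can be checked mechanically. The following minimal Python sketch (the function name and the example probabilities are illustrative only, not taken from the cited sources) counts the sign changes of the difference sequence, padding the support with zeroes so that a finite list behaves like a probability mass function over all integers:

<syntaxhighlight lang="python">
def is_unimodal_pmf(probs):
    """Return True if the probability sequence is unimodal in the sense above:
    the sequence of consecutive differences has exactly one sign change,
    ignoring zero differences.  Zero padding on both sides makes a finite
    support behave like a pmf defined over all integers."""
    padded = [0.0] + list(probs) + [0.0]
    diffs = [b - a for a, b in zip(padded, padded[1:])]
    signs = [1 if d > 0 else -1 for d in diffs if d != 0]   # drop zero differences
    changes = sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    return changes == 1

# Illustrative probabilities only:
print(is_unimodal_pmf([0.1, 0.2, 0.4, 0.2, 0.1]))   # True: a single peak
print(is_unimodal_pmf([0.3, 0.1, 0.2, 0.1, 0.3]))   # False: two peaks
</syntaxhighlight>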
===Uses and results===
One reason for the importance of distribution unimodality is that it allows for several important results. Several [[inequality (mathematics)|inequalities]] are given below which are only valid for unimodal distributions. Thus, it is important to assess whether or not a given data set comes from a unimodal distribution. Several tests for unimodality are given in the article on [[multimodal distribution]].

===Inequalities===
{{See also|Chebyshev's inequality#Unimodal distributions}}

====Gauss's inequality====
A first important result is [[Gauss's inequality]].<ref>{{cite journal|last=Gauss|first=C. F.|author-link=Carl Friedrich Gauss|year=1823|title=Theoria Combinationis Observationum Erroribus Minimis Obnoxiae, Pars Prior|journal=Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores|volume=5}}</ref> Gauss's inequality gives an upper bound on the probability that a value lies more than any given distance from its mode. This inequality depends on unimodality.

====Vysochanskiï–Petunin inequality====
A second is the [[Vysochanskiï–Petunin inequality]],<ref>{{cite journal |author=D. F. Vysochanskij, Y. I. Petunin |year=1980 |title=Justification of the 3σ rule for unimodal distributions |journal=Theory of Probability and Mathematical Statistics |volume=21 |pages=25–36}}</ref> a refinement of the [[Chebyshev inequality]]. The Chebyshev inequality guarantees that in any probability distribution, "nearly all" the values are "close to" the mean value. The Vysochanskiï–Petunin inequality refines this to even nearer values, provided that the distribution function is continuous and unimodal. Further results were shown by Sellke and Sellke.<ref>{{Cite journal | last1 = Sellke | first1 = T.M. | last2 = Sellke | first2 = S.H. | title = Chebyshev inequalities for unimodal distributions | jstor = 2684690 | year = 1997 | journal = [[American Statistician]] | volume = 51 | issue = 1 | pages = 34–40 | publisher = American Statistical Association | doi=10.2307/2684690 }}</ref>

====Mode, median and mean====
Gauss also showed in 1823 that for a unimodal distribution<ref name=Gauss1823>Gauss C.F. Theoria Combinationis Observationum Erroribus Minimis Obnoxiae. Pars Prior. Pars Posterior. Supplementum. Theory of the Combination of Observations Least Subject to Errors. Part One. Part Two. Supplement. 1995. Translated by G.W. Stewart. Classics in Applied Mathematics Series, Society for Industrial and Applied Mathematics, Philadelphia</ref>

: <math>\sigma \le \omega \le 2 \sigma</math>

and

: <math>|\nu - \mu| \le \sqrt{\frac{3}{4}} \omega ,</math>

where the [[median]] is ''ν'', the mean is ''μ'' and ''ω'' is the [[root mean square deviation]] from the mode.
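As an illustrative check of these bounds (an added example, not taken from the cited sources), consider the [[exponential distribution]] with rate 1, which is unimodal with mode 0, mean ''μ'' = 1, median ''ν'' = ln 2 and standard deviation ''σ'' = 1. The root mean square deviation from the mode is <math>\omega = \sqrt{\operatorname{E}[X^2]} = \sqrt{2}</math>, and both bounds hold:

: <math>\sigma = 1 \le \omega = \sqrt{2} \approx 1.414 \le 2\sigma = 2,</math>

: <math>|\nu - \mu| = 1 - \ln 2 \approx 0.307 \le \sqrt{\tfrac{3}{4}}\,\omega = \sqrt{\tfrac{3}{2}} \approx 1.225.</math>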
It can be shown for a unimodal distribution that the median ''ν'' and the mean ''μ'' lie within (3/5)<sup>1/2</sup> ≈ 0.7746 [[standard deviation]]s of each other.<ref name="unimodal">{{cite journal | url=http://epubs.siam.org/doi/pdf/10.1137/S0040585X97975447 | doi=10.1137/S0040585X97975447 | title=The Mean, Median, and Mode of Unimodal Distributions: A Characterization | year=1997 | last1=Basu | first1=S. | last2=Dasgupta | first2=A. | journal=Theory of Probability & Its Applications | volume=41 | issue=2 | pages=210–223 }}</ref> In symbols,

: <math>\frac{|\nu - \mu|}{\sigma} \le \sqrt{\frac{3}{5}}</math>

where | . | is the [[absolute value]].

In 2020, Bernard, Kazzi, and Vanduffel generalized the previous inequality by deriving the maximum distance between the symmetric quantile average <math>\frac{ q_\alpha + q_{(1-\alpha)} }{ 2 } </math> and the mean,<ref name="unimodalbounds">{{cite journal | doi=10.1016/j.insmatheco.2020.05.013 | title=Range Value-at-Risk bounds for unimodal distributions under partial information | year=2020 | last1=Bernard | first1=Carole | last2=Kazzi | first2=Rodrigue | last3=Vanduffel | first3=Steven | journal=Insurance: Mathematics and Economics | volume=94 | pages=9–24 | doi-access=free }}</ref>

: <math>\frac{ \left| \frac{ q_\alpha + q_{(1-\alpha)} }{2} - \mu \right| }{ \sigma } \le \left\{ \begin{array}{cl} \frac{\sqrt{\frac{4}{9(1-\alpha)}-1} + \sqrt{\frac{1-\alpha}{1/3+\alpha}}}{2} & \text{for } \alpha \in \left[\frac{5}{6},1\right)\!, \\ \frac{\sqrt{\frac{3\alpha}{4-3\alpha}} + \sqrt{\frac{1-\alpha}{1/3+\alpha}}}{2} & \text{for } \alpha \in \left(\frac{1}{6},\frac{5}{6}\right)\!, \\ \frac{\sqrt{\frac{3\alpha}{4-3\alpha}} + \sqrt{\frac{4}{9\alpha}-1}}{2} & \text{for } \alpha \in \left(0,\frac{1}{6}\right]\!. \end{array} \right.</math>

The maximum distance is minimized at <math>\alpha=0.5</math> (i.e., when the symmetric quantile average is equal to <math>q_{0.5} = \nu</math>), which indeed motivates the common choice of the median as a robust estimator for the mean. Moreover, when <math>\alpha = 0.5</math>, the bound is equal to <math>\sqrt{3/5}</math>, which is the maximum distance between the median and the mean of a unimodal distribution.

A similar relation holds between the median and the mode ''θ'': they lie within 3<sup>1/2</sup> ≈ 1.732 standard deviations of each other:

: <math>\frac{|\nu - \theta|}{\sigma} \le \sqrt{3}.</math>

It can also be shown that the mean and the mode lie within 3<sup>1/2</sup> standard deviations of each other:

: <math>\frac{|\mu - \theta|}{\sigma} \le \sqrt{3}.</math>
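Continuing the illustrative exponential example above, the mode is ''θ'' = 0 and all three bounds hold:

: <math>\frac{|\nu - \mu|}{\sigma} = 1 - \ln 2 \approx 0.307 \le \sqrt{\tfrac{3}{5}} \approx 0.775, \qquad \frac{|\nu - \theta|}{\sigma} = \ln 2 \approx 0.693 \le \sqrt{3}, \qquad \frac{|\mu - \theta|}{\sigma} = 1 \le \sqrt{3}.</math>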
====Skewness and kurtosis====
Rohatgi and Székely claimed that the [[skewness]] and [[kurtosis]] of a unimodal distribution are related by the inequality:<ref name=Rohatgi1989>{{cite journal | doi=10.1016/0167-7152(89)90035-7 | title=Sharp inequalities between skewness and kurtosis | year=1989 | last1=Rohatgi | first1=Vijay K. | last2=Székely | first2=Gábor J. | journal=Statistics & Probability Letters | volume=8 | issue=4 | pages=297–299 }}</ref>

: <math> \gamma^2 - \kappa \le \frac{ 6 }{ 5 } = 1.2 </math>

where ''κ'' is the kurtosis and ''γ'' is the skewness. Klaassen, Mokveld, and van Es showed that this only applies in certain settings, such as the set of unimodal distributions where the mode and mean coincide.<ref name=Klaassen2000>{{cite journal | doi=10.1016/S0167-7152(00)00090-0 | title=Squared skewness minus kurtosis bounded by 186/125 for unimodal distributions | year=2000 | last1=Klaassen | first1=Chris A.J. | last2=Mokveld | first2=Philip J. | last3=Van Es | first3=Bert | journal=Statistics & Probability Letters | volume=50 | issue=2 | pages=131–135 }}</ref> They derived a weaker inequality which applies to all unimodal distributions:<ref name=Klaassen2000 />

: <math> \gamma^2 - \kappa \le \frac{ 186 }{ 125 } = 1.488 </math>

This bound is sharp, as it is reached by the equal-weights mixture of the uniform distribution on [0,1] and the discrete distribution at {0}.

==Unimodal function==
As the term "modal" applies to data sets and probability distributions, and not in general to [[function (mathematics)|functions]], the definitions above do not apply. The definition of "unimodal" has been extended to functions of [[real number]]s as well.

A common definition is as follows: a function ''f''(''x'') is a '''unimodal function''' if for some value ''m'', it is [[monotonic]]ally increasing for ''x'' ≤ ''m'' and monotonically decreasing for ''x'' ≥ ''m''. In that case, the [[maximum]] value of ''f''(''x'') is ''f''(''m'') and there are no other local maxima.

Proving unimodality is often hard. One way consists in using the definition directly, but this turns out to be suitable only for simple functions. A general method based on [[derivative]]s exists,<ref>{{cite web|url=http://homepage.univie.ac.at/thibaut.barthelemy/METRIC.pdf|title=On the unimodality of METRIC Approximation subject to normally distributed demands.|work=Method in appendix D, Example in theorem 2 page 5|access-date=2013-08-28}}</ref> but despite its simplicity it does not succeed for every function. Examples of unimodal functions include [[quadratic polynomial]] functions with a negative quadratic coefficient, [[tent map]] functions, and more.

The above is sometimes referred to as '''{{visible anchor|strong unimodality}}''', from the fact that the monotonicity implied is ''strong monotonicity''. A function ''f''(''x'') is a '''weakly unimodal function''' if there exists a value ''m'' for which it is weakly monotonically increasing for ''x'' ≤ ''m'' and weakly monotonically decreasing for ''x'' ≥ ''m''. In that case, the maximum value ''f''(''m'') can be reached for a continuous range of values of ''x''. An example of a weakly unimodal function which is not strongly unimodal is every other row in [[Pascal's triangle]].

Depending on context, a unimodal function may also refer to a function that has only one local minimum, rather than maximum.<ref>{{cite web|url=https://glossary.informs.org/indexVer1.php?page=U.html|title=Mathematical Programming Glossary.|access-date=2020-03-29}}</ref> For example, [[local unimodal sampling]], a method for doing numerical optimization, is often demonstrated with such a function. It can be said that a unimodal function under this extension is a function with a single local [[extremum]].

One important property of unimodal functions is that the extremum can be found using [[search algorithm]]s such as [[golden section search]], [[ternary search]] or [[successive parabolic interpolation]].<ref>{{Cite book |last1=Demaine |first1=Erik D. |last2=Langerman |first2=Stefan |title=Algorithms – ESA 2005 |chapter=Optimizing a 2D Function Satisfying Unimodality Properties |series=Lecture Notes in Computer Science |date=2005 |volume=3669 |editor-last=Brodal |editor-first=Gerth Stølting |editor2-last=Leonardi |editor2-first=Stefano |chapter-url=https://link.springer.com/chapter/10.1007/11561071_78 |language=en |location=Berlin, Heidelberg |publisher=Springer |pages=887–898 |doi=10.1007/11561071_78 |isbn=978-3-540-31951-1}}</ref>
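As an illustration of one of these methods, a minimal Python sketch of ternary search for a maximum might look as follows (the function maximized here is an arbitrary example, not drawn from the cited sources):

<syntaxhighlight lang="python">
def ternary_search_max(f, lo, hi, tol=1e-9):
    """Approximate the maximizer of a function f that is unimodal on [lo, hi]
    by repeatedly discarding the third of the interval that cannot contain
    the maximum."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1   # the maximum lies to the right of m1
        else:
            hi = m2   # the maximum lies to the left of m2
    return (lo + hi) / 2

# Illustrative use: a concave quadratic, unimodal with its maximum at x = 2.
print(ternary_search_max(lambda x: -(x - 2) ** 2, 0.0, 5.0))   # ≈ 2.0
</syntaxhighlight>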
==Other extensions==
A function ''f''(''x'') is "S-unimodal" (often referred to as an "S-unimodal map") if its [[Schwarzian derivative]] is negative for all <math>x \ne c</math>, where <math>c</math> is the critical point.<ref>See e.g. {{cite journal|title=Distortion of S-Unimodal Maps|author1=John Guckenheimer |author2=Stewart Johnson |journal=Annals of Mathematics |series=Second Series|volume=132|number=1|date=July 1990|pages=71–130|doi=10.2307/1971501|jstor=1971501 }}</ref>

In [[computational geometry]], if a function is unimodal, this permits the design of efficient algorithms for finding its extrema.<ref>{{cite journal|author=Godfried T. Toussaint|title=Complexity, convexity, and unimodality|journal=International Journal of Computer and Information Sciences|volume=13|number=3|date=June 1984|pages=197–217|doi=10.1007/bf00979872|s2cid=11577312 }}</ref>

A more general definition, applicable to a function ''f''(''X'') of a vector variable ''X'', is that ''f'' is unimodal if there is a [[one-to-one function|one-to-one]] [[differentiable]] mapping ''X'' = ''G''(''Z'') such that ''f''(''G''(''Z'')) is convex. Usually one would want ''G''(''Z'') to be [[continuously differentiable]] with a nonsingular Jacobian matrix.

[[Quasiconvex function]]s and quasiconcave functions extend the concept of unimodality to functions whose arguments belong to higher-dimensional [[Euclidean space]]s.

==See also==
*[[Bimodal distribution]]
*[[Read's conjecture]]

==References==
{{Reflist|2}}

[[Category:Functions and mappings]]
[[Category:Mathematical relations]]
[[Category:Theory of probability distributions]]