==Mathematical definitions==
[[File:Cumulative vs normal histogram.svg|thumb|350px|An ordinary and a cumulative histogram of the same data. The data shown is a random sample of 10,000 points from a normal distribution with a mean of 0 and a standard deviation of 1.]]

The data used to construct a histogram are generated via a function ''m''<sub>''i''</sub> that counts the number of observations that fall into each of the disjoint categories (known as ''bins''). Thus, if we let ''n'' be the total number of observations and ''k'' be the total number of bins, the histogram data ''m''<sub>''i''</sub> meet the following condition:

: <math>n = \sum_{i=1}^k{m_i}.</math>

A histogram can be thought of as a simplistic [[kernel density estimation]], which uses a [[kernel (statistics)|kernel]] to smooth frequencies over the bins. This yields a [[smooth function|smoother]] probability density function, which will in general more accurately reflect the distribution of the underlying variable. The density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes. Histograms are nevertheless preferred in applications when their statistical properties need to be modeled: the correlated variation of a kernel density estimate is very difficult to describe mathematically, while it is simple for a histogram, where each bin varies independently. An alternative to kernel density estimation is the average shifted histogram,<ref>{{cite journal |author=David W. Scott |date=December 2009 |title=Averaged shifted histogram |url=https://www.researchgate.net/publication/229760716 |journal=Wiley Interdisciplinary Reviews: Computational Statistics |volume=2 |issue=2 |pages=160–164 |doi=10.1002/wics.54 |s2cid=122986682 }}</ref> which is fast to compute and gives a smooth curve estimate of the density without using kernels.

===Cumulative histogram===
A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram ''M''<sub>''i''</sub> of a histogram ''m''<sub>''j''</sub> can be defined as:

: <math>M_i = \sum_{j=1}^i{m_j}.</math>
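For illustration, the counting function ''m''<sub>''i''</sub> and the cumulative histogram ''M''<sub>''i''</sub> can be sketched in a few lines of Python (assuming NumPy; the sample, seed, and bin count here are illustrative choices, not prescribed by the definitions above):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=10_000)  # a sample like the one in the figure

k = 20                              # an arbitrary number of bins
m, edges = np.histogram(x, bins=k)  # m[i] counts the observations in bin i
assert m.sum() == x.size            # n = sum_i m_i
M = np.cumsum(m)                    # cumulative histogram: M_i = sum_{j<=i} m_j
</syntaxhighlight>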
===Number of bins and width===
There is no "best" number of bins, and different bin sizes can reveal different features of the data. Grouping data is at least as old as [[John Graunt|Graunt]]'s work in the 17th century, but no systematic guidelines were given<ref name=scott92/> until [[Herbert Sturges|Sturges]]'s work in 1926.<ref name=sturges/>

Using wider bins where the density of the underlying data points is low reduces noise due to sampling randomness; using narrower bins where the density is high (so the signal drowns the noise) gives greater precision to the density estimation. Thus varying the bin width within a histogram can be beneficial. Nonetheless, equal-width bins are widely used.

Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. Depending on the actual data distribution and the goals of the analysis, different bin widths may be appropriate, so experimentation is usually needed to determine an appropriate width. There are, however, various useful guidelines and rules of thumb.<ref>''e.g.'' § 5.6 "Density Estimation", W. N. Venables and B. D. Ripley, ''Modern Applied Statistics with S'' (2002), Springer, 4th edition. {{ISBN|0-387-95457-0}}.</ref>

[[File:Untitled document (1).jpg|thumb|333x333px|Histogram data represented with different bin widths]]

The number of bins ''k'' can be assigned directly or can be calculated from a suggested bin width ''h'' as:

:<math>k = \left \lceil \frac{\max x - \min x}{h} \right \rceil.</math>

The brackets indicate the [[Floor and ceiling functions|ceiling function]].

==== Square-root choice ====
:<math>k = \lceil \sqrt{n} \rceil \, </math>

This rule takes the square root of the number of data points in the sample and rounds up to the next [[integer]]. It is suggested by a number of elementary statistics textbooks<ref>{{Cite web |last=Lohaka |first=H.O. |year=2007 |url=https://etd.ohiolink.edu/acprod/odb_etd/ws/send_file/send?accession=ohiou1194981215 |title=Making a grouped-data frequency table: development and examination of the iteration algorithm |publisher=Doctoral dissertation, Ohio University |page=87}}</ref> and widely implemented in many software packages.<ref>{{cite web |url=https://www.mathworks.com/help/matlab/ref/matlab.graphics.chart.primitive.histogram.html |title=MathWorks: Histogram}}</ref>

==== Sturges's formula ====
[[Sturges's rule]]<ref name=sturges>{{cite journal |last=Sturges |first=H. A. |year=1926 |title=The choice of a class interval |journal=Journal of the American Statistical Association |volume=21 |issue=153 |pages=65–66 |jstor=2965501 |doi=10.1080/01621459.1926.10502161 }}</ref> is derived from a [[binomial distribution]] and implicitly assumes an approximately normal distribution.

:<math>k = \lceil \log_2 n \rceil + 1 \, </math>

Sturges's formula implicitly bases bin sizes on the range of the data, and can perform poorly if {{math|''n'' < 30}}, because the number of bins will be small—less than seven—and unlikely to show trends in the data well. On the other extreme, Sturges's formula may overestimate bin width for very large datasets, resulting in oversmoothed histograms.<ref name="Scott2009WIRE">{{cite journal|last=Scott|first=David W.|title=Sturges' rule|journal=WIREs Computational Statistics|volume=1|issue=3|year=2009|pages=303–306 |doi=10.1002/wics.35|s2cid=197483064 }}</ref> It may also perform poorly if the data are not normally distributed. When compared to Scott's rule and the Terrell–Scott rule, two other widely accepted formulas for histogram bins, the output of Sturges's formula is closest when {{math|''n'' ≈ 100}}.<ref name="Scott2009WIRE"/>

==== Rice rule ====
:<math>k = \lceil 2 \sqrt[3]{n}\rceil</math>

The [[Scott's rule#Terrell–Scott rule|Rice rule]]<ref>Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University (chapter 2 "Graphing Distributions", section "Histograms")</ref> is presented as a simple alternative to Sturges's rule.

==== Doane's formula ====
[[Sturges's rule#Doane's formula|Doane's formula]]<ref name=Doane1976>Doane DP (1976) Aesthetic frequency classification. American Statistician, 30: 181–183</ref> is a modification of Sturges's formula which attempts to improve its performance with non-normal data.

:<math> k = 1 + \log_2( n ) + \log_2 \left( 1 + \frac { |g_1| }{\sigma_{g_1}} \right) </math>

where <math>g_1</math> is the estimated 3rd-moment [[skewness]] of the distribution and

:<math> \sigma_{g_1} = \sqrt { \frac { 6(n-2) }{ (n+1)(n+3) } }. </math>
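The four count-based rules above translate directly into code. The following sketch assumes NumPy and SciPy (SciPy's <code>skew</code> supplies the 3rd-moment skewness estimate <math>g_1</math> used by Doane's formula); the function names are illustrative, and each result is rounded up to an integer bin count:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import skew

def bins_sqrt(x):
    return int(np.ceil(np.sqrt(len(x))))        # square-root choice

def bins_sturges(x):
    return int(np.ceil(np.log2(len(x)))) + 1    # Sturges's formula

def bins_rice(x):
    return int(np.ceil(2 * len(x) ** (1 / 3)))  # Rice rule

def bins_doane(x):
    n = len(x)
    g1 = skew(x)                                # estimated 3rd-moment skewness
    sigma_g1 = np.sqrt(6 * (n - 2) / ((n + 1) * (n + 3)))
    return int(np.ceil(1 + np.log2(n) + np.log2(1 + abs(g1) / sigma_g1)))
</syntaxhighlight>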
==== Scott's normal reference rule ====
Bin width <math>h</math> is given by

:<math>h = \frac{3.49 \hat \sigma}{\sqrt[3]{n}},</math>

where <math>\hat \sigma</math> is the sample [[standard deviation]]. [[Scott's Rule|Scott's normal reference rule]]<ref name=scott79>{{cite journal |last=Scott |first=David W. |year=1979 |title=On optimal and data-based histograms |journal=Biometrika |volume=66 |issue=3 |pages=605–610 |doi=10.1093/biomet/66.3.605}}</ref> is optimal for random samples of normally distributed data, in the sense that it minimizes the integrated mean squared error of the density estimate.<ref name=scott92>{{cite book|last=Scott|first=David W.|title=Multivariate Density Estimation: Theory, Practice, and Visualization|publisher=John Wiley|location=New York|year=1992}}</ref> This is the default rule used in Microsoft Excel.<ref>{{Cite web| url=https://support.microsoft.com/en-gb/office/create-a-histogram-85680173-064b-4024-b39d-80f17ff2f4e8#bkmk_scottrefrule| title=Excel:Create a histogram}}</ref>

==== Terrell–Scott rule ====
:<math>k = \sqrt[3]{2n}</math>

The [[Scott's rule#Terrell–Scott rule|Terrell–Scott rule]]<ref name="Scott2009WIRE"/><ref>Terrell, G.R. and Scott, D.W., 1985. Oversmoothed nonparametric density estimates. ''Journal of the American Statistical Association'', 80(389), pp. 209–214.</ref> is not a normal reference rule. It gives the minimum number of bins required for an asymptotically optimal histogram, where optimality is measured by the integrated mean squared error. The bound is derived by finding the "smoothest" possible density, which turns out to be <math>\frac 3 4 (1-x^2)</math>; any other density requires more bins, so the above estimate is also referred to as the "oversmoothed" rule. The similarity of the formulas, and the fact that Terrell and Scott were at Rice University when they proposed it, suggests that this is also the origin of the Rice rule.

==== Freedman–Diaconis rule ====
The [[Freedman–Diaconis rule]] gives bin width <math>h</math> as:<ref>{{cite journal |last=Freedman |first=David |author2=Diaconis, P. |year=1981 |title=On the histogram as a density estimator: ''L''<sub>2</sub> theory |journal=Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete |volume=57 |issue=4 |pages=453–476 |doi=10.1007/BF01025868 |url=http://bayes.wustl.edu/Manual/FreedmanDiaconis1_1981.pdf |citeseerx=10.1.1.650.2473 |s2cid=14437088 }}</ref><ref name=scott92/>

:<math>h = 2\frac{\operatorname{IQR}(x)}{\sqrt[3]{n}},</math>

which is based on the [[interquartile range]], denoted by IQR. It replaces the 3.5σ of Scott's rule with 2 IQR, which is less sensitive than the standard deviation to outliers in the data.
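The width-based rules combine with the ceiling formula for <math>k</math> given earlier. A minimal sketch, again assuming NumPy (function names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def bins_from_width(x, h):
    return int(np.ceil((x.max() - x.min()) / h))  # k = ceil((max x - min x) / h)

def h_scott(x):
    sigma = x.std(ddof=1)                         # sample standard deviation
    return 3.49 * sigma / len(x) ** (1 / 3)       # Scott's normal reference rule

def h_freedman_diaconis(x):
    q75, q25 = np.percentile(x, [75, 25])         # interquartile range
    return 2 * (q75 - q25) / len(x) ** (1 / 3)    # 2 * IQR / n^(1/3)

# e.g. k = bins_from_width(x, h_scott(x))
</syntaxhighlight>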
==== Minimizing cross-validation estimated squared error ====
This approach of minimizing integrated mean squared error from Scott's rule can be generalized beyond normal distributions by using leave-one-out cross-validation:<ref>{{Cite book|title=All of Statistics|last=Wasserman|first=Larry|publisher=Springer|year=2004|isbn=978-1-4419-2322-6|location=New York|pages=310}}</ref><ref>{{cite conference|url=http://digitalassets.lib.berkeley.edu/sdtr/ucb/text/34.pdf |title=An asymptotically optimal histogram selection rule |first=Charles J. |last=Stone |date=1984 |book-title=Proceedings of the Berkeley conference in honor of Jerzy Neyman and Jack Kiefer }}</ref>

:<math>\underset{h}{\operatorname{arg\,min}} \hat{J}(h) = \underset{h}{\operatorname{arg\,min}} \left( \frac{2}{(n-1)h} - \frac{n+1}{n^2(n-1)h} \sum_k N_k^2 \right)</math>

Here, <math>N_k</math> is the number of data points in the ''k''th bin, and choosing the value of ''h'' that minimizes <math>\hat{J}</math> will minimize integrated mean squared error.

==== Shimazaki and Shinomoto's choice ====
The choice is based on minimization of an estimated ''L''<sup>2</sup> [[risk function]]:<ref>{{cite journal |last=Shimazaki |first=H. |author2=Shinomoto, S. |title=A method for selecting the bin size of a time histogram |journal=Neural Computation |volume=19 |issue=6 |pages=1503–1527 |year=2007 |pmid=17444758 |doi=10.1162/neco.2007.19.6.1503 |citeseerx=10.1.1.304.6404 |s2cid=7781236 }}</ref>

:<math> \underset{h}{\operatorname{arg\,min}} \frac{ 2 \bar{m} - v } {h^2} </math>

where <math>\textstyle \bar{m}</math> and <math>\textstyle v</math> are the mean and biased variance of a histogram with bin width <math>\textstyle h</math>, that is, <math>\textstyle \bar{m}=\frac{1}{k} \sum_{i=1}^{k} m_i</math> and <math>\textstyle v= \frac{1}{k} \sum_{i=1}^{k} (m_i - \bar{m})^2 </math>.

==== Variable bin widths ====
Rather than choosing evenly spaced bins, for some applications it is preferable to vary the bin width, which avoids bins with low counts. A common case is to choose ''equiprobable bins'', where the number of samples in each bin is expected to be approximately equal. The bins may be chosen according to some known distribution or may be chosen based on the data so that each bin has <math>\approx n/k</math> samples. When plotting the histogram, the ''frequency density'' is used for the dependent axis. While all bins have approximately equal area, the heights of the histogram approximate the density distribution.

For equiprobable bins, the following rule for the number of bins is suggested:<ref>{{cite web |author1=Jack Prins |author2=Don McCormack |author3=Di Michelson |author4=Karen Horrell |title=Chi-square goodness-of-fit test |url=https://itl.nist.gov/div898/handbook/prc/section2/prc211.htm |website=NIST/SEMATECH e-Handbook of Statistical Methods |publisher=NIST/SEMATECH |access-date=29 March 2019 |page=7.2.1.1}}</ref>

:<math>k = 2 n^{2/5}</math>

This choice of bins is motivated by maximizing the power of a [[Pearson chi-squared test]] of whether the bins contain equal numbers of samples. More specifically, for a given confidence interval <math>\alpha</math> it is recommended to choose between 1/2 and 1 times the following equation:<ref>{{cite book |last1=Moore |first1=David |editor1-last=D'Agostino |editor1-first=Ralph |editor2-last=Stephens |editor2-first=Michael |title=Goodness-of-Fit Techniques |date=1986 |publisher=Marcel Dekker Inc. |location=New York, NY, US |isbn=0-8247-7487-6 |page=70 |chapter=3}}</ref>

:<math>k = 4 \left( \frac{2 n^2}{\Phi^{-1}(\alpha)} \right)^\frac{1}{5}</math>

where <math>\Phi^{-1}</math> is the [[probit]] function. Following this rule for <math>\alpha = 0.05</math> would give between <math>1.88n^{2/5}</math> and <math>3.77n^{2/5}</math> bins; the coefficient of 2 is chosen as an easy-to-remember value from this broad optimum.
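Equiprobable bins can be constructed by placing bin edges at sample quantiles, so that each bin is expected to hold roughly <math>n/k</math> observations. A sketch assuming NumPy (the sample is illustrative; <code>density=True</code> yields the frequency-density heights described above):

<syntaxhighlight lang="python">
import numpy as np

x = np.random.default_rng(1).normal(size=10_000)
k = int(2 * len(x) ** (2 / 5))                    # k = 2 n^(2/5), the rule above

edges = np.quantile(x, np.linspace(0, 1, k + 1))  # equiprobable bin edges
m, _ = np.histogram(x, bins=edges)                # counts: each is roughly n/k
density, _ = np.histogram(x, bins=edges, density=True)  # frequency-density heights
</syntaxhighlight>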
==== Remark ====
[[File:Gumbel distribtion.png|thumb|300px|Histogram and density function for a [[Gumbel distribution]]<ref>[https://www.waterlog.info/cumfreq.htm A calculator for probability distributions and density functions]</ref>]]
A good reason why the number of bins should be proportional to <math>\sqrt[3]{n}</math> is the following: suppose that the data are obtained as <math>n</math> independent realizations of a bounded probability distribution with smooth density. Then the histogram remains equally "rugged" as <math>n</math> tends to infinity. If <math>s</math> is the "width" of the distribution (e.g., the standard deviation or the inter-quartile range), then the number of units in a bin (the frequency) is of order <math>nh/s</math> and the ''relative'' standard error is of order <math>\sqrt{s/(nh)}</math>. Compared to the next bin, the relative change of the frequency is of order <math>h/s</math>, provided that the derivative of the density is non-zero. These two are of the same order if <math>h</math> is of order <math>s/\sqrt[3]{n}</math>, so that <math>k</math> is of order <math>\sqrt[3]{n}</math>. This simple cube-root choice can also be applied to bins with non-constant widths.{{cn|date=May 2024}}
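The two orders of magnitude in this argument can be checked numerically: with <math>h = s/\sqrt[3]{n}</math>, both the relative standard error <math>\sqrt{s/(nh)}</math> and the bin-to-bin relative change <math>h/s</math> reduce to <math>n^{-1/3}</math>. A trivial Python check, with the "width" <math>s</math> set to 1 purely for illustration:

<syntaxhighlight lang="python">
for n in (10**3, 10**4, 10**5):
    s = 1.0                            # "width" of the distribution (illustrative)
    h = s / n ** (1 / 3)               # bin width of order s / n^(1/3)
    rel_error = (s / (n * h)) ** 0.5   # relative standard error of a bin count
    rel_change = h / s                 # relative change between adjacent bins
    print(n, rel_error, rel_change)    # both columns shrink like n^(-1/3)
</syntaxhighlight>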