{{Short description|Important algorithms in numerical statistics}}
{{Use dmy dates|date=July 2020}}
'''Algorithms for calculating variance''' play a major role in [[computational statistics]]. A key difficulty in the design of good [[algorithm]]s for this problem is that formulas for the [[variance]] may involve sums of squares, which can lead to [[numerical instability]] as well as to [[arithmetic overflow]] when dealing with large values.

==Naïve algorithm==
A formula for calculating the variance of an entire [[statistical population|population]] of size ''N'' is:

:<math>\sigma^2 = \overline{(x^2)} - \bar x^2 = \frac{\sum_{i=1}^N x_i^2}{N} - \left(\frac{\sum_{i=1}^N x_i}{N}\right)^2</math>

Using [[Bessel's correction]] to calculate an [[estimator bias|unbiased]] estimate of the population variance from a finite [[statistical sample|sample]] of ''n'' observations, the formula is:

:<math>s^2 = \left(\frac {\sum_{i=1}^n x_i^2} n - \left( \frac {\sum_{i=1}^n x_i} n \right)^2\right) \cdot \frac {n}{n-1}. </math>

Therefore, a naïve algorithm to calculate the estimated variance is given by the following:

<div style="margin-left: 35px; width: 600px">
{{framebox|blue}}
* Let {{math|''n'' ← 0, Sum ← 0, SumSq ← 0}}
* For each datum {{mvar|x}}:
** {{math|''n'' ← ''n'' + 1}}
** {{math|Sum ← Sum + ''x''}}
** {{math|SumSq ← SumSq + ''x'' × ''x''}}
* {{math|Var {{=}} (SumSq − (Sum × Sum) / n) / (n − 1)}}
{{frame-footer}}
</div>

This algorithm can easily be adapted to compute the variance of a finite population: simply divide by ''n'' instead of ''n'' − 1 on the last line.

Because {{math|SumSq}} and {{math|(Sum×Sum)/''n''}} can be very similar numbers, [[Catastrophic cancellation|cancellation]] can cause the [[precision (arithmetic)|precision]] of the result to be much less than the inherent precision of the [[floating-point arithmetic]] used to perform the computation. Thus this algorithm should not be used in practice,<ref name="Einarsson2005">{{cite book|first=Bo|last=Einarsson|title=Accuracy and Reliability in Scientific Computing|url=https://books.google.com/books?id=8hrDV5EbrEsC|year=2005|publisher=SIAM|isbn=978-0-89871-584-2|page=47}}</ref><ref name="Chan1983">{{cite journal|url=http://cpsc.yale.edu/sites/default/files/files/tr222.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://cpsc.yale.edu/sites/default/files/files/tr222.pdf |archive-date=2022-10-09 |url-status=live|first1=Tony F.|last1=Chan|author1-link=Tony F. Chan|first2=Gene H.|last2=Golub|author2-link=Gene H. Golub|first3=Randall J.|last3=LeVeque|title=Algorithms for computing the sample variance: Analysis and recommendations|journal=The American Statistician|volume=37|issue=3|pages=242–247|year=1983|jstor=2683386|doi=10.1080/00031305.1983.10483115}}</ref> and several alternative, numerically stable algorithms have been proposed.<ref name=":1">{{Cite book|last1=Schubert|first1=Erich|last2=Gertz|first2=Michael|date=2018-07-09|title=Numerically stable parallel computation of (co-)variance|url=http://dl.acm.org/citation.cfm?id=3221269.3223036|publisher=ACM|pages=10|doi=10.1145/3221269.3223036|isbn=9781450365055|s2cid=49665540}}</ref> The loss of precision is particularly severe if the standard deviation is small relative to the mean.

===Computing shifted data===
The variance is [[Invariant (mathematics)|invariant]] with respect to changes in a [[location parameter]], a property which can be used to avoid the catastrophic cancellation in this formula.
:<math>\operatorname{Var}(X-K)=\operatorname{Var}(X),</math> with <math>K</math> any constant, which leads to the new formula

:<math>\sigma^2 = \frac {\sum_{i=1}^n (x_i-K)^2 - (\sum_{i=1}^n (x_i-K))^2/n}{n-1}. </math>

The closer <math>K</math> is to the mean value, the more accurate the result will be, but just choosing a value inside the range of the samples will guarantee the desired stability. If the values <math>(x_i - K)</math> are small, then there are no problems with the sum of their squares; on the contrary, if they are large, it necessarily means that the variance is large as well. In any case, the second term in the formula is always smaller than the first one, therefore no catastrophic cancellation can occur.<ref name="Chan1983" />

If just the first sample is taken as <math>K</math>, the algorithm can be written in the [[Python (programming language)|Python programming language]] as

<syntaxhighlight lang="python">
def shifted_data_variance(data):
    if len(data) < 2:
        return 0.0
    K = data[0]
    n = Ex = Ex2 = 0.0
    for x in data:
        n += 1
        Ex += x - K
        Ex2 += (x - K) ** 2
    variance = (Ex2 - Ex**2 / n) / (n - 1)
    # use n instead of (n-1) if you want to compute the exact variance of the given data
    # use (n-1) if data are samples of a larger population
    return variance
</syntaxhighlight>

This formula also facilitates incremental computation, which can be expressed as

<syntaxhighlight lang="python">
K = Ex = Ex2 = 0.0
n = 0


def add_variable(x):
    global K, n, Ex, Ex2
    if n == 0:
        K = x
    n += 1
    Ex += x - K
    Ex2 += (x - K) ** 2


def remove_variable(x):
    global K, n, Ex, Ex2
    n -= 1
    Ex -= x - K
    Ex2 -= (x - K) ** 2


def get_mean():
    global K, n, Ex
    return K + Ex / n


def get_variance():
    global n, Ex, Ex2
    return (Ex2 - Ex**2 / n) / (n - 1)
</syntaxhighlight>

==Two-pass algorithm==
An alternative approach, using a different formula for the variance, first computes the sample mean,

:<math>\bar x = \frac {\sum_{j=1}^n x_j} n,</math>

and then computes the sum of the squares of the differences from the mean,

:<math>\text{sample variance} = s^2 = \dfrac {\sum_{i=1}^n (x_i - \bar x)^2}{n-1}, </math>

where ''s'' is the sample standard deviation. This is given by the following code:

<syntaxhighlight lang="python">
def two_pass_variance(data):
    n = len(data)
    mean = sum(data) / n
    variance = sum((x - mean) ** 2 for x in data) / (n - 1)
    return variance
</syntaxhighlight>

This algorithm is numerically stable if ''n'' is small.<ref name="Einarsson2005"/><ref>{{cite book |last=Higham |first=Nicholas J. |url=https://epubs.siam.org/doi/book/10.1137/1.9780898718027 |title=Accuracy and Stability of Numerical Algorithms |publisher=Society for Industrial and Applied Mathematics |year=2002 |edition=2nd |publication-place=Philadelphia, PA |chapter=Problem 1.10 |doi= 10.1137/1.9780898718027|isbn=978-0-898715-21-7 |id=e{{ISBN|978-0-89871-802-7}}, 2002075848 }} Metadata also listed at [https://dl.acm.org/doi/10.5555/579525 ACM Digital Library].</ref> However, the results of both of these simple algorithms ("naïve" and "two-pass") can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as [[compensated summation]] can be used to combat this error to a degree.
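As an illustration, the following is a minimal sketch (not one of the cited reference implementations) of [[Kahan summation algorithm|Kahan compensated summation]] applied to both passes of the two-pass algorithm above; the helper names <code>kahan_sum</code> and <code>two_pass_variance_compensated</code> are illustrative.

<syntaxhighlight lang="python">
def kahan_sum(values):
    # Compensated summation: keep a running correction term that recovers
    # the low-order bits lost when a small term is added to a large total.
    total = 0.0
    compensation = 0.0
    for value in values:
        y = value - compensation
        t = total + y
        compensation = (t - total) - y
        total = t
    return total


def two_pass_variance_compensated(data):
    # Same structure as two_pass_variance above, but each sum is
    # accumulated with compensated summation.
    n = len(data)
    mean = kahan_sum(data) / n
    return kahan_sum((x - mean) ** 2 for x in data) / (n - 1)
</syntaxhighlight>

This reduces the roundoff accumulated in each sum, but it does not change the fact that the two-pass approach needs the whole data set to be stored or read twice.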
==Welford's online algorithm==
It is often useful to be able to compute the variance in a [[One-pass algorithm|single pass]], inspecting each value <math>x_i</math> only once; for example, when the data is being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an [[online algorithm]], a [[recurrence relation]] is required between quantities from which the required statistics can be calculated in a numerically stable fashion.

The following formulas can be used to update the [[mean]] and (estimated) variance of the sequence, for an additional element ''x''<sub>''n''</sub>. Here, <math display="inline">\overline{x}_n = \frac{1}{n} \sum_{i=1}^n x_i </math> denotes the sample mean of the first ''n'' samples <math>(x_1,\dots,x_n)</math>, <math display="inline">\sigma^2_n = \frac{1}{n} \sum_{i=1}^n \left(x_i - \overline{x}_n \right)^2</math> their [[Variance#Biased_sample_variance|biased sample variance]], and <math display="inline">s^2_n = \frac{1}{n - 1} \sum_{i=1}^n \left(x_i - \overline{x}_n \right)^2</math> their [[Variance#Unbiased_sample_variance|unbiased sample variance]].

:<math>\bar x_n = \frac{(n-1) \, \bar x_{n-1} + x_n}{n} = \bar x_{n-1} + \frac{x_n - \bar x_{n-1}}{n} </math>

:<math>\sigma^2_n = \frac{(n-1) \, \sigma^2_{n-1} + (x_n - \bar x_{n-1})(x_n - \bar x_n)}{n} = \sigma^2_{n-1} + \frac{(x_n - \bar x_{n-1})(x_n - \bar x_n) - \sigma^2_{n-1}}{n}.</math>

:<math>s^2_n = \frac{n-2}{n-1} \, s^2_{n-1} + \frac{(x_n - \bar x_{n-1})^2}{n} = s^2_{n-1} + \frac{(x_n - \bar x_{n-1})^2}{n} - \frac{s^2_{n-1}}{n-1}, \quad n>1 </math>

These formulas suffer from numerical instability {{cn|date=March 2023}}, as they repeatedly subtract a small number from a big number which scales with ''n''. A better quantity for updating is the sum of squares of differences from the current mean, <math display="inline">\sum_{i=1}^n (x_i - \bar x_n)^2</math>, here denoted <math>M_{2,n}</math>:

: <math>\begin{align}
M_{2,n} & = M_{2,n-1} + (x_n - \bar x_{n-1})(x_n - \bar x_n) \\[4pt]
\sigma^2_n & = \frac{M_{2,n}}{n} \\[4pt]
s^2_n & = \frac{M_{2,n}}{n-1}
\end{align}</math>

This algorithm was found by Welford,<ref>{{cite journal |first=B. P. |last=Welford |year=1962 |title=Note on a method for calculating corrected sums of squares and products |journal=[[Technometrics]] |volume=4 |issue=3 |pages=419–420 |jstor=1266577 |doi=10.2307/1266577}}</ref><ref>[[Donald E. Knuth]] (1998). ''[[The Art of Computer Programming]]'', volume 2: ''Seminumerical Algorithms'', 3rd edn., p. 232. Boston: Addison-Wesley.</ref> and it has been thoroughly analyzed.<ref name="Chan1983" /><ref>{{cite journal |last=Ling |first=Robert F. |year=1974 |title=Comparison of Several Algorithms for Computing Sample Means and Variances |journal=Journal of the American Statistical Association |volume=69 |issue=348 |pages=859–866 |doi=10.2307/2286154|jstor=2286154 }}</ref> It is also common to denote <math>M_k = \bar x_k</math> and <math>S_k = M_{2,k}</math>.<ref>{{cite web |last=Cook |first=John D. |date=30 September 2022 |orig-date=1 November 2014 |title=Accurately computing sample variance |url=http://www.johndcook.com/standard_deviation.html |website=John D. Cook Consulting: Expert consulting in applied mathematics & data privacy}}</ref>

An example Python implementation for Welford's algorithm is given below.

<syntaxhighlight lang="python">
# For a new value new_value, compute the new count, new mean, the new M2.
# mean accumulates the mean of the entire dataset
# M2 aggregates the squared distance from the mean
# count aggregates the number of samples seen so far
def update(existing_aggregate, new_value):
    (count, mean, M2) = existing_aggregate
    count += 1
    delta = new_value - mean
    mean += delta / count
    delta2 = new_value - mean
    M2 += delta * delta2
    return (count, mean, M2)


# Retrieve the mean, variance and sample variance from an aggregate
def finalize(existing_aggregate):
    (count, mean, M2) = existing_aggregate
    if count < 2:
        return float("nan")
    else:
        (mean, variance, sample_variance) = (mean, M2 / count, M2 / (count - 1))
        return (mean, variance, sample_variance)
</syntaxhighlight>

This algorithm is much less prone to loss of precision due to [[catastrophic cancellation]], but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.

The [[#Parallel algorithm|parallel algorithm]] below illustrates how to merge multiple sets of statistics calculated online.

==Weighted incremental algorithm==
The algorithm can be extended to handle unequal sample weights, replacing the simple counter ''n'' with the sum of weights seen so far. West (1979)<ref>{{cite journal |last=West |first=D. H. D. |year=1979 |title=Updating Mean and Variance Estimates: An Improved Method |journal=[[Communications of the ACM]] |volume=22 |issue=9 |pages=532–535 |doi=10.1145/359146.359153 |s2cid=30671293 |doi-access=free}}</ref> suggests this [[incremental computing|incremental algorithm]]:

<syntaxhighlight lang="python">
def weighted_incremental_variance(data_weight_pairs):
    w_sum = w_sum2 = mean = S = 0
    for x, w in data_weight_pairs:
        w_sum = w_sum + w
        w_sum2 = w_sum2 + w**2
        mean_old = mean
        mean = mean_old + (w / w_sum) * (x - mean_old)
        S = S + w * (x - mean_old) * (x - mean)

    population_variance = S / w_sum
    # Bessel's correction for weighted samples
    # Frequency weights
    sample_frequency_variance = S / (w_sum - 1)
    # Reliability weights
    sample_reliability_variance = S / (w_sum - w_sum2 / w_sum)
</syntaxhighlight>

{{further|Weighted arithmetic mean#Weighted sample variance}}

==Parallel algorithm==
Chan et al.<ref name=":0">{{Cite web |last1=Chan |first1=Tony F. |author1-link=Tony F. Chan |last2=Golub |first2=Gene H. |author2-link=Gene H. Golub |last3=LeVeque |first3=Randall J. |date=November 1979 |title=Updating Formulae and a Pairwise Algorithm for Computing Sample Variances |url=http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf |publisher=Department of Computer Science, Stanford University |id=Technical Report STAN-CS-79-773, supported in part by Army contract No. DAAGEI-‘E-G-013 |contribution= |contribution-url=}}</ref> note that Welford's online algorithm detailed above is a special case of an algorithm that works for combining arbitrary sets <math>A</math> and <math>B</math>:

:<math>\begin{align}
n_{AB} & = n_A + n_B \\
\delta & = \bar x_B - \bar x_A \\
\bar x_{AB} & = \bar x_A + \delta\cdot\frac{n_B}{n_{AB}} \\
M_{2,AB} & = M_{2,A} + M_{2,B} + \delta^2\cdot\frac{n_A n_B}{n_{AB}} \\
\end{align}</math>.

This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input.
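For example, a merge step consistent with these formulas can be written against the <code>(count, mean, M2)</code> aggregates produced by the <code>update</code> function in the previous section; this is a sketch, and the name <code>merge_aggregates</code> is illustrative.

<syntaxhighlight lang="python">
def merge_aggregates(aggregate_a, aggregate_b):
    # Combine two (count, mean, M2) aggregates, e.g. computed with update()
    # on disjoint chunks of the data, using the pairwise formulas above.
    (n_a, mean_a, M2_a) = aggregate_a
    (n_b, mean_b, M2_b) = aggregate_b
    n_ab = n_a + n_b
    delta = mean_b - mean_a
    mean_ab = mean_a + delta * n_b / n_ab
    M2_ab = M2_a + M2_b + delta**2 * n_a * n_b / n_ab
    return (n_ab, mean_ab, M2_ab)
</syntaxhighlight>

Passing the merged aggregate to <code>finalize</code> then yields the mean and variances of the combined data, as if the two parts had been processed as a single stream.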
Chan's method for estimating the mean is numerically unstable when <math>n_A \approx n_B</math> and both are large, because the numerical error in <math>\delta = \bar x_B - \bar x_A</math> is not scaled down in the way that it is in the <math>n_B = 1</math> case. In such cases, prefer <math display="inline">\bar x_{AB} = \frac{n_A \bar x_A + n_B \bar x_B}{n_{AB}}</math>.

<syntaxhighlight lang="python">
def parallel_variance(n_a, avg_a, M2_a, n_b, avg_b, M2_b):
    n = n_a + n_b
    delta = avg_b - avg_a
    M2 = M2_a + M2_b + delta**2 * n_a * n_b / n
    var_ab = M2 / (n - 1)
    return var_ab
</syntaxhighlight>

This can be generalized to allow parallelization with [[Advanced Vector Extensions|AVX]], with [[Graphics processing unit|GPUs]], and [[Computer cluster|computer clusters]], and to covariance.<ref name=":1" />

==Example==
Assume that all floating point operations use standard [[IEEE 754#Double-precision 64 bit|IEEE 754 double-precision]] arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of population variance is 30. Both the naïve algorithm and two-pass algorithm compute these values correctly.

Next consider the sample ({{nowrap|10<sup>8</sup> + 4}}, {{nowrap|10<sup>8</sup> + 7}}, {{nowrap|10<sup>8</sup> + 13}}, {{nowrap|10<sup>8</sup> + 16}}), which gives rise to the same estimated variance as the first sample. The two-pass algorithm computes this variance estimate correctly, but the naïve algorithm returns 29.333333333333332 instead of 30.

While this loss of precision may be tolerable and viewed as a minor flaw of the naïve algorithm, further increasing the offset makes the error catastrophic. Consider the sample ({{nowrap|10<sup>9</sup> + 4}}, {{nowrap|10<sup>9</sup> + 7}}, {{nowrap|10<sup>9</sup> + 13}}, {{nowrap|10<sup>9</sup> + 16}}). Again the estimated population variance of 30 is computed correctly by the two-pass algorithm, but the naïve algorithm now computes it as −170.66666666666666. This is a serious problem with the naïve algorithm and is due to [[catastrophic cancellation]] in the subtraction of two similar numbers at the final stage of the algorithm.

==Higher-order statistics==
Terriberry<ref>{{Cite web |last=Terriberry |first=Timothy B. |date=15 October 2008 |orig-date=9 December 2007 |title=Computing Higher-Order Moments Online |url=http://people.xiph.org/~tterribe/notes/homs.html |url-status=dead |archive-url=https://web.archive.org/web/20140423031833/http://people.xiph.org/~tterribe/notes/homs.html |archive-date=23 April 2014 |access-date=5 May 2008}}</ref> extends Chan's formulae to calculating the third and fourth [[central moment]]s, needed for example when estimating [[skewness]] and [[kurtosis]]:

:<math> \begin{align}
M_{3,X} = M_{3,A} + M_{3,B} & {} + \delta^3\frac{n_A n_B (n_A - n_B)}{n_X^2} + 3\delta\frac{n_AM_{2,B} - n_BM_{2,A}}{n_X} \\[6pt]
M_{4,X} = M_{4,A} + M_{4,B} & {} + \delta^4\frac{n_A n_B \left(n_A^2 - n_A n_B + n_B^2\right)}{n_X^3} \\[6pt]
& {} + 6\delta^2\frac{n_A^2 M_{2,B} + n_B^2 M_{2,A}}{n_X^2} + 4\delta\frac{n_AM_{3,B} - n_BM_{3,A}}{n_X}
\end{align}</math>

Here the <math>M_k</math> are again the sums of powers of differences from the mean <math display="inline">\sum(x - \overline{x})^k</math>, giving

: <math>
\begin{align}
& \text{skewness} = g_1 = \frac{\sqrt{n} M_3}{M_2^{3/2}}, \\[4pt]
& \text{kurtosis} = g_2 = \frac{n M_4}{M_2^2}-3.
\end{align}
</math>

For the incremental case (i.e., <math>B = \{x\}</math>), this simplifies to:

: <math>
\begin{align}
\delta & = x - m \\[5pt]
m' & = m + \frac{\delta}{n} \\[5pt]
M_2' & = M_2 + \delta^2 \frac{n-1}{n} \\[5pt]
M_3' & = M_3 + \delta^3 \frac{ (n - 1) (n - 2)}{n^2} - \frac{3\delta M_2}{n} \\[5pt]
M_4' & = M_4 + \frac{\delta^4 (n - 1) (n^2 - 3n + 3)}{n^3} + \frac{6\delta^2 M_2}{n^2} - \frac{4\delta M_3}{n}
\end{align}
</math>

By preserving the value <math>\delta / n</math>, only one division operation is needed and the higher-order statistics can thus be calculated for little incremental cost.

An example of the online algorithm for kurtosis implemented as described is:

<syntaxhighlight lang="python">
def online_kurtosis(data):
    n = mean = M2 = M3 = M4 = 0

    for x in data:
        n1 = n
        n = n + 1
        delta = x - mean
        delta_n = delta / n
        delta_n2 = delta_n**2
        term1 = delta * delta_n * n1
        mean = mean + delta_n
        M4 = M4 + term1 * delta_n2 * (n**2 - 3*n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
        M3 = M3 + term1 * delta_n * (n - 2) - 3 * delta_n * M2
        M2 = M2 + term1

    # Note, you may also calculate variance using M2, and skewness using M3
    # Caution: If all the inputs are the same, M2 will be 0, resulting in a division by 0.
    kurtosis = (n * M4) / (M2**2) - 3
    return kurtosis
</syntaxhighlight>

Pébaÿ<ref>{{Cite web |last=Pébay |first=Philippe Pierre |date=September 2008 |others=Sponsoring Org.: USDOE |title=Formulas for Robust, One-Pass Parallel Computation of Covariances and Arbitrary-Order Statistical Moments |url=https://digital.library.unt.edu/ark:/67531/metadc837537/ |publisher=Sandia National Laboratories (SNL) |publication-place=Albuquerque, NM, and Livermore, CA (United States) |via=UNT Digital Library |doi=10.2172/1028931 |osti=1028931 |id=Technical Report SAND2008-6212, TRN: US201201%%57, DOE Contract Number: AC04-94AL85000 |osti-access=free}}</ref> further extends these results to arbitrary-order [[central moment]]s, for the incremental and the pairwise cases, and subsequently Pébaÿ et al.<ref>{{Cite journal | last1=Pébaÿ | first1=Philippe | last2=Terriberry | first2=Timothy | last3=Kolla | first3=Hemanth | last4=Bennett | first4=Janine | year=2016 | title=Numerically Stable, Scalable Formulas for Parallel and Online Computation of Higher-Order Multivariate Central Moments with Arbitrary Weights | journal=Computational Statistics | volume=31 | issue=4 | pages=1305–1325 | publisher=Springer | doi=10.1007/s00180-015-0637-z | s2cid=124570169 | url=https://zenodo.org/record/1232635 }}</ref> for weighted and compound moments. One can also find there similar formulas for [[covariance]].

Choi and Sweetman<ref name="Choi2010">{{Cite journal |last1 = Choi |first1 = Myoungkeun |last2 = Sweetman |first2 = Bert |s2cid = 17534100 |year = 2010 |title = Efficient Calculation of Statistical Moments for Structural Health Monitoring |journal= Journal of Structural Health Monitoring |volume=9 |issue=1 |pages=13–24 |doi=10.1177/1475921709341014 }}</ref> offer two alternative methods to compute the skewness and kurtosis, each of which can save substantial computer memory requirements and CPU time in certain applications. The first approach is to compute the statistical moments by separating the data into bins and then computing the moments from the geometry of the resulting histogram, which effectively becomes a [[one-pass algorithm]] for higher moments.
One benefit is that the statistical moment calculations can be carried out to arbitrary accuracy such that the computations can be tuned to the precision of, e.g., the data storage format or the original measurement hardware. A relative histogram of a random variable can be constructed in the conventional way: the range of potential values is divided into bins and the number of occurrences within each bin is counted and plotted such that the area of each rectangle equals the portion of the sample values within that bin:

: <math> H(x_k)=\frac{h(x_k)}{A}</math>

where <math>h(x_k)</math> and <math>H(x_k)</math> represent the frequency and the relative frequency at bin <math>x_k</math>, and <math display="inline">A= \sum_{k=1}^K h(x_k) \,\Delta x_k</math> is the total area of the histogram. After this normalization, the <math>n</math>-th raw moments and central moments of <math>x(t)</math> can be calculated from the relative histogram:

: <math>
 m_n^{(h)} = \sum_{k=1}^{K} x_k^n H(x_k) \, \Delta x_k
           = \frac{1}{A} \sum_{k=1}^K x_k^n h(x_k) \, \Delta x_k
</math>

: <math>
 \theta_n^{(h)}= \sum_{k=1}^{K} \Big(x_k-m_1^{(h)}\Big)^n \, H(x_k) \, \Delta x_k
              = \frac{1}{A} \sum_{k=1}^{K} \Big(x_k-m_1^{(h)}\Big)^n h(x_k) \, \Delta x_k
</math>

where the superscript <math>^{(h)}</math> indicates the moments are calculated from the histogram. For constant bin width <math>\Delta x_k=\Delta x</math> these two expressions can be simplified using <math>I= A/\Delta x</math>:

: <math> m_n^{(h)}= \frac{1}{I} \sum_{k=1}^K x_k^n \, h(x_k) </math>

: <math> \theta_n^{(h)}= \frac{1}{I} \sum_{k=1}^K \Big(x_k-m_1^{(h)}\Big)^n h(x_k) </math>

The second approach from Choi and Sweetman<ref name="Choi2010" /> is an analytical methodology to combine statistical moments from individual segments of a time-history such that the resulting overall moments are those of the complete time-history. This methodology could be used for parallel computation of statistical moments with subsequent combination of those moments, or for combination of statistical moments computed at sequential times.

If <math>Q</math> sets of statistical moments are known: <math>(\gamma_{0,q},\mu_{q},\sigma^2_{q},\alpha_{3,q},\alpha_{4,q}) \quad </math> for <math>q=1,2,\ldots,Q </math>, then each <math>\gamma_n</math> can be expressed in terms of the equivalent <math>n</math> raw moments:

: <math> \gamma_{n,q}= m_{n,q} \gamma_{0,q} \qquad \quad \textrm{for} \quad n=1,2,3,4 \quad \text{ and } \quad q = 1,2, \dots ,Q </math>

where <math>\gamma_{0,q}</math> is generally taken to be the duration of the <math>q^{th}</math> time-history, or the number of points if <math>\Delta t</math> is constant.

The benefit of expressing the statistical moments in terms of <math>\gamma</math> is that the <math>Q</math> sets can be combined by addition, and there is no upper limit on the value of <math>Q</math>.

: <math> \gamma_{n,c}= \sum_{q=1}^Q \gamma_{n,q} \quad \quad \text{for } n=0,1,2,3,4 </math>

where the subscript <math>_c</math> represents the concatenated time-history or combined <math>\gamma</math>. These combined values of <math>\gamma</math> can then be inversely transformed into raw moments representing the complete concatenated time-history:

: <math> m_{n,c}=\frac{\gamma_{n,c}}{\gamma_{0,c}} \quad \text{for } n=1,2,3,4 </math>

Known relationships between the raw moments (<math>m_n</math>) and the central moments (<math>\theta_n = \operatorname E[(x-\mu)^n]</math>) are then used to compute the central moments of the concatenated time-history.
Finally, the statistical moments of the concatenated history are computed from the central moments:

: <math>
\mu_c=m_{1,c}
  \qquad \sigma^2_c=\theta_{2,c}
  \qquad \alpha_{3,c}=\frac{\theta_{3,c}}{\sigma_c^3}
  \qquad \alpha_{4,c}={\frac{\theta_{4,c}}{\sigma_c^4}}-3
</math>

==Covariance==
Very similar algorithms can be used to compute the [[covariance]].

===Naïve algorithm===
The naïve algorithm is

:<math>\operatorname{Cov}(X,Y) = \frac {\sum_{i=1}^n x_i y_i - (\sum_{i=1}^n x_i)(\sum_{i=1}^n y_i)/n}{n}. </math>

For the algorithm above, one could use the following Python code:

<syntaxhighlight lang="python">
def naive_covariance(data1, data2):
    n = len(data1)
    sum1 = sum(data1)
    sum2 = sum(data2)
    sum12 = sum([i1 * i2 for i1, i2 in zip(data1, data2)])
    covariance = (sum12 - sum1 * sum2 / n) / n
    return covariance
</syntaxhighlight>

===With estimate of the mean===
As for the variance, the covariance of two random variables is also shift-invariant, so given any two constant values <math>k_x</math> and <math>k_y,</math> it can be written:

:<math>\operatorname{Cov}(X,Y) = \operatorname{Cov}(X-k_x,Y-k_y) = \dfrac {\sum_{i=1}^n (x_i-k_x) (y_i-k_y) - (\sum_{i=1}^n (x_i-k_x))(\sum_{i=1}^n (y_i-k_y))/n}{n}. </math>

and again choosing a value inside the range of values will stabilize the formula against catastrophic cancellation as well as make it more robust against big sums. Taking the first value of each data set, the algorithm can be written as:

<syntaxhighlight lang="python">
def shifted_data_covariance(data_x, data_y):
    n = len(data_x)
    if n < 2:
        return 0
    kx = data_x[0]
    ky = data_y[0]
    Ex = Ey = Exy = 0
    for ix, iy in zip(data_x, data_y):
        Ex += ix - kx
        Ey += iy - ky
        Exy += (ix - kx) * (iy - ky)
    return (Exy - Ex * Ey / n) / n
</syntaxhighlight>

===Two-pass===
The two-pass algorithm first computes the sample means, and then the covariance:

:<math>\bar x = \sum_{i=1}^n x_i/n</math>

:<math>\bar y = \sum_{i=1}^n y_i/n</math>

:<math>\operatorname{Cov}(X,Y) = \frac {\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{n}. </math>

The two-pass algorithm may be written as:

<syntaxhighlight lang="python">
def two_pass_covariance(data1, data2):
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n

    covariance = 0
    for i1, i2 in zip(data1, data2):
        a = i1 - mean1
        b = i2 - mean2
        covariance += a * b / n
    return covariance
</syntaxhighlight>

A slightly more accurate compensated version performs the full naive algorithm on the residuals. The final sums <math display="inline">\sum_i x_i</math> and <math display="inline">\sum_i y_i</math> ''should'' be zero, but the second pass compensates for any small error.

===Online===
A stable one-pass algorithm exists, similar to the online algorithm for computing the variance, that computes co-moment <math display="inline"> C_n = \sum_{i=1}^n (x_i - \bar x_n)(y_i - \bar y_n)</math>:

:<math>\begin{alignat}{2}
\bar x_n &= \bar x_{n-1} &\,+\,& \frac{x_n - \bar x_{n-1}}{n} \\[5pt]
\bar y_n &= \bar y_{n-1} &\,+\,& \frac{y_n - \bar y_{n-1}}{n} \\[5pt]
C_n &= C_{n-1} &\,+\,& (x_n - \bar x_n)(y_n - \bar y_{n-1}) \\[5pt]
    &= C_{n-1} &\,+\,& (x_n - \bar x_{n-1})(y_n - \bar y_n)
\end{alignat}</math>

The apparent asymmetry in that last equation is due to the fact that <math display="inline"> (x_n - \bar x_n) = \frac{n-1}{n}(x_n - \bar x_{n-1})</math>, so both update terms are equal to <math display="inline"> \frac{n-1}{n}(x_n - \bar x_{n-1})(y_n - \bar y_{n-1})</math>.

Even greater accuracy can be achieved by first computing the means, then using the stable one-pass algorithm on the residuals.
Thus the covariance can be computed as

:<math>\begin{align}
\operatorname{Cov}_N(X,Y) = \frac{C_N}{N}
  &= \frac{\operatorname{Cov}_{N-1}(X,Y)\cdot(N-1) + (x_n - \bar x_n)(y_n - \bar y_{n-1})}{N}\\
  &= \frac{\operatorname{Cov}_{N-1}(X,Y)\cdot(N-1) + (x_n - \bar x_{n-1})(y_n - \bar y_n)}{N}\\
  &= \frac{\operatorname{Cov}_{N-1}(X,Y)\cdot(N-1) + \frac{N-1}{N}(x_n - \bar x_{n-1})(y_n - \bar y_{n-1})}{N}\\
  &= \frac{\operatorname{Cov}_{N-1}(X,Y)\cdot(N-1) + \frac{N}{N-1}(x_n - \bar x_{n})(y_n - \bar y_{n})}{N}.
\end{align}</math>

<syntaxhighlight lang="python">
def online_covariance(data1, data2):
    meanx = meany = C = n = 0
    for x, y in zip(data1, data2):
        n += 1
        dx = x - meanx
        meanx += dx / n
        meany += (y - meany) / n
        C += dx * (y - meany)

    population_covar = C / n
    # Bessel's correction for sample covariance
    sample_covar = C / (n - 1)
</syntaxhighlight>

A small modification can also be made to compute the weighted covariance:

<syntaxhighlight lang="python">
def online_weighted_covariance(data1, data2, data3):
    meanx = meany = 0
    wsum = wsum2 = 0
    C = 0
    for x, y, w in zip(data1, data2, data3):
        wsum += w
        wsum2 += w * w
        dx = x - meanx
        meanx += (w / wsum) * dx
        meany += (w / wsum) * (y - meany)
        C += w * dx * (y - meany)

    population_covar = C / wsum
    # Bessel's correction for sample covariance
    # Frequency weights
    sample_frequency_covar = C / (wsum - 1)
    # Reliability weights
    sample_reliability_covar = C / (wsum - wsum2 / wsum)
</syntaxhighlight>

Likewise, there is a formula for combining the covariances of two sets that can be used to parallelize the computation:<ref name=":1" />

:<math>C_X = C_A + C_B + (\bar x_A - \bar x_B)(\bar y_A - \bar y_B)\cdot\frac{n_A n_B}{n_X}. </math>

===Weighted batched version===
A version of the weighted online algorithm that performs batched updates also exists: let <math>w_1, \dots, w_N</math> denote the weights, and write

:<math>\begin{alignat}{2}
\bar x_{n+k} &= \bar x_n &\,+\,& \frac{\sum_{i=n+1}^{n+k} w_i (x_i - \bar x_n)}{\sum_{i=1}^{n+k} w_i} \\
\bar y_{n+k} &= \bar y_n &\,+\,& \frac{\sum_{i=n+1}^{n+k} w_i (y_i - \bar y_n)}{\sum_{i=1}^{n+k} w_i} \\
C_{n+k} &= C_n &\,+\,& \sum_{i=n+1}^{n+k} w_i (x_i - \bar x_{n+k})(y_i - \bar y_n) \\
        &= C_n &\,+\,& \sum_{i=n+1}^{n+k} w_i (x_i - \bar x_n)(y_i - \bar y_{n+k}) \\
\end{alignat}</math>

The covariance can then be computed as

:<math>\operatorname{Cov}_N(X,Y) = \frac{C_N}{\sum_{i=1}^{N} w_i}</math>

==See also==
*[[Kahan summation algorithm]]
*[[Squared deviations from the mean]]
*[[Yamartino method]]

==References==
<references />

==External links==
* {{MathWorld|title=Sample Variance Computation|urlname=SampleVarianceComputation}}

{{DEFAULTSORT:Algorithms For Calculating Variance}}
[[Category:Statistical algorithms]]
[[Category:Statistical deviation and dispersion]]
[[Category:Articles with example pseudocode]]
[[Category:Articles with example Python (programming language) code]]