==Related concepts==

===Weighted sample variance===
{{see also|#Correcting for over- or under-dispersion}}

Typically when a mean is calculated it is important to know the [[variance]] and [[standard deviation]] about that mean. When a weighted mean <math>\mu^*</math> is used, the variance of the weighted sample is different from the variance of the unweighted sample.

The ''biased'' weighted [[sample variance]] <math>\hat \sigma^2_\mathrm{w}</math> is defined similarly to the normal ''biased'' sample variance <math>\hat \sigma^2</math>:

:<math>
\begin{align}
\hat \sigma^2\ &= \frac{\sum\limits_{i=1}^N \left(x_i - \mu\right)^2} N \\
\hat \sigma^2_\mathrm{w} &= \frac{\sum\limits_{i=1}^N w_i \left(x_i - \mu^{*}\right)^2 }{\sum_{i=1}^N w_i}
\end{align}
</math>

where <math>\sum_{i=1}^N w_i = 1</math> for normalized weights. If the weights are ''frequency weights'' (and thus are random variables), it can be shown{{Citation needed|date=March 2022}} that <math>\hat \sigma^2_\mathrm{w}</math> is the maximum likelihood estimator of <math>\sigma^2</math> for [[Independent and identically distributed random variables|iid]] Gaussian observations.

For small samples, it is customary to use an [[unbiased estimator]] for the population variance. In normal unweighted samples, the ''N'' in the denominator (corresponding to the sample size) is changed to ''N'' − 1 (see [[Bessel's correction]]). In the weighted setting, there are actually two different unbiased estimators, one for the case of ''frequency weights'' and another for the case of ''reliability weights''.

====Frequency weights====
If the weights are ''frequency weights'' (where a weight equals the number of occurrences), then the unbiased estimator is:

:<math>
s^2\ = \frac {\sum\limits_{i=1}^N w_i \left(x_i - \mu^*\right)^2} {\sum_{i=1}^N w_i - 1}
</math>

This effectively applies Bessel's correction for frequency weights. For example, if values <math>\{2, 2, 4, 5, 5, 5\}</math> are drawn from the same distribution, then we can treat this set as an unweighted sample, or we can treat it as the weighted sample <math>\{2, 4, 5\}</math> with corresponding weights <math>\{2, 1, 3\}</math>, and we get the same result either way.

If the frequency weights <math>\{w_i\}</math> are normalized to 1, then the correct expression after Bessel's correction becomes

:<math>s^2\ = \frac {\sum_{i=1}^N w_i} {\sum_{i=1}^N w_i - 1} \sum_{i=1}^N w_i \left(x_i - \mu^*\right)^2</math>

where the sums in the prefactor run over the original (unnormalized) frequency weights, whose total <math>\sum_{i=1}^N w_i</math> (not <math>N</math>) is the total number of samples. In any case, the information on the total number of samples is necessary in order to obtain an unbiased correction, even if <math>w_i</math> has a meaning other than frequency weight.

The estimator can be unbiased only if the weights are neither [[Standard score|standardized]] nor [[Normalization (statistics)|normalized]], since these processes change the data's mean and variance and thus lead to a [[Base rate fallacy|loss of the base rate]] (the population count, which is a requirement for Bessel's correction).
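The frequency-weight example above can be checked with a minimal Python sketch (standard library only; the function names are illustrative, not from any particular library):

<syntaxhighlight lang="python">
def weighted_mean(x, w):
    """Weighted arithmetic mean: sum(w_i * x_i) / sum(w_i)."""
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def frequency_weighted_variance(x, w):
    """Unbiased sample variance for frequency weights:
    sum(w_i * (x_i - mu*)^2) / (sum(w_i) - 1)."""
    mu = weighted_mean(x, w)
    return sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, x)) / (sum(w) - 1)

# The unweighted sample {2, 2, 4, 5, 5, 5} (all weights equal to 1) ...
unweighted = frequency_weighted_variance([2, 2, 4, 5, 5, 5], [1] * 6)
# ... gives the same result as {2, 4, 5} with frequency weights {2, 1, 3}.
weighted = frequency_weighted_variance([2, 4, 5], [2, 1, 3])
assert abs(unweighted - weighted) < 1e-12  # both equal 13/6 ≈ 2.1667
</syntaxhighlight>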
====Reliability weights====
If the weights are instead ''reliability weights'' (non-random values reflecting the sample's relative trustworthiness, often derived from sample variance), we can determine a correction factor to yield an unbiased estimator. Assuming each random variable is sampled from the same distribution with mean <math>\mu</math> and actual variance <math>\sigma_{\text{actual}}^2</math>, taking expectations we have,

:<math>
\begin{align}
\operatorname{E} [\hat \sigma^2] &= \frac{ \sum\limits_{i=1}^N \operatorname{E} [(x_i - \mu)^2]} N \\
&= \operatorname{E} [(X - \operatorname{E}[X])^2] - \frac{1}{N} \operatorname{E} [(X - \operatorname{E}[X])^2] \\
&= \left( \frac{N - 1} N \right) \sigma_{\text{actual}}^2 \\
\operatorname{E} [\hat \sigma^2_\mathrm{w}] &= \frac{\sum\limits_{i=1}^N w_i \operatorname{E} [(x_i - \mu^*)^2] }{V_1} \\
&= \operatorname{E}[(X - \operatorname{E}[X])^2] - \frac{V_2}{V_1^2} \operatorname{E}[(X - \operatorname{E}[X])^2] \\
&= \left(1 - \frac{V_2 }{ V_1^2}\right) \sigma_{\text{actual}}^2
\end{align}
</math>

where <math>V_1 = \sum_{i=1}^N w_i</math> and <math>V_2 = \sum_{i=1}^N w_i^2</math>. Therefore, the bias factor in our estimator is <math>\left(1 - \frac{V_2 }{ V_1^2}\right)</math>, analogous to the <math>\left( \frac{N - 1} {N} \right)</math> factor in the unweighted estimator (also notice that <math>V_1^2 / V_2 = N_{eff}</math> is the [[effective sample size#weighted samples|effective sample size]]). This means that to unbias our estimator we need to divide by <math>1 - \left(V_2 / V_1^2\right)</math>, ensuring that the expected value of the estimated variance equals the actual variance of the sampling distribution.

The final unbiased estimate of sample variance is:

:<math>
\begin{align}
s^2_{\mathrm{w}}\ &= \frac{\hat \sigma^2_\mathrm{w}} {1 - (V_2 / V_1^2)} \\[4pt]
&= \frac {\sum\limits_{i=1}^N w_i (x_i - \mu^*)^2} {V_1 - (V_2 / V_1)},
\end{align}
</math><ref>{{cite web |url=https://www.gnu.org/software/gsl/manual/html_node/Weighted-Samples.html |title=GNU Scientific Library – Reference Manual: Weighted Samples |website=Gnu.org |access-date=22 December 2017}}</ref>

where <math>\operatorname{E}[s^2_{\mathrm{w}}] = \sigma_{\text{actual}}^2</math>. The degrees of freedom of this weighted, unbiased sample variance vary accordingly from ''N'' − 1 down to 0.

The standard deviation is simply the square root of the variance above.

As a side note, other approaches have been described to compute the weighted sample variance.<ref>{{cite web |url=http://www.analyticalgroup.com/download/WEIGHTED_MEAN.pdf |title=Weighted Standard Error and its Impact on Significance Testing (WinCross vs. Quantum & SPSS), Dr. Albert Madansky |website=Analyticalgroup.com |access-date=22 December 2017}}</ref>
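A similarly minimal Python sketch of the reliability-weights estimator above (the function name is illustrative). As the formula implies, the result is invariant under rescaling of the weights, and with equal weights it reduces to the ordinary Bessel-corrected variance:

<syntaxhighlight lang="python">
def reliability_weighted_variance(x, w):
    """Unbiased weighted sample variance for reliability weights:
    sum(w_i * (x_i - mu*)^2) / (V1 - V2/V1),
    where V1 = sum(w_i) and V2 = sum(w_i^2)."""
    v1 = sum(w)
    v2 = sum(wi ** 2 for wi in w)
    mu = sum(wi * xi for wi, xi in zip(w, x)) / v1
    return sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, x)) / (v1 - v2 / v1)

# Rescaling all weights by a constant leaves the estimate unchanged.
x = [2.0, 4.0, 5.0]
assert abs(reliability_weighted_variance(x, [1.0, 1.0, 1.0])
           - reliability_weighted_variance(x, [0.5, 0.5, 0.5])) < 1e-12
</syntaxhighlight>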
===Weighted sample covariance===
In a weighted sample, each row vector <math>\mathbf{x}_{i}</math> (each set of single observations on each of the ''K'' random variables) is assigned a weight <math>w_i \geq 0</math>.

Then the [[weighted mean]] vector <math>\mathbf{\mu^*}</math> is given by

:<math> \mathbf{\mu^*}=\frac{\sum_{i=1}^N w_i \mathbf{x}_i}{\sum_{i=1}^N w_i}.</math>

And the weighted covariance matrix is given by:<ref name="PRICE-1972">{{cite journal |last1=Price |first1=George R. |title=Extension of covariance selection mathematics |journal=Annals of Human Genetics |date=April 1972 |volume=35 |issue=4 |pages=485–490 |doi=10.1111/j.1469-1809.1957.tb01874.x |pmid=5073694 |s2cid=37828617 |url=http://www.dynamics.org/Altenberg/LIBRARY/REPRINTS/Price_extension_AnnHumGenetLond.1972.pdf}}</ref>

:<math>\mathbf{C} = \frac {\sum_{i=1}^N w_i \left(\mathbf{x}_i - \mu^*\right)^T \left(\mathbf{x}_i - \mu^*\right)} {V_1}.</math>

Similarly to weighted sample variance, there are two different unbiased estimators depending on the type of the weights.

====Frequency weights====
If the weights are ''frequency weights'', the ''unbiased'' weighted estimate of the covariance matrix <math>\textstyle \mathbf{C}</math>, with Bessel's correction, is given by:<ref name="PRICE-1972"/>

:<math>\mathbf{C} = \frac {\sum_{i=1}^N w_i \left(\mathbf{x}_i - \mu^*\right)^T \left(\mathbf{x}_i - \mu^*\right)} {V_1 - 1}.</math>

This estimator can be unbiased only if the weights are neither [[Standard score|standardized]] nor [[Normalization (statistics)|normalized]], since these processes change the data's mean and variance and thus lead to a [[Base rate fallacy|loss of the base rate]] (the population count, which is a requirement for Bessel's correction).

====Reliability weights====
In the case of ''reliability weights'', the weights are [[Normalizing constant|normalized]]:

:<math> V_1 = \sum_{i=1}^N w_i = 1. </math>

If they are not, divide the weights by their sum to normalize them prior to calculating <math>V_1</math>:

:<math> w_i' = \frac{w_i}{\sum_{i=1}^N w_i}. </math>

Then the [[weighted mean]] vector <math>\mathbf{\mu^*}</math> can be simplified to

:<math> \mathbf{\mu^*}=\sum_{i=1}^N w_i \mathbf{x}_i,</math>

and the ''unbiased'' weighted estimate of the covariance matrix <math>\mathbf{C}</math> is:<ref name="Galassi-2007-GSL">Mark Galassi, Jim Davies, James Theiler, Brian Gough, Gerard Jungman, Michael Booth, and Fabrice Rossi. [https://www.gnu.org/software/gsl/manual GNU Scientific Library – Reference Manual, Version 1.15], 2011. [https://www.gnu.org/software/gsl/manual/html_node/Weighted-Samples.html Sec. 21.7 Weighted Samples]</ref>

:<math>
\begin{align}
\mathbf{C} &= \frac{\sum_{i=1}^N w_i}{\left(\sum_{i=1}^N w_i\right)^2-\sum_{i=1}^N w_i^2} \sum_{i=1}^N w_i \left(\mathbf{x}_i - \mu^*\right)^T \left(\mathbf{x}_i - \mu^*\right) \\
&= \frac {\sum_{i=1}^N w_i \left(\mathbf{x}_i - \mu^*\right)^T \left(\mathbf{x}_i - \mu^*\right)} {V_1 - (V_2 / V_1)}.
\end{align}
</math>

The reasoning here is the same as in the previous section. Since we are assuming the weights are normalized, <math>V_1 = 1</math> and this reduces to:

:<math>\mathbf{C}=\frac{\sum_{i=1}^N w_i \left(\mathbf{x}_i - \mu^*\right)^T \left(\mathbf{x}_i - \mu^*\right)}{1-V_2}.</math>

If all weights are the same, i.e. <math> w_{i} / V_1=1/N</math>, then the weighted mean and covariance reduce to the unweighted sample mean and covariance above.
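A brief numpy sketch of the reliability-weighted covariance estimate (the function name is illustrative); numpy's <code>np.cov</code> applies an equivalent <math>V_1 - V_2/V_1</math> correction via its <code>aweights</code> argument with the default <code>ddof=1</code>, which the final assertion checks:

<syntaxhighlight lang="python">
import numpy as np

def weighted_covariance(X, w):
    """Unbiased reliability-weighted covariance:
    sum_i w_i (x_i - mu*)^T (x_i - mu*) / (V1 - V2/V1),
    for observation rows x_i of X and weights w."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    v1, v2 = w.sum(), (w ** 2).sum()
    mu = w @ X / v1                   # weighted mean vector
    d = X - mu                        # deviations from the weighted mean
    return (w[:, None] * d).T @ d / (v1 - v2 / v1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w = rng.uniform(0.5, 2.0, size=100)
assert np.allclose(weighted_covariance(X, w),
                   np.cov(X, rowvar=False, aweights=w))
</syntaxhighlight>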
===Vector-valued estimates===
The above generalizes easily to the case of taking the mean of vector-valued estimates. For example, estimates of position on a plane may have less certainty in one direction than another. As in the scalar case, the weighted mean of multiple estimates can provide a [[maximum likelihood]] estimate. We simply replace the variance <math>\sigma^2</math> by the [[covariance matrix]] <math>\mathbf{C}</math> and the [[arithmetic inverse]] by the [[matrix inverse]] (both denoted in the same way, via superscripts); the weight matrix then reads:<ref>{{cite book |last=James |first=Frederick |title=Statistical Methods in Experimental Physics |year=2006 |publisher=World Scientific |location=Singapore |isbn=981-270-527-9 |edition=2nd |page=324}}</ref>

<math display="block"> \mathbf{W}_i = \mathbf{C}_i^{-1}.</math>

The weighted mean in this case is:

<math display="block">\bar{\mathbf{x}} = \mathbf{C}_{\bar{\mathbf{x}}} \left(\sum_{i=1}^n \mathbf{W}_i \mathbf{x}_i\right)</math>

(where the order of the [[matrix–vector product]] is not [[commutative]]), in terms of the covariance of the weighted mean:

<math display="block">\mathbf{C}_{\bar{\mathbf{x}}} = \left(\sum_{i=1}^n \mathbf{W}_i\right)^{-1}.</math>

For example, consider the weighted mean of the point [1 0] with high variance in the second component and [0 1] with high variance in the first component. Then

:<math>\mathbf{x}_1 := \begin{bmatrix}1 & 0\end{bmatrix}^\top, \qquad \mathbf{C}_1 := \begin{bmatrix}1 & 0\\ 0 & 100\end{bmatrix}</math>

:<math>\mathbf{x}_2 := \begin{bmatrix}0 & 1\end{bmatrix}^\top, \qquad \mathbf{C}_2 := \begin{bmatrix}100 & 0\\ 0 & 1\end{bmatrix}</math>

then the weighted mean is:

:<math>
\begin{align}
\bar{\mathbf{x}} & = \left(\mathbf{C}_1^{-1} + \mathbf{C}_2^{-1}\right)^{-1} \left(\mathbf{C}_1^{-1} \mathbf{x}_1 + \mathbf{C}_2^{-1} \mathbf{x}_2\right) \\[5pt]
& = \begin{bmatrix} 0.9901 & 0\\ 0 & 0.9901\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}0.9901 \\ 0.9901\end{bmatrix}
\end{align}
</math>

which makes sense: the [1 0] estimate is "compliant" in the second component and the [0 1] estimate is compliant in the first component, so the weighted mean is nearly [1 1].
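The numbers in this example are easy to reproduce; a short numpy sketch of the calculation:

<syntaxhighlight lang="python">
import numpy as np

# The two position estimates and their covariances from the example above.
x1, C1 = np.array([1.0, 0.0]), np.diag([1.0, 100.0])
x2, C2 = np.array([0.0, 1.0]), np.diag([100.0, 1.0])

# Weight matrices are the inverse covariances: W_i = C_i^{-1}.
W1, W2 = np.linalg.inv(C1), np.linalg.inv(C2)

# Covariance of the weighted mean: (W1 + W2)^{-1}.
C_bar = np.linalg.inv(W1 + W2)

# Weighted mean: C_bar @ (W1 x1 + W2 x2).
x_bar = C_bar @ (W1 @ x1 + W2 @ x2)
print(np.round(x_bar, 4))  # [0.9901 0.9901]
</syntaxhighlight>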
===Accounting for correlations===
{{see also|Generalized least squares|Variance#Sum of correlated variables}}

In the general case, suppose that <math>\mathbf{X}=[x_1,\dots,x_n]^T</math>, <math>\mathbf{C}</math> is the [[covariance matrix]] relating the quantities <math>x_i</math>, <math>\bar{x}</math> is the common mean to be estimated, and <math>\mathbf{J}</math> is a [[design matrix]] equal to a [[vector of ones]] <math>[1, \dots, 1]^T</math> (of length <math>n</math>). The [[Gauss–Markov theorem]] states that the estimate of the mean having minimum variance is given by:

:<math>\sigma^2_\bar{x}=(\mathbf{J}^T \mathbf{W} \mathbf{J})^{-1},</math>

and

:<math>\bar{x} = \sigma^2_\bar{x} (\mathbf{J}^T \mathbf{W} \mathbf{X}),</math>

where:

:<math>\mathbf{W} = \mathbf{C}^{-1}.</math>

===Decreasing strength of interactions===
Consider the time series of an independent variable <math>x</math> and a dependent variable <math>y</math>, with <math>n</math> observations sampled at discrete times <math>t_i</math>. In many common situations, the value of <math>y</math> at time <math>t_i</math> depends not only on <math>x_i</math> but also on its past values. Commonly, the strength of this dependence decreases as the separation of observations in time increases. To model this situation, one may replace the independent variable by its sliding mean <math>z</math> for a window size <math>m</math>:

:<math>z_k=\sum_{i=1}^m w_i x_{k+1-i}.</math>

===Exponentially decreasing weights===
{{see also|Exponentially weighted moving average}}

In the scenario described in the previous section, most frequently the decrease in interaction strength obeys a negative exponential law. If the observations are sampled at equidistant times, then exponential decrease is equivalent to decrease by a constant fraction <math>0<\Delta<1</math> at each time step. Setting <math>w=1-\Delta</math>, we can define <math>m</math> normalized weights by

:<math>w_i=\frac {w^{i-1}}{V_1},</math>

where <math>V_1</math> is the sum of the unnormalized weights. In this case <math>V_1</math> is simply

:<math>V_1=\sum_{i=1}^m{w^{i-1}} = \frac {1-w^{m}}{1-w},</math>

approaching <math>V_1=1/(1-w)</math> for large values of <math>m</math>.

The damping constant <math>w</math> must correspond to the actual decrease of interaction strength. If this cannot be determined from theoretical considerations, then the following properties of exponentially decreasing weights are useful in making a suitable choice: at step <math>(1-w)^{-1}</math>, the weight approximately equals <math>{e^{-1}}(1-w) \approx 0.37(1-w)</math>, the tail area approximately equals <math>e^{-1} \approx 0.37</math>, and the head area approximately <math>{1-e^{-1}} \approx 0.63</math>. The tail area at step <math>n</math> is <math>\le {e^{-n(1-w)}}</math>. When primarily the closest <math>n</math> observations matter and the effect of the remaining observations can be ignored safely, choose <math>w</math> such that the tail area is sufficiently small.

===Weighted averages of functions===
The concept of weighted average can be extended to functions.<ref>G. H. Hardy, J. E. Littlewood, and G. Pólya. ''Inequalities'' (2nd ed.), Cambridge University Press, {{ISBN|978-0-521-35880-4}}, 1988.</ref> Weighted averages of functions play an important role in the systems of weighted differential and integral calculus.<ref>Jane Grossman, Michael Grossman, Robert Katz. [https://books.google.com/books?as_brr=0&q=%22The+First+Systems+of+Weighted+Differential+and+Integral+Calculus%E2%80%8E%22&btnG=Search+Books ''The First Systems of Weighted Differential and Integral Calculus''], {{ISBN|0-9771170-1-4}}, 1980.</ref>

===Correcting for over- or under-dispersion===
{{see also|#Weighted sample variance}}

Weighted means are typically used to find the weighted mean of historical data, rather than theoretically generated data. In this case, there will be some error in the variance of each data point. Typically experimental errors may be underestimated because the experimenter does not take into account all sources of error in calculating the variance of each data point. In this event, the variance in the weighted mean must be corrected to account for the fact that <math>\chi^2</math> is too large. The correction that must be made is

:<math>\hat{\sigma}_{\bar{x}}^2 = \sigma_{\bar{x}}^2 \chi^2_\nu </math>

where <math>\chi^2_\nu</math> is the [[reduced chi-squared]]:

:<math>\chi^2_\nu = \frac{1}{(n-1)} \sum_{i=1}^n \frac{ (x_i - \bar{x} )^2}{ \sigma_i^2 }.</math>

The square root <math>\hat{\sigma}_{\bar{x}}</math> can be called the ''standard error of the weighted mean (variance weights, scale corrected)''.

When all data variances are equal, <math>\sigma_i = \sigma_0</math>, they cancel out in the weighted mean variance, <math>\sigma_{\bar{x}}^2</math>, which again reduces to the [[standard error of the mean]] (squared), <math>\sigma_{\bar{x}}^2 = \sigma^2/n</math>, formulated in terms of the [[sample standard deviation]] (squared),

:<math>\sigma^2 = \frac {\sum_{i=1}^n (x_i - \bar{x} )^2} {n-1}. </math>
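A compact Python sketch of this scale correction, assuming inverse-variance weights <math>w_i = 1/\sigma_i^2</math> so that the variance of the weighted mean is <math>\sigma_{\bar{x}}^2 = 1/\sum_i w_i</math> (as defined earlier in the article); the function name is illustrative:

<syntaxhighlight lang="python">
import numpy as np

def scale_corrected_weighted_mean(x, sigma):
    """Inverse-variance weighted mean and its standard error,
    scale-corrected by the reduced chi-squared."""
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    w = 1.0 / sigma ** 2                       # inverse-variance weights
    x_bar = np.sum(w * x) / np.sum(w)          # weighted mean
    var_xbar = 1.0 / np.sum(w)                 # variance of the weighted mean
    chi2_nu = np.sum((x - x_bar) ** 2 / sigma ** 2) / (len(x) - 1)
    return x_bar, np.sqrt(var_xbar * chi2_nu)  # corrected standard error

# Quoted errors sigma_i that are too small push chi2_nu above 1,
# and the corrected standard error grows accordingly.
x_bar, err = scale_corrected_weighted_mean(
    [10.2, 9.7, 10.9, 9.1], [0.1, 0.1, 0.1, 0.1])
print(x_bar, err)
</syntaxhighlight>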