{{Short description|Procedure for comparing multivariate sample means}}
[[File:Univariate vs. Multivariate.jpg|thumb|A visual comparison between multivariate analysis of variance (MANOVA) and univariate analysis of variance (ANOVA). In MANOVA, researchers examine the group differences of a singular independent variable across multiple outcome variables, whereas in ANOVA, researchers examine the group differences of sometimes multiple independent variables on a singular outcome variable. In the provided example, the levels of the IV might include high school, college, and graduate school. The results of a MANOVA can tell us whether an individual who completed graduate school showed higher life ''and'' job satisfaction than an individual who completed only high school or college; the results of an ANOVA can only tell us this information for life satisfaction. Analyzing group differences across multiple outcome variables often provides more accurate information, as a pure relationship between only X and only Y rarely exists in nature.]]

In [[statistics]], '''multivariate analysis of variance''' ('''MANOVA''') is a procedure for comparing [[multivariate random variable|multivariate]] sample means. As a multivariate procedure, it is used when there are two or more [[dependent variables]],<ref name="Warne2014">{{cite journal |last=Warne |first=R. T. |year=2014 |title=A primer on multivariate analysis of variance (MANOVA) for behavioral scientists |journal=Practical Assessment, Research & Evaluation |volume=19 |issue=17 |pages=1–10 |url=https://scholarworks.umass.edu/pare/vol19/iss1/17/ }}</ref> and is often followed by significance tests involving individual dependent variables separately.<ref>Stevens, J. P. (2002). ''Applied multivariate statistics for the social sciences.'' Mahwah, NJ: Lawrence Erlbaum.</ref>

As an example unrelated to the image, the dependent variables may be ''k'' life satisfaction scores measured at sequential time points and ''p'' job satisfaction scores measured at sequential time points. In this case there are ''k'' + ''p'' dependent variables. MANOVA assumes that linear combinations of the dependent variables follow a multivariate normal distribution, that the variance–covariance matrices are homogeneous across groups, and that the dependent variables are linearly related, without multicollinearity and without outliers.

== Model ==
Assume <math display="inline">n</math> <math display="inline">q</math>-dimensional observations, where the <math display="inline">i</math>-th observation <math display="inline">y_i</math> is assigned to the group <math display="inline">g(i)\in \{1,\dots,m\}</math> and is distributed around the group center <math display="inline">\mu^{(g(i))}\in \mathbb R^q</math> with [[Multivariate normal distribution|multivariate Gaussian]] noise:
<math display="block"> y_i = \mu^{(g(i))} + \varepsilon_i\quad \varepsilon_i \overset{\text{i.i.d.}}{\sim} \mathcal N_q (0, \Sigma) \quad \text{ for } i=1,\dots, n, </math>
where <math display="inline">\Sigma</math> is the [[covariance matrix]]. Then we formulate our [[null hypothesis]] as
<math display="block">H_0\!:\;\mu^{(1)}=\mu^{(2)}=\dots =\mu^{(m)}.</math>
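For illustration, data from this model can be simulated directly. The following Python sketch draws observations from the model above; the number of groups, the group centers, the covariance matrix, and the sample sizes are arbitrary illustrative choices, not values taken from the references.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (arbitrary) settings: m = 3 groups, q = 2 outcome variables.
mu = np.array([[0.0, 0.0],     # group centers mu^(1), mu^(2), mu^(3)
               [1.0, 0.5],
               [0.5, 1.5]])
Sigma = np.array([[1.0, 0.3],  # common covariance matrix of the noise
                  [0.3, 1.0]])
n_per_group = 50

# g[i] is the group label of observation i; the rows of Y are the y_i.
g = np.repeat(np.arange(3), n_per_group)
Y = mu[g] + rng.multivariate_normal(np.zeros(2), Sigma, size=g.size)
</syntaxhighlight>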
==Relationship with ANOVA==
MANOVA is a generalized form of univariate [[analysis of variance]] (ANOVA),<ref name="Warne2014" /> although, unlike [[Analysis of variance|univariate ANOVA]], it uses the [[covariance]] between outcome variables in testing the statistical significance of the mean differences.

Where [[Partition of sums of squares|sums of squares]] appear in univariate analysis of variance, in multivariate analysis of variance certain [[positive-definite matrix|positive-definite matrices]] appear. The diagonal entries are the same kinds of sums of squares that appear in univariate ANOVA. The off-diagonal entries are corresponding sums of products. Under normality assumptions about [[errors and residuals in statistics|error]] distributions, the counterpart of the sum of squares due to error has a [[Wishart distribution]].

== Hypothesis testing ==
First, define the following <math display="inline">n\times q</math> matrices:
* <math display="inline">Y</math>: where the <math display="inline">i</math>-th row is equal to <math display="inline">y_i</math>
* <math display="inline">\hat Y</math>: where the <math display="inline">i</math>-th row is the best prediction given the group membership <math display="inline">g(i)</math>, that is, the mean over all observations in group <math display="inline">g(i)</math>: <math display="inline">\frac{1}{\text{size of group }g(i)}\sum_{k: g(k)=g(i)}y_k</math>
* <math display="inline">\bar Y</math>: where the <math display="inline">i</math>-th row is the best prediction given no information, that is, the [[Sample mean and covariance|empirical mean]] over all <math display="inline">n</math> observations <math display="inline">\frac{1}{n}\sum_{k=1}^n y_k</math>

Then the matrix <math display="inline">S_{\text{model}} := (\hat Y - \bar Y)^T(\hat Y - \bar Y)</math> is a generalization of the sum of squares explained by the group, and <math display="inline">S_{\text{res}} := (Y - \hat Y)^T(Y - \hat Y)</math> is a generalization of the [[residual sum of squares]].<ref name="Anderson1994">{{cite book |last=Anderson |first=T. W. |title=An Introduction to Multivariate Statistical Analysis |year=1994 |publisher=Wiley}}</ref><ref name="Krzanowski1988">{{cite book |last=Krzanowski |first=W. J. |title=Principles of Multivariate Analysis: A User's Perspective |year=1988 |publisher=Oxford University Press}}</ref> Note that one could alternatively speak about covariances by scaling the above matrices by 1/(''n'' − 1), since the subsequent test statistics do not change when <math display="inline">S_{\text{model}}</math> and <math display="inline">S_{\text{res}}</math> are multiplied by the same non-zero constant.

The most common<ref name="Anderson1994" /><ref>{{cite web |author=UCLA: Academic Technology Services, Statistical Consulting Group |title=Stata Annotated Output – MANOVA |url=https://stats.oarc.ucla.edu/stata/output/manova/ |access-date=2024-02-10}}</ref> statistics are summaries based on the roots (or eigenvalues) <math display="inline">\lambda_p</math> of the matrix <math display="inline">A:= S_{\text{model}}S_{\text{res}}^{-1}</math>:
* [[Samuel Stanley Wilks]]' <math>\Lambda_\text{Wilks} = \prod_{1,\ldots,p}(1/(1 + \lambda_{p})) = \det(I + A)^{-1} = \det(S_\text{res})/\det(S_\text{res} + S_\text{model})</math>, distributed as [[Wilks' lambda distribution|lambda]] (Λ)
* the [[K. C. Sreedharan Pillai]]–[[M. S. Bartlett]] [[trace of a matrix|trace]], <math>\Lambda_\text{Pillai} = \sum_{1,\ldots,p}(\lambda_p/(1 + \lambda_p)) = \operatorname{tr}(A(I + A)^{-1})</math><ref>{{cite web |url=http://www.real-statistics.com/multivariate-statistics/multivariate-analysis-of-variance-manova/manova-basic-concepts/ |title=MANOVA Basic Concepts – Real Statistics Using Excel |website=www.real-statistics.com |access-date=5 April 2018}}</ref>
* the [[Derrick Norman Lawley|Lawley]]–[[Harold Hotelling|Hotelling]] trace, <math>\Lambda_\text{LH} = \sum_{1,\ldots,p}(\lambda_{p}) = \operatorname{tr}(A)</math>
* [[Roy's greatest root]] (also called ''Roy's largest root''), <math>\Lambda_\text{Roy} = \max_p(\lambda_p)</math>

Discussion continues over the merits of each,<ref name="Warne2014" /> although the greatest root leads only to a bound on significance, which is not generally of practical interest. A further complication is that, except for Roy's greatest root, the distribution of these statistics under the [[null hypothesis]] is not straightforward and can only be approximated except in a few low-dimensional cases. An algorithm for the distribution of Roy's largest root under the [[null hypothesis]] was derived by Chiani,<ref>{{Citation |last=Chiani |first=M. |year=2016 |title=Distribution of the largest root of a matrix for Roy's test in multivariate analysis of variance |journal=[[Journal of Multivariate Analysis]] |volume=143 |pages=467–471 |arxiv=1401.3987v3 |doi=10.1016/j.jmva.2015.10.007 |s2cid=37620291}}</ref> while the distribution under the alternative has been studied by Johnstone and Nadler.<ref>Johnstone, I. M.; Nadler, B. (2013). "Roy's largest root test under rank-one alternatives". arXiv preprint arXiv:1310.6581.</ref> The best-known [[approximation]] for Wilks' lambda was derived by [[C. R. Rao]].

In the case of two groups, all the statistics are equivalent and the test reduces to [[Hotelling's T-square]].
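As a minimal illustration of these definitions (a sketch, not a substitute for a validated statistical package), the matrices and the four statistics can be computed directly with NumPy, continuing the simulated <code>Y</code> and <code>g</code> from the sketch in the Model section.

<syntaxhighlight lang="python">
# Continuing from the simulation sketch above: Y is n x q, g holds group labels.
n, q = Y.shape
Ybar = Y.mean(axis=0)                                   # each row of Y-bar
Yhat = np.array([Y[g == gi].mean(axis=0) for gi in g])  # rows of Y-hat

S_model = (Yhat - Ybar).T @ (Yhat - Ybar)  # explained sums of squares/products
S_res = (Y - Yhat).T @ (Y - Yhat)          # residual sums of squares/products

# Eigenvalues of A = S_model S_res^{-1}; they are non-negative in exact
# arithmetic, so taking the real part only discards numerical noise.
lam = np.real(np.linalg.eigvals(S_model @ np.linalg.inv(S_res)))

wilks = np.prod(1 / (1 + lam))    # Wilks' lambda
pillai = np.sum(lam / (1 + lam))  # Pillai–Bartlett trace
lh = np.sum(lam)                  # Lawley–Hotelling trace
roy = lam.max()                   # Roy's greatest root
</syntaxhighlight>

In practice, statistical software provides these tests together with their p-value approximations; for example, the Python library statsmodels implements them in its <code>MANOVA</code> class.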
== Introducing covariates (MANCOVA) ==
{{main|Multivariate analysis of covariance}}
One can also test if there is a group effect after adjusting for covariates. For this, follow the procedure above but substitute <math display="inline">\hat Y</math> with the predictions of the [[general linear model]] containing the group and the covariates, and substitute <math display="inline">\bar Y</math> with the predictions of the general linear model containing only the covariates (and an intercept). Then <math display="inline">S_{\text{model}}</math> is the additional sum of squares explained by adding the grouping information, and <math display="inline">S_{\text{res}}</math> is the residual sum of squares of the model containing the grouping and the covariates.<ref name="Krzanowski1988" /> Note that in the case of unbalanced data, the order of adding the covariates matters.
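A minimal sketch of this substitution, continuing the simulated data above and adding a single hypothetical covariate <code>x</code> (an arbitrary illustration of the procedure, not a validated implementation):

<syntaxhighlight lang="python">
# Hypothetical covariate for each observation (illustrative only).
x = rng.normal(size=n)

# Design matrices: intercept + covariate, with and without group dummies
# (first group dropped to avoid collinearity with the intercept).
dummies = (g[:, None] == np.arange(3)[None, :]).astype(float)[:, 1:]
X_cov = np.column_stack([np.ones(n), x])    # covariates only
X_full = np.column_stack([X_cov, dummies])  # covariates + group

# Least-squares predictions of each model play the roles of Y-bar and Y-hat.
Ybar_c = X_cov @ np.linalg.lstsq(X_cov, Y, rcond=None)[0]
Yhat_c = X_full @ np.linalg.lstsq(X_full, Y, rcond=None)[0]

S_model = (Yhat_c - Ybar_c).T @ (Yhat_c - Ybar_c)  # extra SS from grouping
S_res = (Y - Yhat_c).T @ (Y - Yhat_c)              # residual SS of full model
# The same eigenvalue-based statistics as above are then applied to
# S_model @ inv(S_res).
</syntaxhighlight>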
==Correlation of dependent variables==
[[File:Outcome Variables.jpg|thumb|A graphical depiction of the required relationship amongst outcome variables in a multivariate analysis of variance. Part of the analysis involves creating a composite variable, against which the group differences of the independent variable are analyzed. The composite variables, as there can be multiple, are different combinations of the outcome variables. The analysis then determines which combination shows the greatest group differences for the independent variable. A descriptive discriminant analysis is then used as a post hoc test to determine the makeup of the composite variable that creates the greatest group differences.]]
[[File:MANOVAs and Highly Correlated Dependent Variables.png|thumb|A simple visual representation of the effect of two highly correlated dependent variables within a MANOVA. If two (or more) dependent variables are highly correlated, the chance of a Type I error is reduced, but the trade-off is that the power of the MANOVA test is also reduced.]]
MANOVA's power is affected by the correlations of the dependent variables and by the effect sizes associated with those variables. For example, when there are two groups and two dependent variables, MANOVA's power is lowest when the correlation equals the ratio of the smaller to the larger standardized effect size.<ref>{{cite journal |last1=Frane |first1=Andrew |title=Power and Type I Error Control for Univariate Comparisons in Multivariate Two-Group Designs |journal=Multivariate Behavioral Research |volume=50 |issue=2 |pages=233–247 |date=2015 |doi=10.1080/00273171.2014.968836 |pmid=26609880 |s2cid=1532673}}</ref>

==See also==
*[[Permutational analysis of variance]] for a non-parametric alternative
*[[Discriminant function analysis]]
*[[Canonical correlation analysis]]
*[[v:Multivariate analysis of variance|Multivariate analysis of variance]] (Wikiversity)
*[[Repeated measures design]]

==References==
{{reflist}}

==External links==
{{wikiversity}}

{{Statistics}}
{{Experimental design}}

[[Category:Analysis of variance]]
[[Category:Design of experiments]]