Analysis of variance
==Associated analysis==
Some analysis is required in support of the ''design'' of the experiment, while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments.

===Preparatory analysis===

====The number of experimental units====
In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential. Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals.

Reporting sample size analysis is generally required in psychology: "Provide information on sample size and the process that led to sample size decisions."<ref>Wilkinson (1999, p 596)</ref> The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and by administrative review boards.

Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals), and methods based on achieving a desired confidence interval.<ref>Montgomery (2001, Section 3-7: Determining sample size)</ref>

====Power analysis====
[[Statistical power|Power analysis]] is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis given an assumed ANOVA design, effect size in the population, sample size, and significance level.
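As a minimal sketch of such a calculation, the power of a one-way balanced ANOVA can be computed from the noncentral F distribution, and the smallest per-group sample size reaching a target power found by simple search. The design values below (Cohen's f = 0.25, α = 0.05, 80% power, 4 groups) are illustrative assumptions, not prescriptions:

```python
from scipy import stats

def anova_power(f_effect, n_per_group, k_groups, alpha=0.05):
    """Power of a balanced one-way ANOVA F-test, given Cohen's f."""
    n_total = n_per_group * k_groups
    dfn, dfd = k_groups - 1, n_total - k_groups
    nc = (f_effect ** 2) * n_total            # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, dfn, dfd)  # rejection threshold
    # Power = P(F exceeds the critical value under the alternative)
    return 1 - stats.ncf.cdf(f_crit, dfn, dfd, nc)

# Smallest per-group n achieving at least 80% power for a medium
# effect (f = 0.25, an assumed value) with 4 groups at alpha = 0.05.
n = 2
while anova_power(0.25, n, 4) < 0.80:
    n += 1
print(n)
```

Libraries such as statsmodels package the same computation (`FTestAnovaPower.solve_power`); the explicit loop above only makes the underlying noncentral-F logic visible.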
Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.<ref>Howell (2002, Chapter 8: Power)</ref><ref>Howell (2002, Section 11.12: Power (in ANOVA))</ref><ref>Howell (2002, Section 13.7: Power analysis for factorial experiments)</ref><ref>Moore and McCabe (2003, pp 778–780)</ref>

====Effect size====
{{Main|Effect size}}
[[File:Effect_size.png|thumb|Effect size]]
Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between one or more predictors and the dependent variable, or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size with immediately "meaningful" units may be preferable for reporting purposes.<ref name="Wilkinson">Wilkinson (1999, p 599)</ref>

====Model confirmation====
Sometimes tests are conducted to determine whether the assumptions of ANOVA appear to be violated. Residuals are examined or analyzed to confirm [[homoscedasticity]] and gross normality.<ref>Montgomery (2001, Section 3-4: Model adequacy checking)</ref> Residuals should have the appearance of zero-mean, normally distributed noise when plotted as a function of anything, including time and modeled data values. Trends hint at interactions among factors or among observations.

====Follow-up tests====
A statistically significant effect in ANOVA is often followed by additional tests. This can be done in order to assess which groups are different from which other groups, or to test various other focused hypotheses.
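The model-confirmation and effect-size ideas above can be sketched in a few lines: fit a one-way ANOVA, inspect the residuals for normality and homoscedasticity, and compute eta-squared (a standardized effect size) from the sums of squares. The three groups here are hypothetical simulated data, used only to make the example self-contained:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical data: three treatment groups with a common variance.
groups = [rng.normal(mu, 1.0, size=30) for mu in (0.0, 0.3, 0.8)]

f_stat, p_value = stats.f_oneway(*groups)

# Model confirmation: residuals are deviations from group means and
# should look like zero-mean normal noise.
residuals = np.concatenate([g - g.mean() for g in groups])
_, p_normality = stats.shapiro(residuals)   # gross normality
_, p_levene = stats.levene(*groups)         # homoscedasticity

# Eta-squared: share of total variation attributed to group membership.
grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = sum(((g - grand_mean) ** 2).sum() for g in groups)
eta_squared = ss_between / ss_total
print(f_stat, p_value, eta_squared)
```

Large p-values from the Shapiro–Wilk and Levene checks are consistent with (though never proof of) the normality and equal-variance assumptions.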
Follow-up tests are often distinguished in terms of whether they are "planned" ([[A priori and a posteriori|a priori]]) or "[[Post-hoc analysis|post hoc]]". Planned tests are determined before looking at the data, while post hoc tests are conceived only after looking at the data (though the term "post hoc" is inconsistently used). The follow-up tests may be "simple" pairwise comparisons of individual group means or may be "compound" comparisons (e.g., comparing the mean pooled across groups A, B and C to the mean of group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels. Often the follow-up tests incorporate a method of adjusting for the [[multiple comparisons problem]]. Follow-up tests to identify which specific groups, variables, or factors have statistically different means include [[Tukey's range test]] and [[Duncan's new multiple range test]]. In turn, these tests are often followed with a [[Compact Letter Display (CLD)]] methodology in order to render their output more transparent to a non-statistician audience.
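A sketch of one such follow-up, Tukey's range test, using SciPy's `tukey_hsd`; the three groups are hypothetical simulated data in which groups A and B share a mean while C is shifted upward:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical groups: A and B are drawn from the same distribution,
# C has a substantially larger mean.
a = rng.normal(0.0, 1.0, size=50)
b = rng.normal(0.0, 1.0, size=50)
c = rng.normal(1.5, 1.0, size=50)

# Tukey's HSD compares every pair of group means while controlling
# the family-wise error rate across all pairwise comparisons.
res = stats.tukey_hsd(a, b, c)
print(res.pvalue)  # 3x3 matrix of adjusted pairwise p-values
```

The adjusted p-value matrix (and the confidence intervals from `res.confidence_interval()`) identifies which pairs differ; grouping the means by which comparisons are non-significant is the step a compact letter display then summarizes.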