{{Short description|Statistical hypothesis test, mostly using multiple restrictions}}
{{DISPLAYTITLE:''F''-test}}
[[File:F-test_plot.svg|thumb|An ''F''-test probability density function with degrees of freedom ''d''<sub>1</sub> = ''d''<sub>2</sub> = 10, at a significance level of 0.05 (the red shaded region indicates the critical region)]]
An '''F-test''' is a [[statistical test]] that compares variances. It is used to determine whether the variances of two samples, or the ratios of variances among multiple samples, are significantly different. The test calculates a [[Test statistic|statistic]], represented by the random variable F, and checks whether it follows an [[F-distribution]]. This check is valid if the [[null hypothesis]] is true and standard assumptions about the errors (ε) in the data hold.<ref name=":0">{{Cite book |last1=Berger |first1=Paul D. |url=http://link.springer.com/10.1007/978-3-319-64583-4 |title=Experimental Design |last2=Maurer |first2=Robert E. |last3=Celli |first3=Giovana B. |date=2018 |publisher=Springer International Publishing |isbn=978-3-319-64582-7 |location=Cham |pages=108 |language=en |doi=10.1007/978-3-319-64583-4}}</ref>

F-tests are frequently used to compare different statistical models and find the one that best describes the [[population (statistics)|population]] the data came from. When models are created using the [[least squares]] method, the resulting F-tests are often called "exact" F-tests. The F-statistic was developed by [[Ronald Fisher]] in the 1920s as the variance ratio and was later named in his honor by [[George W. Snedecor]].<ref>{{cite book |last=Lomax |first=Richard G. |url=https://archive.org/details/introductiontost0000loma_j6h1 |title=Statistical Concepts: A Second Course |publisher=Lawrence Erlbaum Associates |year=2007 |isbn=978-0-8058-5850-1 |page=[https://archive.org/details/introductiontost0000loma_j6h1/page/10 10] |url-access=registration}}</ref>

==Common examples==
Common examples of the use of ''F''-tests include the study of the following cases:
* [[File:One-way ANOVA Table generated using Matlab.jpg|thumb|One-way ANOVA table with 3 random groups, each with 30 observations. The F value is calculated in the second-to-last column.]]The hypothesis that the [[Arithmetic mean|means]] of a given set of [[normal distribution|normally distributed]] populations, all having the same [[standard deviation]], are equal. This is perhaps the best-known ''F''-test, and plays an important role in the [[analysis of variance]] (ANOVA).
** The ''F''-test of [[analysis of variance]] (ANOVA) rests on three assumptions:
**# [[Normality (statistics)|Normality]]
**# [[Homogeneity of variance]]
**# [[Independence (probability theory)|Independence of errors]] and [[Randomness|random sampling]]
* The hypothesis that a proposed regression model fits the [[data]] well. See [[Lack-of-fit sum of squares]].
* The hypothesis that a data set in a [[regression analysis]] follows the simpler of two proposed linear models that are [[Statistical model#Nested models|nested]] within each other.
* Multiple-comparison testing, which is conducted using data from an already completed F-test when the F-test leads to rejection of the null hypothesis and the factor under study has an impact on the dependent variable.<ref name=":0" />
** "''A priori'' comparisons" or "planned comparisons": a particular set of comparisons specified in advance.
** "Pairwise comparisons": all possible comparisons between pairs of groups.
*** e.g. Fisher's least significant difference (LSD) test, [[Tukey's Honestly Significant Difference|Tukey's honestly significant difference (HSD) test]], the [[Newman-Keuls test|Newman–Keuls test]], and Duncan's test
** "[[Post hoc analysis|''A posteriori'' comparisons]]", "[[Post hoc comparison|''post hoc'' comparisons]]", or "[[Post hoc comparison|exploratory comparisons]]": comparisons chosen after examining the data.
*** e.g. [[Scheffé's method]]

===''F''-test of the equality of two variances===
{{Main|F-test of equality of variances}}
The ''F''-test is [[robust statistics|sensitive]] to [[normal distribution|non-normality]].<ref>{{cite journal | last=Box | first=G. E. P. |author-link= George E. P. Box| journal=Biometrika | year=1953 | title=Non-Normality and Tests on Variances | pages=318–335 | volume=40 | jstor=2333350 | issue=3/4 | doi=10.1093/biomet/40.3-4.318}}</ref><ref>{{cite journal | last=Markowski | first=Carol A |author2=Markowski, Edward P. | year = 1990 | title=Conditions for the Effectiveness of a Preliminary Test of Variance | journal=[[The American Statistician]] | pages=322–326 | volume=44 | jstor=2684360 | doi=10.2307/2684360 | issue=4}}</ref> In the [[analysis of variance]] (ANOVA), alternative tests include [[Levene's test]], [[Bartlett's test]], and the [[Brown–Forsythe test]]. However, when any of these tests are conducted to test the underlying assumption of [[homoscedasticity]] (''i.e.'' homogeneity of variance), as a preliminary step to testing for mean effects, there is an increase in the experiment-wise [[Type I error]] rate.<ref>{{cite journal |last=Sawilowsky |first=S. |year=2002 |title=Fermat, Schubert, Einstein, and Behrens–Fisher: The Probable Difference Between Two Means When σ<sub>1</sub><sup>2</sup> ≠ σ<sub>2</sub><sup>2</sup> |journal=Journal of Modern Applied Statistical Methods |volume=1 |issue=2 |pages=461–472 |doi=10.22237/jmasm/1036109940 |url=http://digitalcommons.wayne.edu/jmasm/vol1/iss2/55 |access-date=2015-03-30 |archive-url=https://web.archive.org/web/20150403095901/http://digitalcommons.wayne.edu/jmasm/vol1/iss2/55/ |archive-date=2015-04-03 |url-status=live |doi-access=free }}</ref>
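As a minimal illustrative sketch (not a reference implementation), the two-sample variance-ratio test can be computed directly: the statistic is the ratio of the two unbiased sample variances, compared against the ''F''-distribution with (''n''<sub>1</sub> − 1, ''n''<sub>2</sub> − 1) degrees of freedom. The sample data, seed, and two-sided p-value convention below are assumptions made for the example; only standard NumPy/SciPy routines are used, and the normality caveat above still applies.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample1 = rng.normal(loc=0.0, scale=1.0, size=25)   # illustrative data only
sample2 = rng.normal(loc=0.0, scale=1.5, size=30)

# F statistic: ratio of the two unbiased sample variances (ddof=1)
f_stat = np.var(sample1, ddof=1) / np.var(sample2, ddof=1)
d1, d2 = len(sample1) - 1, len(sample2) - 1

# Two-sided p-value from the F-distribution with (d1, d2) degrees of freedom
tail = stats.f.sf(f_stat, d1, d2) if f_stat > 1 else stats.f.cdf(f_stat, d1, d2)
p_value = min(2 * tail, 1.0)
print(f_stat, p_value)
</syntaxhighlight>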
==Formula and calculation==
Most ''F''-tests arise by considering a decomposition of the [[variance|variability]] in a collection of data in terms of [[Partition of sums of squares|sums of squares]]. The [[test statistic]] in an ''F''-test is the ratio of two scaled sums of squares reflecting different sources of variability. These sums of squares are constructed so that the statistic tends to be greater when the null hypothesis is not true. In order for the statistic to follow the [[F-distribution|''F''-distribution]] under the null hypothesis, the sums of squares should be [[independence (probability theory)|statistically independent]], and each should follow a scaled [[chi-squared distribution|χ²-distribution]]. The latter condition is guaranteed if the data values are independent and [[normal distribution|normally distributed]] with a common [[variance]].
=== One-way analysis of variance ===
The formula for the one-way '''ANOVA''' ''F''-test [[test statistic|statistic]] is
:<math>F = \frac{\text{explained variance}}{\text{unexplained variance}} ,</math>
or
:<math>F = \frac{\text{between-group variability}}{\text{within-group variability}}.</math>
The "explained variance", or "between-group variability", is
:<math> \sum_{i=1}^{K} n_i(\bar{Y}_{i\cdot} - \bar{Y})^2/(K-1) </math>
where <math>\bar{Y}_{i\cdot}</math> denotes the [[average|sample mean]] in the ''i''-th group, <math>n_i</math> is the number of observations in the ''i''-th group, <math>\bar{Y}</math> denotes the overall mean of the data, and <math>K</math> denotes the number of groups.

The "unexplained variance", or "within-group variability", is
:<math> \sum_{i=1}^{K}\sum_{j=1}^{n_{i}} \left( Y_{ij}-\bar{Y}_{i\cdot} \right)^2/(N-K), </math>
where <math>Y_{ij}</math> is the ''j''<sup>th</sup> observation in the ''i''<sup>th</sup> out of <math>K</math> groups and <math>N</math> is the overall sample size. This ''F''-statistic follows the [[F-distribution|''F''-distribution]] with degrees of freedom <math>d_1=K-1</math> and <math>d_2=N-K</math> under the null hypothesis. The statistic will be large if the between-group variability is large relative to the within-group variability, which is unlikely to happen if the [[expected value|population means]] of the groups all have the same value.

[[File:5% F table.jpg|thumb|''F'' table of 5% critical values, with numerator and denominator degrees of freedom each ranging from 1 to 20]]
The result of the ''F''-test is determined by comparing the calculated ''F'' value to the critical ''F'' value at a specified significance level (e.g. 5%). The ''F'' table serves as a reference guide containing critical ''F'' values for the distribution of the ''F''-statistic under the assumption of a true null hypothesis: it gives the threshold that the ''F'' statistic exceeds only a controlled percentage of the time (e.g., 5%) when the null hypothesis is true. To locate the critical ''F'' value in the table, one uses the numerator and denominator degrees of freedom to identify the appropriate row and column for the significance level being tested (e.g., 5%).<ref>{{Citation |last=Siegel |first=Andrew F. |title=Chapter 15 - ANOVA: Testing for Differences Among Many Samples and Much More |date=2016-01-01 |url=https://www.sciencedirect.com/science/article/pii/B9780128042502000158 |work=Practical Business Statistics (Seventh Edition) |pages=469–492 |editor-last=Siegel |editor-first=Andrew F. |access-date=2023-12-10 |publisher=Academic Press |doi=10.1016/b978-0-12-804250-2.00015-8 |isbn=978-0-12-804250-2|url-access=subscription }}</ref>

How to use critical ''F'' values:

If the ''F'' statistic is less than the critical ''F'' value:
* Fail to reject the null hypothesis.
* The alternative hypothesis is not supported.
* There are no significant differences among the sample averages.
* The observed differences among the sample averages could reasonably be caused by random chance alone.
* The result is not statistically significant.

If the ''F'' statistic is greater than the critical ''F'' value:
* Reject the null hypothesis.
* The alternative hypothesis is supported.
* There are significant differences among the sample averages.
* The observed differences among the sample averages could not reasonably be caused by random chance alone.
* The result is statistically significant.

Note that when there are only two groups for the one-way ANOVA ''F''-test, <math>F = t^{2}</math>, where ''t'' is the [[Student's t-test|Student's <math>t</math> statistic]].

==== Advantages ====
* Multi-group comparison efficiency: facilitates simultaneous comparison of multiple groups, enhancing efficiency particularly in situations involving more than two groups.
* Clarity in variance comparison: offers a straightforward interpretation of variance differences among groups, contributing to a clear understanding of the observed data patterns.
* Versatility across disciplines: demonstrates broad applicability across diverse fields, including social sciences, natural sciences, and engineering.

==== Disadvantages ====
* Sensitivity to assumptions: the F-test is highly sensitive to certain assumptions, such as homogeneity of variance and normality, which can affect the accuracy of test results.
* Limited scope to group comparisons: the F-test is tailored for comparing variances between groups, making it less suitable for analyses beyond this specific scope.
* Interpretation challenges: the F-test does not pinpoint specific group pairs with distinct variances. Careful interpretation is necessary, and additional post hoc tests are often essential for a more detailed understanding of group-wise differences.

===Multiple-comparison ANOVA problems===
The ''F''-test in one-way analysis of variance ([[ANOVA]]) is used to assess whether the [[expected value]]s of a quantitative variable within several pre-defined groups differ from each other. For example, suppose that a medical trial compares four treatments. The ANOVA ''F''-test can be used to assess whether any of the treatments are on average superior, or inferior, to the others versus the null hypothesis that all four treatments yield the same mean response. This is an example of an "omnibus" test, meaning that a single test is performed to detect any of several possible differences. Alternatively, we could carry out pairwise tests among the treatments (for instance, in the medical trial example with four treatments we could carry out six tests among pairs of treatments). The advantage of the ANOVA ''F''-test is that we do not need to pre-specify which treatments are to be compared, and we do not need to adjust for making [[multiple comparisons]]. The disadvantage of the ANOVA ''F''-test is that if we reject the [[null hypothesis]], we do not know which treatments can be said to be significantly different from the others, nor, if the ''F''-test is performed at level α, can we state that the treatment pair with the greatest mean difference is significantly different at level α.
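The following sketch computes the one-way ANOVA ''F''-statistic exactly as in the formulas above, for three illustrative groups of 30 observations (the data and seed are assumptions made for the example). The 5% critical value is taken from <code>scipy.stats.f.ppf</code> in place of a printed ''F'' table, and <code>scipy.stats.f_oneway</code> is used only as a cross-check.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Three illustrative groups (K = 3), each with 30 observations
groups = [rng.normal(loc=m, scale=1.0, size=30) for m in (0.0, 0.0, 0.5)]

K = len(groups)
N = sum(len(g) for g in groups)
grand_mean = np.mean(np.concatenate(groups))

# "Explained" (between-group) variability, divided by d1 = K - 1
between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (K - 1)
# "Unexplained" (within-group) variability, divided by d2 = N - K
within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - K)

F = between / within
critical = stats.f.ppf(0.95, K - 1, N - K)   # 5% critical value (replaces the printed F table)
p_value = stats.f.sf(F, K - 1, N - K)

print(F, critical, p_value)
print(stats.f_oneway(*groups))               # cross-check: same F and p-value
</syntaxhighlight>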
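To illustrate the omnibus-versus-pairwise distinction described above, the sketch below runs the single ANOVA ''F''-test on four illustrative treatment groups and then the six pairwise two-sample ''t''-tests with a Bonferroni adjustment, one simple way to control for [[multiple comparisons]]. The data, group sizes, and choice of adjustment are assumptions made for the example.

<syntaxhighlight lang="python">
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Four illustrative "treatment" groups; the data are made up for the example
treatments = {name: rng.normal(loc=m, scale=1.0, size=20)
              for name, m in zip("ABCD", (0.0, 0.0, 0.3, 0.8))}

# Omnibus one-way ANOVA F-test: a single test for *any* difference among the means
F, p = stats.f_oneway(*treatments.values())
print("omnibus F-test:", F, p)

# Pairwise alternative: six two-sample t-tests, Bonferroni-adjusted for 6 comparisons
pairs = list(combinations(treatments, 2))
for a, b in pairs:
    t, p_pair = stats.ttest_ind(treatments[a], treatments[b])
    print(a, "vs", b, t, min(p_pair * len(pairs), 1.0))   # adjusted p-value
</syntaxhighlight>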
===Regression problems===
{{further|Stepwise regression}}
Consider two models, 1 and 2, where model 1 is 'nested' within model 2. Model 1 is the restricted model, and model 2 is the unrestricted one. That is, model 1 has ''p''<sub>1</sub> parameters, and model 2 has ''p''<sub>2</sub> parameters, where ''p''<sub>1</sub> < ''p''<sub>2</sub>, and for any choice of parameters in model 1, the same regression curve can be achieved by some choice of the parameters of model 2.

One common context in this regard is that of deciding whether a model fits the data significantly better than does a naive model, in which the only explanatory term is the intercept term, so that all predicted values for the dependent variable are set equal to that variable's sample mean. The naive model is the restricted model, since the coefficients of all potential explanatory variables are restricted to equal zero.

Another common context is deciding whether there is a structural break in the data: here the restricted model uses all data in one regression, while the unrestricted model uses separate regressions for two different subsets of the data. This use of the F-test is known as the [[Chow test]].

The model with more parameters will always be able to fit the data at least as well as the model with fewer parameters. Thus typically model 2 will give a better (i.e. lower error) fit to the data than model 1. But one often wants to determine whether model 2 gives a ''significantly'' better fit to the data. One approach to this problem is to use an ''F''-test.

If there are ''n'' data points to estimate parameters of both models from, then one can calculate the ''F'' statistic, given by
:<math>F=\frac{\left(\frac{\text{RSS}_1 - \text{RSS}_2 }{p_2 - p_1}\right)}{\left(\frac{\text{RSS}_2}{n - p_2}\right)} = \frac{\text{RSS}_1 - \text{RSS}_2 }{\text{RSS}_2} \cdot \frac{n - p_2}{p_2 - p_1},</math>
where RSS<sub>''i''</sub> is the [[residual sum of squares]] of model ''i''. If the regression model has been calculated with weights, then replace RSS<sub>''i''</sub> with χ<sup>2</sup>, the weighted sum of squared residuals. Under the null hypothesis that model 2 does not provide a significantly better fit than model 1, ''F'' will have an ''F'' distribution, with (''p''<sub>2</sub>−''p''<sub>1</sub>, ''n''−''p''<sub>2</sub>) [[Degrees of freedom (statistics)|degrees of freedom]]. The null hypothesis is rejected if the ''F'' calculated from the data is greater than the critical value of the [[F-distribution|''F''-distribution]] for some desired false-rejection probability (e.g. 0.05). Since ''F'' is a monotone function of the likelihood ratio statistic, the ''F''-test is a [[likelihood ratio test]].
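A minimal sketch of this nested-model comparison is given below: model 1 is the intercept-only (restricted) model with ''p''<sub>1</sub> = 1, model 2 adds a single slope so ''p''<sub>2</sub> = 2, and the ''F''-statistic is formed from RSS<sub>1</sub> and RSS<sub>2</sub> exactly as in the formula above. The data, seed, and use of ordinary (unweighted) least squares via <code>numpy.linalg.lstsq</code> are assumptions made for the example.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 50
x = rng.uniform(0.0, 10.0, size=n)                    # illustrative predictor
y = 2.0 + 0.4 * x + rng.normal(scale=1.0, size=n)     # illustrative response

X1 = np.ones((n, 1))                                  # model 1 (restricted): intercept only, p1 = 1
X2 = np.column_stack([np.ones(n), x])                 # model 2 (unrestricted): intercept + slope, p2 = 2

def rss(X, y):
    """Residual sum of squares of the ordinary least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return residuals @ residuals

rss1, rss2 = rss(X1, y), rss(X2, y)
p1, p2 = X1.shape[1], X2.shape[1]

F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))   # the formula above
p_value = stats.f.sf(F, p2 - p1, n - p2)
print(F, p_value)
</syntaxhighlight>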
==See also==
*[[Goodness of fit]]

==References==
{{Reflist|30em}}

==Further reading==
* {{cite book |last=Fox |first=Karl A. |title=Intermediate Economic Statistics |location=New York |publisher=John Wiley & Sons |year=1980 |edition=Second |isbn=0-88275-521-8 |pages=290–310 |url={{Google books |plainurl=yes |id=V6YrAAAAYAAJ |page=290 }} }}
* {{cite book |last=Johnston |first=John |author-link=John Johnston (econometrician) |title=Econometric Methods |location=New York |publisher=McGraw-Hill |edition=Second |year=1972 |pages=35–38 |url={{Google books |plainurl=yes |id=BZtvwZAGyV0C |page=35 }} }}
* {{cite book |last=Kmenta |first=Jan |author-link=Jan Kmenta |title=Elements of Econometrics |location=New York |publisher=Macmillan |edition=Second |year=1986 |isbn=0-02-365070-2 |pages=147–148 |url={{Google books |plainurl=yes |id=Bxq7AAAAIAAJ |page=147 }} }}
* {{cite book |last1=Maddala |first1=G. S. |author-link=G. S. Maddala |last2=Lahiri |first2=Kajal |title=Introduction to Econometrics |location=Chichester |publisher=Wiley |edition=Fourth |year=2009 |isbn=978-0-470-01512-4 |pages=155–160 |url={{Google books |plainurl=yes |id=vkQvQgAACAAJ |page=155 }} }}

==External links==
* [http://www.itl.nist.gov/div898/handbook/eda/section3/eda3673.htm Table of ''F''-test critical values]
* [http://www.waterlog.info/f-test.htm Free calculator for ''F''-testing]
* [http://facweb.cs.depaul.edu/sjost/csc423/documents/f-test-reg.htm The ''F''-test for Linear Regression]
* {{YouTube|id=sajXLvfolmg&list=PLD15D38DC7AA3B737&index=2#t=35m01s|title=Econometrics lecture (topic: hypothesis testing)}} by [[Mark Thoma]]

{{Statistics|inference}}

[[Category:Analysis of variance]]
[[Category:Statistical ratios]]
[[Category:Statistical tests]]