==Common test statistics==

'''One-sample tests''' are appropriate when a sample is compared to a population specified by a hypothesis. The population characteristics are known from theory or are calculated from the population.

'''Two-sample tests''' are appropriate for comparing two samples, typically experimental and control samples from a scientifically controlled experiment.

'''Paired tests''' are appropriate for comparing two samples where it is impossible to control important variables. Rather than comparing two sets, members are paired between samples so that the difference between the paired members becomes the sample. Typically the mean of the differences is then compared to zero. The common example scenario for when a [[paired difference test]] is appropriate is when a single set of test subjects has something applied to them and the test is intended to check for an effect.

[[Z-test]]s are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation. A [[Student's t-test|''t''-test]] is appropriate for comparing means under relaxed conditions (less is assumed).

Tests of proportions are analogous to tests of means (the 50% proportion).

Chi-squared tests use the same calculations and the same probability distribution for different applications:
* [[Chi-squared test]]s for variance are used to determine whether a normal population has a specified variance. The null hypothesis is that it does.
* Chi-squared tests of independence are used for deciding whether two variables are associated or are independent. The variables are categorical rather than numeric. For example, such a test can be used to decide whether [[left-handedness]] is associated with height. The null hypothesis is that the variables are independent. The numbers used in the calculation are the observed and expected frequencies of occurrence (from [[contingency table]]s).
* Chi-squared goodness-of-fit tests are used to determine the adequacy of curves fit to data. The null hypothesis is that the curve fit is adequate. It is common to determine curve shapes to minimize the mean square error, so it is appropriate that the goodness-of-fit calculation sums the squared errors.

[[F-test]]s (analysis of variance, ANOVA) are commonly used when deciding whether groupings of data by category are meaningful. If the variance of test scores of the left-handed in a class is much smaller than the variance of the whole class, then it may be useful to study lefties as a group. The null hypothesis is that the two variances are the same, so the proposed grouping is not meaningful.
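As a minimal illustration of this variance-ratio idea, the following Python sketch (with invented test scores; NumPy and SciPy are assumed available) computes the two-sample ''F''-statistic for the left-handed-group example by hand; it is a sketch of the technique, not a prescribed procedure:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Invented test scores: left-handed students vs. the rest of the class.
lefties = np.array([72.0, 75.0, 74.0, 73.0, 76.0, 74.0])
others = np.array([60.0, 85.0, 70.0, 90.0, 55.0, 80.0, 65.0, 75.0])

# F statistic: ratio of sample variances, arranged so the larger is on top.
s1_sq = others.var(ddof=1)   # larger sample variance (150.0 here)
s2_sq = lefties.var(ddof=1)  # smaller sample variance (2.0 here)
F = s1_sq / s2_sq
df1, df2 = others.size - 1, lefties.size - 1

# Two-sided test: reject H0 when F > F(alpha/2, df1, df2).
p = 2 * stats.f.sf(F, df1, df2)
print(f"F = {F:.2f}, df = ({df1}, {df2}), p = {p:.4g}")
</syntaxhighlight>

SciPy does not ship a dedicated two-sample variance ''F''-test function, so the sketch takes the p-value from the F distribution's survival function directly.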
In the table below, the symbols used are defined at the bottom of the table. Many other tests can be found in [[:Category:Statistical tests|other articles]]. Proofs exist that the test statistics are appropriate.<ref name="Loveland">{{Cite thesis |type=M.Sc. (Mathematics) |title=Mathematical Justification of Introductory Hypothesis Tests and Development of Reference Materials |url=https://digitalcommons.usu.edu/gradreports/14 |last=Loveland |first=Jennifer L. |year=2011 |publisher=Utah State University |access-date=April 30, 2013}} Abstract: "The focus was on the Neyman–Pearson approach to hypothesis testing. A brief historical development of the Neyman–Pearson approach is followed by mathematical proofs of each of the hypothesis tests covered in the reference material." The proofs do not reference the concepts introduced by Neyman and Pearson; instead, they show that traditional test statistics have the probability distributions ascribed to them, so that significance calculations assuming those distributions are correct. The thesis information is also posted at mathnstats.com as of April 2013.</ref>
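Each row of the table can be reproduced in a few lines of code. As a hedged sanity check on the Welch row of the table that follows, this Python sketch (invented samples; NumPy and SciPy assumed available) computes the statistic and the Welch–Satterthwaite degrees of freedom directly from the formulas and compares them with SciPy's built-in unpooled test:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Invented samples with visibly unequal spreads.
x1 = np.array([20.1, 22.4, 19.8, 21.5, 23.0, 20.9, 22.2])
x2 = np.array([18.0, 25.5, 16.2, 27.1, 19.4, 24.8, 17.9, 26.3])

# Per-sample variance-of-the-mean terms s_i^2 / n_i.
v1 = x1.var(ddof=1) / x1.size
v2 = x2.var(ddof=1) / x2.size

t = (x1.mean() - x2.mean()) / np.sqrt(v1 + v2)  # d0 = 0 under H0
# Welch–Satterthwaite degrees of freedom, as in the table.
df = (v1 + v2) ** 2 / (v1 ** 2 / (x1.size - 1) + v2 ** 2 / (x2.size - 1))
p = 2 * stats.t.sf(abs(t), df)                  # two-sided p-value

# Cross-check against SciPy's unpooled (Welch) test.
t_ref, p_ref = stats.ttest_ind(x1, x2, equal_var=False)
assert np.isclose(t, t_ref) and np.isclose(p, p_ref)
print(f"t = {t:.3f}, df = {df:.2f}, p = {p:.4f}")
</syntaxhighlight>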
{| class="wikitable"
! Name
! Formula
! Assumptions or notes
|-
| One-sample [[z-test|''z''-test]]
| align="center" | <math>z=\frac{\overline{x}-\mu_0}{(\sigma/\sqrt{n})}</math>
| (Normal population '''or''' ''n'' large) '''and''' σ known. (''z'' is the distance from the mean in relation to the [[standard error|standard deviation of the mean]].) For non-normal distributions it is possible to calculate a minimum proportion of a population that falls within ''k'' standard deviations for any ''k'' (see ''[[Chebyshev's inequality]]'').
|-
| Two-sample ''z''-test
| align="center" | <math>z=\frac{(\overline{x}_1 - \overline{x}_2) - d_0}{\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}}</math>
| Normal populations '''and''' independent observations '''and''' σ<sub>1</sub> and σ<sub>2</sub> known, where <math>d_0</math> is the value of <math>\mu_1-\mu_2</math> under the null hypothesis
|-
| One-sample [[Student's t-test|''t''-test]]
| align="center" | <math>t=\frac{\overline{x}-\mu_0}{(s/\sqrt{n})},</math> <br /> <math>df=n-1</math>
| (Normal population '''or''' ''n'' large) '''and''' <math>\sigma</math> unknown
|-
| Paired ''t''-test
| align="center" | <math>t=\frac{\overline{d}-d_0}{(s_d/\sqrt{n})},</math> <br /> <math>df=n-1</math>
| (Normal population of differences '''or''' ''n'' large) '''and''' <math>\sigma</math> unknown
|-
| Two-sample pooled [[Student's t-test|''t''-test]], equal variances
| align="center" | <math>t=\frac{(\overline{x}_1 - \overline{x}_2) - d_0}{s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},</math> <br /> <math>s_p^2=\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2},</math> <br /> <math>df=n_1 + n_2 - 2</math><ref name="NIST2mean">NIST handbook: [http://www.itl.nist.gov/div898/handbook/eda/section3/eda353.htm Two-Sample ''t''-test for Equal Means]</ref>
| (Normal populations '''or''' ''n''<sub>1</sub> + ''n''<sub>2</sub> > 40) '''and''' independent observations '''and''' σ<sub>1</sub> = σ<sub>2</sub> unknown
|-
| Two-sample unpooled ''t''-test, unequal variances ([[Welch's t test|Welch's ''t''-test]])
| align="center" | <math>t=\frac{(\overline{x}_1 - \overline{x}_2) - d_0}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}},</math> <br /> <math>df = \frac{\left(\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}\right)^2}{\dfrac{\left(\dfrac{s_1^2}{n_1}\right)^2}{n_1-1} + \dfrac{\left(\dfrac{s_2^2}{n_2}\right)^2}{n_2-1}}</math><ref name="NIST2mean" />
| (Normal populations '''or''' ''n''<sub>1</sub> + ''n''<sub>2</sub> > 40) '''and''' independent observations '''and''' σ<sub>1</sub> ≠ σ<sub>2</sub>, both unknown
|-
| One-proportion ''z''-test
| align="center" | <math>z=\frac{\hat{p} - p_0}{\sqrt{p_0 (1-p_0)}}\sqrt{n}</math>
| ''np''<sub>0</sub> > 10 '''and''' ''n''(1 − ''p''<sub>0</sub>) > 10 '''and''' it is a SRS (simple random sample); see [[Binomial distribution#Normal approximation|notes]].
|-
| Two-proportion ''z''-test, pooled for <math>H_0\colon p_1=p_2</math>
| align="center" | <math>z=\frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1 - \hat{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}},</math> <br /> <math>\hat{p}=\frac{x_1 + x_2}{n_1 + n_2}</math>
| ''n''<sub>1</sub>''p''<sub>1</sub> > 5 '''and''' ''n''<sub>1</sub>(1 − ''p''<sub>1</sub>) > 5 '''and''' ''n''<sub>2</sub>''p''<sub>2</sub> > 5 '''and''' ''n''<sub>2</sub>(1 − ''p''<sub>2</sub>) > 5 '''and''' independent observations; see [[Binomial distribution#Normal approximation|notes]].
|-
| Two-proportion ''z''-test, unpooled for <math>|d_0|>0</math>
| align="center" | <math>z=\frac{(\hat{p}_1 - \hat{p}_2) - d_0}{\sqrt{\dfrac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}}</math>
| ''n''<sub>1</sub>''p''<sub>1</sub> > 5 '''and''' ''n''<sub>1</sub>(1 − ''p''<sub>1</sub>) > 5 '''and''' ''n''<sub>2</sub>''p''<sub>2</sub> > 5 '''and''' ''n''<sub>2</sub>(1 − ''p''<sub>2</sub>) > 5 '''and''' independent observations; see [[Binomial distribution#Normal approximation|notes]].
|-
| Chi-squared test for variance
| align="center" | <math>\chi^2=(n-1)\frac{s^2}{\sigma^2_0}</math>
| ''df'' = ''n'' − 1 • Normal population
|-
| Chi-squared test for goodness of fit
| align="center" | <math>\chi^2=\sum_k\frac{(\text{observed}-\text{expected})^2}{\text{expected}}</math>
| ''df'' = ''k'' − 1 − (number of parameters estimated), and one of these must hold: • All expected counts are at least 5.<ref>Steel, R. G. D., and Torrie, J. H., ''Principles and Procedures of Statistics with Special Reference to the Biological Sciences'', [[McGraw Hill]], 1960, page 350.</ref> • All expected counts are > 1 and no more than 20% of expected counts are less than 5.<ref>{{cite book |last=Weiss |first=Neil A. |title=Introductory Statistics |edition=5th |year=1999 |pages=[https://archive.org/details/introductorystat00neil/page/802 802] |isbn=0-201-59877-9 |url=https://archive.org/details/introductorystat00neil/page/802}}</ref>
|-
| Two-sample ''F''-test for equality of variances
| align="center" | <math>F=\frac{s_1^2}{s_2^2}</math>
| Normal populations.<br />Arrange so <math>s_1^2 \ge s_2^2</math> and reject ''H''<sub>0</sub> for <math>F > F(\alpha/2,n_1-1,n_2-1)</math>.<ref>NIST handbook: [http://www.itl.nist.gov/div898/handbook/eda/section3/eda359.htm F-Test for Equality of Two Standard Deviations] (testing standard deviations is the same as testing variances)</ref>
|-
| [[Regression analysis#Diagnostics|Regression]] ''t''-test of <math>H_0\colon R^2=0.</math>
| align="center" | <math>t=\sqrt{\frac{R^2(n-k-1^*)}{1-R^2}}</math>
| Reject ''H''<sub>0</sub> for <math>t > t(\alpha/2,n-k-1^*)</math>.<ref>Steel, R. G. D., and Torrie, J. H., ''Principles and Procedures of Statistics with Special Reference to the Biological Sciences'', [[McGraw Hill]], 1960, page 288.</ref><br />*Subtract 1 for intercept; ''k'' terms contain independent variables.
|-
| colspan="3" | {{List of statistics symbols}}
|}
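As one more hedged example, the one-proportion ''z''-test row of the table above can be evaluated directly; the counts below are invented for illustration, and NumPy and SciPy are assumed available:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Invented counts: 58 successes in n = 100 trials, testing H0: p = 0.5.
n, successes, p0 = 100, 58, 0.5
assert n * p0 > 10 and n * (1 - p0) > 10  # normal-approximation conditions from the table

p_hat = successes / n
# Same as the table's (p̂ - p0) / sqrt(p0 (1 - p0)) * sqrt(n), regrouped.
z = (p_hat - p0) / np.sqrt(p0 * (1 - p0) / n)
p_value = 2 * stats.norm.sf(abs(z))       # two-sided p-value
print(f"z = {z:.3f}, p = {p_value:.4f}")  # z = 1.600, p ≈ 0.1096
</syntaxhighlight>

Here the observed proportion of 0.58 is not far enough from 0.5, for this sample size, to reject the null hypothesis at the conventional 5% level.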