Statistical significance


Template:Short description In statistical hypothesis testing,<ref name=Sirkin>Template:Cite book</ref><ref name=Borror>Template:Cite book</ref> a result has statistical significance when a result at least as "extreme" would be very infrequent if the null hypothesis were true.<ref name="Myers et al-p65" /> More precisely, a study's defined significance level, denoted by <math>\alpha</math>, is the probability of the study rejecting the null hypothesis, given that the null hypothesis is true;<ref name="Dalgaard">Template:Cite book</ref> and the p-value of a result, <math>p</math>, is the probability of obtaining a result at least as extreme, given that the null hypothesis is true.<ref name=":0">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> The result is said to be statistically significant, by the standards of the study, when <math>p \le \alpha</math>.<ref name="Johnson">Template:Cite journal</ref><ref name="Redmond and Colton">Template:Cite book</ref><ref name="Cumming-p27">Template:Cite book</ref><ref name="Krzywinski and Altman">Template:Cite journal</ref><ref name="Sham and Purcell">Template:Cite journal</ref><ref name="Altman">Template:Cite book</ref><ref name=Devore>Template:Cite book</ref> The significance level for a study is chosen before data collection, and is typically set to 5%<ref name="Salkind">Template:Cite encyclopedia</ref> or much lower—depending on the field of study.<ref name="Sproull">Template:Cite book</ref>
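
As an illustrative sketch (not taken from the cited sources), the decision rule <math>p \le \alpha</math> can be applied with a two-sided binomial test; the counts below are hypothetical.

<syntaxhighlight lang="python">
# Hypothetical example: is a coin fair, given 60 heads in 100 flips?
from scipy.stats import binomtest

alpha = 0.05                        # significance level, chosen before data collection
result = binomtest(k=60, n=100, p=0.5, alternative="two-sided")

print(f"p-value = {result.pvalue:.4f}")   # about 0.057 for these counts
if result.pvalue <= alpha:
    print("Statistically significant: reject the null hypothesis.")
else:
    print("Not statistically significant: do not reject the null hypothesis.")
</syntaxhighlight>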

In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.<ref name=Babbie2>Template:Cite book</ref><ref name=Faherty>Template:Cite book</ref> But if the p-value of an observed effect is less than (or equal to) the significance level, an investigator may conclude that the effect reflects the characteristics of the whole population,<ref name=Sirkin/> thereby rejecting the null hypothesis.<ref name=McKillup>Template:Cite book</ref>

This technique for testing the statistical significance of results was developed in the early 20th century. The term significance does not imply importance here, and the term statistical significance is not the same as research significance, theoretical significance, or practical significance.<ref name=Sirkin /><ref name=Borror /><ref name="Myers et al-p124">Template:Cite book</ref><ref name=":1">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> For example, the term clinical significance refers to the practical importance of a treatment effect.<ref>Template:Cite journal</ref>

History

{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}} Statistical significance dates to the 18th century, in the work of John Arbuthnot and Pierre-Simon Laplace, who computed the p-value for the human sex ratio at birth, assuming a null hypothesis of equal probability of male and female births; see Template:Slink for details.<ref>Template:Cite book</ref><ref>Template:Cite journal</ref><ref name="Conover1999">Template:Citation</ref><ref name="Sprent1989">Template:Citation</ref><ref>Template:Cite book</ref><ref name="Bellhouse2001">Template:Citation </ref><ref name="Hald1998">Template:Citation </ref>

In 1925, Ronald Fisher advanced the idea of statistical hypothesis testing, which he called "tests of significance", in his publication Statistical Methods for Research Workers.<ref name="Cumming">Template:Cite book</ref><ref name="Fisher1925">Template:Cite book</ref><ref name="Poletiek">Template:Cite book</ref> Fisher suggested a probability of one in twenty (0.05) as a convenient cutoff level to reject the null hypothesis.<ref name=Quinn>Template:Cite book</ref> In a 1933 paper, Jerzy Neyman and Egon Pearson called this cutoff the significance level, which they named <math>\alpha</math>. They recommended that <math>\alpha</math> be set ahead of time, prior to any data collection.<ref name=Quinn /><ref name="Neyman">Template:Cite journal</ref>

Despite his initial suggestion of 0.05 as a significance level, Fisher did not intend this cutoff value to be fixed. In his 1956 publication Statistical Methods and Scientific Inference, he recommended that significance levels be set according to specific circumstances.<ref name=Quinn />

Related concepts

The significance level <math>\alpha</math> is the threshold for <math>p</math> below which the null hypothesis is rejected, despite being assumed true, in favor of the conclusion that some other effect is present. Consequently, <math>\alpha</math> is also the probability of mistakenly rejecting the null hypothesis when it is in fact true.<ref name="Dalgaard" /> Such a mistaken rejection is also called a false positive or a type I error.

Sometimes researchers talk about the confidence level <math>1 - \alpha</math> instead. This is the probability of not rejecting the null hypothesis given that it is true.<ref>"Conclusions about statistical significance are possible with the help of the confidence interval. If the confidence interval does not include the value of zero effect, it can be assumed that there is a statistically significant result." Template:Cite journal</ref><ref>StatNews #73: Overlapping Confidence Intervals and Statistical Significance</ref> Confidence levels and confidence intervals were introduced by Neyman in 1937.<ref name="Neyman1937">Template:Cite journal</ref>
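
As a minimal sketch of the link between confidence intervals and significance (hypothetical numbers, normal-theory test assumed), a 95% confidence interval excludes the null value of zero exactly when the two-sided p-value is at most 0.05:

<syntaxhighlight lang="python">
# Hypothetical effect estimate and standard error for a normal-theory test of H0: effect = 0.
from scipy.stats import norm

estimate, std_error = 2.1, 0.9
alpha = 0.05

z = estimate / std_error                    # test statistic under the null hypothesis
p_value = 2 * norm.sf(abs(z))               # two-sided p-value
z_crit = norm.ppf(1 - alpha / 2)            # about 1.96 for alpha = 0.05
ci = (estimate - z_crit * std_error, estimate + z_crit * std_error)

print(f"p = {p_value:.3f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# For this test, the interval excludes 0 if and only if p <= alpha.
</syntaxhighlight>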

Role in statistical hypothesis testing

{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}}

File:NormalDist1.96.svg
In a two-tailed test, the rejection region for a significance level of <math>\alpha = 0.05</math> is partitioned to both ends of the sampling distribution and makes up 5% of the area under the curve (white areas).

Statistical significance plays a pivotal role in statistical hypothesis testing. It is used to determine whether the null hypothesis should be rejected or retained. The null hypothesis is the hypothesis that no effect exists in the phenomenon being studied.<ref name=Meier>Template:Cite book</ref> For the null hypothesis to be rejected, an observed result has to be statistically significant, i.e. the observed p-value must be less than (or equal to) the pre-specified significance level <math>\alpha</math>.

To determine whether a result is statistically significant, a researcher calculates a p-value, which is the probability of observing an effect of the same magnitude or more extreme given that the null hypothesis is true.<ref name=":0" /><ref name="Devore"/> The null hypothesis is rejected if the p-value is less than (or equal to) a predetermined level <math>\alpha</math>, called the significance level: the probability of rejecting the null hypothesis given that the null hypothesis is true (a type I error). The significance level is usually set at or below 5%.

For example, when <math>\alpha</math> is set to 5%, the conditional probability of a type I error, given that the null hypothesis is true, is 5%,<ref name="Healy2009">Template:Cite book</ref> and a statistically significant result is one where the observed p-value is less than (or equal to) 5%.<ref name="Healy2006">Template:Cite book</ref> When drawing data from a sample, this means that the rejection region comprises 5% of the sampling distribution.<ref name=Heath>Template:Cite book</ref> This 5% can be allocated to one side of the sampling distribution, as in a one-tailed test, or partitioned to both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) containing 2.5% of the distribution.

The use of a one-tailed test depends on whether the research question or alternative hypothesis specifies a direction, such as whether a group of objects is heavier or the performance of students on an assessment is better.<ref name="Myers et al-p65">Template:Cite book</ref> A two-tailed test may still be used, but it will be less powerful than a one-tailed test, because the rejection region for a one-tailed test is concentrated on one end of the null distribution and is twice the size (5% vs. 2.5%) of each rejection region for a two-tailed test. As a result, the null hypothesis can be rejected with a less extreme result if a one-tailed test is used.<ref name="Hinton 2014">Template:Cite book</ref> The one-tailed test is only more powerful than a two-tailed test, however, if the specified direction of the alternative hypothesis is correct; if it is wrong, the one-tailed test has no power.
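
The split of the rejection region between one- and two-tailed tests can be checked numerically. The sketch below uses the standard normal distribution as the null distribution and an arbitrary, hypothetical test statistic; SciPy is assumed to be available.

<syntaxhighlight lang="python">
# Critical values and p-values for one- and two-tailed z-tests at alpha = 0.05.
from scipy.stats import norm

alpha = 0.05
z_crit_one_tailed = norm.ppf(1 - alpha)       # about 1.645: a single 5% rejection region
z_crit_two_tailed = norm.ppf(1 - alpha / 2)   # about 1.960: 2.5% in each tail

z_observed = 1.8                               # hypothetical test statistic
p_one_tailed = norm.sf(z_observed)             # about 0.036: significant at 5%
p_two_tailed = 2 * norm.sf(abs(z_observed))    # about 0.072: not significant at 5%

print(z_crit_one_tailed, z_crit_two_tailed, p_one_tailed, p_two_tailed)
</syntaxhighlight>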

Significance thresholds in specific fields

Template:Further In specific fields such as particle physics and manufacturing, statistical significance is often expressed in multiples of the standard deviation or sigma (σ) of a normal distribution, with significance thresholds set at a much stricter level (for example 5σ).<ref name=Vaughan>Template:Cite book</ref><ref name=Bracken>Template:Cite book</ref> For instance, the certainty of the Higgs boson particle's existence was based on the 5σ criterion, which corresponds to a p-value of about 1 in 3.5 million.<ref name="Bracken"/><ref name=franklin>Template:Cite book</ref>
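
The correspondence between the 5σ criterion and a p-value of roughly 1 in 3.5 million follows from the tail probability of the normal distribution, as the following short check (assuming SciPy) illustrates:

<syntaxhighlight lang="python">
# One-sided tail probability beyond 5 standard deviations of a normal distribution.
from scipy.stats import norm

p_five_sigma = norm.sf(5)                 # about 2.87e-7
print(p_five_sigma, 1 / p_five_sigma)     # roughly 1 in 3.5 million
</syntaxhighlight>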

In other fields of scientific research such as genome-wide association studies, significance levels as low as <math>5 \times 10^{-8}</math> are not uncommon<ref name="Clarke et al">Template:Cite journal</ref><ref name="Barsh et al">Template:Cite journal</ref>—as the number of tests performed is extremely large.
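
Such strict thresholds are often motivated by a Bonferroni-style correction, which divides the family-wise significance level by the number of tests; the count of tests below is a rough, hypothetical figure used only for illustration.

<syntaxhighlight lang="python">
# Bonferroni-style correction: per-test threshold = family-wise level / number of tests.
family_wise_alpha = 0.05
number_of_tests = 1_000_000         # hypothetical order of magnitude for a genome-wide scan

per_test_alpha = family_wise_alpha / number_of_tests
print(per_test_alpha)                # 5e-08
</syntaxhighlight>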

Limitations

Researchers focusing solely on whether their results are statistically significant might report findings that are not substantive<ref name="Carver">Template:Cite journal</ref> and not replicable.<ref name="Ioannidis">Template:Cite journal</ref><ref name="peerj.com">Template:Cite journal</ref> There is also a difference between statistical significance and practical significance. A study that is found to be statistically significant may not necessarily be practically significant.<ref name="A Visitor’s Guide to Effect Sizes">Template:Cite journal</ref><ref name=":1"/>

Effect size

{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}}

Effect size is a measure of a study's practical significance.<ref name="A Visitor’s Guide to Effect Sizes"/> A statistically significant result may have a weak effect. To gauge the research significance of their result, researchers are encouraged to always report an effect size along with p-values. An effect size measure quantifies the strength of an effect, such as the distance between two means in units of standard deviation (cf. Cohen's d), the correlation coefficient between two variables or its square, and other measures.<ref name=Pedhazur>Template:Cite book</ref>
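
A hypothetical sketch of reporting an effect size alongside a p-value (Cohen's d with a pooled standard deviation, assuming NumPy and SciPy) shows how a result can be statistically significant yet have a weak effect:

<syntaxhighlight lang="python">
# Hypothetical example: large samples make a small effect statistically significant.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=5000)
group_b = rng.normal(loc=0.1, scale=1.0, size=5000)   # true difference of 0.1 standard deviations

t_stat, p_value = ttest_ind(group_a, group_b)

pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p = {p_value:.2g}, Cohen's d = {cohens_d:.2f}")   # tiny p-value, small effect size
</syntaxhighlight>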

Reproducibility

{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}}

A statistically significant result may not be easy to reproduce.<ref name="peerj.com"/> In particular, some statistically significant results will in fact be false positives. Each failed attempt to reproduce a result increases the likelihood that the result was a false positive.<ref>Template:Cite book</ref>
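
The false-positive point can be illustrated with a quick simulation (a sketch assuming NumPy and SciPy): when the null hypothesis is in fact true, about 5% of tests will still come out statistically significant at <math>\alpha</math> = 0.05 purely through sampling error.

<syntaxhighlight lang="python">
# Simulate many studies in which the null hypothesis is true: roughly a fraction
# alpha of them will nevertheless be declared statistically significant.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
alpha, n_studies = 0.05, 10_000

false_positives = 0
for _ in range(n_studies):
    a = rng.normal(size=30)          # both groups drawn from the same distribution,
    b = rng.normal(size=30)          # so any apparent effect is sampling error
    if ttest_ind(a, b).pvalue <= alpha:
        false_positives += 1

print(false_positives / n_studies)   # close to 0.05
</syntaxhighlight>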

Challenges

Template:See also

Overuse in some journals

Starting in the 2010s, some journals began questioning whether significance testing, and particularly the use of a threshold of <math>\alpha</math> = 5%, was being relied on too heavily as the primary measure of the validity of a hypothesis.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Some journals encouraged authors to do more detailed analysis than just a statistical significance test. In social psychology, the journal Basic and Applied Social Psychology banned the use of significance testing altogether from papers it published,<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> requiring authors to use other measures to evaluate hypotheses and impact.<ref>Template:Cite journal</ref><ref>Template:Cite news</ref>

Other editors, commenting on this ban, have noted: "Banning the reporting of p-values, as Basic and Applied Social Psychology recently did, is not going to solve the problem because it is merely treating a symptom of the problem. There is nothing wrong with hypothesis testing and p-values per se as long as authors, reviewers, and action editors use them correctly."<ref>Template:Cite journal</ref> Some statisticians prefer to use alternative measures of evidence, such as likelihood ratios or Bayes factors.<ref name="Wasserstein 129–133"/> Using Bayesian statistics can avoid confidence levels, but also requires making additional assumptions,<ref name="Wasserstein 129–133">Template:Cite journal</ref> and may not necessarily improve practice regarding statistical testing.<ref>Template:Cite journal</ref>

The widespread abuse of statistical significance represents an important topic of research in metascience.<ref>Template:Cite journal</ref>

Redefining significance

In 2016, the American Statistical Association (ASA) published a statement on p-values, saying that "the widespread use of 'statistical significance' (generally interpreted as 'p ≤ 0.05') as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process".<ref name="Wasserstein 129–133"/> In 2017, a group of 72 authors proposed to enhance reproducibility by changing the p-value threshold for statistical significance from 0.05 to 0.005.<ref>Template:Cite journal</ref> Other researchers responded that imposing a more stringent significance threshold would aggravate problems such as data dredging; alternative propositions are thus to select and justify flexible p-value thresholds before collecting data,<ref>Template:Cite journal</ref> or to interpret p-values as continuous indices, thereby discarding thresholds and statistical significance.<ref>Template:Cite journal</ref> Additionally, the change to 0.005 would increase the likelihood of false negatives, whereby the effect being studied is real, but the test fails to show it.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>

In 2019, over 800 statisticians and scientists signed a message calling for the abandonment of the term "statistical significance" in science,<ref>Template:Cite journal</ref> and the ASA published a further official statement<ref name="Wasserstein2 129–133">Template:Cite journal</ref> declaring (page 2):

We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term "statistically significant" entirely. Nor should variants such as "significantly different," "<math>p \le 0.05</math>," and "nonsignificant" survive, whether expressed in words, by asterisks in a table, or in some other way.


See also

Template:Portal

References

Template:Reflist

Further reading

External links

Template:Wikiversity

Template:Statistics Template:Authority control