==Alternatives==
{{main|Estimation statistics}}
{{See also|Confidence interval#Statistical hypothesis testing}}
A unifying position of critics is that statistics should not lead to an accept-reject conclusion or decision, but to an estimated value with an [[interval estimate]]; this data-analysis philosophy is broadly referred to as [[estimation statistics]]. Estimation statistics can be accomplished with either frequentist<ref>{{Cite journal |last=Ho |first=Joses |last2=Tumkaya |first2=Tayfun |last3=Aryal |first3=Sameer |last4=Choi |first4=Hyungwon |last5=Claridge-Chang |first5=Adam |date=June 19, 2019 |title=Moving beyond P values: data analysis with estimation graphics |url=https://www.nature.com/articles/s41592-019-0470-3 |journal=Nature Methods |language=en |volume=16 |issue=7 |pages=565–566 |doi=10.1038/s41592-019-0470-3 |issn=1548-7091}}</ref> or Bayesian methods.<ref name="Kruschke 2012">{{cite journal|last=Kruschke|first=J K|author-link=John K. Kruschke|title=Bayesian Estimation Supersedes the T Test|journal=Journal of Experimental Psychology: General|date=July 9, 2012 |volume=142|issue=2|pages=573–603|doi=10.1037/a0029146|pmid=22774788|s2cid=5610231 |url=https://jkkweb.sitehost.iu.edu/articles/Kruschke2012JEPG.pdf}}</ref><ref name="Kruschke 2018">{{cite journal|last=Kruschke|first=J K|author-link=John K. Kruschke|title=Rejecting or Accepting Parameter Values in Bayesian Estimation|journal=Advances in Methods and Practices in Psychological Science|date=May 8, 2018|volume=1|issue=2|pages=270–280|doi=10.1177/2515245918771304|s2cid=125788648 |url=https://jkkweb.sitehost.iu.edu/articles/Kruschke2018RejectingOrAcceptingParameterValuesWithSupplement.pdf}}</ref>

Critics of significance testing have advocated basing inference less on p-values and more on confidence intervals for effect sizes (for importance), prediction intervals (for confidence), replications and extensions (for replicability), and meta-analyses (for generality).<ref name=Armstrong1>{{cite journal|author=Armstrong, J. Scott|title=Significance tests harm progress in forecasting|journal=International Journal of Forecasting|volume=23|pages=321–327|year=2007|url=http://repository.upenn.edu/cgi/viewcontent.cgi?article=1104&context=marketing_papers|doi=10.1016/j.ijforecast.2007.03.004|issue=2|citeseerx=10.1.1.343.9516|s2cid=1550979}}</ref> But none of these suggested alternatives inherently produces a decision. Lehmann said that hypothesis testing theory can be presented in terms of conclusions/decisions, probabilities, or confidence intervals: "The distinction between the ... approaches is largely one of reporting and interpretation."<ref name=Lehmann97>{{cite journal|author=E. L. Lehmann|title=Testing Statistical Hypotheses: The Story of a Book|journal=Statistical Science|volume=12|issue=1|pages=48–52|year=1997|doi=10.1214/ss/1029963261|doi-access=free}}</ref>
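For illustration, under the common assumption of approximately normally distributed data, an interval estimate of a population mean can be reported as the confidence interval
:<math>\bar{x} \pm t^{*}\,\frac{s}{\sqrt{n}},</math>
where <math>\bar{x}</math> is the sample mean, <math>s</math> the sample standard deviation, <math>n</math> the sample size, and <math>t^{*}</math> the appropriate quantile of [[Student's t-distribution|Student's ''t''-distribution]]. Unlike a bare reject/do-not-reject decision, the width of such an interval directly conveys the precision of the estimate.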
[[Bayesian inference]] is one proposed alternative to significance testing. (Nickerson cited 10 sources suggesting it, including Rozeboom (1960).)<ref name="nickerson"/> For example, Bayesian [[parameter estimation]] can provide rich information about the data from which researchers can draw inferences, while using uncertain [[Prior probability|priors]] that exert only minimal influence on the results when enough data is available. Psychologist [[John K. Kruschke]] has suggested Bayesian estimation as an alternative to the [[Student's t-test|''t''-test]]<ref name="Kruschke 2012" /> and has also contrasted Bayesian estimation for assessing null values with Bayesian model comparison for hypothesis testing.<ref name="Kruschke 2018" /> Two competing models/hypotheses can be compared using [[Bayes factors]].<ref>{{cite report |last=Kass |first=R. E. |title=Bayes factors and model uncertainty |year=1993|url=http://www.stat.washington.edu/research/reports/1993/tr254.pdf |publisher=Department of Statistics, University of Washington}}</ref> Bayesian methods could be criticized for requiring information that is seldom available in the cases where significance testing is most heavily used: neither the prior probabilities nor the [[probability distribution]] of the test statistic under the alternative hypothesis is often available in the social sciences.<ref name="nickerson"/>

Advocates of a Bayesian approach sometimes claim that the goal of a researcher is most often to [[objectivity (science)|objectively]] assess the [[probability]] that a [[hypothesis]] is true based on the data they have collected.<ref>{{Cite journal | last = Rozeboom | first = William W | title = The fallacy of the null-hypothesis significance test | journal = Psychological Bulletin | volume = 57 | issue = 5 | pages = 416–428 | year = 1960 | url = http://stats.org.uk/statistical-inference/Rozeboom1960.pdf | doi=10.1037/h0042040| pmid = 13744252 | citeseerx = 10.1.1.398.9002 }} "...the proper application of statistics to scientific inference is irrevocably committed to extensive consideration of inverse [AKA Bayesian] probabilities..." It was acknowledged, with regret, that a priori probability distributions were available "only as a subjective feel, differing from one person to the next" "in the more immediate future, at least".</ref><ref>{{Cite journal | last = Berger | first = James | title = The Case for Objective Bayesian Analysis | journal = Bayesian Analysis | volume = 1 | issue = 3 | pages = 385–402 | year = 2006 | doi=10.1214/06-ba115| doi-access = free }} In listing the competing definitions of "objective" Bayesian analysis, "A major goal of statistics (indeed science) is to find a completely coherent objective Bayesian methodology for learning from data." The author expressed the view that this goal "is not attainable".</ref> Neither [[Ronald Fisher|Fisher]]'s significance testing nor [[Neyman–Pearson lemma|Neyman–Pearson]] hypothesis testing can provide this information, and neither claims to. The probability that a hypothesis is true can only be derived from use of [[Bayes' theorem]], which was unsatisfactory to both the Fisher and Neyman–Pearson camps because of its explicit use of [[subjectivity]] in the form of the [[prior probability]].<ref name="Neyman 289–337"/><ref>{{cite journal|last=Aldrich|first=J|title=R. A. Fisher on Bayes and Bayes' theorem|journal=Bayesian Analysis|year=2008|volume=3|issue=1|pages=161–170|doi=10.1214/08-BA306|df=mdy-all|doi-access=free}}</ref> Fisher's strategy was to sidestep this with the [[p-value|''p''-value]] (an objective ''index'' based on the data alone) followed by ''inductive inference'', while Neyman and Pearson devised their approach of ''inductive behaviour''.
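In Bayes' theorem form, the posterior probability that a hypothesis <math>H</math> is true given data <math>D</math> is
:<math>P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)},</math>
which makes explicit the prior probability <math>P(H)</math> to which the Fisher and Neyman–Pearson camps objected. The Bayes factor mentioned above likewise compares two hypotheses through the ratio <math>P(D \mid H_1)/P(D \mid H_2)</math>; the posterior odds equal the prior odds multiplied by this factor.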