===Cautions===

"If the government required statistical procedures to carry warning labels like those on drugs, most inference methods would have long labels indeed."<ref name="moore">{{cite book|last=Moore|first=David|title=Introduction to the Practice of Statistics|publisher=W.H. Freeman and Co|location=New York|year=2003|page=426|isbn=9780716796572}}</ref> This caution applies to hypothesis tests and to alternatives to them.

The successful hypothesis test is associated with a probability and a type-I error rate. The conclusion ''might'' be wrong.

The conclusion of the test is only as solid as the sample upon which it is based. The design of the experiment is critical. A number of unexpected effects have been observed, including:
* The [[clever Hans effect]]. A horse appeared to be capable of doing simple arithmetic.
* The [[Hawthorne effect]]. Industrial workers were more productive in better illumination, and most productive in worse.
* The [[placebo effect]]. Pills with no medically active ingredients were remarkably effective.

A statistical analysis of misleading data produces misleading conclusions. The issue of data quality can be more subtle. In [[forecasting]], for example, there is no agreement on a measure of forecast accuracy. In the absence of a consensus measurement, no decision based on measurements will be without controversy.

Publication bias: statistically nonsignificant results may be less likely to be published, which can bias the literature.

Multiple testing: when multiple tests of true null hypotheses are conducted at once without adjustment, the overall probability of a Type I error is higher than the nominal alpha level, as illustrated below.<ref>{{cite journal |last1=Ranganathan |first1=Priya |last2=Pramesh |first2=C. S |last3=Buyse |first3=Marc |title=Common pitfalls in statistical analysis: The perils of multiple testing |journal=Perspect Clin Res |date=April–June 2016 |volume=7 |issue=2 |pages=106–107 |doi=10.4103/2229-3485.179436 |pmid=27141478 |pmc=4840791 |doi-access=free}}</ref>

Those making critical decisions based on the results of a hypothesis test are prudent to look at the details rather than the conclusion alone. In the physical sciences, most results are fully accepted only when independently confirmed. The general advice concerning statistics is, "Figures never lie, but liars figure" (anonymous).
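A rough illustration of the multiple-testing caution, under the simplifying assumption (not required by the cited source) that the tests are independent: if <math>m</math> true null hypotheses are each tested at significance level <math>\alpha</math>, the probability of at least one false rejection (the family-wise error rate) is

:<math>\text{FWER} = 1 - (1 - \alpha)^m.</math>

For example, with <math>\alpha = 0.05</math> and <math>m = 20</math> independent tests, <math>\text{FWER} = 1 - 0.95^{20} \approx 0.64</math>, far above the nominal 5% level. Adjustments such as the [[Bonferroni correction]], which tests each hypothesis at level <math>\alpha/m</math>, keep the family-wise error rate at or below <math>\alpha</math>.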