== Calculation ==
Usually, <math>T</math> is a [[test statistic]]. A test statistic is the output of a [[scalar (mathematics)|scalar]] function of all the observations, providing a single number such as a [[t-statistic|''t''-statistic]] or an [[F-test|''F''-statistic]]. The test statistic therefore follows a distribution determined by the function used to define it and by the distribution of the input observational data.

For the important case in which the data are hypothesized to be a random sample from a normal distribution, different null hypothesis tests have been developed, depending on the nature of the test statistic and the hypotheses of interest about its distribution. Such tests include the [[z-test|''z''-test]] for hypotheses concerning the mean of a [[normal distribution]] with known variance; the [[t-test|''t''-test]], based on [[Student's t-distribution|Student's ''t''-distribution]] of a suitable statistic, for hypotheses concerning the mean of a normal distribution when the variance is unknown; and the [[F-test|''F''-test]], based on the [[F-distribution|''F''-distribution]] of yet another statistic, for hypotheses concerning the variance. For data of another nature, such as categorical (discrete) data, test statistics might be constructed whose null hypothesis distribution is based on normal approximations to appropriate statistics obtained by invoking the [[central limit theorem]] for large samples, as in the case of [[Pearson's chi-squared test]].

Thus, computing a ''p''-value requires a null hypothesis, a test statistic (together with a decision on whether the researcher is performing a [[one-tailed test]] or a [[two-tailed test]]), and data. Even though computing the test statistic on given data may be easy, computing the sampling distribution under the null hypothesis, and then computing its [[cumulative distribution function]] (CDF), is often a difficult problem. Today, this computation is done using statistical software, often via numeric methods rather than exact formulae, but in the early and mid-20th century it was instead done via tables of values, from which ''p''-values were interpolated or extrapolated{{citation needed|date=March 2018}}. Rather than using a table of ''p''-values, Fisher instead inverted the CDF, publishing a list of values of the test statistic for given fixed ''p''-values; this corresponds to computing the [[quantile function]] (inverse CDF).
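As a concrete illustration, the following minimal sketch (in Python with NumPy and SciPy, which are assumptions of this example rather than anything prescribed above; the data, sample size, and significance level are likewise hypothetical) computes a two-tailed ''p''-value for a one-sample ''t''-test from the CDF of Student's ''t''-distribution, and then performs Fisher's inversion by evaluating the quantile function at a fixed ''p''-value:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Hypothetical sample of n = 25 observations; the null hypothesis is
# that the population mean equals mu0, with unknown variance.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=1.0, size=25)
mu0 = 0.0

# The test statistic: a single scalar computed from all observations.
n = len(data)
t = (data.mean() - mu0) / (data.std(ddof=1) / np.sqrt(n))
df = n - 1

# Two-tailed p-value from the CDF of Student's t-distribution;
# sf(x) is the survival function, 1 - CDF(x).
p = 2 * stats.t.sf(abs(t), df)

# Fisher's inversion: the quantile function (inverse CDF) gives the
# critical value of the statistic for a fixed p-value, here p = 0.05.
t_crit = stats.t.ppf(1 - 0.05 / 2, df)

print(f"t = {t:.3f}, p = {p:.4f}, critical value at p = 0.05: {t_crit:.3f}")
</syntaxhighlight>

In practice the same two-tailed ''p''-value would be obtained directly from a library routine such as <code>stats.ttest_1samp(data, mu0)</code>; the explicit steps above only mirror the pipeline described in this section: statistic, null distribution CDF, and inverse CDF.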