===Relation to the chi-squared distribution===

In instances where the ''F''-distribution is used, for example in the [[analysis of variance]], independence of <math>U_1</math> and <math>U_2</math> (defined above) might be demonstrated by applying [[Cochran's theorem]].

Equivalently, since the [[chi-squared distribution]] is the sum of squares of [[Independence (probability theory)|independent]] [[standard normal]] random variables, the random variable of the ''F''-distribution may also be written
<math display="block">X = \frac{s_1^2}{\sigma_1^2} \div \frac{s_2^2}{\sigma_2^2},</math>
where <math>s_1^2 = \frac{S_1^2}{d_1}</math> and <math>s_2^2 = \frac{S_2^2}{d_2}</math>, <math>S_1^2</math> is the sum of squares of <math>d_1</math> random variables from the normal distribution <math>N(0,\sigma_1^2)</math>, and <math>S_2^2</math> is the sum of squares of <math>d_2</math> random variables from the normal distribution <math>N(0,\sigma_2^2)</math>.

In a [[frequentist]] context, a scaled ''F''-distribution therefore gives the probability <math>p(s_1^2/s_2^2 \mid \sigma_1^2, \sigma_2^2)</math>, with the ''F''-distribution itself, without any scaling, applying where <math>\sigma_1^2</math> is taken to be equal to <math>\sigma_2^2</math>. This is the context in which the ''F''-distribution most generally appears in [[F-test|''F''-tests]]: the null hypothesis is that two independent normal variances are equal, and the observed sums of appropriately selected squares are then examined to see whether their ratio is significantly incompatible with this null hypothesis.

The quantity <math>X</math> has the same distribution in Bayesian statistics, if an uninformative rescaling-invariant [[Jeffreys prior]] is taken for the [[prior probability|prior probabilities]] of <math>\sigma_1^2</math> and <math>\sigma_2^2</math>.<ref>{{cite book |first=G. E. P. |last=Box |first2=G. C. |last2=Tiao |year=1973 |title=Bayesian Inference in Statistical Analysis |publisher=Addison-Wesley |page=110 |isbn=0-201-00622-7}}</ref> In this context, a scaled ''F''-distribution thus gives the posterior probability <math>p(\sigma^2_2 /\sigma_1^2 \mid s^2_1, s^2_2)</math>, where the observed sums <math>s^2_1</math> and <math>s^2_2</math> are now taken as known.
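The construction above can be checked empirically: building <math>X</math> from sums of squares of normal draws, exactly as defined, should reproduce an <math>F(d_1, d_2)</math> distribution. The following sketch (a minimal illustration, not part of the article; the variable names and the choice of <math>d_1 = 5</math>, <math>d_2 = 12</math> are assumptions for the demo) simulates this with NumPy and compares the sample mean against the theoretical mean <math>d_2/(d_2 - 2)</math> of the ''F''-distribution:

```python
import numpy as np

# Illustrative sketch: verify that X = (S1^2/d1/sigma1^2) / (S2^2/d2/sigma2^2)
# follows an F(d1, d2) distribution, as stated in the text.
# d1, d2, sigma1, sigma2, and n are arbitrary demo choices.
rng = np.random.default_rng(0)
d1, d2 = 5, 12
sigma1, sigma2 = 2.0, 3.0
n = 200_000  # number of simulated F variates

# S_i^2: sums of squares of d_i draws from N(0, sigma_i^2)
S1_sq = (rng.normal(0.0, sigma1, size=(n, d1)) ** 2).sum(axis=1)
S2_sq = (rng.normal(0.0, sigma2, size=(n, d2)) ** 2).sum(axis=1)

# s_i^2 = S_i^2 / d_i, then X = (s1^2 / sigma1^2) / (s2^2 / sigma2^2)
X = (S1_sq / d1 / sigma1**2) / (S2_sq / d2 / sigma2**2)

# Theoretical mean of F(d1, d2) is d2 / (d2 - 2) for d2 > 2
print(X.mean())  # should be close to 12 / 10 = 1.2
```

Note that the scale parameters <math>\sigma_1^2</math> and <math>\sigma_2^2</math> cancel out of <math>X</math> by construction, which is why the ''F''-distribution itself (without scaling) applies in the equal-variance ''F''-test described above.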