=== For maximum likelihood estimation of a parameter ===
Location tests are the most familiar ''Z''-tests. Another class of ''Z''-tests arises in [[maximum likelihood]] estimation of the [[parameter]]s in a [[parametric statistics|parametric]] [[statistical model]]. Maximum likelihood estimates are approximately normal under certain conditions, and their asymptotic variance can be calculated in terms of the [[Fisher information]]. The maximum likelihood estimate divided by its standard error can be used as a test statistic for the null hypothesis that the population value of the parameter equals zero. More generally, if <math>\hat{\theta}</math> is the maximum likelihood estimate of a parameter θ, and θ<sub>0</sub> is the value of θ under the null hypothesis,

:<math>\frac{\hat{\theta}-\theta_0}{{\rm SE}(\hat{\theta})}</math>

can be used as a ''Z''-test statistic.

When using a ''Z''-test for maximum likelihood estimates, it is important to be aware that the normal approximation may be poor if the sample size is not sufficiently large. Although there is no simple, universal rule stating how large the sample size must be to use a ''Z''-test, [[Monte Carlo method|simulation]] can give a good idea as to whether a ''Z''-test is appropriate in a given situation.

''Z''-tests are employed whenever it can be argued that a test statistic follows a normal distribution under the null hypothesis of interest. Many [[non-parametric statistics|non-parametric]] test statistics, such as [[U statistic]]s, are approximately normal for large enough sample sizes, and hence are often performed as ''Z''-tests.

=== Comparing the proportions of two binomials ===
{{Main|Two-proportion Z-test}}
The '''''Z''-test for comparing two proportions''' is a statistical method used to evaluate whether the proportion of a certain characteristic differs significantly between two independent samples. The test relies on the fact that the [[Binomial distribution#Estimation of parameters|sample proportion]] (the average of observations drawn from a [[Bernoulli distribution]]) is [[asymptotic distribution|asymptotically]] [[normal distribution|normal]] under the [[central limit theorem]], which allows a ''Z''-test to be constructed.

The ''z''-statistic for comparing two proportions is computed as

:<math>z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}</math>

where:
* <math>\hat{p}_1</math> = sample proportion in the first sample
* <math>\hat{p}_2</math> = sample proportion in the second sample
* <math>n_1</math> = size of the first sample
* <math>n_2</math> = size of the second sample
* <math>\hat{p}</math> = pooled proportion, calculated as <math>\hat{p} = \frac{x_1 + x_2}{n_1 + n_2}</math>, where <math>x_1</math> and <math>x_2</math> are the counts of successes in the two samples.

The [[confidence interval]] for the difference between the two proportions, based on the definitions above, is

:<math>(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2} \sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}</math>

where:
* <math>z_{\alpha/2}</math> is the critical value of the standard normal distribution (e.g., 1.96 for a 95% confidence level).
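The calculation above can be illustrated with a short Python sketch; the success counts, sample sizes, and 95% confidence level below are hypothetical values chosen only for the example.

<syntaxhighlight lang="python">
from math import sqrt

# Hypothetical example data: successes and sample sizes for the two groups
x1, n1 = 120, 1000   # successes and size of the first sample
x2, n2 = 150, 1000   # successes and size of the second sample

p1_hat = x1 / n1              # sample proportion in the first sample
p2_hat = x2 / n2              # sample proportion in the second sample
p_hat = (x1 + x2) / (n1 + n2) # pooled proportion

# z-statistic using the pooled standard error, as in the formula above
z = (p1_hat - p2_hat) / sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))

# 95% confidence interval for the difference, using the unpooled standard error
z_crit = 1.96  # critical value z_{alpha/2} for a 95% confidence level
se_diff = sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)
ci = ((p1_hat - p2_hat) - z_crit * se_diff,
      (p1_hat - p2_hat) + z_crit * se_diff)

print(z, ci)
</syntaxhighlight>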
The minimum detectable effect (MDE) when using the (two-sided) ''Z''-test for comparing two proportions, incorporating the critical values for <math>\alpha</math> and <math>1-\beta</math> and the standard errors of the proportions, is:<ref>{{cite web |author=COOLSerdash |title=Two proportion sample size calculation |website=Cross Validated |date=2023-04-14 |url=https://stats.stackexchange.com/q/612894}}</ref><ref>{{cite book |last1=Chow |first1=S.-C. |last2=Shao |first2=J. |last3=Wang |first3=H. |last4=Lokhnygina |first4=Y. |year=2018 |title=Sample Size Calculations in Clinical Research |edition=3rd |publisher=CRC Press}}</ref>

:<math> \text{MDE} = |p_1 - p_2| = z_{1-\alpha/2} \sqrt{p_0(1-p_0)\left(\frac{1}{n_1} + \frac{1}{n_2}\right)} + z_{1-\beta} \sqrt{\frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}} </math>

where:
* <math>z_{1-\alpha/2}</math> is the critical value for the significance level.
* <math>z_{1-\beta}</math> is the quantile for the desired power.
* <math>p_0</math> is the common proportion under the null hypothesis (<math>p_0 = p_1 = p_2</math> when the null is assumed to be correct).
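The MDE formula can likewise be evaluated numerically. The Python sketch below uses hypothetical sample sizes and anticipated proportions, and takes <math>p_0</math> as the average of the two anticipated proportions, which is one common but not universal choice.

<syntaxhighlight lang="python">
from math import sqrt
from statistics import NormalDist

# Hypothetical planning inputs (assumptions for this example)
alpha, power = 0.05, 0.80   # significance level and desired power
n1, n2 = 1000, 1000         # planned sample sizes
p1, p2 = 0.12, 0.15         # anticipated proportions in the two groups
p0 = (p1 + p2) / 2          # common proportion under the null (one common choice)

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # z_{1-alpha/2}
z_beta = NormalDist().inv_cdf(power)           # z_{1-beta}

# Minimum detectable effect, following the formula above
mde = (z_alpha * sqrt(p0 * (1 - p0) * (1 / n1 + 1 / n2))
       + z_beta * sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2))

print(mde)
</syntaxhighlight>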