{{Short description|Statistical test}}
{{DISPLAYTITLE:''G''-test}}
In [[statistics]], '''''G''-tests''' are [[likelihood ratio test|likelihood-ratio]] or [[maximum likelihood]] [[statistical significance]] tests that are increasingly being used in situations where [[chi-squared test]]s were previously recommended.<ref>{{cite book|author=McDonald, J.H.|year=2014|title=Handbook of Biological Statistics|location=Baltimore, Maryland|publisher=Sparky House Publishing|edition=Third|chapter=G–test of goodness-of-fit|chapter-url=http://www.biostathandbook.com/gtestgof.html|pages=53–58}}</ref>

==Formulation==
The general formula for ''G'' is
:<math> G = 2\sum_{i} {O_{i} \cdot \ln\left(\frac{O_i}{E_i}\right)}, </math>
where <math display="inline">O_i \geq 0</math> is the observed count in a cell, <math display="inline">E_i > 0</math> is the expected count under the [[null hypothesis]], <math display="inline">\ln</math> denotes the [[natural logarithm]], and the sum is taken over all non-empty cells. Under the null hypothesis, the resulting <math display="inline">G</math> is approximately [[Chi-squared distribution|chi-squared distributed]]. Furthermore, the total observed count should be equal to the total expected count:
<math display="block">\sum_i O_i = \sum_i E_i = N</math>
where <math display="inline">N</math> is the total number of observations.
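The statistic can be computed directly from this formula. The following minimal sketch (hypothetical counts; Python with NumPy and SciPy assumed) illustrates the computation and the corresponding chi-squared ''p''-value:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2

# Hypothetical observed counts and null-hypothesis expected counts.
# Both must sum to the same total N, and every expected count must be > 0.
observed = np.array([250, 300, 214, 236])
expected = np.array([275, 275, 225, 225])
assert observed.sum() == expected.sum()

# G = 2 * sum_i O_i * ln(O_i / E_i); cells with zero observed count contribute 0.
nonzero = observed > 0
G = 2.0 * np.sum(observed[nonzero] * np.log(observed[nonzero] / expected[nonzero]))

# Compare against a chi-squared distribution with (number of cells - 1)
# degrees of freedom, as in a simple goodness-of-fit test.
df = observed.size - 1
p_value = chi2.sf(G, df)
print(G, p_value)
</syntaxhighlight>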
===Derivation===
We can derive the value of the ''G''-test from the [[Likelihood-ratio test|log-likelihood ratio test]] where the underlying model is a multinomial model.

Suppose we had a sample <math display="inline">x = (x_1, \ldots, x_m)</math> where each <math display="inline">x_i</math> is the number of times that an object of type <math display="inline">i</math> was observed. Furthermore, let <math display="inline">n = \sum_{i=1}^m x_i</math> be the total number of objects observed. If we assume that the underlying model is multinomial, then the test statistic is defined by
<math display="block">\ln \left( \frac{L(\tilde{\theta}|x)}{L(\hat{\theta}|x)} \right) = \ln \left( \frac{\prod_{i=1}^m \tilde{\theta}_i^{x_i}}{\prod_{i=1}^m \hat{\theta}_i^{x_i}} \right)</math>
where <math display="inline">\tilde{\theta}</math> is the parameter vector under the null hypothesis and <math>\hat{\theta}</math> is the [[maximum likelihood estimate]] (MLE) of the parameters given the data. Recall that for the multinomial model, the MLE <math display="inline">\hat{\theta}_i</math> given some data is defined by
<math display="block">\hat{\theta}_i = \frac{x_i}{n}</math>
Furthermore, we may represent each null hypothesis parameter <math>\tilde{\theta}_i</math> as
<math display="block">\tilde{\theta}_i = \frac{e_i}{n}</math>
where <math display="inline">e_i</math> is the expected count of type <math display="inline">i</math> under the null hypothesis. Thus, by substituting the representations of <math display="inline">\tilde{\theta}</math> and <math display="inline">\hat{\theta}</math> in the log-likelihood ratio, the equation simplifies to
<math display="block">\begin{align} \ln \left( \frac{L(\tilde{\theta}|x)}{L(\hat{\theta}|x)} \right) &= \ln \prod_{i=1}^m \left(\frac{e_i}{x_i}\right)^{x_i} \\ &= \sum_{i=1}^m x_i \ln\left(\frac{e_i}{x_i}\right) \end{align}</math>
Relabel the variables <math display="inline">e_i</math> with <math display="inline">E_i</math> and <math display="inline">x_i</math> with <math display="inline">O_i</math>. Finally, multiply by a factor of <math display="inline">-2</math> (used to make the ''G''-test formula [[#Relation to the chi-squared test|asymptotically equivalent to the Pearson's chi-squared test formula]]) to achieve the form
<math display="block">\begin{alignat}{2} G & = & \; -2 \sum_{i=1}^m O_i \ln\left(\frac{E_i}{O_i}\right) \\ & = & 2 \sum_{i=1}^m O_i \ln\left(\frac{O_i}{E_i}\right) \end{alignat}</math>

Heuristically, one can imagine <math>~ O_i ~</math> as continuous and approaching zero, in which case <math>~ O_i \ln O_i \to 0 ~,</math> and terms with zero observations can simply be dropped. However, the ''expected'' count must be strictly greater than zero in every cell (<math>~ E_i > 0 ~ \forall \, i ~</math>) for the method to apply.

==Distribution and use==
Given the null hypothesis that the observed frequencies result from random sampling from a distribution with the given expected frequencies, the [[probability distribution|distribution]] of ''G'' is approximately a [[chi-squared distribution]], with the same number of [[degrees of freedom (statistics)|degrees of freedom]] as in the corresponding chi-squared test.

For very small samples the [[multinomial test]] for goodness of fit, [[Fisher's exact test]] for contingency tables, or even Bayesian hypothesis selection are preferable to the ''G''-test.<ref name=McDonald-2014-HBS>{{cite book |last=McDonald |first=John H. |year=2014 |title=Handbook of Biological Statistics |location=Baltimore, MD |publisher=Sparky House Publishing |edition=3rd |chapter=Small numbers in chi-square and ''G''–tests |chapter-url=http://www.biostathandbook.com/small.html |pages=86–89}}</ref> McDonald recommends always using an exact test (exact test of goodness-of-fit, [[Fisher's exact test]]) if the total sample size is less than 1 000.

:There is nothing magical about a sample size of 1 000, it's just a nice round number that is well within the range where an exact test, chi-square test, and ''G''–test will give almost identical {{mvar|p}} values. Spreadsheets, web-page calculators, and [[Statistical Analysis System|SAS]] shouldn't have any problem doing an exact test on a sample size of 1 000.
:::: — John H. McDonald (2014)<ref name=McDonald-2014-HBS/>

''G''-tests have been recommended at least since the 1981 edition of ''Biometry'', a statistics textbook by [[Robert R. Sokal]] and [[F. James Rohlf]].<ref>{{cite book |last1=Sokal |first1=R. R. |last2=Rohlf |first2=F. J. |year=1981 |title=Biometry: The Principles and Practice of Statistics in Biological Research |location=New York |publisher=Freeman |edition=Second |isbn=978-0-7167-2411-7 |url-access=registration |url=https://archive.org/details/biometryprincipl00soka_0 }}</ref>

==Relation to other metrics==

===Relation to the chi-squared test===
The commonly used [[chi-squared test]]s for goodness of fit to a distribution and for independence in [[contingency table]]s are in fact approximations of the [[log-likelihood ratio]] on which the ''G''-tests are based.<ref>{{cite arXiv |last=Hoey |first=J. |year=2012 |eprint=1206.4881 |title=The Two-Way Likelihood Ratio (G) Test and Comparison to Two-Way Chi-Squared Test |class=stat.ME }}</ref> The general formula for Pearson's chi-squared test statistic is
:<math> \chi^2 = \sum_{i} {\frac{\left(O_i - E_i\right)^2}{E_i}} ~.</math>
The approximation of ''G'' by chi-squared is obtained by a second-order [[Taylor series|Taylor expansion]] of the natural logarithm around 1 (see [[#Derivation (chi-squared)|the derivation below]]).
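The closeness of the two statistics can be illustrated with [[SciPy]], whose <code>power_divergence</code> function returns the ''G'' statistic for <code>lambda_=0</code> and Pearson's <math display="inline">\chi^2</math> for <code>lambda_=1</code>. A minimal sketch, using the same hypothetical counts as in the example above:

<syntaxhighlight lang="python">
from scipy.stats import power_divergence

# Hypothetical observed counts and expected counts under the null hypothesis.
observed = [250, 300, 214, 236]
expected = [275, 275, 225, 225]

# lambda_=0 gives the log-likelihood ratio (G) statistic ...
g_stat, g_p = power_divergence(observed, f_exp=expected, lambda_=0)

# ... while lambda_=1 gives Pearson's chi-squared statistic.
chi2_stat, chi2_p = power_divergence(observed, f_exp=expected, lambda_=1)

# When observed counts are close to expected counts, the two statistics
# (and their p-values) are nearly identical.
print(g_stat, g_p)
print(chi2_stat, chi2_p)
</syntaxhighlight>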
We have <math> G \approx \chi^2 </math> when the observed counts <math>~ O_i ~</math> are close to the expected counts <math>~ E_i ~.</math> When this difference is large, however, the <math>~ \chi^2 ~</math> approximation begins to break down. Here, the effects of outliers in data will be more pronounced, and this explains why <math>~ \chi^2 ~</math> tests fail in situations with little data.

For samples of a reasonable size, the ''G''-test and the chi-squared test will lead to the same conclusions. However, the approximation to the theoretical chi-squared distribution for the ''G''-test is better than for the [[Pearson's chi-squared test]].<ref>{{cite book |last1=Harremoës |first1=P. |last2=Tusnády |first2=G. |year=2012 |arxiv=1202.1125 |chapter=Information divergence is more chi squared distributed than the chi squared statistic |title=Proceedings ISIT 2012 |pages=538–543 |bibcode=2012arXiv1202.1125H }}</ref> In cases where <math>~ O_i > 2 \cdot E_i ~</math> for some cell, the ''G''-test is always better than the chi-squared test.{{citation needed|date=August 2011}}

For testing goodness-of-fit the ''G''-test is infinitely more [[Efficiency (statistics)|efficient]] than the chi-squared test in the sense of Bahadur, but the two tests are equally efficient in the sense of Pitman or in the sense of Hodges and Lehmann.<ref>{{cite journal |last1=Quine |first1=M. P. |last2=Robinson |first2=J. |year=1985 |title=Efficiencies of chi-square and likelihood ratio goodness-of-fit tests |journal=[[Annals of Statistics]] |volume=13 |issue=2 |pages=727–742 |doi=10.1214/aos/1176349550 |doi-access=free }}</ref><ref>{{cite journal |last1=Harremoës |first1=P. |last2=Vajda |first2=I. |year=2008 |title=On the Bahadur-efficient testing of uniformity by means of the entropy |journal=[[IEEE Transactions on Information Theory]] |volume=54 |pages=321–331 |doi=10.1109/tit.2007.911155 |citeseerx=10.1.1.226.8051 |s2cid=2258586 }}</ref>

====Derivation (chi-squared)====
Consider
:<math> G = 2\sum_{i} {O_{i} \ln\left(\frac{O_i}{E_i}\right)} ~,</math>
and let <math>O_i = E_i + \delta_i</math> with <math>\sum_i \delta_i = 0 ~,</math> so that the total number of counts remains the same. Upon substitution we find,
:<math> G = 2\sum_{i} {(E_i + \delta_i) \ln \left(1+\frac{\delta_i}{E_i}\right)} ~.</math>
A Taylor expansion of the logarithm around 1 can be performed using <math> \ln(1 + x) = x - \frac{1}{2}x^2 + \mathcal{O}(x^3) </math>. The result is
:<math> G = 2\sum_{i} (E_i + \delta_i) \left(\frac{\delta_i}{E_i} - \frac{1}{2}\frac{\delta_i^2}{E_i^2} + \mathcal{O}\left(\delta_i^3\right) \right) ~,</math>
and distributing terms we find,
:<math> G = 2\sum_{i} \left( \delta_i + \frac{1}{2}\frac{\delta_i^2}{E_i} + \mathcal{O}\left(\delta_i^3\right) \right) ~.</math>
Now, using the fact that <math>~ \sum_{i} \delta_i = 0 ~</math> and <math>~ \delta_i = O_i - E_i ~,</math> we can write the result,
:<math>~ G \approx \sum_{i} \frac{\left(O_i-E_i\right)^2}{E_i} ~.</math>

===Relation to Kullback–Leibler divergence===
The ''G''-test statistic is proportional to the [[Kullback–Leibler divergence]] of the theoretical distribution from the empirical distribution:
:<math> \begin{align} G &= 2\sum_{i} {O_{i} \cdot \ln\left(\frac{O_i}{E_i}\right)} = 2 N \sum_{i} {o_i \cdot \ln\left(\frac{o_i}{e_i}\right)} \\ &= 2 N \, D_{\mathrm{KL}}(o\|e), \end{align}</math>
where ''N'' is the total number of observations and <math>o_i</math> and <math>e_i</math> are the empirical and theoretical frequencies, respectively.
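This proportionality can be checked numerically. In the sketch below (hypothetical counts again), SciPy's <code>entropy</code> function computes the Kullback–Leibler divergence <math display="inline">D_{\mathrm{KL}}(o\|e)</math> in nats, so <math display="inline">2 N \, D_{\mathrm{KL}}(o\|e)</math> reproduces the ''G'' statistic:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import entropy, power_divergence

# Hypothetical observed counts and expected counts under the null hypothesis.
observed = np.array([250, 300, 214, 236])
expected = np.array([275, 275, 225, 225])
N = observed.sum()

# Empirical and theoretical relative frequencies.
o = observed / N
e = expected / N

# scipy.stats.entropy(p, q) returns the Kullback-Leibler divergence D_KL(p || q)
# in nats, so G = 2 * N * D_KL(o || e).
g_from_kl = 2 * N * entropy(o, e)

# The same value obtained from the G-test statistic directly.
g_direct, _ = power_divergence(observed, f_exp=expected, lambda_=0)

print(g_from_kl, g_direct)  # the two values agree up to floating-point error
</syntaxhighlight>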
===Relation to mutual information===
For analysis of [[contingency table]]s the value of ''G'' can also be expressed in terms of [[mutual information]].

Let
:<math>N = \sum_{ij}{O_{ij}} \; </math> , <math> \; \pi_{ij} = \frac{O_{ij}}{N} \;</math> , <math>\; \pi_{i.} = \frac{\sum_j O_{ij}}{N} \; </math>, and <math>\; \pi_{.j} = \frac{\sum_i O_{ij}}{N} \;</math>.
Then ''G'' can be expressed in several alternative forms:
:<math> G = 2 \cdot N \cdot \sum_{ij}{\pi_{ij} \left( \ln(\pi_{ij})-\ln(\pi_{i.})-\ln(\pi_{.j}) \right)} ,</math>
:<math> G = 2 \cdot N \cdot \left[ H(r) + H(c) - H(r,c) \right] , </math>
:<math> G = 2 \cdot N \cdot \operatorname{MI}(r,c) \, ,</math>
where the [[Entropy (information theory)|entropy]] of a discrete random variable <math>X \,</math> is defined as
:<math> H(X) = - {\sum_{x \in \text{Supp}(X)} p(x) \log p(x)} \, ,</math>
and where
:<math> \operatorname{MI}(r,c)= H(r) + H(c) - H(r,c) \, </math>
is the [[mutual information]] between the row vector ''r'' and the column vector ''c'' of the contingency table.

It can also be shown{{citation needed|date=August 2011}} that the inverse document frequency weighting commonly used for text retrieval is an approximation of ''G'' applicable when the row sum for the query is much smaller than the row sum for the remainder of the corpus.

Similarly, the result of Bayesian inference applied to a choice of single multinomial distribution for all rows of the contingency table taken together versus the more general alternative of a separate multinomial per row produces results very similar to the ''G'' statistic.{{citation needed|date=August 2011}}

==Application==
* The [[McDonald–Kreitman test]] in [[statistical genetics]] is an application of the ''G''-test.
* Dunning<ref>Dunning, Ted (1993). "[https://www.aclweb.org/anthology/J93-1003 Accurate Methods for the Statistics of Surprise and Coincidence] {{Webarchive|url=https://web.archive.org/web/20111215212356/http://acl.ldc.upenn.edu/J/J93/J93-1003.pdf |date=2011-12-15 }}", ''[[Computational Linguistics (journal)|Computational Linguistics]]'', Volume 19, issue 1 (March, 1993).</ref> introduced the test to the [[computational linguistics]] community, where it is now widely used.
* The R-scape program (used by [[Rfam]]) uses the ''G''-test to detect covariation between RNA sequence alignment positions.<ref>{{cite journal |last1=Rivas |first1=Elena |title=RNA structure prediction using positive and negative evolutionary information |journal=PLOS Computational Biology |date=30 October 2020 |volume=16 |issue=10 |pages=e1008387 |doi=10.1371/journal.pcbi.1008387 |doi-access=free |pmc=7657543 }}</ref>

==Statistical software==
* In [[R programming language|R]], fast implementations can be found in the [http://cran.r-project.org/package=AMR AMR] and [http://cran.r-project.org/package=Rfast Rfast] packages. For the AMR package, the command is <code>g.test</code>, which works exactly like <code>chisq.test</code> from base R. R also has the [http://rforge.net/doc/packages/Deducer/likelihood.test.html likelihood.test] {{Webarchive|url=https://web.archive.org/web/20131216095329/http://rforge.net/doc/packages/Deducer/likelihood.test.html |date=2013-12-16 }} function in the [http://rforge.net/doc/packages/Deducer/html/00Index.html Deducer] {{Webarchive|url=https://web.archive.org/web/20120309105120/http://www.rforge.net/doc/packages/Deducer/html/00Index.html |date=2012-03-09 }} package.
:'''Note:''' Fisher's ''G''-test in the [https://cran.r-project.org/web/packages/GeneCycle/ GeneCycle Package] of the [[R programming language]] (<code>fisher.g.test</code>) does not implement the ''G''-test as described in this article, but rather Fisher's exact test of Gaussian white-noise in a time series.<ref>{{cite journal |last1=Fisher |first1=R. A. |year=1929 |title=Tests of significance in harmonic analysis |journal=Proceedings of the Royal Society of London A |volume=125 |issue=796 |pages=54–59 |doi=10.1098/rspa.1929.0151 |bibcode=1929RSPSA.125...54F |doi-access=free |hdl=2440/15201 |hdl-access=free }}</ref>
* Another [[R programming language|R]] implementation to compute the ''G'' statistic and corresponding p-values is provided by the R package [http://cran.r-project.org/package=entropy entropy]. The commands are <code>Gstat</code> for the standard ''G'' statistic and the associated p-value, and <code>Gstatindep</code> for the ''G'' statistic applied to comparing joint and product distributions to test independence.
* In [[SAS System|SAS]], one can conduct a ''G''-test by applying the <code>/chisq</code> option after the <code>proc freq</code> statement.<ref>[http://www.biostathandbook.com/gtestind.html G-test of independence], [http://www.biostathandbook.com/gtestgof.html G-test for goodness-of-fit] in Handbook of Biological Statistics, University of Delaware. (pp. 46–51, 64–69 in: McDonald, J. H. (2009) ''Handbook of Biological Statistics'' (2nd ed.). Sparky House Publishing, Baltimore, Maryland.)</ref>
* In [[Stata]], one can conduct a ''G''-test by applying the <code>lr</code> option after the <code>tabulate</code> command.
* In [[Java (programming language)|Java]], use <code>org.apache.commons.math3.stat.inference.GTest</code>.<ref>[https://commons.apache.org/proper/commons-math/javadocs/api-3.3/org/apache/commons/math3/stat/inference/GTest.html org.apache.commons.math3.stat.inference.GTest]</ref>
* In [[Python (programming language)|Python]], use <code>scipy.stats.power_divergence</code> with <code>lambda_=0</code>.<ref>{{Cite web|url=https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.power_divergence.html#scipy.stats.power_divergence|title=Scipy.stats.power_divergence — SciPy v1.7.1 Manual}}</ref>

==References==
{{reflist|25em}}

==External links==
* [http://ucrel.lancs.ac.uk/llwizard.html G<sup>2</sup>/Log-likelihood calculator]

{{Statistics}}

{{DEFAULTSORT:G-test}}
[[Category:Statistical tests for contingency tables]]