==Common misconceptions==

===Correlation and causality===
{{Main|Correlation does not imply causation}}
{{See also|Normally distributed and uncorrelated does not imply independent}}

The conventional dictum that "[[correlation does not imply causation]]" means that correlation cannot be used by itself to infer a causal relationship between the variables.<ref>{{cite journal | last=Aldrich | first=John | journal=Statistical Science | volume=10 | issue=4 | year=1995 | pages=364–376 | title=Correlations Genuine and Spurious in Pearson and Yule | jstor=2246135 | doi=10.1214/ss/1177009870 | doi-access=free }}</ref> This dictum should not be taken to mean that correlations cannot indicate the potential existence of causal relations. However, the causes underlying the correlation, if any, may be indirect and unknown, and high correlations also overlap with [[identity (mathematics)|identity]] relations ([[tautology (logic)|tautologies]]), where no causal process exists (e.g., between two variables measuring the same construct). Consequently, a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction).

A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health, or does good health lead to good mood, or both? Or does some other factor underlie both? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be.

===Simple linear correlations===
[[File:Anscombe's quartet 3.svg|thumb|325px|right|[[Anscombe's quartet]]: four sets of data with the same correlation of 0.816]]
The Pearson correlation coefficient indicates the strength of a ''linear'' relationship between two variables, but its value generally does not completely characterize their relationship.
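As a numerical illustration (not part of the article's sources), the following short Python/NumPy sketch shows how a near-zero Pearson coefficient can coexist with a perfect, fully deterministic nonlinear dependence:

```python
import numpy as np

# Illustrative example: Y is a deterministic function of X,
# yet the Pearson correlation between them is essentially zero,
# because the dependence is symmetric rather than linear.
x = np.linspace(-1.0, 1.0, 201)  # symmetric about 0
y = x ** 2                       # perfect (nonlinear) dependence

r = np.corrcoef(x, y)[0, 1]
print(r)  # negligibly small, despite Y being fully determined by X
```

Because the sample is symmetric about zero, the sample covariance of <math>X</math> and <math>X^2</math> vanishes, so <math>r \approx 0</math> even though knowing <math>X</math> determines <math>Y</math> exactly.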
In particular, if the [[conditional expectation|conditional mean]] of <math>Y</math> given <math>X</math>, denoted <math>\operatorname{E}(Y \mid X)</math>, is not linear in <math>X</math>, the correlation coefficient will not fully determine the form of <math>\operatorname{E}(Y \mid X)</math>. The adjacent image shows [[scatter plot]]s of [[Anscombe's quartet]], a set of four different pairs of variables created by [[Francis Anscombe]].<ref>{{cite journal | last=Anscombe | first=Francis J. | year=1973 | title=Graphs in statistical analysis | journal=The American Statistician | volume=27 | issue=1 | pages=17–21 | jstor=2682899 | doi=10.2307/2682899}}</ref> The four <math>y</math> variables have the same mean (7.5), variance (4.12), correlation (0.816) and regression line (<math display="inline">y=3+0.5x</math>). However, as can be seen on the plots, the distribution of the variables is very different. The first one (top left) seems to be distributed normally, and corresponds to what one would expect when considering two correlated variables following the assumption of normality. The second one (top right) is not distributed normally; while an obvious relationship between the two variables can be observed, it is not linear. In this case the Pearson correlation coefficient does not indicate that there is an exact functional relationship: only the extent to which that relationship can be approximated by a linear relationship. In the third case (bottom left), the linear relationship is perfect, except for one [[outlier]] which exerts enough influence to lower the correlation coefficient from 1 to 0.816. Finally, the fourth case (bottom right) shows another example in which one outlier is enough to produce a high correlation coefficient, even though the relationship between the two variables is not linear. These examples indicate that the correlation coefficient, as a [[summary statistic]], cannot replace visual examination of the data.
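The shared summary statistics of the quartet can be checked numerically. The following sketch (an illustration using NumPy, with the values from Anscombe's 1973 paper) computes the mean, correlation, and least-squares line for each of the four data sets:

```python
import numpy as np

# Anscombe's (1973) published data; x is shared by the first three sets.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4   = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
ys = [
    [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
]
xs = [x123, x123, x123, x4]

for x, y in zip(xs, ys):
    r = np.corrcoef(x, y)[0, 1]
    slope, intercept = np.polyfit(x, y, 1)  # least-squares linear fit
    print(f"mean={np.mean(y):.2f}  r={r:.3f}  fit: y={intercept:.2f}+{slope:.2f}x")
# All four sets give mean ≈ 7.50, r ≈ 0.816 and fit y ≈ 3.00 + 0.50x,
# despite their scatter plots looking entirely different.
```

Despite identical printed summaries, only a plot reveals the linear, curved, outlier-shifted, and vertical-cluster shapes of the four sets.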
The examples are sometimes said to demonstrate that the Pearson correlation assumes that the data follow a [[normal distribution]], but this is only partially correct.<ref name="thirteenways"/> The Pearson correlation can be accurately calculated for any distribution that has a finite [[covariance matrix]], which includes most distributions encountered in practice. However, the Pearson correlation coefficient (taken together with the sample mean and variance) is only a [[sufficient statistic]] if the data are drawn from a [[multivariate normal distribution]]. As a result, the Pearson correlation coefficient fully characterizes the relationship between variables if and only if the data are drawn from a multivariate normal distribution.
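The first point can be illustrated with a brief sketch (an assumed example using exponentially distributed data, not drawn from the article's sources): the coefficient is perfectly well defined for a skewed, clearly non-normal distribution with finite variance, even though in that case it is no longer a sufficient statistic for the joint distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Skewed, clearly non-normal marginals with finite variance:
n = 100_000
x = rng.exponential(scale=1.0, size=n)
y = x + rng.exponential(scale=1.0, size=n)  # linear dependence plus skewed noise

# The sample Pearson coefficient is still computable and consistent;
# here the population value is corr(X, X + E) = 1/sqrt(2) ≈ 0.707.
r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))  # close to 0.707
```

Since <math>\operatorname{cov}(X, X+E) = \operatorname{var}(X) = 1</math> and <math>\operatorname{var}(X+E) = 2</math> for independent unit-scale exponentials, the population correlation is <math>1/\sqrt{2}</math>, which the sample estimate recovers.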