==Spearman's law of diminishing returns==
A number of researchers have suggested that the proportion of variation accounted for by ''g'' may not be uniform across all subgroups within a population. '''Spearman's law of diminishing returns''' ('''SLODR'''), also termed the ''cognitive ability differentiation hypothesis'', predicts that the positive correlations among different cognitive abilities are weaker among more intelligent subgroups of individuals. More specifically, SLODR predicts that the ''g'' factor will account for a smaller proportion of individual differences in cognitive test scores at higher scores on the ''g'' factor.

SLODR was originally proposed in 1927 by [[Charles Spearman]],<ref>Spearman, C. (1927). ''The abilities of man''. New York: MacMillan.</ref> who reported that the average correlation between 12 cognitive ability tests was .466 in 78 normal children and .782 in 22 "defective" children. Detterman and Daniel rediscovered this phenomenon in 1989.<ref>{{cite journal | last1 = Detterman | first1 = D.K. | last2 = Daniel | first2 = M.H. | year = 1989 | title = Correlations of mental tests with each other and with cognitive variables are highest for low IQ groups | journal = Intelligence | volume = 13 | issue = 4 | pages = 349–359 | doi = 10.1016/s0160-2896(89)80007-8 }}</ref> They reported that for subtests of both the [[Wechsler Adult Intelligence Scale|WAIS]] and the [[Wechsler Intelligence Scale for Children|WISC]], subtest intercorrelations decreased monotonically with ability group, from an average intercorrelation of approximately .7 among individuals with IQs below 78 to approximately .4 among individuals with IQs above 122.<ref>Deary & Pagliari 1991</ref>

SLODR has been replicated in a variety of child and adult samples measured with broad arrays of cognitive tests. The most common approach has been to divide individuals into multiple ability groups using an observable proxy for their general intellectual ability, and then either to compare the average intercorrelation among the subtests across the different groups or to compare the proportion of variation accounted for by a single common factor across the different groups.<ref name="deary et al">Deary et al. 1996</ref> However, as both Deary et al. (1996)<ref name="deary et al"/> and Tucker-Drob (2009)<ref name="Tucker-Drob, E. M. 2009">Tucker-Drob 2009</ref> have pointed out, dividing the continuous distribution of intelligence into an arbitrary number of discrete ability groups is less than ideal for examining SLODR.

Tucker-Drob (2009)<ref name="Tucker-Drob, E. M. 2009"/> extensively reviewed the literature on SLODR and the various methods by which it had previously been tested, and proposed that SLODR could be most appropriately captured by fitting a common factor model that allows the relations between the factor and its indicators to be nonlinear (a simple illustrative form of such a model is sketched at the end of this section). He applied such a factor model to nationally representative data on children and adults in the United States and found consistent evidence for SLODR. For example, he found that a general factor accounted for approximately 75% of the variation in seven different cognitive abilities among very low-IQ adults, but only approximately 30% of the variation in those abilities among very high-IQ adults.

A 2017 meta-analytic study by Blum and Holling<ref>{{cite journal | last1 = Blum | first1 = D. | last2 = Holling | first2 = H. | year = 2017 | title = Spearman's Law of Diminishing Returns. A meta-analysis | journal = Intelligence | volume = 65 | pages = 60–66 | doi = 10.1016/j.intell.2017.07.004 }}</ref> also provided support for the differentiation hypothesis. Unlike most research on the topic, this work made it possible to study ability and age as continuous predictors of ''g'' saturation, rather than merely comparing lower- versus higher-ability or younger versus older groups of test-takers. The results demonstrated that the mean correlation and ''g'' loadings of cognitive ability tests decrease with increasing ability, yet increase with respondent age. SLODR, as described by Spearman, was thus confirmed by a decrease in ''g'' saturation as a function of IQ, as well as an increase in ''g'' saturation from middle age to senescence. Specifically, for samples whose mean intelligence is two standard deviations (i.e., 30 IQ points) higher, the expected mean correlation is lower by approximately .15 (see the worked illustration below). The question remains whether a difference of this magnitude could result in a greater apparent factorial complexity when cognitive data are factored for a higher-ability sample as opposed to a lower-ability sample. It seems likely that greater factor dimensionality would tend to be observed at higher ability levels, but the magnitude of this effect (i.e., how much more likely and how many more factors) remains uncertain.
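The nonlinear factor model underlying Tucker-Drob's approach can be illustrated as follows. In a conventional linear common factor model, each test score is a linear function of ''g''. One simple way to allow differentiation (an illustrative parameterization; Tucker-Drob's exact specification may differ) is to let each test's ''g'' loading itself change linearly with the level of ''g'':

:<math>y_{ij} = \nu_j + \left(\lambda_j + \beta_j g_i\right) g_i + \varepsilon_{ij},</math>

where <math>y_{ij}</math> is person <math>i</math>'s score on test <math>j</math>, <math>\nu_j</math> is an intercept, <math>\lambda_j</math> is the loading at the mean of <math>g</math>, and <math>\varepsilon_{ij}</math> is a test-specific residual. A negative <math>\beta_j</math> means the loading declines as <math>g</math> increases, so the general factor accounts for a smaller share of test-score variance among higher-ability individuals, which is the pattern SLODR predicts.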
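As a worked illustration of the meta-analytic estimate (the baseline correlation of .50 is hypothetical, chosen only for concreteness): if a test battery shows a mean intertest correlation of about .50 in samples with mean IQ 100, the estimate above implies an expected mean correlation of about

:<math>\bar{r} \approx .50 - .15 = .35</math>

in comparable samples with mean IQ 130.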