Origin of language
=== From-where-to-what theory === [[File:From where to what.png|thumb|An illustration of the "from where to what" model of language evolution]] The "from where to what" model of language evolution is derived primarily from the organization of [[language processing in the brain]] into two structures: the auditory dorsal stream and the auditory ventral stream.<ref>{{Cite journal |last=Poliva |first=Oren |date=20 September 2017 |title=From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans |journal=F1000Research |volume=4 |page=67 |doi=10.12688/f1000research.6175.3 |issn=2046-1402 |pmc=5600004 |pmid=28928931 |doi-access=free}}</ref><ref>{{Cite journal |last=Poliva |first=Oren |date=30 June 2016 |title=From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language |journal=Frontiers in Neuroscience |volume=10 |page=307 |doi=10.3389/fnins.2016.00307 |issn=1662-453X |pmc=4928493 |pmid=27445676 |doi-access=free}}</ref> It hypothesizes seven stages of language evolution (see illustration). In this model, speech originated as contact calls exchanged between mothers and their offspring to locate one another when separated (illustration part 1). These calls could be modified with intonations to express a higher or lower level of distress (illustration part 2). The use of two types of contact calls enabled the first question-answer conversation: the child would emit a low-level distress call to express a desire to interact with an object, and the mother would respond with either another low-level distress call (expressing approval of the interaction) or a high-level distress call (expressing disapproval) (illustration part 3). Over time, improved use of intonations and vocal control led to the invention of unique calls (phonemes) associated with distinct objects (illustration part 4). 
At first, children learned the calls (phonemes) from their parents by imitating their lip movements (illustration part 5). Eventually, infants became able to encode all the calls (phonemes) into long-term memory. Consequently, mimicry via lip-reading was limited to infancy, and older children learned new calls through mimicry without lip-reading (illustration part 6). Once individuals became capable of producing sequences of calls, multi-syllabic words emerged, increasing the size of their vocabulary (illustration part 7). The use of words, composed of sequences of syllables, provided the infrastructure for communicating with sequences of words (i.e. sentences). The theory's name is derived from the two auditory streams, which are both found in the brains of humans and other primates. The auditory ventral stream is responsible for sound recognition, and so it is referred to as the auditory ''what'' stream.<ref>{{Cite journal |last=Scott |first=S. K. |date=1 December 2000 |title=Identification of a pathway for intelligible speech in the left temporal lobe |journal=Brain |volume=123 |issue=12 |pages=2400–2406 |doi=10.1093/brain/123.12.2400 |issn=1460-2156 |pmc=5630088 |pmid=11099443}}</ref><ref>{{Cite journal |last1=Davis |first1=Matthew H. |last2=Johnsrude |first2=Ingrid S. |date=15 April 2003 |title=Hierarchical Processing in Spoken Language Comprehension |journal=The Journal of Neuroscience |volume=23 |issue=8 |pages=3423–3431 |doi=10.1523/jneurosci.23-08-03423.2003 |issn=0270-6474 |pmc=6742313 |pmid=12716950 |doi-access=free}}</ref><ref>{{Cite journal |last1=Petkov |first1=Christopher I. |last2=Kayser |first2=Christoph |last3=Steudel |first3=Thomas |last4=Whittingstall |first4=Kevin |last5=Augath |first5=Mark |last6=Logothetis |first6=Nikos K. 
|date=10 February 2008 |title=A voice region in the monkey brain |journal=Nature Neuroscience |volume=11 |issue=3 |pages=367–374 |doi=10.1038/nn2043 |issn=1097-6256 |pmid=18264095 |s2cid=5505773}}</ref> In primates, the auditory dorsal stream is responsible for [[sound localization]], and thus it is called the auditory ''where'' stream. Only in humans (in the left hemisphere) is it also responsible for other processes associated with language use and acquisition, such as speech repetition and production, integration of phonemes with their lip movements, perception and production of intonations, phonological [[long-term memory]] (long-term memory storage of the sounds of words), and phonological working memory (the temporary storage of the sounds of words).<ref>{{Cite journal |last1=Buchsbaum |first1=Bradley R. |last2=Baldo |first2=Juliana |last3=Okada |first3=Kayoko |last4=Berman |first4=Karen F. |last5=Dronkers |first5=Nina |last6=D'Esposito |first6=Mark |last7=Hickok |first7=Gregory |date=December 2011 |title=Conduction aphasia, sensory-motor integration, and phonological short-term memory – An aggregate analysis of lesion and fMRI data |journal=Brain and Language |volume=119 |issue=3 |pages=119–128 |doi=10.1016/j.bandl.2010.12.001 |issn=0093-934X |pmc=3090694 |pmid=21256582}}</ref><ref>{{Cite journal |last1=Warren |first1=Jane E. |last2=Wise |first2=Richard J.S. |last3=Warren |first3=Jason D. 
|date=December 2005 |title=Sounds do-able: auditory–motor transformations and the posterior temporal plane |journal=Trends in Neurosciences |volume=28 |issue=12 |pages=636–643 |doi=10.1016/j.tins.2005.09.010 |issn=0166-2236 |pmid=16216346 |s2cid=36678139}}</ref><ref>{{Cite journal |last=Campbell |first=Ruth |date=12 March 2008 |title=The processing of audio-visual speech: empirical and neural bases |journal=Philosophical Transactions of the Royal Society of London B: Biological Sciences |volume=363 |issue=1493 |pages=1001–1010 |doi=10.1098/rstb.2007.2155 |issn=0962-8436 |pmc=2606792 |pmid=17827105}}</ref><ref>{{Cite journal |last1=Kayser |first1=Christoph |last2=Petkov |first2=Christopher I. |last3=Logothetis |first3=Nikos K. |date=December 2009 |title=Multisensory interactions in primate auditory cortex: fMRI and electrophysiology |journal=Hearing Research |volume=258 |issue=1–2 |pages=80–88 |doi=10.1016/j.heares.2009.02.011 |issn=0378-5955 |pmid=19269312 |s2cid=31412246}}</ref><ref>{{Cite journal |last1=Hickok |first1=Gregory |last2=Buchsbaum |first2=Bradley |last3=Humphries |first3=Colin |last4=Muftuler |first4=Tugan |date=1 July 2003 |title=Auditory–Motor Interaction Revealed by fMRI: Speech, Music, and Working Memory in Area Spt |journal=Journal of Cognitive Neuroscience |volume=15 |issue=5 |pages=673–682 |doi=10.1162/089892903322307393 |issn=1530-8898 |pmid=12965041}}</ref><ref>{{Cite journal |last1=Schwartz |first1=M. F. |last2=Faseyitan |first2=O. |last3=Kim |first3=J. |last4=Coslett |first4=H. B. |date=20 November 2012 |title=The dorsal stream contribution to phonological retrieval in object naming |journal=Brain |volume=135 |issue=12 |pages=3799–3814 |doi=10.1093/brain/aws300 |issn=0006-8950 |pmc=3525060 |pmid=23171662}}</ref><ref>{{Cite journal |last=Gow |first=David W. 
|date=June 2012 |title=The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing |journal=Brain and Language |volume=121 |issue=3 |pages=273–288 |doi=10.1016/j.bandl.2012.03.005 |issn=0093-934X |pmc=3348354 |pmid=22498237}}</ref><ref>{{Cite journal |last1=Buchsbaum |first1=Bradley R. |last2=D'Esposito |first2=Mark |date=May 2008 |title=The Search for the Phonological Store: From Loop to Convolution |journal=Journal of Cognitive Neuroscience |volume=20 |issue=5 |pages=762–778 |doi=10.1162/jocn.2008.20501 |issn=0898-929X |pmid=18201133 |s2cid=17878480}}</ref> Some evidence also indicates a role in recognizing others by their voices.<ref>{{Cite journal |last1=Lachaux |first1=Jean-Philippe |last2=Jerbi |first2=Karim |last3=Bertrand |first3=Olivier |last4=Minotti |first4=Lorella |last5=Hoffmann |first5=Dominique |last6=Schoendorff |first6=Benjamin |last7=Kahane |first7=Philippe |date=31 October 2007 |title=A Blueprint for Real-Time Functional Mapping via Human Intracranial Recordings |journal=PLOS ONE |volume=2 |issue=10 |pages=e1094 |bibcode=2007PLoSO...2.1094L |doi=10.1371/journal.pone.0001094 |issn=1932-6203 |pmc=2040217 |pmid=17971857 |doi-access=free}}</ref><ref>{{Cite journal |last1=Jardri |first1=Renaud |last2=Houfflin-Debarge |first2=Véronique |last3=Delion |first3=Pierre |last4=Pruvo |first4=Jean-Pierre |last5=Thomas |first5=Pierre |last6=Pins |first6=Delphine |date=April 2012 |title=Assessing fetal response to maternal speech using a noninvasive functional brain imaging technique |journal=International Journal of Developmental Neuroscience |volume=30 |issue=2 |pages=159–161 |doi=10.1016/j.ijdevneu.2011.11.002 |issn=0736-5748 |pmid=22123457 |s2cid=2603226}}</ref> The emergence of each of these functions in the auditory dorsal stream represents an intermediate stage in the evolution of language. 
A contact call origin for human language is consistent with animal studies: as in human language, contact call discrimination in monkeys is lateralised to the left hemisphere.<ref>{{Cite journal |last1=Petersen |first1=M. |last2=Beecher |first2=M. |last3=Zoloth |last4=Moody |first4=D. |last5=Stebbins |first5=W. |date=20 October 1978 |title=Neural lateralization of species-specific vocalizations by Japanese macaques (Macaca fuscata) |journal=Science |volume=202 |issue=4365 |pages=324–327 |bibcode=1978Sci...202..324P |doi=10.1126/science.99817 |issn=0036-8075 |pmid=99817}}</ref><ref>{{Cite journal |last1=Heffner |first1=H. |last2=Heffner |first2=R. |date=5 October 1984 |title=Temporal lobe lesions and perception of species-specific vocalizations by macaques |journal=Science |volume=226 |issue=4670 |pages=75–76 |bibcode=1984Sci...226...75H |doi=10.1126/science.6474192 |issn=0036-8075 |pmid=6474192}}</ref> Likewise, mouse pups with knockouts of language-related genes (such as [[FOXP2]] and [[SRPX2]]) no longer emit contact calls when separated from their mothers.<ref>{{Cite journal |last1=Shu |first1=W. |last2=Cho |first2=J. Y. |last3=Jiang |first3=Y. |last4=Zhang |first4=M. |last5=Weisz |first5=D. |last6=Elder |first6=G. A. |last7=Schmeidler |first7=J. |last8=De Gasperi |first8=R. |last9=Sosa |first9=M. A. G. |date=27 June 2005 |title=Altered ultrasonic vocalization in mice with a disruption in the Foxp2 gene |journal=Proceedings of the National Academy of Sciences |volume=102 |issue=27 |pages=9643–9648 |bibcode=2005PNAS..102.9643S |doi=10.1073/pnas.0503739102 |issn=0027-8424 |pmc=1160518 |pmid=15983371 |doi-access=free}}</ref><ref>{{Cite journal |last1=Sia |first1=G. M. |last2=Clem |first2=R. L. |last3=Huganir |first3=R. L. 
|date=31 October 2013 |title=The Human Language-Associated Gene SRPX2 Regulates Synapse Formation and Vocalization in Mice |journal=Science |volume=342 |issue=6161 |pages=987–991 |bibcode=2013Sci...342..987S |doi=10.1126/science.1245079 |issn=0036-8075 |pmc=3903157 |pmid=24179158}}</ref> The model is further supported by its ability to explain uniquely human phenomena, such as the use of intonation to convert words into commands and questions, the tendency of infants to mimic vocalizations during the first year of life (and its later disappearance), and the protruding, visible [[human lip]]s, which are not found in other apes. This theory could be considered an elaboration of the putting-down-the-baby theory of language evolution.