Auditory cortex
=== Relationship to the auditory system ===

The auditory cortex is the most highly organized sound-processing unit in the brain. This cortical area is the neural crux of hearing and, in humans, of language and music. The auditory cortex is divided into three separate parts: the primary, secondary, and tertiary auditory cortex. These structures are formed concentrically around one another, with the primary cortex in the middle and the tertiary cortex on the outside. The primary auditory cortex is [[wikt:tonotopically|tonotopically]] organized, meaning that neighboring cells in the cortex respond to neighboring frequencies.<ref>{{cite journal|last=Lauter|first=Judith L|author2=P Herscovitch |author3=C Formby |author4=ME Raichle |title=Tonotopic organization in human auditory cortex revealed by positron emission tomography|journal=Hearing Research|volume=20|issue=3|pages=199–205|year=1985|doi=10.1016/0378-5955(85)90024-3|pmid=3878839|s2cid=45928728}}</ref> Tonotopic mapping is preserved throughout most of the audition circuit. The primary auditory cortex receives direct input from the [[medial geniculate nucleus]] of the [[thalamus]] and is therefore thought to identify the fundamental elements of music, such as [[Pitch (music)|pitch]] and [[loudness]].

An [[evoked response]] study of congenitally deaf kittens used [[local field potential]]s to measure [[cortical plasticity]] in the auditory cortex. These kittens were stimulated and measured against two controls: an unstimulated congenitally deaf cat (CDC) and normal hearing cats.
The field potentials measured in the artificially stimulated CDCs were eventually much stronger than those of a normal hearing cat.<ref>{{cite journal|last=Klinke|first=Rainer|author2=Kral, Andrej|author3= Heid, Silvia|author4= Tillein, Jochen|author5= Hartmann, Rainer|s2cid=38985173|title=Recruitment of the auditory cortex in congenitally deaf cats by long-term cochlear electrostimulation|journal=Science|volume=285|issue=5434|pages=1729–33|date=September 10, 1999|doi=10.1126/science.285.5434.1729|pmid=10481008}}</ref> This finding accords with a study by Eckart Altenmüller, which observed that students who received musical instruction had greater cortical activation than those who did not.<ref>{{cite journal|last=Strickland|title=Music and the brain in childhood development|journal=Childhood Education|volume=78|issue=2|pages=100–4|date=Winter 2001|doi=10.1080/00094056.2002.10522714|s2cid=219597861 }}</ref>

The auditory cortex has distinct responses to sounds in the [[Gamma wave|gamma band]]. When subjects are exposed to three or four cycles of a 40 [[hertz]] click, an abnormal spike appears in the [[Electroencephalography|EEG]] data that is not present for other stimuli. The spike in neuronal activity correlating to this frequency is not confined to the tonotopic organization of the auditory cortex. It has been theorized that gamma frequencies are [[resonant frequencies]] of certain areas of the brain, and they appear to affect the visual cortex as well.<ref>{{cite journal|last=Tallon-Baudry|first=C.|author2=Bertrand, O.|title=Oscillatory gamma activity in humans and its role in object representation|journal=Trends in Cognitive Sciences|date=April 1999|volume=3|issue=4|pages=151–162|pmid=10322469|doi=10.1016/S1364-6613(99)01299-1|s2cid=1308261}}</ref> Gamma band activation (25 to 100 Hz) has been shown to be present during the perception of sensory events and the process of recognition.
In a 2000 study by Kneif and colleagues, subjects were presented with eight musical notes from well-known tunes, such as ''[[Yankee Doodle]]'' and ''[[Frère Jacques]]''. Randomly, the sixth and seventh notes were omitted, and an [[electroencephalogram]] and a [[magnetoencephalogram]] were each used to measure the neural responses. Specifically, the presence of gamma waves induced by the auditory task was measured over the subjects' temporal regions. The [[Stimulus–response model|omitted stimulus response]] (OSR)<ref>{{Cite journal|last1=Busse|first1=L|last2=Woldorff|first2=M|date=April 2003|title=The ERP omitted stimulus response to "no-stim" events and its implications for fast-rate event-related fMRI designs.|journal=NeuroImage|volume=18|issue=4|pages=856–864|pmid=12725762|doi=10.1016/s1053-8119(03)00012-0|s2cid=25351923}}</ref> was located in a slightly different position with respect to the complete sets: 7 mm more anterior, 13 mm more medial, and 13 mm more superior. The OSR recordings were also characteristically lower in gamma activity than those for the complete musical set. The evoked responses during the sixth and seventh (omitted) notes are assumed to be imagined, and they were characteristically different, especially in the [[Cerebral hemisphere|right hemisphere]].{{citation needed|date=June 2023}}

The right auditory cortex has long been shown to be more sensitive to [[tonality]] (high spectral resolution), while the left auditory cortex has been shown to be more sensitive to minute sequential differences (rapid temporal changes) in sound, such as in speech.<ref>{{cite journal |author=Arianna LaCroix |author2=Alvaro F. Diaz |author3=Corianne Rogalsky |title=The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study |journal=Frontiers in Psychology |date=2015 |volume=6 |issue=1138 |page=18 |isbn=978-2-88919-911-2 |url=https://books.google.com/books?id=bwEvDwAAQBAJ&q=%22auditory+cortex%22+left+right+speech+music+%22temporal+changes%22+%22spectral+resolution%22&pg=PA18}}</ref> Tonality is represented in more places than just the auditory cortex; one other specific area is the rostromedial [[prefrontal cortex]] (RMPFC).<ref>{{cite journal|last=Janata|first=P. |author2=Birk, J.L. |author3=Van Horn, J.D. |author4=Leman, M. |author5=Tillmann, B. |author6=Bharucha, J.J. |title=The Cortical Topography of Tonal Structures Underlying Western Music|journal=Science|date=December 2002|volume=298|issue=5601|pages=2167–2170|doi=10.1126/science.1076262|url=http://atonal.ucdavis.edu/publications/papers/Janata_etal_2002_Science.pdf|access-date=11 September 2012|pmid=12481131|bibcode=2002Sci...298.2167J |s2cid=3031759 }}</ref> One [[fMRI]] study explored the areas of the brain that were active during tonality processing. Its results showed preferential [[blood-oxygen-level-dependent]] activation of specific [[voxels]] in RMPFC for specific tonal arrangements. Though these collections of voxels do not represent the same tonal arrangements between subjects, or within subjects over multiple trials, it is notable that RMPFC, an area not usually associated with audition, seems to code for immediate tonal arrangements. RMPFC is a subsection of the [[medial prefrontal cortex]], which projects to many diverse areas, including the [[amygdala]], and is thought to aid in the inhibition of negative [[emotion]].<ref>{{cite journal |last1=Cassel |first1=M. D. |last2=Wright |first2=D. J. |date=September 1986 |title=Topography of projections from the medial prefrontal cortex to the amygdala in the rat |journal=Brain Research Bulletin |volume=17 |issue=3 |pages=321–333 |doi=10.1016/0361-9230(86)90237-6 |pmid=2429740|s2cid=22826730 }}</ref> Another study has suggested that people who experience 'chills' while listening to music have a higher volume of fibres connecting their auditory cortex to areas associated with emotional processing.<ref>{{cite journal | last1 = Sachs |first1=Matthew E. |last2=Ellis |first2=Robert J. |last3=Schlaug |first3=Gottfried |last4=Loui |first4=Psyche | year = 2016 | title = Brain connectivity reflects human aesthetic responses to music | journal = Social Cognitive and Affective Neuroscience | volume = 11 | issue = 6| pages = 884–891| doi = 10.1093/scan/nsw009 |pmid=26966157 |pmc=4884308 | doi-access = free }}</ref>

In a study of [[dichotic listening]] to speech, in which one message is presented to the right ear and another to the left, participants chose letters with stop consonants (e.g. 'p', 't', 'k', 'b') far more often when they were presented to the right ear than to the left. However, when presented with phonemic sounds of longer duration, such as vowels, the participants did not favor any particular ear.<ref>{{Cite journal|last1=Jerger|first1=James|last2=Martin|first2=Jeffrey|date=2004-12-01|title=Hemispheric asymmetry of the right ear advantage in dichotic listening|journal=Hearing Research|volume=198|issue=1|pages=125–136|doi=10.1016/j.heares.2004.07.019|pmid=15567609|s2cid=2504300|issn=0378-5955}}</ref> Due to the contralateral nature of the auditory system, the right ear is connected to [[Wernicke's area]], located within the posterior section of the [[superior temporal gyrus]] in the left cerebral hemisphere. Sounds entering the auditory cortex are treated differently depending on whether or not they register as speech.
According to the strong and weak [[Speech perception#Speech mode hypothesis|speech mode hypotheses]], people listening to speech engage either perceptual mechanisms unique to speech (strong hypothesis) or their knowledge of language as a whole (weak hypothesis).