==Major topics==
Research in computational neuroscience can be roughly categorized into several lines of inquiry. Most computational neuroscientists collaborate closely with experimentalists in analyzing novel data and synthesizing new models of biological phenomena.

===Single-neuron modeling===
{{main|Biological neuron models}}
Even a single neuron has complex biophysical characteristics and can perform computations (e.g.<ref>{{cite journal |author=Forrest MD |title=Intracellular Calcium Dynamics Permit a Purkinje Neuron Model to Perform Toggle and Gain Computations Upon its Inputs. |journal=Frontiers in Computational Neuroscience |volume=8 |pages=86 |year=2014 | doi=10.3389/fncom.2014.00086 |pmid=25191262 |pmc=4138505|doi-access=free }}</ref>). Hodgkin and Huxley's [[Hodgkin–Huxley model|original model]] employed only two voltage-sensitive currents (voltage-sensitive ion channels are glycoprotein molecules which extend through the lipid bilayer, allowing ions to traverse the axolemma under certain conditions): the fast-acting sodium current and the delayed-rectifier potassium current. Though successful in predicting the timing and qualitative features of the action potential, it nevertheless failed to predict a number of important features such as adaptation and [[Shunting (neurophysiology)|shunting]]. Scientists now believe that there is a wide variety of voltage-sensitive currents, and the implications of the differing dynamics, modulations, and sensitivities of these currents are an important topic of computational neuroscience.<ref>{{cite book |author1=Wu, Samuel Miao-sin |author2=Johnston, Daniel |title=Foundations of cellular neurophysiology |publisher=MIT Press |location=Cambridge, Mass |year=1995 |isbn=978-0-262-10053-3 }}</ref>

The computational functions of complex [[dendrites]] are also under intense investigation. There is a large body of literature regarding how different currents interact with geometric properties of neurons.<ref>{{cite book |author=Koch, Christof |title=Biophysics of computation: information processing in single neurons |publisher=Oxford University Press |location=Oxford [Oxfordshire] |year=1999 |isbn=978-0-19-510491-2 }}</ref>

There are many software packages, such as [[GENESIS (software)|GENESIS]] and [[Neuron (software)|NEURON]], that allow rapid and systematic ''in silico'' modeling of realistic neurons. [[Blue Brain]], a project founded by [[Henry Markram]] from the [[École Polytechnique Fédérale de Lausanne]], aims to construct a biophysically detailed simulation of a [[cortical column]] on the [[Blue Gene]] [[supercomputer]].

Modeling the richness of biophysical properties on the single-neuron scale can supply mechanisms that serve as the building blocks for network dynamics.<ref>{{cite journal|author=Forrest MD|year=2014|title=Intracellular Calcium Dynamics Permit a Purkinje Neuron Model to Perform Toggle and Gain Computations Upon its Inputs.|journal=Frontiers in Computational Neuroscience|volume=8|pages=86|doi=10.3389/fncom.2014.00086|pmc=4138505|pmid=25191262|doi-access=free}}</ref> However, detailed neuron descriptions are computationally expensive, and this cost can limit the pursuit of realistic network investigations, where many neurons need to be simulated. As a result, researchers who study large neural circuits typically represent each neuron and synapse with an artificially simple model, ignoring much of the biological detail.
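As an illustration of what even the simplest biophysically grounded single-neuron simulation involves, the sketch below integrates the classic Hodgkin–Huxley equations for one compartment with a forward-Euler scheme; the parameter values and step current are generic textbook choices used only for illustration, not taken from any particular study.

<syntaxhighlight lang="python">
import numpy as np

# Membrane and channel parameters (generic squid-axon values; mV, ms, mS/cm^2, uA/cm^2)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent opening/closing rates for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                                # time step and duration (ms)
t = np.arange(0.0, T, dt)
V = -65.0                                         # start at rest
m = alpha_m(V) / (alpha_m(V) + beta_m(V))         # gating variables at their steady state
h = alpha_h(V) / (alpha_h(V) + beta_h(V))
n = alpha_n(V) / (alpha_n(V) + beta_n(V))
trace = np.empty_like(t)

for i, ti in enumerate(t):
    I_ext = 10.0 if 5.0 <= ti <= 30.0 else 0.0    # illustrative step current
    # Ionic currents: fast sodium, delayed-rectifier potassium, leak
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Forward-Euler update of membrane potential and gating variables
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    trace[i] = V

print(f"peak membrane potential: {trace.max():.1f} mV")  # spikes overshoot to roughly +40 mV
</syntaxhighlight>

Even this four-variable, single-compartment sketch must be integrated with sub-millisecond time steps, and morphologically detailed multi-compartment models multiply that cost many times over.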
Hence there is a drive to produce simplified neuron models that retain significant biological fidelity at a low computational overhead. Algorithms have been developed to produce faithful, faster-running, simplified surrogate neuron models from computationally expensive, detailed neuron models.<ref>{{cite journal |author=Forrest MD |title=Simulation of alcohol action upon a detailed Purkinje neuron model and a simpler surrogate model that runs >400 times faster |journal= BMC Neuroscience | volume=16 |issue=27 |pages=27 | date=April 2015 |doi=10.1186/s12868-015-0162-6 |pmid=25928094 |pmc=4417229 |doi-access=free }}</ref>

===Modeling neuron–glia interactions===
Glial cells participate significantly in the regulation of neuronal activity at both the cellular and the network level. Modeling this interaction helps clarify the [[potassium cycle]],<ref>{{Cite web |title=Dynamics of Ion Fluxes between Neurons, Astrocytes and the Extracellular Space during Neurotransmission |url=https://cyberleninka.ru/article/n/dynamics-of-ion-fluxes-between-neurons-astrocytes-and-the-extracellular-space-during-neurotransmission/viewer |access-date=2023-03-14 |website=cyberleninka.ru}}</ref><ref>{{Cite journal |last1=Sibille |first1=Jérémie |last2=Duc |first2=Khanh Dao |last3=Holcman |first3=David |last4=Rouach |first4=Nathalie |date=2015-03-31 |title=The Neuroglial Potassium Cycle during Neurotransmission: Role of Kir4.1 Channels |journal=PLOS Computational Biology |language=en |volume=11 |issue=3 |pages=e1004137 |doi=10.1371/journal.pcbi.1004137 |issn=1553-7358 |pmc=4380507 |pmid=25826753|bibcode=2015PLSCB..11E4137S |doi-access=free }}</ref> which is important for maintaining homeostasis and preventing epileptic seizures. Modeling also reveals the role of glial protrusions that can, in some cases, penetrate the synaptic cleft, interfering with synaptic transmission and thus controlling synaptic communication.<ref>{{Cite journal |last1=Pannasch |first1=Ulrike |last2=Freche |first2=Dominik |last3=Dallérac |first3=Glenn |last4=Ghézali |first4=Grégory |last5=Escartin |first5=Carole |last6=Ezan |first6=Pascal |last7=Cohen-Salmon |first7=Martine |last8=Benchenane |first8=Karim |last9=Abudara |first9=Veronica |last10=Dufour |first10=Amandine |last11=Lübke |first11=Joachim H. R. |last12=Déglon |first12=Nicole |last13=Knott |first13=Graham |last14=Holcman |first14=David |last15=Rouach |first15=Nathalie |date=April 2014 |title=Connexin 30 sets synaptic strength by controlling astroglial synapse invasion |url=https://www.nature.com/articles/nn.3662 |journal=Nature Neuroscience |language=en |volume=17 |issue=4 |pages=549–558 |doi=10.1038/nn.3662 |pmid=24584052 |s2cid=554918 |issn=1546-1726}}</ref>

===Development, axonal patterning, and guidance===
Computational neuroscience aims to address a wide array of questions, including: How do [[axons]] and [[dendrites]] form during development? How do axons know where to target and how to reach these targets? How do neurons migrate to the proper position in the central and peripheral systems? How do synapses form? We know from [[molecular biology]] that distinct parts of the nervous system release distinct chemical cues, from [[growth factors]] to [[hormones]], that modulate and influence the growth and development of functional connections between neurons.

Theoretical investigations into the formation and patterning of synaptic connections and morphology are still nascent.
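One simple way such cue-driven growth can be expressed computationally is as a biased random walk of a growth cone up a chemoattractant gradient. The sketch below is purely illustrative: the Gaussian cue profile, step size, and noise level are arbitrary assumptions rather than parameters from any model in the literature.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chemoattractant concentration: a Gaussian bump centred on the target
target = np.array([80.0, 60.0])
def cue(pos):
    return np.exp(-np.sum((pos - target) ** 2) / (2 * 30.0 ** 2))

def gradient(pos, eps=1e-3):
    # Numerical gradient of the cue, standing in for what a growth cone senses across its filopodia
    dx = (cue(pos + [eps, 0.0]) - cue(pos - [eps, 0.0])) / (2 * eps)
    dy = (cue(pos + [0.0, eps]) - cue(pos - [0.0, eps])) / (2 * eps)
    return np.array([dx, dy])

pos = np.array([0.0, 0.0])                        # starting position of the growth cone
for step in range(400):
    g = gradient(pos)
    direction = g / (np.linalg.norm(g) + 1e-12)   # unit vector up the gradient
    # Biased random walk: deterministic pull toward higher cue concentration plus exploratory noise
    pos = pos + 0.5 * direction + 0.3 * rng.standard_normal(2)

print("final distance to target:", round(float(np.linalg.norm(pos - target)), 1))
</syntaxhighlight>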
One hypothesis that has recently garnered some attention is the ''minimal wiring hypothesis'', which postulates that the formation of axons and dendrites effectively minimizes resource allocation while maintaining maximal information storage.<ref>{{cite journal|author3-link=Karel Svoboda (scientist) |vauthors=Chklovskii DB, Mel BW, Svoboda K |title=Cortical rewiring and information storage |journal=Nature |volume=431 |issue=7010 |pages=782–8 |date=October 2004|pmid=15483599 |doi=10.1038/nature03012 |bibcode = 2004Natur.431..782C |s2cid=4430167 }}<br/>Review article</ref>

===Sensory processing===
Early models of sensory processing within a theoretical framework are credited to [[Horace Barlow]]. Somewhat similar to the minimal wiring hypothesis described in the preceding section, Barlow understood the processing of the early sensory systems to be a form of [[efficient coding hypothesis|efficient coding]], whereby neurons encode information in a way that minimizes the number of spikes. Experimental and computational work has since supported this hypothesis in one form or another. In the example of visual processing, efficient coding is manifested in the forms of efficient spatial coding, color coding, temporal/motion coding, stereo coding, and combinations of them.<ref>Zhaoping L. 2014, [https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199564668.001.0001/acprof-9780199564668-chapter-3 The efficient coding principle ], chapter 3, of the textbook [https://global.oup.com/academic/product/understanding-vision-9780198829362?cc=de&lang=en& Understanding vision: theory, models, and data ]</ref> Further along the visual pathway, even efficiently coded visual information is too much for the capacity of the information bottleneck, the visual attentional bottleneck.<ref>see visual spatial attention https://en.wikipedia.org/wiki/Visual_spatial_attention</ref> A subsequent theory, the [[V1 Saliency Hypothesis|V1 Saliency Hypothesis (V1SH)]], proposes that exogenous attentional selection of a fraction of visual input for further processing is guided by a bottom-up saliency map in the primary visual cortex.<ref name=Li2002>Li, Z. 2002 [https://www.sciencedirect.com/science/article/abs/pii/S1364661300018179 A saliency map in primary visual cortex] Trends in Cognitive Sciences vol. 6, Pages 9-16, and Zhaoping, L. 2014, [https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199564668.001.0001/acprof-9780199564668-chapter-5 The V1 hypothesis—creating a bottom-up saliency map for preattentive selection and segmentation] in the book [https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199564668.001.0001/acprof-9780199564668 Understanding Vision: Theory, Models, and Data]</ref>

Current research in sensory processing is divided between biophysical modeling of different subsystems and more theoretical modeling of perception.
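A toy numerical illustration of the redundancy-reduction reading of efficient coding is given below: two strongly correlated synthetic "sensory" channels are recoded by a whitening transform so that the output channels carry decorrelated, non-redundant signals. The correlation value and Gaussian stimuli are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "sensory" signals: two strongly correlated Gaussian channels,
# standing in for neighbouring receptors viewing highly redundant natural input
cov_in = np.array([[1.0, 0.9],
                   [0.9, 1.0]])
stimuli = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov_in, size=10000)

# Whitening (decorrelating) transform built from the eigendecomposition of the input covariance
eigval, eigvec = np.linalg.eigh(np.cov(stimuli.T))
W = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T

responses = stimuli @ W.T                       # "neural" responses after the efficient recoding
print(np.round(np.cov(responses.T), 2))         # approximately the identity: channels now non-redundant
</syntaxhighlight>

Decorrelation of this kind is only the simplest reading of the efficient-coding idea; the spatial, chromatic, temporal and stereo coding schemes listed above develop it further.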
Current models of perception have suggested that the brain performs some form of [[Bayesian approaches to brain function|Bayesian inference]] and integration of different sensory information in generating our perception of the physical world.<ref>{{cite journal|last1=Weiss|first1=Yair|last2=Simoncelli|first2=Eero P.|last3=Adelson|first3=Edward H.|title=Motion illusions as optimal percepts|journal=Nature Neuroscience|date=20 May 2002|volume=5|issue=6|pages=598–604|doi=10.1038/nn0602-858|pmid=12021763|s2cid=2777968}}</ref><ref>{{cite journal|last1=Ernst|first1=Marc O.|last2=Bülthoff|first2=Heinrich H.|title=Merging the senses into a robust percept|journal=Trends in Cognitive Sciences|date=April 2004|volume=8|issue=4|pages=162–169|doi=10.1016/j.tics.2004.02.002|pmid=15050512|citeseerx=10.1.1.299.4638|s2cid=7837073}}</ref>

===Motor control===
Many models of the way the brain controls movement have been developed. These include models of processing in the brain, such as the cerebellum's role in error correction, skill learning in the motor cortex and the basal ganglia, and the control of the vestibulo-ocular reflex. They also include many normative models, such as those of the Bayesian or optimal-control flavor, which are built on the idea that the brain efficiently solves its problems.

===Memory and synaptic plasticity===
{{main|Synaptic plasticity}}
Earlier models of [[memory]] are primarily based on the postulates of [[Hebbian learning]]. Biologically relevant models such as the [[Hopfield net]] have been developed to address the properties of the associative (also known as "content-addressable") style of memory that occurs in biological systems. These attempts focus primarily on the formation of medium- and [[long-term memory]], localized in the [[hippocampus]].

One of the major problems in neurophysiological memory is how it is maintained and changed through multiple time scales. Unstable [[synapses]] are easy to train but also prone to stochastic disruption. Stable [[synapses]] forget less easily, but they are also harder to consolidate. It is likely that computational tools will contribute greatly in the coming decades to our understanding of how synapses function and change in relation to external stimuli.

===Behaviors of networks===
Biological neurons are connected to each other in a complex, recurrent fashion. These connections are, unlike most [[artificial neural networks]], sparse and usually specific. It is not known how information is transmitted through such sparsely connected networks, although specific areas of the brain, such as the [[visual cortex]], are understood in some detail.<ref>{{Cite journal|last1=Olshausen|first1=Bruno A.|last2=Field|first2=David J.|date=1997-12-01|title=Sparse coding with an overcomplete basis set: A strategy employed by V1?|journal=Vision Research|volume=37|issue=23|pages=3311–3325|doi=10.1016/S0042-6989(97)00169-7|pmid=9425546|s2cid=14208692|doi-access=free}}</ref> It is also unknown what the computational functions of these specific connectivity patterns are, if any.

The interactions of neurons in a small network can often be reduced to simple models such as the [[Ising model]]. The [[statistical mechanics]] of such simple systems is well characterized theoretically.
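A minimal sketch of such an Ising-type description is given below: binary units (+1 spiking, −1 silent) with arbitrary illustrative biases and symmetric pairwise couplings are sampled with the Metropolis rule, producing the mean activities and pairwise correlations that such models are typically fitted to.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

N = 10                                       # number of binary neurons (+1 spiking, -1 silent)
h = rng.normal(0.0, 0.1, size=N)             # illustrative biases (intrinsic excitabilities)
J = rng.normal(0.0, 0.2, size=(N, N))
J = (J + J.T) / 2.0                          # symmetric pairwise couplings
np.fill_diagonal(J, 0.0)

s = rng.choice([-1, 1], size=N)
samples = []
for sweep in range(20000):
    i = rng.integers(N)
    dE = 2.0 * s[i] * (h[i] + J[i] @ s)      # energy change from flipping neuron i
    if dE < 0 or rng.random() < np.exp(-dE):
        s[i] = -s[i]                         # Metropolis acceptance rule
    if sweep > 5000 and sweep % 10 == 0:     # discard burn-in, then subsample
        samples.append(s.copy())

samples = np.array(samples)
print("mean activity per unit (+1 = spiking):", np.round(samples.mean(axis=0), 2))
print("example pairwise correlation:",
      np.round(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1], 2))
</syntaxhighlight>

Fitting the biases and couplings of such a model to measured firing rates and correlations is the maximum-entropy construction behind the pairwise reduction discussed next.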
Some recent evidence suggests that the dynamics of arbitrary neuronal networks can be reduced to pairwise interactions.<ref>{{cite journal |vauthors=Schneidman E, Berry MJ, Segev R, Bialek W |title=Weak pairwise correlations imply strongly correlated network states in a neural population |journal=Nature |volume=440 |issue=7087 |pages=1007–12 |year=2006 |pmid=16625187 |pmc=1785327 |doi=10.1038/nature04701 |bibcode=2006Natur.440.1007S|arxiv = q-bio/0512013 }}</ref> It is not known, however, whether such descriptive dynamics impart any important computational function. With the emergence of [[two-photon microscopy]] and [[calcium imaging]], we now have powerful experimental methods with which to test new theories regarding neuronal networks.

In some cases the complex interactions between ''inhibitory'' and ''excitatory'' neurons can be simplified using [[mean-field theory]], which gives rise to the [[Wilson–Cowan model|population model]] of neural networks.<ref>{{cite journal |author1=Wilson, H. R. |author2=Cowan, J.D. |title=A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue |journal=Kybernetik |volume=13 |issue=2 |pages=55–80 |year=1973 |doi= 10.1007/BF00288786|pmid=4767470 |s2cid=292546 }}</ref> While many neurotheorists prefer such models with reduced complexity, others argue that uncovering structural–functional relations depends on including as much neuronal and network structure as possible. Models of this type are typically built in large simulation platforms like GENESIS or NEURON. There have been some attempts to provide unified methods that bridge and integrate these levels of complexity.<ref>{{cite book |author1=Anderson, Charles H. |author2=Eliasmith, Chris |title=Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems (Computational Neuroscience) |publisher=The MIT Press |location=Cambridge, Mass |year=2004 |isbn=978-0-262-55060-4 }}</ref>

===Visual attention, identification, and categorization===
Visual attention can be described as a set of mechanisms that limit some processing to a subset of incoming stimuli.<ref>{{cite book|author1=Marvin M. Chun |author2=Jeremy M. Wolfe |author3=E. B. Goldstein | title=Blackwell Handbook of Sensation and Perception |url=https://archive.org/details/blackwellhandboo00gold |url-access=limited | publisher=Blackwell Publishing Ltd |year=2001 | pages=[https://archive.org/details/blackwellhandboo00gold/page/n284 272]–310 |isbn=978-0-631-20684-2}}</ref> Attentional mechanisms shape what we see and what we can act upon. They allow for the concurrent selection of some (preferably relevant) information and the inhibition of other information. In order to have a more concrete specification of the mechanisms underlying visual attention and the binding of features, a number of computational models have been proposed that aim to explain psychophysical findings.
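The basic selection-plus-gating operation that most of these models formalize can be caricatured in a few lines. In the sketch below, the "feature contrast" map standing in for a saliency computation and the size of the attended window are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "feature contrast" map over a small visual field: mostly weak
# activity with one conspicuous item, standing in for a saliency computation
saliency = 0.1 * rng.random((8, 8))
saliency[5, 2] = 1.0                           # a salient item (e.g., a unique orientation)

# Winner-take-all selection: attention is routed to the most salient location
r0, c0 = np.unravel_index(np.argmax(saliency), saliency.shape)

# Gating: only the selected neighbourhood is passed on to later processing stages
gate = np.zeros_like(saliency)
gate[max(r0 - 1, 0):r0 + 2, max(c0 - 1, 0):c0 + 2] = 1.0
attended_input = saliency * gate

print("attended location:", (int(r0), int(c0)))
print("fraction of input passed through the bottleneck:",
      round(float(attended_input.sum() / saliency.sum()), 2))
</syntaxhighlight>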
In general, all such models postulate the existence of a saliency or priority map for registering the potentially interesting areas of the retinal input, and a gating mechanism for reducing the amount of incoming visual information, so that the limited computational resources of the brain can handle it.<ref>{{cite book|author1=Edmund Rolls |author2=Gustavo Deco | title=Computational Neuroscience of Vision | publisher=Oxford Scholarship Online | year=2012 | isbn=978-0-198-52488-5}}</ref> An example theory that is being extensively tested behaviorally and physiologically is the [[V1 Saliency Hypothesis]], according to which a bottom-up saliency map is created in the primary visual cortex to guide attention exogenously.<ref name=Li2002 /> Computational neuroscience provides a mathematical framework for studying the mechanisms involved in brain function and allows the simulation and prediction of neuropsychological syndromes.

===Cognition, discrimination, and learning===
Computational modeling of higher cognitive functions has only recently{{When|date=February 2016}} begun. Experimental data come primarily from [[single-unit recording]] in [[primates]]. The [[frontal lobe]] and [[parietal lobe]] function as integrators of information from multiple sensory modalities. There are some tentative ideas regarding how simple mutually inhibitory functional circuits in these areas may carry out biologically relevant computation.<ref>{{cite journal |vauthors=Machens CK, Romo R, Brody CD |title=Flexible control of mutual inhibition: a neural model of two-interval discrimination |journal=Science |volume=307 |issue=5712 |pages=1121–4 |year=2005 |pmid=15718474 |doi=10.1126/science.1104171 |bibcode = 2005Sci...307.1121M |citeseerx=10.1.1.523.4396 |s2cid=45378154 }}</ref>

The [[brain]] seems to be able to discriminate and adapt particularly well in certain contexts. For instance, human beings seem to have an enormous capacity for memorizing and [[face perception|recognizing faces]]. One of the key goals of computational neuroscience is to dissect how biological systems carry out these complex computations efficiently and potentially replicate these processes in building intelligent machines.

The brain's large-scale organizational principles are illuminated by many fields, including biology, psychology, and clinical practice. [[Integrative neuroscience]] attempts to consolidate these observations through unified descriptive models and databases of behavioral measures and recordings. These are the bases for some quantitative modeling of large-scale brain activity.<ref>{{cite journal |vauthors=Robinson PA, Rennie CJ, Rowe DL, O'Connor SC, Gordon E | title=Multiscale brain modelling | journal=Philosophical Transactions of the Royal Society B | volume=360 | issue=1457|pages=1043–1050|year=2005|doi=10.1098/rstb.2005.1638 |pmid=16087447 |pmc=1854922 }}</ref>

The Computational Representational Understanding of Mind ([[CRUM]]) is another attempt at modeling human cognition through simulated processes like acquired rule-based systems in decision making and the manipulation of visual representations in decision making.

===[[Consciousness]]===
One of the ultimate goals of psychology/neuroscience is to be able to explain the everyday experience of conscious life.
[[Francis Crick]], [[Giulio Tononi]] and [[Christof Koch]] made some attempts to formulate consistent frameworks for future work in [[neural correlates of consciousness]] (NCC), though much of the work in this field remains speculative.<ref>{{cite journal |vauthors=Crick F, Koch C |title=A framework for consciousness |journal=Nat. Neurosci. |volume=6 |issue=2 |pages=119–26 |year=2003 |pmid=12555104 |doi=10.1038/nn0203-119|s2cid=13960489 |url= https://zenodo.org/record/852680 }}</ref>

===Computational clinical neuroscience===
[[Computational clinical neuroscience]] is a field that brings together experts in neuroscience, [[neurology]], [[psychiatry]], [[decision sciences]] and computational modeling to quantitatively define and investigate problems in [[neurological disorders|neurological]] and [[mental disorders|psychiatric diseases]], and to train scientists and clinicians who wish to apply these models to diagnosis and treatment.<ref>{{cite journal | last1=Adaszewski | first1=Stanisław | last2=Dukart | first2=Juergen | last3=Kherif | first3=Ferath | last4=Frackowiak | first4=Richard | last5=Draganski | first5=Bogdan | author6=Alzheimer's Disease Neuroimaging Initiative |title=How early can we predict Alzheimer's disease using computational anatomy? |journal=Neurobiol Aging |volume=34 |issue=12 |pages=2815–26 |year=2013 |doi=10.1016/j.neurobiolaging.2013.06.015 |pmid=23890839|s2cid=1025210}}</ref><ref>{{cite journal |vauthors=Friston KJ, Stephan KE, Montague R, Dolan RJ |title=Computational psychiatry: the brain as a phantastic organ |journal=Lancet Psychiatry |volume=1 |issue=2 |pages=148–58 |year=2014 |doi=10.1016/S2215-0366(14)70275-5 |pmid=26360579 |s2cid=15504512 }}</ref>

===Predictive computational neuroscience===
Predictive computational neuroscience is a recent field that combines signal processing, neuroscience, clinical data and machine learning to predict brain states during coma<ref>{{Cite journal |last1=Floyrac |first1=Aymeric |last2=Doumergue |first2=Adrien |last3=Legriel |first3=Stéphane |last4=Deye |first4=Nicolas |last5=Megarbane |first5=Bruno |last6=Richard |first6=Alexandra |last7=Meppiel |first7=Elodie |last8=Masmoudi |first8=Sana |last9=Lozeron |first9=Pierre |last10=Vicaut |first10=Eric |last11=Kubis |first11=Nathalie |last12=Holcman |first12=David |date=2023 |title=Predicting neurological outcome after cardiac arrest by combining computational parameters extracted from standard and deviant responses from auditory evoked potentials |journal=Frontiers in Neuroscience |volume=17 |page=988394 |doi=10.3389/fnins.2023.988394 |pmid=36875664 |pmc=9975713 |issn=1662-453X|doi-access=free }}</ref> or anesthesia.<ref>{{Cite journal |last1=Sun |first1=Christophe |last2=Holcman |first2=David |date=2022-08-01 |title=Combining transient statistical markers from the EEG signal to predict brain sensitivity to general anesthesia |url=https://www.sciencedirect.com/science/article/pii/S174680942200235X |journal=Biomedical Signal Processing and Control |language=en |volume=77 |pages=103713 |doi=10.1016/j.bspc.2022.103713 |s2cid=248488365 |issn=1746-8094}}</ref> For example, it is possible to anticipate deep brain states from the EEG signal, and these anticipated states can in turn be used to estimate the hypnotic concentration to administer to the patient.
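The flavor of such prediction pipelines can be sketched as follows; the synthetic "EEG" epochs, band-power features and logistic-regression classifier below are generic illustrative choices and are not the methods of the studies cited above.

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
fs, seconds = 250, 4                              # sampling rate (Hz) and epoch length (s)

def synthetic_epoch(deep):
    """Toy 'EEG' epoch: broadband noise plus a 10 Hz rhythm whose amplitude is
    larger in the 'deep anesthesia' class (a made-up contrast for illustration)."""
    t = np.arange(fs * seconds) / fs
    alpha_amp = 3.0 if deep else 1.0
    return rng.standard_normal(fs * seconds) + alpha_amp * np.sin(2 * np.pi * 10 * t)

def band_power(x, lo, hi):
    # Crude spectral band power from a Welch periodogram
    f, p = welch(x, fs=fs, nperseg=fs)
    return p[(f >= lo) & (f <= hi)].sum()

# Build a labelled dataset of delta, alpha and beta band-power features
X, y = [], []
for label in (0, 1):
    for _ in range(200):
        epoch = synthetic_epoch(deep=bool(label))
        X.append([band_power(epoch, 1, 4), band_power(epoch, 8, 12), band_power(epoch, 13, 30)])
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
</syntaxhighlight>

In practice the features, labels and validation scheme are derived from clinical recordings and outcomes rather than from synthetic signals as in this sketch.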
===Computational psychiatry===
[[Computational psychiatry]] is an emerging field that brings together experts in [[machine learning]], [[neuroscience]], [[neurology]], [[psychiatry]], and [[psychology]] to provide an understanding of psychiatric disorders.<ref>{{cite journal|last1=Montague|first1=P. Read | last2=Dolan| first2=Raymond J. | last3=Friston|first3=Karl J.|author3-link=Karl Friston|last4=Dayan|first4=Peter|author4-link=Peter Dayan|title=Computational psychiatry|journal=[[Trends in Cognitive Sciences]]|date=14 Dec 2011|volume=16|issue=1|pages=72–80|doi=10.1016/j.tics.2011.11.018|pmid=22177032 |pmc=3556822 }}</ref><ref>{{cite journal | last1=Kato | first1=Ayaka | last2=Kunisato | first2=Yoshihiko | last3=Katahira | first3=Kentaro | last4=Okimura | first4=Tsukasa | last5=Yamashita | first5=Yuichi |title=Computational Psychiatry Research Map (CPSYMAP): a new database for visualizing research papers |journal=Frontiers in Psychiatry|volume=11|issue=1360|year=2020 | page=578706 |doi=10.3389/fpsyt.2020.578706| pmid=33343418 | pmc=7746554 | doi-access=free }}</ref><ref>{{cite journal | last1=Huys | first1=Quentin J M | last2=Maia | first2=Tiago V | last3=Frank | first3=Michael J |title=Computational psychiatry as a bridge from neuroscience to clinical applications |journal=Nature Neuroscience |volume=19 |issue=3 |pages=404–413 |year=2016 |doi=10.1038/nn.4238 | pmid=26906507 | pmc=5443409 }}</ref>