== In neuroscience ==

The TD [[algorithm]] has also received attention in the field of [[neuroscience]]. Researchers discovered that the firing rate of [[dopamine]] [[neurons]] in the [[ventral tegmental area]] (VTA) and [[substantia nigra]] (SNc) appears to mimic the error function in the algorithm.<ref name="WSchultz-1997"/><ref name=":0" /><ref name=":1" /><ref name=":2" /><ref name=":3" /> The error function reports the difference between the estimated reward at any given state or time step and the reward actually received. The larger the error, the larger the difference between the expected and actual reward. When this error is paired with a stimulus that accurately predicts a future reward, it can be used to associate the stimulus with that future [[reward system|reward]].

[[Dopamine]] cells appear to behave in a similar manner. In one experiment, dopamine cells were recorded while a monkey was trained to associate a stimulus with a juice reward.<ref name="WSchultz-1998">{{cite journal |author=Schultz, W. |year=1998 |title=Predictive reward signal of dopamine neurons |journal=Journal of Neurophysiology |volume=80 |issue=1 |pages=1–27 |doi=10.1152/jn.1998.80.1.1 |pmid=9658025 |citeseerx=10.1.1.408.5994 |s2cid=52857162}}</ref> Initially, the dopamine cells increased their firing rates when the monkey received juice, indicating a difference between the expected and actual reward. Over time, this increase in firing propagated back to the earliest reliable stimulus for the reward. Once the monkey was fully trained, the firing rate no longer increased upon delivery of the predicted reward. Conversely, the firing rate dropped below baseline activation when the expected reward was withheld. This closely mirrors how the error function in TD is used for [[reinforcement learning]].

The relationship between the model and potential neurological function has produced research attempting to use TD to explain many aspects of behavioral research.<ref name="PDayan-2001">{{cite journal |author=Dayan, P. |year=2001 |title=Motivated reinforcement learning |journal=Advances in Neural Information Processing Systems |volume=14 |pages=11–18 |publisher=MIT Press |url=http://books.nips.cc/papers/files/nips14/CS01.pdf}}</ref><ref>{{Cite journal |author=Tobia, M. J. |display-authors=etal |date=2016 |title=Altered behavioral and neural responsiveness to counterfactual gains in the elderly |journal=Cognitive, Affective, & Behavioral Neuroscience |volume=16 |issue=3 |pages=457–472 |doi=10.3758/s13415-016-0406-7 |pmid=26864879 |s2cid=11299945 |doi-access=free}}</ref> It has also been used to study conditions such as [[schizophrenia]] and the consequences of pharmacological manipulations of dopamine on learning.<ref name="ASmith-2006">{{cite journal |author=Smith, A., Li, M., Becker, S. and Kapur, S. |year=2006 |title=Dopamine, prediction error, and associative learning: a model-based account |journal=Network: Computation in Neural Systems |volume=17 |issue=1 |pages=61–84 |doi=10.1080/09548980500361624 |pmid=16613795 |s2cid=991839}}</ref>
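The correspondence can be illustrated with the TD error itself. In tabular TD(0), the prediction error at time <math>t</math> is <math>\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)</math>, where <math>V</math> is the learned value estimate, <math>r</math> the reward, and <math>\gamma</math> the discount factor. The following sketch (not from any cited source; the states, reward placement, and parameter values are illustrative assumptions) runs TD(0) on a short chain of states ending in a reward, loosely mirroring the conditioning experiment described above:

<syntaxhighlight lang="python">
# Minimal sketch of tabular TD(0) on a chain of states leading from a
# conditioned stimulus (state 0) to a juice reward, loosely mirroring
# the monkey experiment described above. All names and parameter
# values here are illustrative assumptions, not from the cited papers.

n_states = 5                  # state 0 = stimulus, ..., state 4 = terminal
V = [0.0] * n_states          # value estimates, initially zero
alpha, gamma = 0.1, 1.0       # learning rate and discount factor

def run_episode():
    """One pass from stimulus to reward; returns the TD error at each step."""
    deltas = []
    for s in range(n_states - 1):
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0  # juice at the final step
        delta = r + gamma * V[s_next] - V[s]        # TD (prediction) error
        V[s] += alpha * delta                       # move estimate toward target
        deltas.append(delta)
    return deltas

print("first episode :", run_episode())  # error spikes only at reward delivery
for _ in range(500):                     # repeated conditioning trials
    run_episode()
print("after training:", run_episode())  # errors near zero; V[0] is close to 1
</syntaxhighlight>

On the first pass, the only large TD error occurs at reward delivery; during training, the transient errors appear at progressively earlier states, and once the estimates converge the error at the now-predicted reward vanishes, paralleling the back-propagation and eventual disappearance of the dopamine response described above. In fuller models where the stimulus onset is itself unpredicted, a residual error remains at the stimulus, matching the recorded dopamine bursts.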