Temporal difference learning
{{short description|Computer programming concept}}
{{Machine learning|Reinforcement learning}}
'''Temporal difference''' ('''TD''') '''learning''' refers to a class of [[Model-free (reinforcement learning)|model-free]] [[reinforcement learning]] methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like [[Monte Carlo method]]s, and perform updates based on current estimates, like [[dynamic programming]] methods.{{sfnp|Sutton|Barto|2018|p=133}}

While Monte Carlo methods only adjust their estimates once the final outcome is known, TD methods adjust predictions to match later, more accurate, predictions about the future before the final outcome is known.<ref name="RSutton-1988">{{cite journal |last1=Sutton |first1=Richard S. |title=Learning to predict by the methods of temporal differences |journal=Machine Learning |date=1 August 1988 |volume=3 |issue=1 |pages=9–44 |doi=10.1007/BF00115009 |s2cid=207771194 |language=en |issn=1573-0565|doi-access=free }}</ref> This is a form of [[bootstrapping]], as illustrated with the following example:

<blockquote>Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday – and thus be able to change, say, Saturday's model before Saturday arrives.<ref name="RSutton-1988" /></blockquote>

Temporal difference methods are related to the temporal difference model of [[animal cognition|animal learning]].<ref name="WSchultz-1997">{{cite journal|author=Schultz, W, Dayan, P & Montague, PR.|year=1997|title=A neural substrate of prediction and reward|journal=Science|volume=275|issue=5306|pages=1593–1599|doi=10.1126/science.275.5306.1593|pmid=9054347|citeseerx=10.1.1.133.6176|s2cid=220093382 }}</ref><ref name=":0">{{Cite journal|last1=Montague|first1=P. R.|last2=Dayan|first2=P.|last3=Sejnowski|first3=T. J.|date=1996-03-01|title=A framework for mesencephalic dopamine systems based on predictive Hebbian learning|journal=The Journal of Neuroscience|volume=16|issue=5|pages=1936–1947|issn=0270-6474|pmid=8774460|pmc=6578666|doi=10.1523/JNEUROSCI.16-05-01936.1996|url=http://papers.cnl.salk.edu/PDFs/A%20Framework%20for%20Mesencephalic%20Dopamine%20Systems%20Based%20on%20Predictive%20Hebbian%20Learning%201996-2938.pdf}}</ref><ref name=":1">{{Cite journal|last1=Montague|first1=P.R.|last2=Dayan|first2=P.|last3=Nowlan|first3=S.J.|last4=Pouget|first4=A.|last5=Sejnowski|first5=T.J.|date=1993|title=Using aperiodic reinforcement for directed self-organization|url=http://www.gatsby.ucl.ac.uk/~dayan/papers/mdnps93.pdf|journal=Advances in Neural Information Processing Systems|volume=5|pages=969–976}}</ref><ref name=":2">{{Cite journal|last1=Montague|first1=P. R.|last2=Sejnowski|first2=T. J.|date=1994|title=The predictive brain: temporal coincidence and temporal order in synaptic learning mechanisms|journal=Learning & Memory|volume=1|issue=1|pages=1–33|doi=10.1101/lm.1.1.1 |issn=1072-0502|pmid=10467583|s2cid=44560099 |doi-access=free}}</ref><ref name=":3">{{Cite book|last1=Sejnowski|first1=T.J.|last2=Dayan|first2=P.|last3=Montague|first3=P.R.|title=Proceedings of the eighth annual conference on Computational learning theory - COLT '95 |chapter=Predictive Hebbian learning |date=1995|pages=15–18|doi=10.1145/225298.225300|isbn=0897917235|s2cid=1709691 |doi-access=free}}</ref>

== Mathematical formulation ==
The tabular TD(0) method is one of the simplest TD methods. It is a special case of more general stochastic approximation methods. It estimates the [[Reinforcement_learning#Algorithms_for_control_learning|state value function]] of a finite-state [[Markov decision process]] (MDP) under a policy <math>\pi</math>. Let <math>V^\pi</math> denote the state value function of the MDP with states <math>(S_t)_{t\in\mathbb{N}}</math>, rewards <math>(R_t)_{t\in\mathbb{N}}</math> and discount rate<ref>The discount rate parameter allows for a [[time preference]] toward more immediate rewards, and away from distant future rewards.</ref> <math>\gamma</math> under the policy <math> \pi </math>:{{sfnp|Sutton|Barto|2018|p=134}}

:<math>V^\pi(s) = E_{a \sim \pi}\left\{\sum_{t=0}^\infty \gamma^t R_{t+1}\Bigg| S_0=s\right\}. </math>

We drop the action from the notation for convenience. <math>V^\pi</math> satisfies the [[Hamilton-Jacobi-Bellman equation|Hamilton–Jacobi–Bellman equation]]:

:<math>V^\pi(s)=E_{\pi}\{R_1 + \gamma V^\pi(S_1)|S_0=s\},</math>

so <math>R_1 + \gamma V^\pi(S_1)</math> is an unbiased estimate of <math>V^\pi(s)</math>. This observation motivates the following algorithm for estimating <math>V^\pi</math>.

The algorithm starts by initializing a table <math>V(s)</math> arbitrarily, with one value for each state of the MDP. A positive [[learning rate]] <math>\alpha</math> is chosen. We then repeatedly evaluate the policy <math>\pi</math>, obtain a reward <math>R_{t+1}</math> and update the value function for the current state using the rule:{{sfnp|Sutton|Barto|2018|p=135}}

:<math> V(S_t) \leftarrow (1 - \alpha) V(S_t) + \underbrace{\alpha}_{\text{learning rate}} [ \overbrace{R_{t+1} + \gamma V(S_{t+1})}^{\text{The TD target}} ]</math>

where <math>S_t</math> and <math>S_{t+1}</math> are the current and next states, respectively. The value <math> R_{t+1} + \gamma V(S_{t+1})</math> is known as the TD target, and <math> R_{t+1} + \gamma V(S_{t+1}) - V(S_t)</math> is known as the TD error.
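The update rule above translates directly into code. The following is a minimal sketch of tabular TD(0) policy evaluation in Python; the <code>env.reset()</code>/<code>env.step()</code> interface and the <code>policy</code> function used here are illustrative assumptions rather than part of any particular library.

<syntaxhighlight lang="python">
from collections import defaultdict

def td0_policy_evaluation(env, policy, num_episodes=1000, alpha=0.1, gamma=0.99):
    """Tabular TD(0): estimate V^pi by bootstrapping from the current estimate.

    Assumes a hypothetical environment with reset() -> state and
    step(action) -> (next_state, reward, done), and policy(state) -> action.
    """
    V = defaultdict(float)  # V(s), initialized arbitrarily (here to 0)
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            # TD target: R_{t+1} + gamma * V(S_{t+1}); terminal states have value 0
            td_target = reward + (0.0 if done else gamma * V[next_state])
            td_error = td_target - V[state]   # the TD error
            V[state] += alpha * td_error      # equivalent to the update rule above
            state = next_state
    return V
</syntaxhighlight>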
== TD-Lambda ==
'''TD-Lambda''' is a learning algorithm invented by [[Richard S. Sutton]] based on earlier work on temporal difference learning by [[Arthur Samuel (computer scientist)|Arthur Samuel]].{{sfnp|Sutton|Barto|2018|p=130?}} This algorithm was famously applied by [[Gerald Tesauro]] to create [[TD-Gammon]], a program that learned to play the game of [[backgammon]] at the level of expert human players.{{sfnp|Tesauro|1995}}

The lambda (<math>\lambda</math>) parameter refers to the trace decay parameter, with <math>0 \leqslant \lambda \leqslant 1</math>. Higher settings lead to longer-lasting traces; that is, a larger proportion of the credit from a reward can be assigned to more distant states and actions when <math>\lambda</math> is higher, with <math>\lambda = 1</math> producing learning that parallels Monte Carlo RL algorithms.{{sfnp|Sutton|Barto|2018|p=175}}
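As an illustration of how the trace decay parameter enters the update, the following sketch extends the TD(0) code above with accumulating eligibility traces (the backward view of TD(<math>\lambda</math>)). It reuses the same hypothetical <code>env</code>/<code>policy</code> interface assumed earlier and is intended only as a sketch, not a definitive implementation.

<syntaxhighlight lang="python">
from collections import defaultdict

def td_lambda_policy_evaluation(env, policy, num_episodes=1000,
                                alpha=0.1, gamma=0.99, lam=0.9):
    """Tabular TD(lambda), backward view with accumulating eligibility traces.

    lam = 0 reduces to one-step TD(0); higher lam spreads each TD error
    over more of the recently visited states.
    """
    V = defaultdict(float)
    for _ in range(num_episodes):
        traces = defaultdict(float)  # eligibility trace e(s), reset each episode
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            td_error = reward + (0.0 if done else gamma * V[next_state]) - V[state]
            traces[state] += 1.0  # accumulate the trace for the current state
            # Credit the TD error to all recently visited states,
            # decaying their traces by gamma * lambda at each step.
            for s in list(traces):
                V[s] += alpha * td_error * traces[s]
                traces[s] *= gamma * lam
            state = next_state
    return V
</syntaxhighlight>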
== In neuroscience ==
The TD [[algorithm]] has also received attention in the field of [[neuroscience]]. Researchers discovered that the firing rate of [[dopamine]] [[neurons]] in the [[ventral tegmental area]] (VTA) and [[substantia nigra]] (SNc) appears to mimic the error function in the algorithm.<ref name="WSchultz-1997"/><ref name=":0" /><ref name=":1" /><ref name=":2" /><ref name=":3" /> The error function reports back the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future [[reward system|reward]].

[[Dopamine]] cells appear to behave in a similar manner. In one experiment, the activity of dopamine cells was recorded while a monkey was trained to associate a stimulus with a juice reward.<ref name="WSchultz-1998">{{cite journal |author=Schultz, W. |year=1998 |title=Predictive reward signal of dopamine neurons |journal=Journal of Neurophysiology |volume=80 |issue=1 |pages=1–27|doi=10.1152/jn.1998.80.1.1 |pmid=9658025 |citeseerx=10.1.1.408.5994 |s2cid=52857162 }}</ref> Initially the dopamine cells increased their firing rates when the monkey received juice, indicating a difference between expected and actual rewards. Over time this increase in firing propagated back to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. Subsequently, the firing rate of the dopamine cells decreased below normal activation when the expected reward was not produced. This closely mimics how the error function in TD is used for [[reinforcement learning]].

The relationship between the model and potential neurological function has produced research attempting to use TD to explain many aspects of behavioral research.<ref name="PDayan-2001">{{cite journal |author=Dayan, P. |year=2001 |title=Motivated reinforcement learning |journal=Advances in Neural Information Processing Systems |volume=14 |pages=11–18 |publisher=MIT Press |url=http://books.nips.cc/papers/files/nips14/CS01.pdf}}</ref><ref>{{Cite journal |last=Tobia, M. J., et al. |date=2016 |title=Altered behavioral and neural responsiveness to counterfactual gains in the elderly |journal=Cognitive, Affective, & Behavioral Neuroscience |volume=16 |issue=3 |pages=457–472|doi=10.3758/s13415-016-0406-7 |pmid=26864879 |s2cid=11299945 |doi-access=free }}</ref> It has also been used to study conditions such as [[schizophrenia]] or the consequences of pharmacological manipulations of dopamine on learning.<ref name="ASmith-2006">{{cite journal |author=Smith, A., Li, M., Becker, S. and Kapur, S. |year=2006 |title=Dopamine, prediction error, and associative learning: a model-based account |journal=Network: Computation in Neural Systems |volume=17 |issue=1 |pages=61–84 |doi=10.1080/09548980500361624 |pmid=16613795|s2cid=991839 }}</ref>

== See also ==
* [[PVLV]]
* [[Q-learning]]
* [[Rescorla–Wagner model]]
* [[State–action–reward–state–action]] (SARSA)

== Notes ==
<references/>

=== Works cited ===
* {{cite book |title=Reinforcement Learning: An Introduction |first1=Richard S. |last1=Sutton |first2=Andrew G. |last2=Barto |edition=2nd |publisher=MIT Press |place=Cambridge, MA |year=2018 |url=http://www.incompleteideas.net/book/the-book.html}}
* {{cite journal |first=Gerald |last=Tesauro |title=Temporal Difference Learning and TD-Gammon |journal=Communications of the ACM |date=March 1995 |volume=38 |issue=3 |pages=58–68 |url=http://www.research.ibm.com/massive/tdl.html | doi = 10.1145/203330.203343|s2cid=6023746 |doi-access=free }}

== Further reading ==
* {{cite book |first=S. P. |last=Meyn |year=2007 |title=Control Techniques for Complex Networks |publisher=Cambridge University Press |isbn=978-0521884419 |ref=none}} See final chapter and appendix.
* {{cite journal |last1=Sutton |first1=R. S. |last2=Barto |first2=A. G. |year=1990 |title=Time Derivative Models of Pavlovian Reinforcement |journal=Learning and Computational Neuroscience: Foundations of Adaptive Networks |pages=497–537 |url=http://incompleteideas.net/sutton/papers/sutton-barto-90.pdf |ref=none}}

== External links ==
* [http://pitoko.net/tdgravity Connect Four TDGravity Applet] (+ mobile phone version) – self-learned using the TD-Leaf method (a combination of TD-Lambda with shallow tree search)
* [http://chet-weger.herokuapp.com/learn_meta_ttt/ Self Learning Meta-Tic-Tac-Toe] Example web app showing how temporal difference learning can be used to learn state evaluation constants for a minimax AI playing a simple board game.
* [https://web.archive.org/web/20131116084228/http://www.cs.colorado.edu/~grudic/teaching/CSCI4202/RL.pdf Reinforcement Learning Problem], document explaining how temporal difference learning can be used to speed up [[Q-learning]]
* [https://www.cal-r.org/index.php?id=TD-sim TD-Simulator] Temporal difference simulator for classical conditioning

{{DEFAULTSORT:Temporal Difference Learning}}
[[Category:Computational neuroscience]]
[[Category:Reinforcement learning]]
[[Category:Subtraction]]<!-- [[Category:Distance]]? -->