== Algorithms for control learning ==

Even if the issue of exploration is disregarded and even if the state were observable (assumed hereafter), the problem remains of using past experience to find out which actions lead to higher cumulative rewards.

=== Criterion of optimality ===

==== Policy ====

The agent's action selection is modeled as a map called ''policy'':

:<math>\pi: \mathcal{A} \times \mathcal{S} \rightarrow [0,1]</math>
:<math>\pi(a,s) = \Pr(A_t = a \mid S_t = s)</math>

The policy map gives the probability of taking action <math>a</math> when in state <math>s</math>.<ref name=":0">{{Cite web|url=http://people.inf.elte.hu/lorincz/Files/RL_2006/SuttonBook.pdf|title=Reinforcement learning: An introduction|access-date=2017-07-23|archive-date=2017-07-12|archive-url=https://web.archive.org/web/20170712170739/http://people.inf.elte.hu/lorincz/Files/RL_2006/SuttonBook.pdf|url-status=dead}}</ref>{{Rp|61}} There are also deterministic policies <math>\pi</math> for which <math>\pi(s)</math> denotes the action that should be played at state <math>s</math>.

==== State-value function ====

The state-value function <math>V_\pi(s)</math> is defined as the ''expected discounted return'' starting with state <math>s</math>, i.e. <math>S_0 = s</math>, and successively following policy <math>\pi</math>. Hence, roughly speaking, the value function estimates "how good" it is to be in a given state.<ref name=":0" />{{Rp|60}}

:<math>V_\pi(s) = \mathbb{E}[G\mid S_0 = s] = \mathbb{E}\left[\sum_{t=0}^\infty \gamma^t R_{t+1}\mid S_0 = s\right],</math>

where the random variable <math>G</math> denotes the '''discounted return''', defined as the sum of future discounted rewards:

:<math>G=\sum_{t=0}^\infty \gamma^t R_{t+1}=R_1 + \gamma R_2 + \gamma^2 R_3 + \dots,</math>

where <math>R_{t+1}</math> is the reward for transitioning from state <math>S_t</math> to <math>S_{t+1}</math>, and <math>0 \le \gamma<1</math> is the [[Q-learning#Discount factor|discount rate]]. Since <math>\gamma</math> is less than 1, rewards in the distant future are weighted less than rewards in the immediate future.

The algorithm must find a policy with maximum expected discounted return. From the theory of Markov decision processes it is known that, without loss of generality, the search can be restricted to the set of so-called ''stationary'' policies. A policy is ''stationary'' if the action distribution returned by it depends only on the last state visited (from the agent's observation history). The search can be further restricted to ''deterministic'' stationary policies. A ''deterministic stationary'' policy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality.

=== Brute force ===

The [[brute-force search|brute force]] approach entails two steps:
* For each possible policy, sample returns while following it
* Choose the policy with the largest expected discounted return

One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the discounted return of each policy.

These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are [[#Value function|value function estimation]] and [[#Direct policy search|direct policy search]].
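As an illustration of the sampling step above, the following sketch (an illustrative example only, not taken from the cited sources) estimates the expected discounted return of a single fixed policy by averaging sampled returns; the environment object with <code>reset()</code> and <code>step(action)</code> methods and the dictionary-valued stochastic policy are assumed interfaces.

<syntaxhighlight lang="python">
import random

def sample_return(env, policy, gamma=0.9, horizon=1000):
    """Roll out one episode following `policy` and return the discounted return G.

    Assumed interfaces: env.reset() -> state, env.step(action) -> (state, reward, done);
    policy(state) -> {action: probability}.
    """
    state = env.reset()
    g, discount = 0.0, 1.0
    for _ in range(horizon):
        actions, probs = zip(*policy(state).items())
        action = random.choices(actions, weights=probs)[0]
        state, reward, done = env.step(action)
        g += discount * reward        # accumulate gamma^t * R_{t+1}
        discount *= gamma
        if done:
            break
    return g

def estimate_value(env, policy, episodes=100, gamma=0.9):
    """Estimate the expected discounted return of `policy` by averaging sampled returns."""
    return sum(sample_return(env, policy, gamma) for _ in range(episodes)) / episodes
</syntaxhighlight>

In the brute-force approach, such an estimate would be computed for every candidate policy, and the policy with the largest estimate would be chosen.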
=== Value function ===
{{see also|Value function}}

Value function approaches attempt to find a policy that maximizes the discounted return by maintaining a set of estimates of expected discounted returns <math>\mathbb{E}[G]</math> for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one).

These methods rely on the theory of Markov decision processes, where optimality is defined in a sense stronger than the one above: a policy is optimal if it achieves the best expected discounted return from ''any'' initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found among stationary policies.

To define optimality in a formal manner, define the state-value of a policy <math>\pi</math> by

:<math>V^{\pi}(s) = \mathbb{E}[G\mid s,\pi],</math>

where <math>G</math> stands for the discounted return associated with following <math>\pi</math> from the initial state <math>s</math>. Defining <math>V^*(s)</math> as the maximum possible state-value of <math>V^\pi(s)</math>, where <math>\pi</math> is allowed to change,

:<math>V^*(s) = \max_\pi V^\pi(s).</math>

A policy that achieves these optimal state-values in each state is called ''optimal''. Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since <math>V^*(s) = \max_\pi \mathbb{E}[G\mid s,\pi]</math>, where <math>s</math> is a state randomly sampled from the distribution <math>\mu</math> of initial states (so <math>\mu(s) = \Pr(S_0 = s)</math>).

Although state-values suffice to define optimality, it is useful to define action-values. Given a state <math>s</math>, an action <math>a</math> and a policy <math>\pi</math>, the action-value of the pair <math>(s,a)</math> under <math>\pi</math> is defined by

:<math>Q^\pi(s,a) = \mathbb{E}[G\mid s,a,\pi],</math>

where <math>G</math> now stands for the random discounted return associated with first taking action <math>a</math> in state <math>s</math> and following <math>\pi</math> thereafter.

The theory of Markov decision processes states that if <math>\pi^*</math> is an optimal policy, we act optimally (take the optimal action) by choosing the action from <math>Q^{\pi^*}(s,\cdot)</math> with the highest action-value at each state, <math>s</math>. The ''action-value function'' of such an optimal policy (<math>Q^{\pi^*}</math>) is called the ''optimal action-value function'' and is commonly denoted by <math>Q^*</math>. In summary, knowledge of the optimal action-value function alone suffices to know how to act optimally.

Assuming full knowledge of the Markov decision process, the two basic approaches to compute the optimal action-value function are [[value iteration]] and [[policy iteration]]. Both algorithms compute a sequence of functions <math>Q_k</math> (<math>k=0,1,2,\ldots</math>) that converge to <math>Q^*</math>. Computing these functions involves computing expectations over the whole state space, which is impractical for all but the smallest (finite) Markov decision processes. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces.
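For a small, fully known Markov decision process, the sequence <math>Q_k</math> produced by value iteration can be computed directly. The sketch below is a minimal illustration (not code from the cited sources); the transition table <code>P</code>, whose entries <code>P[s][a]</code> list <code>(probability, next_state, reward)</code> triples, is an assumed representation of the known dynamics.

<syntaxhighlight lang="python">
def value_iteration(states, actions, P, gamma=0.9, iters=1000, tol=1e-8):
    """Approximate the optimal action-value function Q* by repeated Bellman backups.

    P[s][a] is assumed to be a list of (probability, next_state, reward) triples.
    """
    Q = {s: {a: 0.0 for a in actions} for s in states}
    for _ in range(iters):
        delta = 0.0
        for s in states:
            for a in actions:
                # Bellman optimality backup: expected reward plus discounted value
                # of acting greedily in the successor state.
                new_q = sum(p * (r + gamma * max(Q[s2].values()))
                            for p, s2, r in P[s][a])
                delta = max(delta, abs(new_q - Q[s][a]))
                Q[s][a] = new_q
        if delta < tol:
            break
    return Q

# A policy that is greedy with respect to the converged Q is optimal:
# policy = {s: max(Q[s], key=Q[s].get) for s in states}
</syntaxhighlight>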
==== Monte Carlo methods ====

[[Monte Carlo sampling|Monte Carlo methods]]<ref>{{Cite journal |last1=Singh |first1=Satinder P. |last2=Sutton |first2=Richard S. |date=1996-03-01 |title=Reinforcement learning with replacing eligibility traces |url=https://link.springer.com/article/10.1007/BF00114726 |journal=Machine Learning |language=en |volume=22 |issue=1 |pages=123–158 |doi=10.1007/BF00114726 |issn=1573-0565}}</ref> are used to solve reinforcement learning problems by averaging sample returns. Unlike methods that require full knowledge of the environment's dynamics, Monte Carlo methods rely solely on actual or [[Simulation|simulated]] experience—sequences of states, actions, and rewards obtained from interaction with an environment. This makes them applicable in situations where the complete dynamics are unknown. Learning from actual experience does not require prior knowledge of the environment and can still lead to optimal behavior. When using simulated experience, only a model capable of generating sample transitions is required, rather than a full specification of [[Markov chain|transition probabilities]], which is necessary for [[dynamic programming]] methods.

Monte Carlo methods apply to episodic tasks, where experience is divided into episodes that eventually terminate. Policy and value function updates occur only after the completion of an episode, making these methods incremental on an episode-by-episode basis, though not on a step-by-step (online) basis. The term "Monte Carlo" generally refers to any method involving [[random sampling]]; however, in this context, it specifically refers to methods that compute averages from ''complete'' returns, rather than ''partial'' returns.

These methods function similarly to the [[Multi-armed bandit|bandit algorithms]], in which returns are averaged for each state-action pair. The key difference is that actions taken in one state affect the returns of subsequent states within the same episode, making the problem [[non-stationary]]. To address this non-stationarity, Monte Carlo methods use the framework of general policy iteration (GPI). While dynamic programming computes [[value function]]s using full knowledge of the [[Markov decision process]] (MDP), Monte Carlo methods learn these functions through sample returns. The value functions and policies interact similarly to dynamic programming to achieve [[Mathematical optimization|optimality]], first addressing the prediction problem and then extending to policy improvement and control, all based on sampled experience.<ref name=":0" />

Monte Carlo-based policy iteration of this kind nonetheless has several shortcomings:
# It may spend too much time evaluating a suboptimal policy before improving it.
# It uses samples inefficiently, in that a long trajectory improves the estimate only of the single state-action pair that started the trajectory.
# When the returns along the trajectories have high variance, convergence is slow.
# It works in episodic problems only.
# It works in small, finite Markov decision processes only.

==== Temporal difference methods ====
{{Main|Temporal difference learning}}

The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of ''generalized policy iteration'' algorithms. Many [[Actor-critic algorithm|''actor-critic'' methods]] belong to this category.

The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance are Sutton's [[temporal difference]] (TD) methods, which are based on the recursive [[Bellman equation]].<ref>{{cite thesis|last = Sutton|first = Richard S.|title = Temporal Credit Assignment in Reinforcement Learning|degree = PhD|publisher = University of Massachusetts, Amherst, MA|url = http://incompleteideas.net/sutton/publications.html#PhDthesis|author-link = Richard S. Sutton|year = 1984|access-date = 2017-03-29|archive-date = 2017-03-30|archive-url = https://web.archive.org/web/20170330002227/http://incompleteideas.net/sutton/publications.html#PhDthesis|url-status = dead}}</ref>{{sfn|Sutton|Barto|2018|loc=[http://incompleteideas.net/sutton/book/ebook/node60.html §6. Temporal-Difference Learning]}}
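The tabular TD(0) update for policy evaluation illustrates the idea: after every transition, the value estimate of the current state is moved toward a bootstrapped target derived from the Bellman equation, so learning proceeds step by step rather than waiting for complete returns. The following sketch is a minimal illustration with assumed <code>env</code> and <code>policy</code> interfaces, not code from the cited references.

<syntaxhighlight lang="python">
from collections import defaultdict

def td0_evaluation(env, policy, gamma=0.9, alpha=0.1, episodes=500):
    """Tabular TD(0) policy evaluation: estimate V^pi from sampled transitions.

    Assumed interfaces: env.reset() -> state, env.step(action) -> (state, reward, done);
    policy(state) -> action.
    """
    V = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            # Bootstrapped target from the Bellman equation: reward plus the
            # discounted estimate of the next state's value (zero at termination).
            target = reward + (0.0 if done else gamma * V[next_state])
            V[state] += alpha * (target - V[state])   # move V(s) toward the target
            state = next_state
    return dict(V)
</syntaxhighlight>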
The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method,<ref>{{cite journal | doi = 10.1023/A:1018056104778 | last1 = Bradtke | first1 = Steven J. | author-link1 = Steven J. Bradtke | last2 = Barto | first2 = Andrew G. | author-link2 = Andrew G. Barto | title = Linear least-squares algorithms for temporal difference learning | journal = Machine Learning | volume = 22 | pages = 33–57 | year = 1996 | citeseerx = 10.1.1.143.857 | s2cid = 20327856 }}</ref> may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue.

Another problem specific to TD methods comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called <math>\lambda</math> parameter <math>(0\le \lambda\le 1)</math> that can continuously interpolate between Monte Carlo methods, which do not rely on the Bellman equations, and the basic TD methods, which rely entirely on the Bellman equations. This can be effective in alleviating this issue.

==== Function approximation methods ====

In order to address the fifth issue, ''function approximation methods'' are used. ''Linear function approximation'' starts with a mapping <math>\phi</math> that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair <math>(s,a)</math> are obtained by linearly combining the components of <math>\phi(s,a)</math> with some ''weights'' <math>\theta</math>:

:<math>Q(s,a) = \sum_{i=1}^d \theta_i \phi_i(s,a).</math>

The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from [[nonparametric statistics]] (which can be seen to construct their own features) have been explored.

Value iteration can also be used as a starting point, giving rise to the [[Q-learning]] algorithm and its many variants.<ref>{{cite thesis | last = Watkins | first = Christopher J.C.H. | author-link = Christopher J.C.H. Watkins | degree= PhD | title= Learning from Delayed Rewards | year= 1989 | publisher = King's College, Cambridge, UK | url= http://www.cs.rhul.ac.uk/~chrisw/new_thesis.pdf}}</ref>
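The two ideas above can be combined: action values are represented linearly as <math>Q_\theta(s,a) = \textstyle\sum_i \theta_i \phi_i(s,a)</math> and the weights are adjusted with a semi-gradient Q-learning update. The sketch below is a minimal illustration with assumed <code>phi</code> and <code>env</code> interfaces, not code from the cited references.

<syntaxhighlight lang="python">
import numpy as np

def q_learning_linear(env, phi, num_actions, d, gamma=0.99, alpha=0.01,
                      epsilon=0.1, episodes=500):
    """Semi-gradient Q-learning with linear function approximation.

    Assumed interfaces: phi(state, action) -> feature vector of length d;
    env.reset() -> state, env.step(action) -> (state, reward, done).
    """
    theta = np.zeros(d)                        # weights theta_1..theta_d

    def q(s, a):
        return theta @ phi(s, a)               # Q(s,a) = sum_i theta_i * phi_i(s,a)

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection over the approximated action values.
            if np.random.rand() < epsilon:
                action = np.random.randint(num_actions)
            else:
                action = max(range(num_actions), key=lambda a: q(state, a))
            next_state, reward, done = env.step(action)
            best_next = 0.0 if done else max(q(next_state, a) for a in range(num_actions))
            td_error = reward + gamma * best_next - q(state, action)
            theta += alpha * td_error * phi(state, action)   # semi-gradient weight update
            state = next_state
    return theta
</syntaxhighlight>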
These include deep Q-learning methods, in which a neural network is used to represent Q, with various applications in stochastic search problems.<ref name="MBK">{{Cite journal |title = Detection of Static and Mobile Targets by an Autonomous Agent with Deep Q-Learning Abilities | journal=Entropy | year=2022 | volume=24 | issue=8 | page=1168 | doi=10.3390/e24081168 | pmid=36010832 | pmc=9407070 | bibcode=2022Entrp..24.1168M | doi-access=free | last1=Matzliach | first1=Barouch | last2=Ben-Gal | first2=Irad | last3=Kagan | first3=Evgeny }}</ref>

The problem with using action-values is that they may need highly precise estimates of the competing action values, which can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency.

=== Direct policy search ===

An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of [[stochastic optimization]]. The two approaches available are gradient-based and gradient-free methods.

[[Gradient]]-based methods (''policy gradient methods'') start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector <math>\theta</math>, let <math>\pi_\theta</math> denote the policy associated to <math>\theta</math>. Defining the performance function by <math>\rho(\theta) = \rho^{\pi_\theta}</math>, under mild conditions this function will be differentiable as a function of the parameter vector <math>\theta</math>. If the gradient of <math>\rho</math> were known, one could use [[gradient descent|gradient ascent]]. Since an analytic expression for the gradient is typically not available, one must rely on a noisy estimate. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams's REINFORCE method<ref>{{cite conference | last = Williams | first = Ronald J. | author-link = Ronald J. Williams | title = A class of gradient-estimating algorithms for reinforcement learning in neural networks | book-title = Proceedings of the IEEE First International Conference on Neural Networks | year = 1987| citeseerx = 10.1.1.129.8871 }}</ref> (which is known as the likelihood ratio method in the [[simulation-based optimization]] literature).<ref>{{cite conference |last1=Peters |first1=Jan |author-link1=Jan Peters (computer scientist) |last2=Vijayakumar |first2=Sethu |author-link2=Sethu Vijayakumar |last3=Schaal |first3=Stefan |author-link3=Stefan Schaal |title=Reinforcement Learning for Humanoid Robotics |conference=IEEE-RAS International Conference on Humanoid Robots | year = 2003 | url = http://www-clmc.usc.edu/publications/p/peters-ICHR2003.pdf |url-status=dead |archive-url=http://web.archive.org/web/20130512223911/http://www-clmc.usc.edu/publications/p/peters-ICHR2003.pdf |archive-date=2013-05-12}}</ref>

A large class of methods avoids relying on gradient information. These include [[simulated annealing]], [[cross-entropy method|cross-entropy search]] or methods of [[evolutionary computation]]. Many gradient-free methods can achieve (in theory and in the limit) a global optimum.

Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function-based methods that rely on temporal differences might help in this case.
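A minimal REINFORCE-style sketch (an illustration with assumed <code>phi</code> and <code>env</code> interfaces, not the method's published code) samples one episode from a softmax policy with linear preferences and then ascends the log-likelihood gradient weighted by the discounted return-to-go from each step, which is one way of forming the noisy gradient estimate discussed above.

<syntaxhighlight lang="python">
import numpy as np

def softmax_policy(theta, phi, state, num_actions):
    """Action probabilities proportional to exp(theta . phi(s,a))."""
    prefs = np.array([theta @ phi(state, a) for a in range(num_actions)])
    prefs -= prefs.max()                      # numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()

def reinforce_episode(env, theta, phi, num_actions, gamma=0.99, alpha=0.01):
    """One REINFORCE-style update: sample an episode, then take a gradient-ascent
    step on the log-likelihood of each chosen action, weighted by the return-to-go.

    Assumed interfaces: phi(state, action) -> feature vector;
    env.reset() -> state, env.step(action) -> (state, reward, done).
    """
    trajectory, state, done = [], env.reset(), False
    while not done:
        probs = softmax_policy(theta, phi, state, num_actions)
        action = np.random.choice(num_actions, p=probs)
        next_state, reward, done = env.step(action)
        trajectory.append((state, action, reward))
        state = next_state

    g = 0.0
    for state, action, reward in reversed(trajectory):
        g = reward + gamma * g                # discounted return from this step onward
        probs = softmax_policy(theta, phi, state, num_actions)
        # grad log pi(a|s) for a softmax policy with linear preferences.
        grad_log = phi(state, action) - sum(probs[a] * phi(state, a)
                                            for a in range(num_actions))
        theta += alpha * g * grad_log
    return theta
</syntaxhighlight>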
In recent years, ''actor–critic methods'' have been proposed and have performed well on various problems.<ref>{{Cite web|url=https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2|title=Simple Reinforcement Learning with Tensorflow Part 8: Asynchronous Actor-Critic Agents (A3C)|last=Juliani|first=Arthur|date=2016-12-17|website=Medium|access-date=2018-02-22}}</ref>

Policy search methods have been used in the [[robotics]] context.<ref>{{Cite book |last1=Deisenroth |first1=Marc Peter |author-link1=Marc Peter Deisenroth |url=http://eprints.lincoln.ac.uk/28029/1/PolicySearchReview.pdf |title=A Survey on Policy Search for Robotics |last2=Neumann |first2=Gerhard |author-link2= |last3=Peters |first3=Jan |author-link3=Jan Peters (computer scientist) |publisher=NOW Publishers |year=2013 |series=Foundations and Trends in Robotics |volume=2 |pages=1–142 |doi=10.1561/2300000021 |hdl=10044/1/12051 |issue=1–2}}</ref> Many policy search methods may get stuck in local optima (as they are based on [[Local search (optimization)|local search]]).

=== Model-based algorithms ===

Finally, all of the above methods can be combined with algorithms that first learn a model of the [[Markov decision process]], that is, the probability of each next state given an action taken from an existing state. For instance, the Dyna algorithm learns a model from experience and uses it to supply additional, modelled transitions for updating a value function, in addition to the real transitions.<ref>{{Cite conference |last1=Sutton |first1=Richard| title=Integrated Architectures for Learning, Planning and Reacting based on Dynamic Programming |year=1990 |book-title=Machine Learning: Proceedings of the Seventh International Workshop}}</ref> Such methods can sometimes be extended to the use of non-parametric models, such as when the transitions are simply stored and "replayed" to the learning algorithm.<ref>{{Cite journal | first1 = Long-Ji | last1 = Lin | title = Self-improving reactive agents based on reinforcement learning, planning and teaching | journal = Machine Learning | volume = 8 | pages = 293–321 | year = 1992 | doi = 10.1007/BF00992699 |url=https://link.springer.com/content/pdf/10.1007/BF00992699.pdf}}</ref>

Model-based methods can be more computationally intensive than model-free approaches, and their utility can be limited by the extent to which the Markov decision process can be learnt.<ref>{{Citation |last=Zou |first=Lan |title=Chapter 7 - Meta-reinforcement learning |date=2023-01-01 |url=https://www.sciencedirect.com/science/article/pii/B9780323899314000110 |work=Meta-Learning |pages=267–297 |editor-last=Zou |editor-first=Lan |access-date=2023-11-08 |publisher=Academic Press |doi=10.1016/b978-0-323-89931-4.00011-0 |isbn=978-0-323-89931-4}}</ref>

There are other ways to use models than updating a value function.<ref>{{Cite conference | last1 = van Hasselt | first1 = Hado | last2 = Hessel | first2 = Matteo | last3 = Aslanides | first3 = John | title = When to use parametric models in reinforcement learning? | year = 2019 | book-title = Advances in Neural Information Processing Systems 32 | url = https://proceedings.neurips.cc/paper/2019/file/1b742ae215adf18b75449c6e272fd92d-Paper.pdf }}</ref> For instance, in [[model predictive control]] the model is used to update the behavior directly.
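The Dyna idea can be sketched as follows (a minimal Dyna-Q-style illustration with assumed environment and interface names, not the published algorithm's code): each real transition updates the action values directly and is also stored in a learned model, which is then sampled to generate additional simulated updates.

<syntaxhighlight lang="python">
import random
from collections import defaultdict

def dyna_q(env, actions, gamma=0.95, alpha=0.1, epsilon=0.1,
           planning_steps=10, episodes=200):
    """Dyna-Q-style learning: combine direct Q-learning updates from real
    transitions with planning updates drawn from a learned (deterministic) model.

    Assumed interfaces: env.reset() -> state, env.step(action) -> (state, reward, done).
    """
    Q = defaultdict(float)     # Q[(state, action)]
    model = {}                 # model[(state, action)] = (reward, next_state, done)

    def greedy(state):
        return max(actions, key=lambda a: Q[(state, a)])

    def update(s, a, r, s2, done):
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in actions))
        Q[(s, a)] += alpha * (target - Q[(s, a)])

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = random.choice(actions) if random.random() < epsilon else greedy(state)
            next_state, reward, done = env.step(action)
            update(state, action, reward, next_state, done)      # learning from real experience
            model[(state, action)] = (reward, next_state, done)  # model learning
            for _ in range(planning_steps):                      # planning from modelled transitions
                (s, a), (r, s2, d) = random.choice(list(model.items()))
                update(s, a, r, s2, d)
            state = next_state
    return Q
</syntaxhighlight>

Increasing <code>planning_steps</code> can trade extra computation per real step for more learning from the model, reflecting the higher computational cost of model-based methods noted above.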