=== Direct policy search ===
An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of [[stochastic optimization]]. The two approaches available are gradient-based and gradient-free methods.

[[Gradient]]-based methods (''policy gradient methods'') start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector <math>\theta</math>, let <math>\pi_\theta</math> denote the policy associated to <math>\theta</math>. Defining the performance function as <math>\rho(\theta) = \rho^{\pi_\theta}</math>, under mild conditions this function will be differentiable as a function of the parameter vector <math>\theta</math>. If the gradient of <math>\rho</math> were known, one could use [[gradient descent|gradient ascent]]. Since an analytic expression for the gradient is not available, one must rely on a noisy estimate. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams's REINFORCE method<ref>{{cite conference | last = Williams | first = Ronald J. | author-link = Ronald J. Williams | title = A class of gradient-estimating algorithms for reinforcement learning in neural networks | book-title = Proceedings of the IEEE First International Conference on Neural Networks | year = 1987 | citeseerx = 10.1.1.129.8871 }}</ref> (which is known as the likelihood ratio method in the [[simulation-based optimization]] literature).<ref>{{cite conference |last1=Peters |first1=Jan |author-link1=Jan Peters (computer scientist) |last2=Vijayakumar |first2=Sethu |author-link2=Sethu Vijayakumar |last3=Schaal |first3=Stefan |author-link3=Stefan Schaal |title=Reinforcement Learning for Humanoid Robotics |conference=IEEE-RAS International Conference on Humanoid Robots | year = 2003 | url = http://www-clmc.usc.edu/publications/p/peters-ICHR2003.pdf |url-status=dead |archive-url=http://web.archive.org/web/20130512223911/http://www-clmc.usc.edu/publications/p/peters-ICHR2003.pdf |archive-date=2013-05-12}}</ref>

A large class of methods avoids relying on gradient information. These include [[simulated annealing]], [[cross-entropy method|cross-entropy search]] and methods of [[evolutionary computation]]. Many gradient-free methods can achieve (in theory and in the limit) a global optimum.

Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case.
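The likelihood-ratio estimator behind REINFORCE can be stated compactly in code. The following is a minimal sketch, assuming a softmax policy over discrete actions, an episodic environment exposing <code>reset()</code> and <code>step(action)</code>, and a feature map <code>phi</code>; these interfaces are illustrative assumptions, not a standardized API.

<syntaxhighlight lang="python">
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_episode(theta, phi, env, alpha=0.01, gamma=0.99):
    """One episode of REINFORCE: run pi_theta, then take a gradient-ascent
    step along the estimate sum_t G_t * grad log pi_theta(a_t | s_t)."""
    grads, rewards = [], []
    state, done = env.reset(), False
    while not done:
        feats = phi(state)                  # assumed shape: (n_actions, d)
        probs = softmax(feats @ theta)      # pi_theta(. | state)
        a = np.random.choice(len(probs), p=probs)
        # score function of a softmax policy: phi(s, a) - E_pi[phi(s, .)]
        grads.append(feats[a] - probs @ feats)
        state, reward, done = env.step(a)
        rewards.append(reward)
    # accumulate discounted returns backwards and form the noisy gradient
    G, grad = 0.0, np.zeros_like(theta)
    for g_t, r_t in zip(reversed(grads), reversed(rewards)):
        G = r_t + gamma * G
        grad += G * g_t
    return theta + alpha * grad             # noisy gradient ascent step
</syntaxhighlight>

By contrast, a gradient-free method such as cross-entropy search only ever evaluates policies. A minimal sketch, assuming a black-box <code>evaluate(theta)</code> that returns the (noisy) episodic return of <math>\pi_\theta</math>:

<syntaxhighlight lang="python">
import numpy as np

def cross_entropy_search(evaluate, dim, iters=50, pop=100, elite_frac=0.2):
    """Repeatedly fit a Gaussian over parameter vectors to the best-scoring
    samples; no gradient of the performance function is ever computed."""
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        thetas = mean + std * np.random.randn(pop, dim)   # sample candidates
        scores = np.array([evaluate(t) for t in thetas])
        elite = thetas[np.argsort(scores)[-n_elite:]]     # keep top fraction
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean
</syntaxhighlight>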
''Actor–critic methods'', which combine a parameterized policy (the actor) with a learned value function (the critic), have been proposed and have performed well on various problems.<ref>{{Cite web|url=https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2|title=Simple Reinforcement Learning with Tensorflow Part 8: Asynchronous Actor-Critic Agents (A3C)|last=Juliani|first=Arthur|date=2016-12-17|website=Medium|access-date=2018-02-22}}</ref> Policy search methods have been used in the [[robotics]] context.<ref>{{Cite book |last1=Deisenroth |first1=Marc Peter |author-link1=Marc Peter Deisenroth |url=http://eprints.lincoln.ac.uk/28029/1/PolicySearchReview.pdf |title=A Survey on Policy Search for Robotics |last2=Neumann |first2=Gerhard |last3=Peters |first3=Jan |author-link3=Jan Peters (computer scientist) |publisher=NOW Publishers |year=2013 |series=Foundations and Trends in Robotics |volume=2 |pages=1–142 |doi=10.1561/2300000021 |hdl=10044/1/12051 |issue=1–2}}</ref> Many policy search methods may get stuck in local optima (as they are based on [[Local search (optimization)|local search]]).
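Since actor–critic methods pair policy search with a temporal-difference critic, a single-transition sketch may clarify the interaction. It assumes linear function approximation with illustrative feature maps <code>phi</code> (actor) and <code>psi</code> (critic), and is a generic one-step update rather than any one published algorithm.

<syntaxhighlight lang="python">
import numpy as np

def actor_critic_step(theta, w, phi, psi, s, a, r, s_next, done,
                      alpha_actor=0.01, alpha_critic=0.1, gamma=0.99):
    """One transition of a one-step actor-critic: the critic is updated by
    temporal differences; the actor by a TD-error-weighted policy gradient."""
    v = w @ psi(s)
    v_next = 0.0 if done else w @ psi(s_next)
    delta = r + gamma * v_next - v             # TD error
    w = w + alpha_critic * delta * psi(s)      # critic: move V(s) toward target
    feats = phi(s)                             # assumed shape: (n_actions, d)
    logits = feats @ theta
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad_log_pi = feats[a] - probs @ feats     # softmax score function
    theta = theta + alpha_actor * delta * grad_log_pi
    return theta, w
</syntaxhighlight>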