==Reinforcement learning==
{{main|Reinforcement learning}}
[[Reinforcement learning]] is an interdisciplinary area of [[machine learning]] and [[optimal control]] whose main objective is to find an approximately optimal policy for MDPs in which the transition probabilities and rewards are unknown.<ref>{{cite journal |author1=Shoham, Y. |author2=Powers, R. |author3=Grenager, T. |year=2003 |title=Multi-agent reinforcement learning: a critical survey |pages=1–13 |journal=Technical Report, Stanford University |url=http://jmvidal.cse.sc.edu/library/shoham03a.pdf |access-date=2018-12-12}}</ref>

Reinforcement learning can solve Markov decision processes without an explicit specification of the transition probabilities, which are otherwise needed to perform policy iteration. In this setting, transition probabilities and rewards must be learned from experience, i.e. by letting an agent interact with the MDP for a given number of steps. Both on a theoretical and on a practical level, effort is put into maximizing the sample efficiency, i.e. minimizing the number of samples needed to learn a policy whose performance is <math>\varepsilon</math>-close to the optimal one (due to the stochastic nature of the process, learning the optimal policy with a finite number of samples is, in general, impossible).

===Reinforcement learning for discrete MDPs===
For the purpose of this section, it is useful to define a further function, which corresponds to taking the action <math>a</math> and then continuing optimally (or according to whatever policy one currently has):

:<math>\ Q(s,a) = \sum_{s'} P_a(s,s') (R_a(s,s') + \gamma V(s')).\ </math>

While this function is also unknown, experience during learning is based on <math>(s, a)</math> pairs (together with the outcome <math>s'</math>; that is, "I was in state <math>s</math> and I tried doing <math>a</math> and <math>s'</math> happened"). Thus, one has an array <math>Q</math> and uses experience to update it directly. This is known as [[Q-learning]].
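A minimal sketch of tabular Q-learning in Python illustrates how <math>Q</math> can be stored as a table and updated directly from sampled transitions, without ever estimating <math>P_a(s,s')</math>. The <code>env</code> object and its methods (<code>reset</code>, <code>actions</code>, <code>step</code>) are a hypothetical interface to an MDP used only for illustration, and the hyperparameters are placeholder values:

<syntaxhighlight lang="python">
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning sketch. Assumes a hypothetical environment with
    env.reset() -> state, env.actions(state) -> list of actions, and
    env.step(state, action) -> (next_state, reward, done)."""
    Q = defaultdict(float)  # Q[(state, action)], initialized to 0

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection from the current Q estimates
            if random.random() < epsilon:
                action = random.choice(env.actions(state))
            else:
                action = max(env.actions(state), key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(state, action)

            # one-step update toward the sampled target r + gamma * max_a' Q(s', a')
            best_next = max((Q[(next_state, a)] for a in env.actions(next_state)),
                            default=0.0)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

            state = next_state
    return Q
</syntaxhighlight>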