===Notable variants===

====Value iteration====
In value iteration {{harv|Bellman|1957}}, which is also called [[backward induction]], the <math>\pi</math> function is not used; instead, the value of <math>\pi(s)</math> is calculated within <math>V(s)</math> whenever it is needed. Because <math>\pi(s)</math> is defined as the action that maximizes the expected value, substituting its calculation into the calculation of <math>V(s)</math> replaces the evaluation under the policy with a maximization over actions, giving the combined step:

:<math> V_{i+1}(s) := \max_a \left\{ \sum_{s'} P_a(s,s') \left( R_a(s,s') + \gamma V_i(s') \right) \right\}, </math>

where <math>i</math> is the iteration number. Value iteration starts at <math>i = 0</math> with <math>V_0</math> an initial guess of the [[value function]]. It then iterates, repeatedly computing <math>V_{i+1}</math> for all states <math>s</math>, until <math>V</math> converges; at that point the left-hand side equals the right-hand side, so <math>V</math> satisfies the [[Bellman equation]] for this problem.

[[Lloyd Shapley]]'s 1953 paper on [[stochastic games]] included as a special case the value iteration method for MDPs,<ref>{{cite journal|last=Shapley|first=Lloyd|author-link=Lloyd Shapley|title=Stochastic Games|year=1953|journal=Proceedings of the National Academy of Sciences of the United States of America|volume=39|issue=10|pages=1095–1100|doi=10.1073/pnas.39.10.1095|pmid=16589380|pmc=1063912|bibcode=1953PNAS...39.1095S|doi-access=free}}</ref> but this was recognized only later on.<ref>{{cite book|first=Lodewijk|last=Kallenberg|chapter=Finite state and action MDPs|editor-first1=Eugene A.|editor-last1=Feinberg|editor-link1=Eugene A. Feinberg|editor-first2=Adam|editor-last2=Shwartz|title=Handbook of Markov decision processes: methods and applications|publisher=Springer|year=2002|isbn=978-0-7923-7459-6}}</ref>

====Policy iteration====
In policy iteration {{harv|Howard|1960}}, step one is performed once, then step two is repeated until it converges; step one is then performed once again, and the two are alternated until the policy converges. (Policy iteration was invented by Howard to optimize [[Sears]] catalogue mailing, which he had been optimizing using value iteration.<ref>Howard 2002, [https://pubsonline.informs.org/doi/10.1287/opre.50.1.100.17788 "Comments on the Origin and Application of Markov Decision Processes"]</ref>)

Instead of repeating step two to convergence, it may be formulated and solved as a set of linear equations. These equations are obtained by setting the left-hand side of the step two update equal to the right-hand side, i.e. by requiring <math>V</math> to be a fixed point of the update, which gives one linear equation in the unknowns <math>V(s)</math> for each state. Thus, repeating step two to convergence can be interpreted as solving these linear equations by [[Relaxation (iterative method)|relaxation]].

This variant has the advantage that there is a definite stopping condition: when the array <math>\pi</math> does not change in the course of applying step one to all states, the algorithm is completed.

Policy iteration is usually slower than value iteration for a large number of possible states.
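The following is an illustrative sketch of value iteration and policy iteration for a small, explicitly enumerated MDP. The representation is an assumption made for the example: transition probabilities and rewards are stored as NumPy arrays <code>P[s, a, s']</code> and <code>R[s, a, s']</code>, and the tolerance and function names are likewise illustrative rather than part of the algorithms as cited above.

<syntaxhighlight lang="python">
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.

    P[s, a, s2] is the probability of moving from s to s2 under action a,
    R[s, a, s2] the corresponding reward, gamma the discount factor.
    Returns the converged value function V and a greedy policy pi.
    """
    n_states = P.shape[0]
    V = np.zeros(n_states)                      # V_0: initial guess of the value function
    while True:
        # Combined step: V_{i+1}(s) = max_a sum_{s'} P_a(s,s') (R_a(s,s') + gamma V_i(s'))
        Q = np.einsum('sat,sat->sa', P, R + gamma * V[np.newaxis, np.newaxis, :])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:     # stop once V has (numerically) converged
            return V_new, Q.argmax(axis=1)
        V = V_new

def policy_iteration(P, R, gamma=0.9):
    """Policy iteration: alternate exact policy evaluation (a linear system)
    with greedy policy improvement until the policy stops changing."""
    n_states = P.shape[0]
    pi = np.zeros(n_states, dtype=int)          # arbitrary initial policy
    while True:
        # Step two solved exactly as linear equations: (I - gamma P_pi) V = r_pi
        P_pi = P[np.arange(n_states), pi]                       # P_pi[s, s'] under pi
        r_pi = np.einsum('st,st->s', P_pi, R[np.arange(n_states), pi])
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Step one: greedy policy improvement with respect to V
        Q = np.einsum('sat,sat->sa', P, R + gamma * V[np.newaxis, np.newaxis, :])
        pi_new = Q.argmax(axis=1)
        if np.array_equal(pi_new, pi):          # definite stopping condition: pi unchanged
            return V, pi
        pi = pi_new
</syntaxhighlight>

In the policy iteration sketch, step two is solved in closed form as the linear system <math>(I - \gamma P_\pi)V = r_\pi</math> rather than by relaxation, matching the reformulation described above.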
====Modified policy iteration====
In modified policy iteration ({{harvnb|van Nunen|1976}}; {{harvnb|Puterman|Shin|1978}}), step one is performed once, and then step two is repeated several times.<ref>{{cite journal|first1=M. L.|last1=Puterman|first2=M. C.|last2=Shin|title=Modified Policy Iteration Algorithms for Discounted Markov Decision Problems|journal=Management Science|volume=24|issue=11|year=1978|pages=1127–1137|doi=10.1287/mnsc.24.11.1127}}</ref><ref>{{cite journal|first=J. A. E. E.|last=van Nunen|title=A set of successive approximation methods for discounted Markovian decision problems|journal=Zeitschrift für Operations Research|volume=20|issue=5|pages=203–208|year=1976|doi=10.1007/bf01920264|s2cid=5167748}}</ref> Then step one is again performed once, and so on.

====Prioritized sweeping====
In this variant, the steps are preferentially applied to states which are in some way important – whether based on the algorithm (there were large changes in <math>V</math> or <math>\pi</math> around those states recently) or based on use (those states are near the starting state, or otherwise of interest to the person or program using the algorithm).
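A minimal sketch of this idea, in the same illustrative setting as above: a priority queue is keyed on the size of each state's most recent Bellman update, and the highest-priority state is backed up first. The predecessor bookkeeping, thresholds, and update budget are assumptions made for the example rather than details prescribed by the variant itself.

<syntaxhighlight lang="python">
import heapq
import numpy as np

def prioritized_sweeping(P, R, gamma=0.9, tol=1e-8, max_updates=100_000):
    """Asynchronous value iteration that backs up 'important' states first:
    states whose value would change the most, and their predecessors,
    are updated before the rest."""
    n_states = P.shape[0]
    V = np.zeros(n_states)

    def bellman_backup(s):
        # max_a sum_{s'} P_a(s,s') (R_a(s,s') + gamma V(s'))
        return np.max(np.sum(P[s] * (R[s] + gamma * V), axis=1))

    # predecessors[s2] = states that can reach s2 under some action
    predecessors = [np.nonzero(P[:, :, s2].sum(axis=1) > 0)[0] for s2 in range(n_states)]

    # initialise the queue with each state's current Bellman error
    # (heapq is a min-heap, so priorities are negated)
    queue = [(-abs(bellman_backup(s) - V[s]), s) for s in range(n_states)]
    heapq.heapify(queue)

    for _ in range(max_updates):
        if not queue:
            break
        neg_priority, s = heapq.heappop(queue)
        if -neg_priority < tol:
            break                          # all remaining changes are negligible
        V[s] = bellman_backup(s)           # back up the highest-priority state
        for sp in predecessors[s]:         # its predecessors may now need updating
            change = abs(bellman_backup(sp) - V[sp])
            if change > tol:
                heapq.heappush(queue, (-change, sp))
    return V
</syntaxhighlight>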