==Definition==
[[File:Markov Decision Process.svg|thumb|400x400px|Example of a simple MDP with three states (green circles) and two actions (orange circles), with two rewards (orange arrows)]]
A Markov decision process is a 4-[[tuple]] <math>(S, A, P_a, R_a)</math>, where:
* <math>S</math> is a [[set (mathematics)|set]] of states called the ''<dfn>state space</dfn>''. The state space may be discrete or continuous, like the [[set of real numbers]].
* <math>A</math> is a set of actions called the ''<dfn>action space</dfn>'' (alternatively, <math>A_s</math> is the set of actions available from state <math>s</math>). Like the state space, this set may be discrete or continuous.
* <math>P_a(s, s')</math> is, on an intuitive level, the probability that action <math>a</math> in state <math>s</math> at time <math>t</math> will lead to state <math>s'</math> at time <math>t+1</math>. In general, this transition probability is defined to satisfy <math>\Pr(s_{t+1}\in S' \mid s_t = s, a_t=a)=\int_{S'} P_a(s, s')\,ds'</math> for every measurable <math>S'\subseteq S</math>. When the state space is discrete, the integral is taken with respect to the counting measure, so the condition simplifies to <math>P_a(s, s')= \Pr(s_{t+1}=s' \mid s_t = s, a_t=a)</math>; when <math>S\subseteq \mathbb R^d</math>, the integral is usually taken with respect to the [[Lebesgue measure]].
* <math>R_a(s, s')</math> is the immediate reward (or expected immediate reward) received after transitioning from state <math>s</math> to state <math>s'</math> due to action <math>a</math>.

A policy function <math>\pi</math> is a (potentially probabilistic) mapping from the state space <math>S</math> to the action space <math>A</math>.

===Optimization objective===
The goal in a Markov decision process is to find a good "policy" for the decision maker: a function <math>\pi</math> that specifies the action <math>\pi(s)</math> that the decision maker will choose when in state <math>s</math>. Once a Markov decision process is combined with a policy in this way, the action in each state is fixed and the resulting combination behaves like a [[Markov chain]] (since the action chosen in state <math>s</math> is completely determined by <math>\pi(s)</math>).

The objective is to choose a policy <math>\pi</math> that maximizes some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon:
:<math>E\left[\sum^{\infty}_{t=0} {\gamma^t R_{a_t} (s_t, s_{t+1})}\right]</math>
where the actions are given by the policy, <math>a_t = \pi(s_t)</math>, and the expectation is taken over <math>s_{t+1} \sim P_{a_t}(s_t,s_{t+1})</math>. Here <math>\gamma</math> is the discount factor, satisfying <math>0 \le \gamma \le 1</math>, and it is usually close to <math>1</math> (for example, <math>\gamma = 1/(1+r)</math> for some discount rate <math>r</math>). A lower discount factor motivates the decision maker to favor taking actions early rather than postponing them indefinitely.

Another possible, but closely related, objective is the <math>H</math>-step return. Instead of using a discount factor <math>\gamma</math>, the agent is interested only in the first <math>H</math> steps of the process, with each reward having the same weight:
:<math>E\left[\sum^{H-1}_{t=0} {R_{a_t} (s_t, s_{t+1})}\right]</math>
where again <math>a_t = \pi(s_t)</math>, the expectation is taken over <math>s_{t+1} \sim P_{a_t}(s_t,s_{t+1})</math>, and <math>H</math> is the time horizon. Compared to the discounted objective, this finite-horizon return is more commonly used in [[learning theory]].
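As a concrete illustration (not part of the formal definition), a small finite MDP can be specified by explicit tables for <math>P_a(s, s')</math> and <math>R_a(s, s')</math>, and the expected discounted return of a fixed policy can be approximated by iterating the corresponding Bellman expectation equation. The following Python sketch is illustrative only; the numbers and names (such as <code>evaluate_policy</code>) are chosen here for exposition and are not standard.

<syntaxhighlight lang="python">
# Illustrative sketch: a small finite MDP given by explicit tables
# P[a][s][s'] (transition probabilities) and R[a][s][s'] (rewards),
# and iterative evaluation of the expected discounted return of a
# fixed deterministic policy pi.

S = ["s0", "s1", "s2"]          # state space
A = ["a0", "a1"]                # action space

# P[a][s][s2] = probability that action a in state s leads to state s2
P = {
    "a0": {"s0": {"s0": 0.5, "s1": 0.5, "s2": 0.0},
           "s1": {"s0": 0.0, "s1": 0.0, "s2": 1.0},
           "s2": {"s0": 0.0, "s1": 0.0, "s2": 1.0}},
    "a1": {"s0": {"s0": 0.0, "s1": 0.0, "s2": 1.0},
           "s1": {"s0": 0.9, "s1": 0.1, "s2": 0.0},
           "s2": {"s0": 0.0, "s1": 0.0, "s2": 1.0}},
}

# R[a][s][s2] = immediate reward for the transition (s, a) -> s2
R = {
    "a0": {"s0": {"s0": 0.0, "s1": 1.0, "s2": 0.0},
           "s1": {"s0": 0.0, "s1": 0.0, "s2": 0.0},
           "s2": {"s0": 0.0, "s1": 0.0, "s2": 0.0}},
    "a1": {"s0": {"s0": 0.0, "s1": 0.0, "s2": 0.0},
           "s1": {"s0": 5.0, "s1": 0.0, "s2": 0.0},
           "s2": {"s0": 0.0, "s1": 0.0, "s2": 0.0}},
}

# A deterministic policy: one action per state
pi = {"s0": "a0", "s1": "a1", "s2": "a0"}

def evaluate_policy(P, R, pi, gamma=0.9, tol=1e-8):
    """Approximate V_pi(s) = E[sum_t gamma^t R_{a_t}(s_t, s_{t+1})]
    by iterating the Bellman expectation equation until convergence."""
    V = {s: 0.0 for s in S}
    while True:
        delta = 0.0
        for s in S:
            a = pi[s]
            new_v = sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in S)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            return V

print(evaluate_policy(P, R, pi))
</syntaxhighlight>

For <math>\gamma < 1</math> the update is a contraction, so the iteration converges to the value of the given policy from any initialization.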
A policy that maximizes the expected return is called an ''<dfn>optimal policy</dfn>'' and is usually denoted <math>\pi^*</math>. A particular MDP may have multiple distinct optimal policies. Because of the [[Markov property]], it can be shown that the optimal policy is a function of the current state, as assumed above.

===Simulator models===
In many cases, it is difficult to represent the transition probability distributions, <math>P_a(s, s')</math>, explicitly. In such cases, a simulator can be used to model the MDP implicitly by providing samples from the transition distributions. One common form of implicit MDP model is an episodic environment simulator that can be started from an initial state and yields a subsequent state and reward every time it receives an action input. In this manner, trajectories of states, actions, and rewards, often called ''<dfn>episodes</dfn>'', may be produced.

Another form of simulator is a ''<dfn>generative model</dfn>'', a single-step simulator that can generate samples of the next state and reward given any state and action.<ref name="Kearns Sparse">{{cite journal |last1=Kearns |first1=Michael |last2=Mansour |first2=Yishay |last3=Ng |first3=Andrew |title=A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes |journal=Machine Learning |date=2002 |volume=49 |pages=193–208 |doi=10.1023/A:1017932429737 |doi-access=free }}</ref> (Note that this is a different meaning from the term [[generative model]] in the context of statistical classification.) In [[algorithms]] that are expressed using [[pseudocode]], <math>G</math> is often used to represent a generative model. For example, the expression <math>s', r \gets G(s, a)</math> might denote the action of sampling from the generative model, where <math>s</math> and <math>a</math> are the current state and action, and <math>s'</math> and <math>r</math> are the new state and reward. Compared to an episodic simulator, a generative model has the advantage that it can yield data from any state, not only those encountered in a trajectory.

These model classes form a hierarchy of information content: an explicit model trivially yields a generative model through sampling from the distributions, and repeated application of a generative model yields an episodic simulator. In the opposite direction, it is only possible to learn approximate models through [[regression analysis|regression]]. The type of model available for a particular MDP plays a significant role in determining which solution algorithms are appropriate. For example, the [[dynamic programming]] algorithms described in the next section require an explicit model, and [[Monte Carlo tree search]] requires a generative model (or an episodic simulator that can be copied at any state), whereas most [[#Reinforcement learning|reinforcement learning]] algorithms require only an episodic simulator.
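This hierarchy can be illustrated concretely: sampling from an explicit model yields a generative model, and repeatedly querying the generative model along a policy yields an episodic simulator. The following minimal Python sketch assumes the explicit tables <code>P</code> and <code>R</code> and the policy <code>pi</code> from the earlier example; the function names are illustrative rather than standard.

<syntaxhighlight lang="python">
import random

# Illustrative sketch: wrap explicit tables P and R into a generative model G
# that returns a sampled next state and reward for any state-action pair,
# i.e. s', r <- G(s, a).
def make_generative_model(P, R):
    def G(s, a):
        next_states = list(P[a][s].keys())
        probs = list(P[a][s].values())
        s_next = random.choices(next_states, weights=probs)[0]
        return s_next, R[a][s][s_next]
    return G

# Repeated application of the generative model yields an episodic simulator:
# starting from an initial state, follow a policy pi for H steps and record
# the trajectory of states, actions, and rewards (an "episode").
def sample_episode(G, pi, s0, H):
    episode = []
    s = s0
    for _ in range(H):
        a = pi[s]
        s_next, r = G(s, a)
        episode.append((s, a, r))
        s = s_next
    return episode

# Example usage, with P, R, and pi as defined in the earlier sketch:
#   G = make_generative_model(P, R)
#   print(sample_episode(G, pi, "s0", 5))
</syntaxhighlight>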