== Two-stage problem definition ==

The basic idea of two-stage stochastic programming is that (optimal) decisions should be based on data available at the time the decisions are made and cannot depend on future observations. The two-stage formulation is widely used in stochastic programming. The general formulation of a two-stage stochastic programming problem is given by:
<math display="block"> \min_{x\in X}\{ g(x)= f(x) + E_{\xi}[Q(x,\xi)]\} </math>
where <math>Q(x,\xi)</math> is the optimal value of the second-stage problem
<math display="block"> \min_{y}\{ q(y,\xi) \,|\, T(\xi)x+W(\xi) y = h(\xi)\}. </math>

The classical two-stage linear stochastic programming problem can be formulated as
<math display="block"> \begin{array}{llr} \min\limits_{x\in \mathbb{R}^n} & g(x)= c^T x + E_{\xi}[Q(x,\xi)] & \\ \text{subject to} & Ax = b & \\ & x \geq 0 & \end{array} </math>
where <math>Q(x,\xi)</math> is the optimal value of the second-stage problem
<math display="block"> \begin{array}{llr} \min\limits_{y\in \mathbb{R}^m} & q(\xi)^T y & \\ \text{subject to} & T(\xi)x+W(\xi)y = h(\xi) & \\ & y \geq 0 & \end{array} </math>

In this formulation, <math>x\in \mathbb{R}^n</math> is the first-stage decision variable vector, <math>y\in \mathbb{R}^m</math> is the second-stage decision variable vector, and <math>\xi(q,T,W,h)</math> contains the data of the second-stage problem. At the first stage we have to make a "here-and-now" decision <math>x</math> before the realization of the uncertain data <math>\xi</math>, viewed as a random vector, is known. At the second stage, after a realization of <math>\xi</math> becomes available, we optimize our behavior by solving an appropriate optimization problem. At the first stage we therefore optimize (minimize in the above formulation) the cost <math>c^Tx</math> of the first-stage decision plus the expected cost of the (optimal) second-stage decision.

We can view the second-stage problem simply as an optimization problem that describes our supposedly optimal behavior when the uncertain data is revealed, or we can consider its solution as a recourse action, where the term <math>Wy</math> compensates for a possible inconsistency of the system <math>Tx\leq h</math> and <math>q^Ty</math> is the cost of this recourse action.

The considered two-stage problem is ''linear'' because the objective functions and the constraints are linear. Conceptually this is not essential, and one can consider more general two-stage stochastic programs. For example, integrality constraints could be added to the first-stage problem so that its feasible set is discrete, and non-linear objectives and constraints could also be incorporated if needed.<ref>{{cite book |last1=Shapiro |first1=Alexander |last2=Philpott |first2=Andy |title=A tutorial on Stochastic Programming |url=http://www2.isye.gatech.edu/people/faculty/Alex_Shapiro/TutorialSP.pdf}}</ref> A small numerical illustration of the linear case, via its scenario-based deterministic equivalent, is given at the end of this section.

=== Distributional assumption ===

The formulation of the above two-stage problem assumes that the second-stage data <math>\xi</math> is modeled as a random vector with a '''''known''''' probability distribution. This is justified in many situations. For example, the distribution of <math>\xi</math> could be inferred from historical data if one assumes that the distribution does not significantly change over the considered period of time. Also, the empirical distribution of a sample could be used as an approximation to the distribution of the future values of <math>\xi</math>.
If one has a prior model for <math>\xi</math>, a posterior distribution can be obtained by a Bayesian update.
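
==== Illustrative example ====

When <math>\xi</math> has finitely many realizations (scenarios) <math>\xi_1,\dots,\xi_K</math> with probabilities <math>p_1,\dots,p_K</math>, the expectation becomes a finite sum and the two-stage linear problem collapses into a single large linear program, often called the deterministic equivalent or extensive form:
<math display="block"> \min_{x,\,y_1,\dots,y_K} \; c^T x + \sum_{k=1}^{K} p_k\, q_k^T y_k \quad \text{subject to} \quad Ax=b,\;\; T_k x + W_k y_k = h_k,\;\; x \geq 0,\;\; y_k \geq 0 . </math>

The following sketch solves a hypothetical two-scenario instance of this extensive form; the data, variable names, and the choice of SciPy's <code>linprog</code> solver are illustrative and not part of the formulation above. A first-stage order quantity <math>x</math> is chosen before demand is known, and a second-stage recourse purchase covers any shortage once the demand scenario is revealed.

<syntaxhighlight lang="python">
# Minimal sketch (illustrative data): two-scenario deterministic equivalent
# of a two-stage linear stochastic program, solved with scipy.optimize.linprog.
from scipy.optimize import linprog

# First stage: order x units at unit cost c = 1.
# Second stage: demand h(xi) is 5 or 15 with equal probability; a shortage is
# covered by a recourse purchase y at unit cost q = 1.5, with a slack s so that
#   x + y - s = h(xi),   y, s >= 0.
c_first = 1.0
q_recourse = 1.5
demands = [5.0, 15.0]
probs = [0.5, 0.5]

# Decision vector of the extensive form: [x, y_1, s_1, y_2, s_2].
# Objective: c*x + sum_k p_k * q * y_k  (slacks cost nothing).
c_vec = [c_first,
         probs[0] * q_recourse, 0.0,
         probs[1] * q_recourse, 0.0]

# One equality row per scenario:  x + y_k - s_k = h_k.
A_eq = [[1.0, 1.0, -1.0, 0.0, 0.0],
        [1.0, 0.0, 0.0, 1.0, -1.0]]
b_eq = [demands[0], demands[1]]

res = linprog(c_vec, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
print("first-stage decision x* =", res.x[0])   # expected: 5.0
print("optimal expected cost   =", res.fun)    # expected: 12.5
</syntaxhighlight>

In this instance the expected recourse cost per unit of shortage (<math>0.5 \times 1.5 = 0.75</math>) is below the first-stage unit cost (1), so the optimal "here-and-now" decision orders only enough for the low-demand scenario and relies on recourse in the high-demand scenario, giving <math>x^* = 5</math> with expected total cost 12.5.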