Optimal control
==General method==
Optimal control deals with the problem of finding a control law for a given system such that a certain [[optimality criterion]] is achieved. A control problem includes a [[cost functional]] that is a [[function (mathematics)|function]] of state and control variables. An '''optimal control''' is a set of [[differential equation]]s describing the paths of the control variables that minimize the cost functional. The optimal control can be derived using [[Pontryagin's maximum principle]] (a [[necessary condition]] also known as Pontryagin's minimum principle or simply Pontryagin's principle),<ref>{{cite book |first=I. M. |last=Ross |author-link=I. Michael Ross |year=2009 |title=A Primer on Pontryagin's Principle in Optimal Control |publisher=Collegiate Publishers |isbn=978-0-9843571-0-9 }}</ref> or by solving the [[Hamilton–Jacobi–Bellman equation]] (a [[sufficient condition]]).

We begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is: how should the driver press the accelerator pedal in order to ''minimize'' the total traveling time? In this example, the term ''control law'' refers specifically to the way in which the driver presses the accelerator and shifts the gears. The ''system'' consists of both the car and the road, and the ''optimality criterion'' is the minimization of the total traveling time. Control problems usually include ancillary [[Constraint (mathematics)|constraint]]s: for example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, there may be speed limits, and so on. A proper cost function will be a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and [[initial condition]]s of the system. [[Constraint (mathematics)|Constraint]]s are often interchangeable with the cost function: a limit on fuel consumption, for instance, may instead be expressed as a penalty term in the cost.
Another related optimal control problem may be to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another related problem may be to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.

A more abstract framework goes as follows.<ref name=":0" /> Minimize the continuous-time cost functional
<math display="block">J[\textbf{x}(\cdot), \textbf{u}(\cdot), t_0, t_f] := E\,[\textbf{x}(t_0),t_0,\textbf{x}(t_f),t_f] + \int_{t_0}^{t_f} F\,[\textbf{x}(t),\textbf{u}(t),t] \,\mathrm dt</math>
subject to the first-order dynamic constraints (the '''state equation''')
<math display="block"> \dot{\textbf{x}}(t) = \textbf{f}\,[\,\textbf{x}(t), \textbf{u}(t), t],</math>
the algebraic ''path constraints''
<math display="block"> \textbf{h}\,[\textbf{x}(t),\textbf{u}(t),t] \leq \textbf{0},</math>
and the [[boundary condition|endpoint condition]]s
<math display="block">\textbf{e}[\textbf{x}(t_0),t_0,\textbf{x}(t_f),t_f] = 0,</math>
where <math>\textbf{x}(t)</math> is the ''state'', <math>\textbf{u}(t)</math> is the ''control'', <math>t</math> is the independent variable (generally speaking, time), <math>t_0</math> is the initial time, and <math>t_f</math> is the terminal time. The terms <math>E</math> and <math>F</math> are called the ''endpoint cost'' and the ''running cost'', respectively. In the [[calculus of variations]], <math>E</math> and <math>F</math> are referred to as the ''Mayer term'' and the ''Lagrangian'', respectively. Note that the path constraints are in general ''inequality'' constraints and thus may not be active (i.e., equal to zero) at the optimal solution. Note also that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution <math>[\textbf{x}^*(t),\textbf{u}^*(t),t_0^*, t_f^*]</math> to the optimal control problem is ''locally minimizing''.
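As a minimal numerical sketch (not part of the formalism above, and using a hypothetical toy problem chosen for illustration), the abstract framework can be attacked by ''direct transcription'': discretize the control <math>\textbf{u}(t)</math>, integrate the state equation forward, and hand the discretized running cost and endpoint condition to a nonlinear programming solver. The toy instance here is to minimize the running cost <math>F = u^2</math> over <math>[0,1]</math> for the scalar state equation <math>\dot{x} = u</math> with endpoint conditions <math>x(0) = 0</math> and <math>x(1) = 1</math>; the analytic optimum is the constant control <math>u^*(t) = 1</math> with cost <math>J^* = 1</math>, which gives a check on the numerics.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (illustrative, not from the article):
#   minimize J = integral of u(t)^2 over [0, 1]
#   subject to x'(t) = u(t), x(0) = 0, x(1) = 1.
# Analytic optimum: u*(t) = 1 (constant), J* = 1.

N = 50           # number of control intervals
dt = 1.0 / N     # step size on [0, 1]

def terminal_state(u):
    """Forward-Euler rollout of the state equation x' = u from x(0) = 0."""
    x = 0.0
    for uk in u:
        x += uk * dt
    return x

def running_cost(u):
    """Discretized running cost: sum of u_k^2 * dt (no endpoint cost E here)."""
    return float(np.sum(u**2) * dt)

# Endpoint condition e[x(t_f)] = x(1) - 1 = 0, imposed as an equality constraint.
endpoint = {"type": "eq", "fun": lambda u: terminal_state(u) - 1.0}

res = minimize(running_cost, x0=np.zeros(N), constraints=[endpoint], method="SLSQP")
print(res.fun)      # approximately 1.0, matching the analytic minimum
print(res.x[:3])    # approximately [1, 1, 1]: the constant optimal control
```

Note that the solver returns one locally minimizing discretized solution, consistent with the remark above that solutions to the general problem need not be unique; richer problems would also add the path constraints <math>\textbf{h} \leq \textbf{0}</math> as inequality constraints on each <math>u_k</math>.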