Model predictive control
== Overview ==
[[File:MPC 3x3.gif|thumb|3 state and 3 actuator multi-input multi-output MPC simulation]]
The models used in MPC are generally intended to represent the behavior of complex and simple [[dynamical system]]s. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic [[PID controller]]s. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.

MPC models predict the change in the [[dependent variable]]s of the modeled system that will be caused by changes in the [[independent variable]]s. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are treated as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.

MPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required.

While many real processes are not linear, they can often be considered approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications, with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the [[superposition principle]] of [[linear algebra]] enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust.
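As a minimal sketch of this idea, assuming a hypothetical single-input single-output process described by unit step-response coefficients, the superposed prediction reduces to one matrix–vector product (in the style of dynamic matrix control); all names and numbers below are illustrative, not from any particular implementation:

<syntaxhighlight lang="python">
# A minimal sketch, assuming a hypothetical SISO process whose unit
# step-response coefficients s are already known.
import numpy as np

s = np.array([0.2, 0.5, 0.8, 1.0, 1.0])  # assumed step-response coefficients
n, m = len(s), 3                          # prediction horizon, future input moves

# Dynamic matrix: column j is the step response delayed by j samples, so
# y = S @ du superposes the response to every individual input move.
S = np.zeros((n, m))
for j in range(m):
    S[j:, j] = s[:n - j]

du = np.array([1.0, -0.5, 0.2])           # hypothetical future input moves
dy = S @ du                               # predicted output change by superposition

# Unconstrained control: least-squares moves that remove a predicted error.
e = np.ones(n)                            # hypothetical predicted error from target
du_plan, *_ = np.linalg.lstsq(S, e, rcond=None)
</syntaxhighlight>

The point of the sketch is that, for a linear model, both prediction and the unconstrained move calculation are direct linear-algebra operations.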
When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a [[Kalman filter]] or specify a model for linear MPC.

An algorithmic study by Al-Gherwi, Budman, and Elkamel shows that utilizing a dual-mode approach can provide a significant reduction in online computations while maintaining performance comparable to an unaltered implementation. The proposed algorithm solves N [[convex optimization]] problems in parallel based on the exchange of information among controllers.<ref>{{cite journal |last1=Al-Gherwi |first1=Walid |last2=Budman |first2=Hector |last3=Elkamel |first3=Ali |title=A robust distributed model predictive control based on a dual-mode approach |journal=Computers and Chemical Engineering |date=3 July 2012 |volume=50 |issue=2013 |pages=130–138 |doi=10.1016/j.compchemeng.2012.11.002 }}</ref>

=== Theory behind MPC ===
[[Image:MPC scheme basic.svg|thumb|A discrete MPC scheme.]]
MPC is based on iterative, finite-horizon optimization of a plant model. At time <math>t</math> the current plant state is sampled and a cost-minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: <math>[t,t+T]</math>. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of [[Euler–Lagrange equation]]s) a cost-minimizing control strategy until time <math>t+T</math>. Only the first step of the control strategy is implemented, then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and a new predicted state path. The prediction horizon keeps being shifted forward, and for this reason MPC is also called '''receding horizon control'''. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method.<ref>Nikolaou, Michael; "Model predictive controllers: A critical synthesis of theory and industrial needs", ''Advances in Chemical Engineering'', volume 26, Academic Press, 2001, pages 131–204</ref><ref>{{Cite journal |last1=Berberich |first1=Julian |last2=Kohler |first2=Johannes |last3=Muller |first3=Matthias A. |last4=Allgöwer |first4=Frank |date=2022 |title=Linear Tracking MPC for Nonlinear Systems—Part I: The Model-Based Case |url=https://ieeexplore.ieee.org/document/9756294 |journal=IEEE Transactions on Automatic Control |volume=67 |issue=9 |pages=4390–4405 |doi=10.1109/TAC.2022.3166872 |arxiv=2105.08560 |s2cid=234763155 |issn=0018-9286}}</ref>

=== Principles of MPC ===
Model predictive control is a multivariable control algorithm that uses:
* an internal dynamic model of the process
* a cost function ''J'' over the receding horizon
* an optimization algorithm minimizing the cost function ''J'' using the control input ''u''

An example of a quadratic cost function for optimization is given by:
:<math>J=\sum_{i=1}^N w_{x_i} (r_i-x_i)^2 + \sum_{i=1}^M w_{u_i} {\Delta u_i}^2</math>
without violating constraints (low/high limits), with
:<math>x_i</math>: <math>i</math><sup>th</sup> controlled variable (e.g. measured temperature)
:<math>r_i</math>: <math>i</math><sup>th</sup> reference variable (e.g. required temperature)
:<math>u_i</math>: <math>i</math><sup>th</sup> manipulated variable (e.g. control valve)
:<math>w_{x_i}</math>: weighting coefficient reflecting the relative importance of <math>x_i</math>
:<math>w_{u_i}</math>: weighting coefficient penalizing large changes in <math>u_i</math>
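As a minimal sketch of the principles above, assuming a hypothetical linear model <math>x_{k+1}=Ax_k+Bu_k</math> with illustrative weights, reference, and actuator limits, one receding-horizon step can be posed and solved as a convex program, here with the CVXPY library:

<syntaxhighlight lang="python">
# A minimal sketch, assuming a hypothetical linear model x_{k+1} = A x_k + B u_k
# and the quadratic cost J defined above; solved with CVXPY.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # assumed state-transition matrix
B = np.array([[0.005],
              [0.1]])             # assumed input matrix
N = 20                            # prediction horizon
w_x, w_u = 1.0, 0.1               # weights on tracking error and input moves
r = np.array([1.0, 0.0])          # reference (target state)
u_min, u_max = -1.0, 1.0          # actuator low/high limits (constraints)

x0 = np.zeros(2)                  # current sampled plant state
x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += w_x * cp.sum_squares(r - x[:, k + 1])   # (r_i - x_i)^2 terms
    # Control move Δu; the input applied before this horizon is assumed zero.
    du = u[:, k] - (u[:, k - 1] if k > 0 else 0)
    cost += w_u * cp.sum_squares(du)                # (Δu_i)^2 terms
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    u_min <= u[:, k], u[:, k] <= u_max]

cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value[:, 0])  # only this first move is applied before re-solving
</syntaxhighlight>

In keeping with the receding-horizon principle described above, only <code>u.value[:, 0]</code> would be sent to the plant; the horizon then shifts one step and the problem is solved again from the newly sampled state.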