{{short description|Branch of engineering and mathematics}}
{{About|control theory in engineering|control theory in linguistics|control (linguistics)|control theory in psychology and sociology|control theory (sociology)|and|Perceptual control theory}}
{{Use mdy dates|date=July 2016}}

'''Control theory''' is a field of [[control engineering]] and [[applied mathematics]] that deals with the [[control system|control]] of [[dynamical system]]s in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any ''delay'', ''overshoot'', or ''steady-state error'' and ensuring a level of control [[Stability theory|stability]], often with the aim of achieving a degree of [[Optimal control|optimality]].

To do this, a '''controller''' with the requisite corrective behavior is required. This controller monitors the controlled [[process variable]] (PV) and compares it with the reference or [[Setpoint (control system)|set point]] (SP). The difference between the actual and desired value of the process variable, called the ''error'' signal, or SP-PV error, is applied as feedback to generate a control action that brings the controlled process variable to the same value as the set point. Other aspects which are also studied are [[controllability]] and [[observability]]. Control theory is used in [[control system engineering]] to design automated systems that have revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as [[robotics]].

Extensive use is usually made of a diagrammatic style known as the [[block diagram]]. In it the [[transfer function]], also known as the system function or network function, is a mathematical model of the relation between the input and output based on the [[differential equation]]s describing the system.

Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by [[James Clerk Maxwell]].<ref>{{cite journal |first=J. C. |last=Maxwell |author-link=James Clerk Maxwell |title=On Governors |date=1868 |journal=Proceedings of the Royal Society |volume=100 |url=https://upload.wikimedia.org/wikipedia/commons/b/b1/On_Governors.pdf |archive-url=https://web.archive.org/web/20081219051207/http://upload.wikimedia.org/wikipedia/commons/b/b1/On_Governors.pdf |archive-date=2008-12-19 |url-status=live}}</ref> Control theory was further advanced by [[Edward Routh]] in 1874 and by [[Jacques Charles François Sturm|Charles Sturm]], and in 1895 by [[Adolf Hurwitz]], who all contributed to the establishment of control stability criteria; and from 1922 onwards by the development of [[PID control]] theory by [[Nicolas Minorsky]].<ref>{{cite journal |last=Minorsky |first=Nicolas |author-link=Nicolas Minorsky |title=Directional stability of automatically steered bodies |journal=Journal of the American Society of Naval Engineers |year=1922 |volume=34 |pages=280–309 |issue=2 |doi=10.1111/j.1559-3584.1922.tb04958.x}}</ref> Although a major application of [[mathematics|mathematical]] control theory is in [[Control Systems Engineering|control systems engineering]], which deals with the design of [[process control]] systems for industry, other applications range far beyond this.
As the general theory of feedback systems, control theory is useful wherever feedback occurs; thus control theory also has applications in life sciences, computer engineering, sociology and [[operations research]].<ref>{{Cite web|url=https://d-nb.info/gnd/4032317-1|title=Katalog der Deutschen Nationalbibliothek (Authority control)|last=GND|website=portal.dnb.de|access-date=2024-12-21}}</ref>

==History==
{{see also|Control engineering#History}}
[[File:Boulton and Watt centrifugal governor-MJ.jpg|thumb|right|[[Centrifugal governor]] in a [[Boulton & Watt engine]] of 1788]]
Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the [[centrifugal governor]], conducted by the physicist [[James Clerk Maxwell]] in 1868, entitled ''On Governors''.<ref name="Maxwell1867">{{cite journal|author=Maxwell, J.C.|year=1868|title=On Governors|journal=Proceedings of the Royal Society of London|volume=16|pages=270–283|doi=10.1098/rspl.1867.0055|jstor=112510|doi-access=}}<!--| access-date = 2008-04-14--></ref> A centrifugal governor had already been used to regulate the velocity of windmills.<ref>{{cite web| title = Control Theory: History, Mathematical Achievements and Perspectives|url=https://citeseerx.ist.psu.edu/doc/10.1.1.302.5633|author1=Fernandez-Cara, E. |author2=Zuazua, E. |publisher=Boletin de la Sociedad Espanola de Matematica Aplicada|citeseerx=10.1.1.302.5633 |issn=1575-9822}}</ref> Maxwell described and analyzed the phenomenon of [[self-oscillation]], in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, [[Edward John Routh]], abstracted Maxwell's results for the general class of linear systems.<ref name=Routh1975>{{cite book | author = Routh, E.J. |author2=Fuller, A.T. | year = 1975 | title = Stability of motion | publisher = Taylor & Francis }}</ref> Independently, [[Adolf Hurwitz]] analyzed system stability using differential equations in 1895, resulting in what is now known as the [[Routh–Hurwitz theorem]].<ref name=Routh1877>{{cite book | author = Routh, E.J. | year = 1877 | title = A Treatise on the Stability of a Given State of Motion, Particularly Steady Motion | url = https://archive.org/details/atreatiseonstab00routgoog | publisher = Macmillan and co. }}</ref><ref name=Hurwitz1964>{{cite journal | author = Hurwitz, A. | year = 1964 | title = On The Conditions Under Which An Equation Has Only Roots With Negative Real Parts | journal = Selected Papers on Mathematical Trends in Control Theory }}</ref>

A notable application of dynamic control was in the area of crewed flight. The [[Wright brothers]] made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.

By [[World War II]], control theory was becoming an important area of research.
[[Irmgard Flügge-Lotz]] developed the theory of discontinuous automatic control systems, and applied the [[bang–bang control|bang-bang principle]] to the development of [[autopilot|automatic flight control equipment]] for aircraft.<ref>{{cite journal|last1=Flugge-Lotz|first1=Irmgard|last2=Titus|first2=Harold A.|title=Optimum and Quasi-Optimum Control of Third and Fourth-Order Systems|journal=Stanford University Technical Report|date=October 1962|issue=134|pages=8–12|url=http://www.dtic.mil/dtic/tr/fulltext/u2/621137.pdf|archive-url=https://web.archive.org/web/20190427142417/http://www.dtic.mil/dtic/tr/fulltext/u2/621137.pdf|url-status=dead|archive-date=April 27, 2019}}</ref><ref>{{cite book|last1=Hallion|first1=Richard P.|editor1-last=Sicherman|editor1-first=Barbara|editor2-last=Green|editor2-first=Carol Hurd|editor3-last=Kantrov|editor3-first=Ilene|editor4-last=Walker|editor4-first=Harriette|title=Notable American Women: The Modern Period: A Biographical Dictionary|url=https://archive.org/details/notableamericanw00sich|url-access=registration|date=1980|publisher=Belknap Press of Harvard University Press|location=Cambridge, Mass.|isbn=9781849722704|pages=[https://archive.org/details/notableamericanw00sich/page/241 241–242]}}</ref> Other areas of application for discontinuous controls included [[fire-control system]]s, [[guidance system]]s and [[electronics]].

Sometimes, mechanical methods are used to improve the stability of systems. For example, [[Stabilizer (ship)|ship stabilizers]] are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.

The [[Space Race]] also depended on accurate spacecraft control, and control theory has also seen increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an [[internal model (motor control)|internal model]] that obeys the [[Good regulator|good regulator theorem]]. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a [[controller (control theory)|regulator]] interacting with a [[plant (control theory)|plant]].

==Open-loop and closed-loop (feedback) control==
{{excerpt|Control loop#Open-loop and closed-loop}}

==Classical control theory==
{{excerpt|Closed-loop controller}}

==Linear and nonlinear control theory==
The field of control theory can be divided into two branches:
* ''[[Linear control theory]]'' – This applies to systems made of devices which obey the [[superposition principle]], which means roughly that the output is proportional to the input. They are governed by [[linear differential equation]]s. A major subclass is systems which in addition have parameters which do not change with time, called ''[[linear time invariant]]'' (LTI) systems. These systems are amenable to powerful [[frequency domain]] mathematical techniques of great generality, such as the [[Laplace transform]], [[Fourier transform]], [[Z transform]], [[Bode plot]], [[root locus]], and [[Nyquist stability criterion]]. These lead to a description of the system using terms like [[Bandwidth (signal processing)|bandwidth]], [[frequency response]], [[eigenvalue]]s, [[gain (electronics)|gain]], [[resonant frequency|resonant frequencies]], and [[zeros and poles]], which give solutions for system response and design techniques for most systems of interest.
* ''[[Nonlinear control theory]]'' – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control systems are nonlinear. These systems are often governed by [[nonlinear differential equation]]s. The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include [[limit cycle]] theory, [[Poincaré map]]s, the [[Lyapunov function|Lyapunov stability theorem]], and [[describing function]]s. Nonlinear systems are often analyzed using [[numerical method]]s on computers, for example by [[simulation|simulating]] their operation using a [[simulation language]]. If only solutions near a stable point are of interest, nonlinear systems can often be [[linearization|linearized]] by approximating them by a linear system using [[perturbation theory]], and linear techniques can be used.<ref>{{cite web| url = http://www.mathworks.com/help/toolbox/simulink/slref/trim.html| title = trim point}}</ref>

==Analysis techniques – frequency domain and time domain==
Mathematical techniques for analyzing and designing control systems fall into two different categories:
* ''[[Frequency domain]]'' – In this type the values of the [[state variable]]s, the mathematical [[variable (mathematics)|variables]] representing the system's input, output and feedback, are represented as functions of [[frequency]]. The input signal and the system's [[transfer function]] are converted from time functions to functions of frequency by a [[transform (mathematics)|transform]] such as the [[Fourier transform]], [[Laplace transform]], or [[Z transform]]. The advantage of this technique is that it results in a simplification of the mathematics; the ''[[differential equation]]s'' that represent the system are replaced by ''[[algebraic equation]]s'' in the frequency domain, which are much simpler to solve. However, frequency domain techniques can only be used with linear systems, as mentioned above.
* ''[[Time-domain state space representation]]'' – In this type the values of the [[state variable]]s are represented as functions of time. With this model, the system being analyzed is represented by one or more [[differential equation]]s. Since frequency domain techniques are limited to [[linear function|linear]] systems, the time domain is widely used to analyze real-world nonlinear systems. Although these are more difficult to solve, modern computer simulation techniques such as [[simulation language]]s have made their analysis routine. The equivalence of the two representations for a linear system is illustrated in the sketch below.
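The correspondence between the two viewpoints can be made concrete with standard numerical tools. The following is a minimal sketch, assuming Python with NumPy and SciPy; the second-order plant <math>G(s) = 1/(s^2 + 2s + 1)</math> is an arbitrary illustrative choice, not an example from the literature.

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

# Frequency-domain view: a transfer function G(s) = 1 / (s^2 + 2 s + 1)
G = signal.TransferFunction([1.0], [1.0, 2.0, 1.0])

# The same plant in the time-domain (state-space) view:
#   x'(t) = A x(t) + B u(t),   y(t) = C x(t) + D u(t)
A, B, C, D = signal.tf2ss([1.0], [1.0, 2.0, 1.0])

# Both representations yield the same step response.
t = np.linspace(0.0, 10.0, 500)
_, y_tf = signal.step(G, T=t)
_, y_ss = signal.step(signal.StateSpace(A, B, C, D), T=t)

print(np.allclose(y_tf, y_ss))  # True: same system, two descriptions
</syntaxhighlight>

Here the frequency-domain object is a pair of polynomial coefficient lists, while the time-domain object is the matrix quadruple (A, B, C, D) discussed next.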
In contrast to the frequency-domain analysis of the classical control theory, modern control theory utilizes the time-domain [[state space (controls)|state space]] representation,{{citation needed|date=December 2022}} a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With <math>p</math> inputs and <math>q</math> outputs, we would otherwise have to write down <math>q \times p</math> Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.<ref>{{cite book|title=State space & linear systems|series=Schaum's outline series |publisher=McGraw Hill|author=Donald M Wiberg|year=1971 |isbn=978-0-07-070096-3}}</ref><ref>{{cite journal|author=Terrell, William|title=Some fundamental control theory I: Controllability, observability, and duality —AND— Some fundamental control Theory II: Feedback linearization of single input nonlinear systems|journal=American Mathematical Monthly|volume=106|issue=9|year=1999|pages=705–719 and 812–828|url=http://www.maa.org/programs/maa-awards/writing-awards/some-fundamental-control-theory-i-controllability-observability-and-duality-and-some-fundamental|doi=10.2307/2589614|jstor=2589614}}</ref>

==System interfacing==
Control systems can be divided into different categories depending on the number of inputs and outputs.
* [[Single-input single-output system|Single-input single-output]] (SISO) – This is the simplest and most common type, in which one output is controlled by one control signal. Examples are the cruise control example above, or an [[audio system]], in which the control input is the input audio signal and the output is the sound waves from the speaker.
* [[Multiple-input multiple-output system|Multiple-input multiple-output]] (MIMO) – These are found in more complicated systems. For example, modern large [[telescope]]s such as the [[Keck telescopes|Keck]] and [[MMT Observatory|MMT]] have mirrors composed of many separate segments, each controlled by an [[actuator]]. The shape of the entire mirror is constantly adjusted by a MIMO [[active optics]] control system using input from multiple sensors at the focal plane, to compensate for changes in the mirror shape due to thermal expansion, contraction, stresses as it is rotated, and distortion of the [[wavefront]] due to turbulence in the atmosphere. Complicated systems such as [[nuclear reactor]]s and human [[cell (biology)|cells]] are simulated by a computer as large MIMO control systems.

===Classical SISO system design===
The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input.
The system analysis is carried out in the time domain using [[differential equations]], in the complex-s domain with the [[Laplace transform]], or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second-order, single-variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are [[PID controller]]s. A less common implementation may include either or both a lead or lag filter. The ultimate goal is to meet requirements typically provided in the time domain, called the step response, or at times in the frequency domain, called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically gain and phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model.

===Modern MIMO system design===
Modern control theory is carried out in the [[State space (controls)|state space]], and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first-order [[differential equation]]s defined using [[state variables]]. [[Nonlinear control|Nonlinear]], [[multivariable control|multivariable]], [[adaptive control|adaptive]] and [[robust control]] theories come under this division. Being fairly new, modern control theory has many areas yet to be explored. Scholars like [[Rudolf E. Kálmán]] and [[Aleksandr Lyapunov]] are well known among those who have shaped modern control theory.

==Topics in control theory==

===Stability===
The ''stability'' of a general [[dynamical system]] with no input can be described with [[Lyapunov stability]] criteria.
* A [[linear system]] is called [[BIBO stability|bounded-input bounded-output (BIBO) stable]] if its output will stay [[bounded function|bounded]] for any bounded input.
* Stability for [[nonlinear system]]s that take an input is [[input-to-state stability]] (ISS), which combines Lyapunov stability and a notion similar to BIBO stability.

For simplicity, the following descriptions focus on continuous-time and discrete-time '''linear systems'''. Mathematically, this means that for a causal linear system to be stable all of the [[Pole (complex analysis)|poles]] of its [[transfer function]] must have negative real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function's complex poles reside
* in the open left half of the [[complex plane]] for continuous time, when the [[Laplace transform]] is used to obtain the transfer function;
* inside the [[unit circle]] for discrete time, when the [[Z-transform]] is used.
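As a minimal numerical illustration of these two pole conditions (a toy example assumed here, not drawn from a reference), the poles can be computed directly as the roots of the transfer-function denominator:

<syntaxhighlight lang="python">
import numpy as np

# Continuous time: G(s) = 1 / (s^2 + 3 s + 2); poles are roots of the denominator.
poles_s = np.roots([1.0, 3.0, 2.0])          # array([-2., -1.])
print(np.all(poles_s.real < 0))              # True: open left half-plane -> stable

# Discrete time: H(z) = 1 / (1 - 0.5 z^-1) = z / (z - 0.5); pole at z = 0.5.
poles_z = np.roots([1.0, -0.5])              # array([0.5])
print(np.all(np.abs(poles_z) < 1))           # True: inside the unit circle -> stable
</syntaxhighlight>

The discrete-time pole at <math>z = 0.5</math> is the same example worked analytically below.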
The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in [[Cartesian coordinates]] where the <math>x</math> axis is the real axis, and the discrete Z-transform is in [[circular coordinates]] where the <math>\rho</math> axis is the real axis.

When the appropriate conditions above are satisfied a system is said to be [[asymptotic stability|asymptotically stable]]; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a [[Absolute value#Complex_numbers|modulus]] equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is [[marginal stability|marginally stable]]; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and complex component is zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.

If a system in question has an [[impulse response]] of
:<math>\ x[n] = 0.5^n u[n]</math>
then the Z-transform (see [[Z-transform#Example 2 (causal ROC)|this example]]) is given by
:<math>\ X(z) = \frac{1}{1 - 0.5z^{-1}},</math>
which has a pole at <math>z = 0.5</math> (zero [[imaginary number|imaginary part]]). This system is BIBO (asymptotically) stable since the pole is ''inside'' the unit circle. However, if the impulse response were
:<math>\ x[n] = 1.5^n u[n]</math>
then the Z-transform is
:<math>\ X(z) = \frac{1}{1 - 1.5z^{-1}},</math>
which has a pole at <math>z = 1.5</math> and is not BIBO stable since the pole has a modulus strictly greater than one.

Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the [[root locus]], [[Bode plot]]s or the [[Nyquist plot]]s.

Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use [[Ship stability#Stabilizer fins|antiroll fins]] that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.

===Controllability and observability===
{{Main|Controllability|Observability}}
[[Controllability]] and [[observability]] are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed ''stabilizable''. Observability instead is related to the possibility of ''observing'', through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.
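For linear systems, these properties reduce to rank tests on the Kalman controllability and observability matrices. The following is a minimal sketch, assuming a small hand-picked (A, B, C) triple chosen purely for illustration:

<syntaxhighlight lang="python">
import numpy as np

def ctrb(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Kalman observability matrix [C; CA; ...; C A^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# A toy two-state system (assumed values, for illustration only).
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
print(np.linalg.matrix_rank(ctrb(A, B)) == n)  # True -> every state is controllable
print(np.linalg.matrix_rank(obsv(A, C)) == n)  # True -> every state is observable
</syntaxhighlight>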
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the [[eigenvalues]] of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system, which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis. Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.

===Control specification===
Several different control strategies have been devised over the years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially [[robotics]] or aircraft cruise control).

A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it is desired to obtain particular dynamics in the closed loop: i.e. that the poles have <math>Re[\lambda] < -\overline{\lambda}</math>, where <math>\overline{\lambda}</math> is a fixed value strictly greater than zero, instead of simply asking that <math>Re[\lambda] < 0</math>.

Another typical specification is the rejection of a step disturbance; including an [[integrator]] in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.

Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the [[rise time]] (the time needed by the control system to reach the desired value after a perturbation), peak [[overshoot (signal)|overshoot]] (the highest value reached by the response before reaching the desired value) and others ([[settling time]], quarter-decay). Frequency domain specifications are usually related to [[robust control|robustness]] (see below). Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).

===Model identification and robustness===
A control system must always have some robustness property. A [[robust control]]ler is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations; otherwise, the true [[system dynamics]] can be so complicated that a complete model is impossible.

;System identification
{{details|System identification}}
The process of determining the equations that govern the model's dynamics is called [[system identification]]. This can be done off-line: for example, executing a series of measures from which to calculate an approximated mathematical model, typically its [[transfer function]] or matrix.
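As a minimal sketch of such off-line identification (synthetic data and a first-order model are assumed here purely for illustration), the coefficients of a discrete-time model can be fitted by least squares:

<syntaxhighlight lang="python">
import numpy as np

# Synthetic "measurements" from a first-order plant y[k+1] = a y[k] + b u[k],
# with assumed true parameters a = 0.9, b = 0.5, excited by random inputs.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
u = rng.standard_normal(200)
y = np.zeros(201)
for k in range(200):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.standard_normal()

# Least-squares fit: stack regressors [y[k], u[k]] and solve for (a, b).
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # close to [0.9, 0.5]
</syntaxhighlight>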
Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations: for example, in the case of a [[Mass-spring-damper model|mass-spring-damper]] system we know that <math>m \ddot{x}(t) = -K x(t) - B \dot{x}(t)</math>. Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.

Some advanced control techniques include an "on-line" identification process (see below). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance.

;Analysis
Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using [[Nyquist diagram|Nyquist]] and [[Bode diagram]]s. Topics include [[Bode plot#Gain margin and phase margin|gain and phase margin]] and amplitude margin. For MIMO (multi-input multi-output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). That is, if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties.

;Constraints
A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: [[model predictive control]] (see below), and [[anti-wind up system (control)|anti-wind up systems]]. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.

==System classifications==

===Linear systems control===
{{Main|State space (controls)}}
For MIMO systems, pole placement can be performed mathematically using a [[State space (controls)|state space representation]] of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured, and so observers must be included and incorporated in pole placement design.

===Nonlinear systems control===
{{Main|Nonlinear control}}
Processes in industries like [[robotics]] and the [[aerospace industry]] typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems.
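As a minimal sketch of linearization (the pendulum model and parameter values are assumed here for illustration), consider a damped pendulum <math>\ddot{\theta} = -(g/\ell)\sin\theta - c\dot{\theta}</math> linearized about its hanging equilibrium; local stability follows from the eigenvalues of the Jacobian:

<syntaxhighlight lang="python">
import numpy as np

# Damped pendulum: theta'' = -(g/l) sin(theta) - c theta'
# State x = (theta, theta'); nonlinear dynamics x' = f(x).
g, l, c = 9.81, 1.0, 0.5

# Jacobian of f at the equilibrium x = (0, 0), using sin(theta) ~ theta:
A = np.array([[ 0.0,    1.0],
              [-g / l, -c  ]])

eigvals = np.linalg.eigvals(A)
print(eigvals)                    # complex pair with negative real part
print(np.all(eigvals.real < 0))   # True: the equilibrium is locally stable
</syntaxhighlight>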
Theories for nonlinear control include [[feedback linearization]], [[backstepping]], [[sliding mode control]], and trajectory linearization control; these normally take advantage of results based on [[Lyapunov's theory]]. [[Differential geometry]] has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.<ref name=Shi_Gu_et_al>{{cite journal |author1 = Gu Shi|year = 2015 |title = Controllability of structural brain networks (Article Number 8414) |journal = Nature Communications |volume = 6 |quote = Here we use tools from control and network theories to offer a mechanistic explanation for how the brain moves between cognitive states drawn from the network organization of white matter microstructure |doi = 10.1038/ncomms9414|issue = 6 |display-authors=etal|arxiv = 1406.5197|bibcode = 2015NatCo...6.8414G |pmid = 26423222 |pmc = 4600713 |page = 8414}}</ref>

===Decentralized systems control===
{{Main|Distributed control system}}
When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways; for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.

===Deterministic and stochastic systems control===
{{Main|Stochastic control}}
A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.

==Main control strategies==
Every control system must guarantee first the stability of the closed-loop behavior. For [[linear system]]s, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on [[Aleksandr Lyapunov]]'s theory) to ensure stability without regard to the inner dynamics of the system. The possibility of fulfilling different specifications varies with the model considered and the control strategy chosen.

;List of the main control techniques
* [[Optimal control]] is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to its desired trajectory while consuming the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability. These are [[Model Predictive Control]] (MPC) and [[linear-quadratic-Gaussian control]] (LQG). The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes. However, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in [[process control]].
* [[Robust control]] deals explicitly with uncertainty in its approach to controller design. Controllers designed using ''robust control'' methods tend to be able to cope with small differences between the true system and the nominal model used for design.<ref>{{cite journal|last1=Melby|first1=Paul|display-authors=etal|title=Robustness of Adaptation in Controlled Self-Adjusting Chaotic Systems |journal=Fluctuation and Noise Letters |volume=02|issue=4|pages=L285–L292|date=2002|doi=10.1142/S0219477502000919}}</ref> The early methods of [[Hendrik Wade Bode|Bode]] and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness. Examples of modern robust control techniques include [[H-infinity loop-shaping]] developed by Duncan McFarlane and [[Keith Glover]], [[sliding mode control]] (SMC) developed by [[Vadim Utkin]], and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications.<ref name='TCL1'>{{cite journal|title=Safe Protocols for Generating Power Pulses with Heterogeneous Populations of Thermostatically Controlled Loads |author=N. A. Sinitsyn, S. Kundu, S. Backhaus |journal=[[Energy Conversion and Management]]|volume=67|year=2013|pages=297–308|arxiv=1211.0248|doi=10.1016/j.enconman.2012.11.021|bibcode=2013ECM....67..297S |s2cid=32067734 }}</ref> Robust methods aim to achieve robust performance and/or [[Stability theory|stability]] in the presence of small modeling errors.
* [[Stochastic control]] deals with control design with uncertainty in the model. In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into account these random deviations.
* [[Adaptive control]] uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties. Adaptive controls were applied for the first time in the [[aerospace industry]] in the 1950s, and have found particular success in that field.
* A [[hierarchical control system]] is a type of [[control system]] in which a set of devices and governing software is arranged in a [[hierarchical]] [[tree (data structure)|tree]]. When the links in the tree are implemented by a [[computer network]], then that hierarchical control system is also a form of [[networked control system]].
* [[Intelligent control]] uses various AI computing approaches like [[artificial neural networks]], [[Bayesian probability]], [[fuzzy logic]],<ref>{{cite journal | title=A novel fuzzy framework for nonlinear system control| journal=Fuzzy Sets and Systems | year=2010 | last1=Liu |first1=Jie |author2=Wilson Wang |author3=Farid Golnaraghi |author4=Eric Kubica | volume=161 | issue=21 | pages=2746–2759 | doi=10.1016/j.fss.2010.04.009}}</ref> [[machine learning]], [[evolutionary computation]] and [[genetic algorithms]] or a combination of these methods, such as [[neuro-fuzzy]] algorithms, to control a [[dynamic system]].
* [[Self-organized criticality control]] may be defined as attempts to interfere in the processes by which the [[self-organized]] system dissipates energy.

==People in systems and control==
{{Main|People in systems and control}}
Many active and historical figures made significant contributions to control theory, including:
* [[Pierre-Simon Laplace]] invented the [[Z-transform]] in his work on [[probability theory]], now used to solve discrete-time control theory problems. The Z-transform is a discrete-time equivalent of the [[Laplace transform]], which is named after him.
* [[Irmgard Flügge-Lotz]] developed the theory of [[bang-bang control|discontinuous automatic control]] and applied it to [[autopilot|automatic aircraft control systems]].
* [[Alexander Lyapunov]]'s work in the 1890s marks the beginning of [[stability theory]].
* [[Harold Stephen Black|Harold S. Black]] invented the concept of [[negative feedback amplifier]]s in 1927. He managed to develop stable negative feedback amplifiers in the 1930s.
* [[Harry Nyquist]] developed the [[Nyquist stability criterion]] for feedback systems in the 1930s.
* [[Richard Bellman]] developed [[dynamic programming]] in the 1940s.<ref>{{cite magazine |author=Richard Bellman |date=1964 |title=Control Theory |doi=10.1038/scientificamerican0964-186 |magazine=[[Scientific American]] |volume=211 |issue=3 |pages=186–200 |author-link=Richard Bellman}}</ref>
* [[Warren E. Dixon]], control theorist and a professor.
* [[Kyriakos G. Vamvoudakis]] developed synchronous reinforcement learning algorithms to solve optimal control and game-theoretic problems.
* [[Andrey Kolmogorov]] co-developed the [[Wiener filter|Wiener–Kolmogorov filter]] in 1941.
* [[Norbert Wiener]] co-developed the Wiener–Kolmogorov filter and coined the term [[cybernetics]] in the 1940s.
* [[John R. Ragazzini]] introduced [[digital control]] and the use of the [[Z-transform]] in control theory (invented by Laplace) in the 1950s.
* [[Lev Pontryagin]] introduced the [[Pontryagin's minimum principle|maximum principle]] and the [[Bang-bang control|bang-bang principle]].
* [[Pierre-Louis Lions]] developed [[viscosity solutions]] into stochastic control and [[optimal control]] methods.
* [[Rudolf E. Kálmán]] pioneered the [[state-space]] approach to systems and control, introduced the notions of [[controllability]] and [[observability]], and developed the [[Kalman filter]] for linear estimation.
* [[Ali H. Nayfeh]] was one of the main contributors to nonlinear control theory and published many books on perturbation methods.
* [[Jan Camiel Willems|Jan C. Willems]] introduced the concept of dissipativity as a generalization of the [[Lyapunov function]] to input/state/output systems. The construction of the storage function, as the analogue of a Lyapunov function is called, led to the study of the [[linear matrix inequality]] (LMI) in control theory. He pioneered the behavioral approach to mathematical systems theory.
==See also==
{{Portal|Systems science}}

;Examples of control systems
{{colbegin}}
* {{annotated link|Automation}}
* {{annotated link|Deadbeat controller}}
* {{annotated link|Distributed parameter systems}}
* {{annotated link|Fractional-order control}}
* {{annotated link|H-infinity loop-shaping}}
* {{annotated link|Hierarchical control system}}
* {{annotated link|Model predictive control}}
* {{annotated link|Optimal control}}
* {{annotated link|Process control}}
* {{annotated link|Robust control}}
* {{annotated link|Servomechanism}}
* {{annotated link|State space (controls)}}
* {{annotated link|Vector control (motor)|Vector control}}
{{colend}}

;Topics in control theory
{{colbegin}}
* {{annotated link|Coefficient diagram method}}
* {{annotated link|Control reconfiguration}}
* {{annotated link|Feedback}}
* {{annotated link|H infinity}}
* {{annotated link|Hankel singular value}}
* {{annotated link|Krener's theorem}}
* {{annotated link|Lead-lag compensator}}
* {{annotated link|Minor loop feedback}}
* {{annotated link|Minor loop feedback|Multi-loop feedback}}
* {{annotated link|Positive systems}}
* {{annotated link|Radial basis function}}
* {{annotated link|Root locus}}
* {{annotated link|Signal-flow graph}}s
* {{annotated link|Stable polynomial}}
* {{annotated link|State space representation}}
* {{annotated link|Steady state}}
* {{annotated link|Transient response}}
* {{annotated link|Transient state}}
* {{annotated link|Underactuation}}
* {{annotated link|Youla–Kucera parametrization}}
* {{annotated link|Markov chain approximation method}}
{{colend}}

;Other related topics
{{colbegin}}
* {{annotated link|Adaptive system}}
* {{annotated link|Automation and remote control}}
* {{annotated link|Bond graph}}
* {{annotated link|Control engineering}}
* {{annotated link|Control–feedback–abort loop}}
* {{annotated link|Controller (control theory)}}
* {{annotated link|Cybernetics}}
* {{annotated link|Intelligent control}}
* {{annotated link|Mathematical system theory}}
* {{annotated link|Negative feedback amplifier}}
* {{annotated link|Outline of management}}
* {{annotated link|People in systems and control}}
* {{annotated link|Perceptual control theory}}
* {{annotated link|Systems theory}}
{{colend}}

==References==
{{Reflist}}

==Further reading==
* {{cite book |editor-last=Levine |editor-first=William S. |title=The Control Handbook |publisher=CRC Press |place=New York |year=1996 |isbn=978-0-8493-8570-4}}
* {{cite book |author1=Karl J. Åström |author2=Richard M. Murray |year=2008 |title=Feedback Systems: An Introduction for Scientists and Engineers |publisher=Princeton University Press |url=http://www.cds.caltech.edu/~murray/books/AM08/pdf/am08-complete_28Sep12.pdf |isbn=978-0-691-13576-2}}
* {{cite book |author=Christopher Kilian |title=Modern Control Technology |publisher=Thompson Delmar Learning |year=2005 |isbn=978-1-4018-5806-3}}
* {{cite book |author=Vannevar Bush |title=Operational Circuit Analysis |publisher=John Wiley and Sons, Inc. |year=1929}}
* {{cite book |author=Robert F. Stengel |title=Optimal Control and Estimation |publisher=Dover Publications |year=1994 |isbn=978-0-486-68200-6}}
* {{cite book |last=Franklin |title=Feedback Control of Dynamic Systems |edition=4 |year=2002 |publisher=Prentice Hall |location=New Jersey |isbn=978-0-13-032393-4 |display-authors=etal}}
* {{cite book |author1=Joseph L. Hellerstein |author2=Dawn M. Tilbury |author2-link=Dawn Tilbury |author3=Sujay Parekh |title=Feedback Control of Computing Systems |publisher=John Wiley and Sons |year=2004 |isbn=978-0-471-26637-2}}
* {{cite book |author=[[Diederich Hinrichsen]] and Anthony J. Pritchard |title=Mathematical Systems Theory I – Modelling, State Space Analysis, Stability and Robustness |publisher=Springer |year=2005 |isbn=978-3-540-44125-0}}
* {{cite book |last=Sontag |first=Eduardo |author-link=Eduardo D. Sontag |year=1998 |title=Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition |publisher=Springer |url=http://www.sontaglab.org/FTPDIR/sontag_mathematical_control_theory_springer98.pdf |isbn=978-0-387-98489-6}}
* {{cite book |last=Goodwin |first=Graham |year=2001 |title=Control System Design |publisher=Prentice Hall |isbn=978-0-13-958653-8}}
* {{cite book |author=Christophe Basso |year=2012 |title=Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide |publisher=Artech House |url=http://cbasso.pagesperso-orange.fr/Spice.htm |isbn=978-1608075577}}
<!-- * {{cite book | author = Briat, Corentin | year = 2015 | title = Linear Parameter-Varying and Time-Delay Systems. Analysis, Observation, Filtering & Control | publisher = Springer Verlag Heidelberg | isbn = 978-3-662-44049-0}}-->
* {{cite book |author1=Boris J. Lurie |author2=Paul J. Enright |title=Classical Feedback Control with Nonlinear Multi-loop Systems |edition=3 |year=2019 |publisher=CRC Press |isbn=978-1-1385-4114-6}}

;For Chemical Engineering
* {{cite book |last=Luyben |first=William |year=1989 |title=Process Modeling, Simulation, and Control for Chemical Engineers |publisher=McGraw Hill |isbn=978-0-07-039159-8}}

==External links==
{{Commons category}}
{{Wikibooks|Control Systems}}
* [https://ctms.engin.umich.edu/CTMS/index.php?aux=Home Control Tutorials for Matlab], a set of worked-through control examples solved by several different methods.
* [https://controlguru.com/ Control Tuning and Best Practices]
* [https://www.pidlab.com/ Advanced control structures, free on-line simulators explaining the control theory]

{{Control theory}}
{{Cybernetics}}
{{Systems}}
{{Areas of mathematics}}
{{Authority control}}

[[Category:Control theory| ]]
[[Category:Control engineering]]
[[Category:Computer engineering]]
[[Category:Management cybernetics]]