Active-set method
{{redirect|Active set|the band|The Active Set}}
In mathematical [[Optimization (mathematics)|optimization]], the '''active-set method''' is an algorithm for identifying the active [[Constraint (mathematics)|constraints]] among a set of [[Inequality (mathematics)|inequality]] constraints. The active constraints are then treated as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem.

An optimization problem is defined by an objective function to minimize or maximize, together with a set of constraints

: <math>g_1(x) \ge 0, \dots, g_k(x) \ge 0</math>

that define the [[feasible region]], that is, the set of all ''x'' over which to search for the optimal solution. Given a point <math>x_0</math> in the feasible region, a constraint

: <math>g_i(x) \ge 0</math>

is called '''active''' at <math>x_0</math> if <math>g_i(x_0) = 0</math>, and '''inactive''' at <math>x_0</math> if <math>g_i(x_0) > 0.</math> Equality constraints are always active. The '''active set''' at <math>x_0</math> consists of those constraints <math>g_i</math> that are active at that point {{harv|Nocedal|Wright|2006|p=308}}.

The active set is particularly important in optimization theory because it determines which constraints influence the final result of the optimization. For example, in solving a [[linear programming]] problem, the active set gives the [[hyperplane]]s that intersect at the solution point. In [[quadratic programming]], the solution is not necessarily on an edge of the bounding polygon, so an estimate of the active set gives a subset of inequalities to watch while searching for the solution, which reduces the complexity of the search.
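To make the definition concrete, the following Python fragment (an illustrative sketch, not part of the standard presentation; the constraint functions and tolerance are arbitrary choices for the example) evaluates which inequality constraints <math>g_i(x) \ge 0</math> are active at a given point:

```python
# Illustrative example: identify the active set at a point x0.
# The constraints g_i(x) >= 0 below are arbitrary choices for demonstration.

def active_set(constraints, x0, tol=1e-9):
    """Return the indices i with g_i(x0) = 0 (within a numerical tolerance)."""
    return [i for i, g in enumerate(constraints) if abs(g(x0)) <= tol]

# Feasible region: 0 <= x <= 2, written as two constraints g(x) >= 0.
g = [lambda x: x,        # active where x = 0
     lambda x: 2.0 - x]  # active where x = 2

print(active_set(g, 0.0))  # -> [0]: the constraint x >= 0 is active
print(active_set(g, 1.0))  # -> []: interior point, no active constraints
```

In floating-point arithmetic a constraint is rarely exactly zero at a computed point, which is why a tolerance is used in practice.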
==Active-set methods==
In general, an active-set algorithm has the following structure:

: Find a feasible starting point
: '''repeat until''' "optimal enough"
:: ''solve'' the equality-constrained problem defined by the active set (approximately)
:: ''compute'' the [[Lagrange multipliers]] of the active set
:: ''remove'' a subset of the constraints with negative Lagrange multipliers
:: ''search'' for infeasible constraints among the inactive constraints and add them to the problem
: '''end repeat'''

The motivation is that, near the optimum, usually only a small number of the constraints are binding, and the solve step typically takes superlinear time in the number of constraints. The method can therefore converge to the true solution by repeatedly solving a series of equality-constrained problems, dropping constraints that are not violated but stand in the way of improvement (those with negative Lagrange multipliers) and adding constraints that the current solution violates. The optimum of the previous subproblem often provides an initial guess when the equality-constrained solver needs a starting value.

Methods that can be described as '''active-set methods''' include:<ref>{{harvnb|Nocedal|Wright|2006|pp=467–480}}</ref>
* [[Successive linear programming]] (SLP)
* [[Sequential quadratic programming]] (SQP)
* [[Sequential linear-quadratic programming]] (SLQP)
* [[Frank–Wolfe algorithm|Reduced gradient method]] (RG)
* [[Generalized Reduced Gradient|Generalized reduced gradient method]] (GRG)

== Performance ==
Consider the problem of linearly constrained convex quadratic programming. Under reasonable assumptions (the problem is feasible, the system of constraints is regular at every point, and the quadratic objective is strongly convex), the active-set method terminates after finitely many steps and yields a global solution. Theoretically, the active-set method may perform a number of iterations exponential in ''m'', like the [[simplex method]], but its practical behaviour is typically much better.<ref name=":0">{{Cite web |last=Nemirovsky and Ben-Tal |date=2023 |title=Optimization III: Convex Optimization |url=http://www2.isye.gatech.edu/~nemirovs/OPTIIILN2023Spring.pdf}}</ref>{{Rp|location=Sec.9.1}}

==References==
{{Reflist}}

==Bibliography==
* {{cite book |last=Murty |first=K. G.
|title=Linear complementarity, linear and nonlinear programming |series=Sigma Series in Applied Mathematics |volume=3 |publisher=Heldermann Verlag |location=Berlin |year=1988 |pages=xlviii+629 pp |isbn=3-88538-403-5 |url=http://ioe.engin.umich.edu/people/fac/books/murty/linear_complementarity_webbook/ |mr=949214 |access-date=2010-04-03 |archive-url=https://web.archive.org/web/20100401043940/http://ioe.engin.umich.edu/people/fac/books/murty/linear_complementarity_webbook/ |archive-date=2010-04-01 |url-status=dead }}
* {{Cite book | last1=Nocedal | first1=Jorge | last2=Wright | first2=Stephen J. | title=Numerical Optimization | publisher=[[Springer-Verlag]] | location=Berlin, New York | edition=2nd | isbn=978-0-387-30303-1 | year=2006 }}

[[Category:Optimization algorithms and methods]]
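The generic active-set loop described above can be illustrated on a toy bound-constrained quadratic program. The Python sketch below is a didactic example under simplifying assumptions (the problem form, the function name <code>active_set_qp</code>, and the one-constraint-at-a-time drop rule are all choices made for this illustration), not a production solver:

```python
# Minimal active-set sketch for the toy QP
#   minimize (1/2)||x - c||^2   subject to  x >= 0,
# whose exact solution is x* = max(c, 0) componentwise.
# The working set W holds indices of bound constraints x_i >= 0
# currently treated as equalities (x_i = 0).

def active_set_qp(c, max_iter=100):
    n = len(c)
    W = set()                       # working set: constraints held active
    for _ in range(max_iter):
        # Solve the equality-constrained subproblem: x_i = 0 on W,
        # unconstrained minimiser x_i = c_i elsewhere.
        x = [0.0 if i in W else c[i] for i in range(n)]
        # Lagrange multipliers of the active constraints: lambda_i = -c_i.
        negative = [i for i in W if -c[i] < 0]
        # Inactive constraints violated by the current iterate.
        violated = [i for i in range(n) if i not in W and x[i] < 0]
        if not negative and not violated:
            return x                # KKT conditions hold: optimal
        if negative:
            W.discard(negative[0])  # drop a constraint blocking improvement
        W.update(violated)          # activate the violated constraints

print(active_set_qp([1.5, -2.0, 0.0]))  # -> [1.5, 0.0, 0.0]
```

The drop and add steps mirror the ''remove'' and ''search'' lines of the pseudocode: constraints with negative multipliers are released, and constraints the trial point violates are activated, until the KKT conditions are satisfied.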