=== Example 5 – Numerical optimization === [[Image:lagnum1.png|thumb|right|300px|Lagrange multipliers cause the critical points to occur at saddle points (Example '''5''').]] [[Image:lagnum2.png|thumb|right|300px|The magnitude of the gradient can be used to force the critical points to occur at local minima (Example '''5''').]] The critical points of Lagrangians occur at [[saddle point]]s, rather than at local maxima (or minima).<ref name=Walsh1975/><ref name=Heath2005>{{cite book |first=Michael T. |last=Heath |author-link=Michael Heath (computer scientist) |year=2005 |title=Scientific Computing: An introductory survey |page=203 |publisher=McGraw-Hill |isbn=978-0-07-124489-3 |url=https://books.google.com/books?id=gwBrMAEACAAJ}}</ref> Unfortunately, many numerical optimization techniques, such as [[hill climbing]], [[gradient descent]], and some of the [[quasi-Newton method]]s, are designed to find local maxima (or minima) and not saddle points. For this reason, one must either modify the formulation to ensure that it is a minimization problem (for example, by extremizing the square of the [[gradient]] of the Lagrangian, as below), or else use an optimization technique that finds [[stationary points]] (such as [[Newton's method in optimization|Newton's method]] without an extremum-seeking [[line search]]) and not necessarily extrema. As a simple example, consider the problem of finding the value of {{mvar|x}} that minimizes <math>\ f(x) = x^2\ ,</math> constrained such that <math>\ x^2 = 1 ~.</math> (This problem is somewhat atypical because there are only two values that satisfy this constraint, but it is useful for illustration purposes because the corresponding unconstrained function can be visualized in three dimensions.) Using Lagrange multipliers, this problem can be converted into an unconstrained optimization problem: <math display="block"> \mathcal{L}( x, \lambda ) = x^2 + \lambda(x^2-1) ~.</math> The two critical points occur at saddle points where {{math|''x'' {{=}} 1}} and {{math|''x'' {{=}} −1}}. In order to solve this problem with a numerical optimization technique, we must first transform it so that the critical points occur at local minima. This is done by computing the magnitude of the gradient of the unconstrained optimization problem. First, we compute the partial derivative of the unconstrained problem with respect to each variable: <math display="block">\begin{align} & \frac{\partial \mathcal{L} }{ \partial x } = 2x + 2x \lambda \\[5pt] & \frac{\partial \mathcal{L} }{ \partial \lambda } = x^2-1 ~. \end{align}</math> If the target function is not easily differentiable, the differential with respect to each variable can be approximated as <math display="block">\begin{align} \frac{\ \partial \mathcal{L}\ }{ \partial x } \approx \frac{\mathcal{L}(x + \varepsilon,\lambda) - \mathcal{L}(x,\lambda)}{\varepsilon}, \\[5pt] \frac{\ \partial \mathcal{L}\ }{ \partial \lambda } \approx \frac{\mathcal{L}(x, \lambda + \varepsilon) - \mathcal{L}(x,\lambda)}{\varepsilon}, \end{align}</math> where <math> \varepsilon </math> is a small value.
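For concreteness, these quantities can be written out in a few lines of code. The following Python snippet is an illustrative sketch only (the function names and the value of <math>\varepsilon</math> are arbitrary choices, not part of the example); it evaluates the Lagrangian <math>\mathcal{L}(x,\lambda)</math> of this example and its partial derivatives, both in closed form and with the forward-difference approximation above.

<syntaxhighlight lang="python">
def lagrangian(x, lam):
    # L(x, λ) = x² + λ(x² − 1)
    return x**2 + lam * (x**2 - 1)

def grad_exact(x, lam):
    # Closed-form partial derivatives: ∂L/∂x = 2x + 2xλ,  ∂L/∂λ = x² − 1
    return 2*x + 2*x*lam, x**2 - 1

def grad_forward_difference(x, lam, eps=1e-6):
    # Forward-difference approximation, usable when L is not easily differentiable
    base = lagrangian(x, lam)
    dL_dx = (lagrangian(x + eps, lam) - base) / eps
    dL_dlam = (lagrangian(x, lam + eps) - base) / eps
    return dL_dx, dL_dlam
</syntaxhighlight>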
Next, we compute the magnitude of the gradient, which is the square root of the sum of the squares of the partial derivatives: <math display="block">\begin{align} h(x,\lambda) & = \sqrt{ (2x+2x\lambda)^2 + (x^2-1)^2\ } \\[4pt] & \approx \sqrt{ \left(\frac{\ \mathcal{L}(x+\varepsilon,\lambda)-\mathcal{L}(x,\lambda)\ }{\varepsilon}\right)^2 + \left(\frac{\ \mathcal{L}(x,\lambda+\varepsilon) - \mathcal{L}(x,\lambda)\ }{\varepsilon}\right)^2\ }~ . \end{align}</math> (Since magnitude is always non-negative, optimizing over the squared magnitude is equivalent to optimizing over the magnitude. Thus, the "square root" may be omitted from these equations with no expected difference in the results of optimization.) The critical points of {{mvar|h}} occur at {{math|1=''x'' = 1}} and {{math|1=''x'' = −1}}, just as in <math> \mathcal{L} ~.</math> Unlike the critical points in <math> \mathcal{L}\, ,</math> however, the critical points in {{mvar|h}} occur at local minima, so numerical optimization techniques can be used to find them.
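The following sketch carries the example through to a numerical answer. It applies plain gradient descent (one of the local minimizers mentioned above) to the squared magnitude <math>h^2</math>, estimating its gradient with the same forward-difference scheme; the starting point, step size, and iteration count are arbitrary illustrative choices.

<syntaxhighlight lang="python">
def h_squared(x, lam):
    # Squared magnitude of the gradient of L; its local minima include the
    # critical points of the Lagrangian
    return (2*x + 2*x*lam)**2 + (x**2 - 1)**2

def gradient_descent(x, lam, step=1e-2, eps=1e-6, iters=20000):
    # Minimize h² by plain gradient descent, using forward-difference gradients
    for _ in range(iters):
        base = h_squared(x, lam)
        gx = (h_squared(x + eps, lam) - base) / eps
        gl = (h_squared(x, lam + eps) - base) / eps
        x, lam = x - step * gx, lam - step * gl
    return x, lam

# From this starting guess the iterates approach x ≈ 1, λ ≈ −1,
# one of the two critical points of the Lagrangian found above.
print(gradient_descent(1.2, -0.5))
</syntaxhighlight>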