=== Bordered Hessian ===
A '''{{em|bordered Hessian}}''' is used for the second-derivative test in certain constrained optimization problems. Given the function <math>f</math> considered previously, but adding a constraint function <math>g</math> such that <math>g(\mathbf{x}) = c,</math> the bordered Hessian is the Hessian of the [[Lagrange multiplier|Lagrange function]] <math>\Lambda(\mathbf{x}, \lambda) = f(\mathbf{x}) + \lambda[g(\mathbf{x}) - c]</math>:<ref>{{cite web|title=Econ 500: Quantitative Methods in Economic Analysis I|date=October 7, 2004|first=Arne|last=Hallam|url=https://www2.econ.iastate.edu/classes/econ500/hallam/documents/opt_con_gen_000.pdf|work=Iowa State}}</ref>
<math display=block>\mathbf H(\Lambda) = \begin{bmatrix} \dfrac{\partial^2 \Lambda}{\partial \lambda^2} & \dfrac{\partial^2 \Lambda}{\partial \lambda \partial \mathbf x} \\ \left(\dfrac{\partial^2 \Lambda}{\partial \lambda \partial \mathbf x}\right)^{\mathsf{T}} & \dfrac{\partial^2 \Lambda}{\partial \mathbf x^2} \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{\partial g}{\partial x_1} & \dfrac{\partial g}{\partial x_2} & \cdots & \dfrac{\partial g}{\partial x_n} \\[2.2ex] \dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_1^2} & \dfrac{\partial^2 \Lambda}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_1\,\partial x_n} \\[2.2ex] \dfrac{\partial g}{\partial x_2} & \dfrac{\partial^2 \Lambda}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_2^2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_2\,\partial x_n} \\[2.2ex] \vdots & \vdots & \vdots & \ddots & \vdots \\[2.2ex] \dfrac{\partial g}{\partial x_n} & \dfrac{\partial^2 \Lambda}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_n^2} \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{\partial g}{\partial \mathbf x} \\ \left(\dfrac{\partial g}{\partial \mathbf x}\right)^\mathsf{T} & \dfrac{\partial^2 \Lambda}{\partial \mathbf x^2} \end{bmatrix}</math>

If there are, say, <math>m</math> constraints, then the zero in the upper-left corner is an <math>m \times m</math> block of zeros, and there are <math>m</math> border rows at the top and <math>m</math> border columns at the left.

The above rules, stating that extrema are characterized (among critical points with a non-singular Hessian) by a positive-definite or negative-definite Hessian, cannot apply here, since a bordered Hessian can be neither negative-definite nor positive-definite: <math>\mathbf{z}^\mathsf{T} \mathbf{H} \mathbf{z} = 0</math> whenever <math>\mathbf{z}</math> is a vector whose sole non-zero entry is its first. Instead, the second-derivative test here consists of sign restrictions on the determinants of a certain set of <math>n - m</math> submatrices of the bordered Hessian.<ref>{{Cite book|last1=Neudecker|first1=Heinz|last2=Magnus|first2=Jan R.|title=Matrix Differential Calculus with Applications in Statistics and Econometrics|publisher=[[John Wiley & Sons]]|location=New York|isbn=978-0-471-91516-4|year=1988|page=136}}</ref> Intuitively, the <math>m</math> constraints can be thought of as reducing the problem to one with <math>n - m</math> free variables. (For example, the maximization of <math>f\left(x_1, x_2, x_3\right)</math> subject to the constraint <math>x_1 + x_2 + x_3 = 1</math> can be reduced to the maximization of <math>f\left(x_1, x_2, 1 - x_1 - x_2\right)</math> without a constraint.)
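As a minimal illustration (not drawn from the cited sources), the bordered Hessian can be formed symbolically; the following Python sketch uses SymPy and assumes the example objective <math>f(x_1, x_2) = x_1 x_2</math> with the constraint <math>x_1 + x_2 = 2,</math> so that <math>n = 2</math> and <math>m = 1</math>:

<syntaxhighlight lang="python">
import sympy as sp

# Illustrative example: f(x1, x2) = x1*x2 subject to g(x1, x2) = x1 + x2 = 2,
# i.e. n = 2 variables and m = 1 constraint.
x1, x2, lam = sp.symbols('x1 x2 lambda')
f = x1 * x2
g = x1 + x2
c = 2

# Lagrange function  Lambda(x, lambda) = f(x) + lambda*(g(x) - c)
L = f + lam * (g - c)

# Bordered Hessian: the Hessian of Lambda in the variables (lambda, x1, x2);
# the first row and column are the border of partial derivatives of g.
H = sp.hessian(L, (lam, x1, x2))
print(H)          # Matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])

# Here 2m + 1 = n + m = 3, so the single (n - m = 1) minor to examine
# is the determinant of the entire bordered Hessian.
print(sp.det(H))  # 2
</syntaxhighlight>

The critical point of the Lagrange function in this example is <math>x_1 = x_2 = 1,</math> <math>\lambda = -1,</math> and the single minor has the sign of <math>(-1)^{m+1} = +1,</math> consistent with a constrained local maximum of <math>f</math> (see the sign conditions described next).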
Specifically, sign conditions are imposed on the sequence of leading principal minors (determinants of upper-left-justified sub-matrices) of the bordered Hessian, with the first <math>2m</math> leading principal minors neglected: the smallest minor considered consists of the first <math>2m + 1</math> rows and columns, the next consists of the first <math>2m + 2</math> rows and columns, and so on, with the last being the entire bordered Hessian; if <math>2m + 1</math> is larger than <math>n + m,</math> then the smallest leading principal minor is the Hessian itself.<ref>{{cite book|last=Chiang|first=Alpha C.|title=Fundamental Methods of Mathematical Economics|publisher=McGraw-Hill|edition=Third|year=1984|page=[https://archive.org/details/fundamentalmetho0000chia_b4p1/page/386 386]|isbn=978-0-07-010813-4|url=https://archive.org/details/fundamentalmetho0000chia_b4p1/page/386}}</ref> There are thus <math>n - m</math> minors to consider, each evaluated at the specific point being considered as a [[Candidate solution#Calculus|candidate maximum or minimum]]. A sufficient condition for a local {{em|maximum}} is that these minors alternate in sign, with the smallest one having the sign of <math>(-1)^{m+1}.</math> A sufficient condition for a local {{em|minimum}} is that all of these minors have the sign of <math>(-1)^m.</math> (In the unconstrained case of <math>m = 0</math> these conditions coincide with the conditions for the unbordered Hessian to be negative definite or positive definite, respectively.)
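For numerical work, these sign conditions can be stated as a short routine. The following sketch checks the <math>n - m</math> leading principal minors with NumPy; the helper name <code>bordered_hessian_test</code> is hypothetical, and a minor whose determinant is exactly zero is treated as making the test inconclusive:

<syntaxhighlight lang="python">
import numpy as np

def bordered_hessian_test(H, n, m):
    """Sufficient-condition check on an (n+m) x (n+m) bordered Hessian H,
    evaluated at a candidate point, with the m constraint borders first.
    Returns 'maximum', 'minimum', or None (test inconclusive)."""
    # Leading principal minors of orders 2m+1, ..., n+m (the first 2m are skipped).
    minors = [np.linalg.det(H[:k, :k]) for k in range(2*m + 1, n + m + 1)]
    # Local minimum: every minor has the sign of (-1)^m.
    if all(np.sign(d) == (-1)**m for d in minors):
        return 'minimum'
    # Local maximum: signs alternate, the smallest minor having sign (-1)^(m+1).
    if all(np.sign(d) == (-1)**(m + 1 + i) for i, d in enumerate(minors)):
        return 'maximum'
    return None

# The bordered Hessian of the example above, evaluated at (x1, x2) = (1, 1):
H = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
print(bordered_hessian_test(H, n=2, m=1))  # 'maximum'
</syntaxhighlight>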