Definite matrix
== Simultaneous diagonalization ==

A symmetric matrix and another matrix that is both symmetric and positive definite can be [[diagonalizable matrix#Simultaneous diagonalization|simultaneously diagonalized]], although not necessarily by a [[Matrix similarity|similarity transformation]]. This result does not extend to three or more matrices. In this section we treat the real case; the extension to the complex case is immediate.

Let <math>M</math> be symmetric and <math>N</math> symmetric and positive definite. Write the generalized eigenvalue equation as <math>\left(M - \lambda N\right)\mathbf{x} = 0,</math> where we impose that <math>\mathbf{x}</math> be normalized, i.e. <math>\mathbf{x}^\mathsf{T} N \mathbf{x} = 1.</math> Now use the [[Cholesky decomposition]] to write the inverse of <math>N</math> as <math>Q^\mathsf{T} Q.</math> Multiplying by <math>Q</math> and letting <math>\mathbf{x} = Q^\mathsf{T} \mathbf{y}</math> gives <math>Q \left(M - \lambda N\right) Q^\mathsf{T} \mathbf{y} = 0,</math> which, since <math>Q N Q^\mathsf{T} = I,</math> can be rewritten as <math>\left(Q M Q^\mathsf{T} \right)\mathbf{y} = \lambda \mathbf{y},</math> where <math>\mathbf{y}^\mathsf{T} \mathbf{y} = 1.</math>

Manipulation now yields <math>MX = NX\Lambda,</math> where <math>X</math> is a matrix whose columns are the generalized eigenvectors and <math>\Lambda</math> is a diagonal matrix of the generalized eigenvalues. Premultiplying by <math>X^\mathsf{T}</math> gives the final result: <math>X^\mathsf{T} MX = \Lambda</math> and <math>X^\mathsf{T} N X = I.</math> Note that this is no longer an orthogonal diagonalization with respect to the standard inner product; rather, <math>M</math> has been diagonalized with respect to the inner product induced by <math>N.</math><ref>{{harvtxt|Horn|Johnson|2013}}, p. 485, Theorem 7.6.1</ref>

This result does not contradict what is said on simultaneous diagonalization in the article [[diagonalizable matrix#Simultaneous diagonalization|Diagonalizable matrix]], which refers to simultaneous diagonalization by a similarity transformation. The result here is more akin to a simultaneous diagonalization of two quadratic forms, and is useful for optimizing one form under constraints on the other.
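The derivation above can be sketched numerically. The following is a minimal NumPy illustration, not part of the article; the example matrices are made up, and the Cholesky factor is inverted directly for clarity rather than efficiency. It follows the same steps: factor <math>N^{-1} = Q^\mathsf{T} Q,</math> solve the ordinary symmetric eigenproblem for <math>Q M Q^\mathsf{T},</math> then map back via <math>X = Q^\mathsf{T} Y.</math>

```python
import numpy as np

# Hypothetical example matrices: M symmetric, N symmetric positive definite.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
M = A + A.T                      # symmetric
B = rng.standard_normal((3, 3))
N = B @ B.T + 3 * np.eye(3)      # symmetric positive definite

# Cholesky factorization N = L L^T, so N^{-1} = Q^T Q with Q = L^{-1}.
L = np.linalg.cholesky(N)
Q = np.linalg.inv(L)

# Q N Q^T = I, so the generalized problem reduces to an ordinary
# symmetric eigenproblem for Q M Q^T.
lam, Y = np.linalg.eigh(Q @ M @ Q.T)

# Back-substitute x = Q^T y: columns of X are the generalized eigenvectors.
X = Q.T @ Y
Lam = np.diag(lam)

# X diagonalizes both quadratic forms: X^T M X = Λ and X^T N X = I.
print(np.allclose(X.T @ M @ X, Lam))        # True
print(np.allclose(X.T @ N @ X, np.eye(3)))  # True
```

Note that <math>X</math> is generally not orthogonal (<math>X^\mathsf{T} X \neq I</math>); its columns are orthonormal only with respect to the inner product induced by <math>N.</math>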