==Properties==

===Positive-semidefiniteness===
The Gram matrix is [[symmetric matrix|symmetric]] in the case that the inner product is real-valued; it is [[Hermitian matrix|Hermitian]] in the general, complex case by definition of an [[inner product]].

The Gram matrix is [[Positive-semidefinite matrix|positive semidefinite]], and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:

: <math>
x^\dagger \mathbf{G} x =
\sum_{i,j} x_i^* x_j \left\langle v_i, v_j \right\rangle =
\sum_{i,j} \left\langle x_i v_i, x_j v_j \right\rangle =
\biggl\langle \sum_i x_i v_i, \sum_j x_j v_j \biggr\rangle =
\biggl\| \sum_i x_i v_i \biggr\|^2 \geq 0 .
</math>

The first equality follows from the definition of matrix multiplication, the second and third from the sesquilinearity of the [[inner product]] (bilinearity, in the real case), and the last from the positive definiteness of the inner product. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors <math>v_i</math> are linearly independent (that is, <math display="inline">\sum_i x_i v_i \neq 0</math> for all nonzero <math>x</math>).<ref name="HJ-7.2.10"/>

===Finding a vector realization===
{{See also|Positive definite matrix#Decomposition}}
Given any positive semidefinite matrix <math>M</math>, one can decompose it as:

: <math>M = B^\dagger B</math>,

where <math>B^\dagger</math> is the [[conjugate transpose]] of <math>B</math> (or <math>M = B^\textsf{T} B</math> in the real case). Here <math>B</math> is a <math>k \times n</math> matrix, where <math>k</math> is the [[matrix rank|rank]] of <math>M</math>. Various ways to obtain such a decomposition include computing the [[Cholesky decomposition]] or taking the [[square root of a matrix|non-negative square root]] of <math>M</math>.

The columns <math>b^{(1)}, \dots, b^{(n)}</math> of <math>B</math> can be seen as ''n'' vectors in <math>\mathbb{C}^k</math> (or ''k''-dimensional Euclidean space <math>\mathbb{R}^k</math>, in the real case). Then

: <math>M_{ij} = b^{(i)} \cdot b^{(j)}</math>

where the [[dot product]] <math display="inline">a \cdot b = \sum_{\ell=1}^k a_\ell^* b_\ell</math> is the usual inner product on <math>\mathbb{C}^k</math>.

Thus a [[Hermitian matrix]] <math>M</math> is positive semidefinite if and only if it is the Gram matrix of some vectors <math>b^{(1)}, \dots, b^{(n)}</math>. Such vectors are called a '''vector realization''' of {{nowrap|<math>M</math>.}} The infinite-dimensional analog of this statement is [[Mercer's theorem]].
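Both of the properties above lend themselves to a direct numerical check. The following is a minimal sketch in Python with NumPy, not taken from the cited sources; the matrix sizes and the random test vectors are illustrative assumptions. It forms the Gram matrix of a set of real vectors, confirms that its eigenvalues are non-negative, and recovers a vector realization via the [[Cholesky decomposition]]:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# n vectors in R^k (real case, for simplicity), stored as the columns of V.
k, n = 3, 3
V = rng.standard_normal((k, n))

# Gram matrix: G[i, j] = <v_i, v_j>, i.e. G = V^T V.
G = V.T @ V

# Positive semidefiniteness: all eigenvalues of the symmetric Gram
# matrix are non-negative, up to floating-point round-off.
eigvals = np.linalg.eigvalsh(G)
assert np.all(eigvals >= -1e-12)

# Vector realization: Cholesky gives G = L L^T, so the columns of
# B = L^T realize G, i.e. B[:, i] . B[:, j] == G[i, j].
# (np.linalg.cholesky needs G positive *definite*; for a merely
# semidefinite G, a non-negative square root via eigendecomposition
# works instead.)
L = np.linalg.cholesky(G)
B = L.T
assert np.allclose(B.T @ B, G)
</syntaxhighlight>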
===Uniqueness of vector realizations===
If <math>M</math> is the Gram matrix of vectors <math>v_1, \dots, v_n</math> in <math>\mathbb{R}^k</math>, then applying any rotation or reflection of <math>\mathbb{R}^k</math> (any [[orthogonal transformation]], that is, any [[Euclidean isometry]] preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any <math>k \times k</math> [[orthogonal matrix]] <math>Q</math>, the Gram matrix of <math>Q v_1, \dots, Q v_n</math> is also {{nowrap|<math>M</math>.}}

This is the only way in which two real vector realizations of <math>M</math> can differ: the vectors <math>v_1, \dots, v_n</math> are unique up to [[orthogonal transformation]]s. In other words, the dot products <math>v_i \cdot v_j</math> and <math>w_i \cdot w_j</math> are equal if and only if some rigid transformation of <math>\mathbb{R}^k</math> transforms the vectors <math>v_1, \dots, v_n</math> to <math>w_1, \dots, w_n</math> and 0 to 0.

The same holds in the complex case, with [[unitary transformation]]s in place of orthogonal ones. That is, if the Gram matrix of vectors <math>v_1, \dots, v_n</math> is equal to the Gram matrix of vectors <math>w_1, \dots, w_n</math> in <math>\mathbb{C}^k</math>, then there is a [[unitary matrix|unitary]] <math>k \times k</math> matrix <math>U</math> (meaning <math>U^\dagger U = I</math>) such that <math>v_i = U w_i</math> for <math>i = 1, \dots, n</math>.<ref>{{harvtxt|Horn|Johnson|2013}}, p. 452, Theorem 7.3.11</ref>

===Other properties===
* Because <math>G = G^\dagger</math>, it is necessarily the case that <math>G</math> and <math>G^\dagger</math> commute. That is, a real or complex Gram matrix <math>G</math> is also a [[normal matrix]].
* The Gram matrix of any [[orthonormal basis]] is the identity matrix. Equivalently, the Gram matrix of the rows or the columns of a real [[rotation matrix]] is the identity matrix. Likewise, the Gram matrix of the rows or columns of a [[unitary matrix]] is the identity matrix.
* The rank of the Gram matrix of vectors in <math>\mathbb{R}^k</math> or <math>\mathbb{C}^k</math> equals the dimension of the space [[Linear span|spanned]] by these vectors.<ref name="HJ-7.2.10"/>
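The invariance behind the uniqueness statement, and the rank property in the last bullet, can likewise be verified numerically. The following sketch (again Python with NumPy; the dimensions and random data are illustrative assumptions, not from the cited sources) applies a random orthogonal matrix <math>Q</math> to a family of real vectors and checks that the Gram matrix is unchanged, and that its rank equals the dimension of the span:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Four vectors in R^3, as the columns of V; since n > k they are
# necessarily linearly dependent, so the Gram matrix is singular.
k, n = 3, 4
V = rng.standard_normal((k, n))
G = V.T @ V

# A random orthogonal Q from the QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((k, k)))

# Rotating/reflecting every vector leaves the Gram matrix unchanged:
# (QV)^T (QV) = V^T Q^T Q V = V^T V.
assert np.allclose((Q @ V).T @ (Q @ V), G)

# The rank of the Gram matrix equals the dimension of the span of the
# vectors, here k = 3 < n = 4 (almost surely, for Gaussian data).
assert np.linalg.matrix_rank(G) == np.linalg.matrix_rank(V)
</syntaxhighlight>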