===Applications===
* In [[Riemannian geometry]], given an embedded <math>k</math>-dimensional [[Riemannian manifold]] <math>M\subset \mathbb{R}^n</math> and a parametrization <math>\phi: U\to M</math> for {{nowrap|<math>(x_1, \ldots, x_k)\in U\subset\mathbb{R}^k</math>,}} the volume form <math>\omega</math> on <math>M</math> induced by the embedding may be computed using the Gramian of the coordinate tangent vectors: <math display="block">\omega = \sqrt{\det G}\ dx_1 \cdots dx_k,\quad G = \left[\left\langle \frac{\partial\phi}{\partial x_i},\frac{\partial\phi}{\partial x_j}\right\rangle\right].</math> This generalizes the classical surface integral of a parametrized surface <math>\phi:U\to S\subset \mathbb{R}^3</math> for <math>(x, y)\in U\subset\mathbb{R}^2</math>: <math display="block">\int_S f\ dA = \iint_U f(\phi(x, y))\, \left|\frac{\partial\phi}{\partial x}\,{\times}\,\frac{\partial\phi}{\partial y}\right|\, dx\, dy.</math> A worked example for the sphere follows this list.
* If the vectors are centered [[random variable]]s, the Gramian is approximately proportional to the '''[[covariance matrix]]''', with the scaling determined by the number of elements in the vector.
* In [[quantum chemistry]], the Gram matrix of a set of [[basis vectors]] is the '''[[overlap matrix]]'''.
* In [[control theory]] (or more generally [[systems theory]]), the '''[[controllability Gramian]]''' and '''[[observability Gramian]]''' determine properties of a linear system.
* Gramian matrices arise in covariance structure model fitting (see e.g. Jamshidian and Bentler, 1993, ''Applied Psychological Measurement'', Volume 18, pp. 79–94).
* In the [[finite element method]], the Gram matrix arises from approximating a function from a finite-dimensional space; the entries of the Gram matrix are then the inner products of the basis functions of the finite-dimensional subspace.
* In [[machine learning]], [[kernel function]]s are often represented as Gram matrices (a concrete form is written out after this list).<ref>{{cite journal |last1=Lanckriet |first1=G. R. G. |first2=N. |last2=Cristianini |first3=P. |last3=Bartlett |first4=L. E. |last4=Ghaoui |first5=M. I. |last5=Jordan |title=Learning the kernel matrix with semidefinite programming |journal=Journal of Machine Learning Research |volume=5 |year=2004 |pages=27–72 [p. 29] |url=https://dl.acm.org/citation.cfm?id=894170 }}</ref> (See also [[kernel principal component analysis|kernel PCA]].)
* Since the Gram matrix over the reals is a [[symmetric matrix]], it is [[diagonalizable]] and its [[eigenvalues]] are non-negative. Diagonalizing the Gram matrix is closely related to the [[singular value decomposition]]: if <math>G = V^\textsf{T} V</math> and <math>V = U\Sigma W^\textsf{T}</math> is a singular value decomposition of <math>V</math>, then <math>G = W \Sigma^\textsf{T}\Sigma W^\textsf{T}</math>, so the eigenvalues of <math>G</math> are the squares of the singular values of <math>V</math>.
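For example (a standard computation illustrating the volume form above, not tied to any particular source), consider the sphere of radius <math>r</math> with parametrization <math display="block">\phi(u, v) = (r\sin u\cos v,\ r\sin u\sin v,\ r\cos u), \qquad (u, v)\in (0, \pi)\times(0, 2\pi).</math> The Gram matrix of the coordinate tangent vectors is <math display="block">G = \begin{bmatrix} \left\langle \phi_u, \phi_u\right\rangle & \left\langle \phi_u, \phi_v\right\rangle \\ \left\langle \phi_v, \phi_u\right\rangle & \left\langle \phi_v, \phi_v\right\rangle \end{bmatrix} = \begin{bmatrix} r^2 & 0 \\ 0 & r^2\sin^2 u \end{bmatrix},</math> so the induced area form is <math display="block">\omega = \sqrt{\det G}\ du\, dv = r^2\sin u\ du\, dv, \qquad \int_S dA = \int_0^{2\pi}\!\!\int_0^{\pi} r^2\sin u\ du\, dv = 4\pi r^2,</math> recovering the familiar surface area of the sphere.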
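Likewise, to make the machine-learning item concrete (the notation <math>\kappa</math> and <math>\varphi</math> here is illustrative): given data points <math>x_1, \ldots, x_m</math> and a positive-definite kernel <math>\kappa</math> with associated feature map <math>\varphi</math>, the kernel matrix <math display="block">K_{ij} = \kappa(x_i, x_j) = \left\langle \varphi(x_i), \varphi(x_j)\right\rangle</math> is exactly the Gram matrix of the feature vectors <math>\varphi(x_1), \ldots, \varphi(x_m)</math>, which is why kernel matrices, like all Gram matrices, are positive semidefinite.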