== The Gram–Schmidt process == [[File:Gram-Schmidt orthonormalization process.gif|thumb|400px|The modified Gram-Schmidt process being executed on three linearly independent, non-orthogonal vectors of a basis for <math>\mathbb{R}^3</math>. Click on image for details. Modification is explained in the Numerical Stability section of this article.]] The [[vector projection]] of a vector <math>\mathbf v</math> on a nonzero vector <math>\mathbf u</math> is defined as<ref group="note">In the complex case, this assumes that the inner product is linear in the first argument and conjugate-linear in the second. In physics a more common convention is linearity in the second argument, in which case we define <math display="block">\operatorname{proj}_{\mathbf u} (\mathbf{v}) = \frac{\langle \mathbf{u}, \mathbf{v}\rangle}{\langle \mathbf{u}, \mathbf{u}\rangle} \,\mathbf{u}. </math></ref> <math display="block">\operatorname{proj}_{\mathbf u} (\mathbf{v}) = \frac{\langle \mathbf{v}, \mathbf{u}\rangle}{\langle \mathbf{u}, \mathbf{u}\rangle} \,\mathbf{u} , </math> where <math>\langle \mathbf{v}, \mathbf{u}\rangle</math> denotes the [[inner product]] of the vectors <math>\mathbf v</math> and <math>\mathbf u</math>. This means that <math>\operatorname{proj}_{\mathbf u} (\mathbf{v})</math> is the [[orthogonal projection]] of <math>\mathbf v</math> onto the line spanned by <math>\mathbf u</math>. If <math>\mathbf u</math> is the zero vector, then <math>\operatorname{proj}_{\mathbf u} (\mathbf{v})</math> is defined as the zero vector.
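As a concrete illustration, the projection formula can be sketched in NumPy. The function name <code>proj</code> is illustrative, and the sketch follows the real-valued case with the zero-vector convention stated above:

```python
import numpy as np

def proj(u, v):
    """Orthogonal projection of v onto the line spanned by u.

    By the convention above, the projection onto the zero vector
    is defined as the zero vector.
    """
    uu = np.dot(u, u)
    if uu == 0:
        return np.zeros_like(u)
    return (np.dot(v, u) / uu) * u
```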
Given <math>k</math> nonzero linearly-independent vectors <math>\mathbf{v}_1, \ldots, \mathbf{v}_k</math> the Gram–Schmidt process defines the vectors <math>\mathbf{u}_1, \ldots, \mathbf{u}_k</math> as follows: <math display="block">\begin{align} \mathbf{u}_1 & = \mathbf{v}_1, & \!\mathbf{e}_1 & = \frac{\mathbf{u}_1}{\|\mathbf{u}_1\|} \\ \mathbf{u}_2 & = \mathbf{v}_2-\operatorname{proj}_{\mathbf{u}_1} (\mathbf{v}_2), & \!\mathbf{e}_2 & = \frac{\mathbf{u}_2}{\|\mathbf{u}_2\|} \\ \mathbf{u}_3 & = \mathbf{v}_3-\operatorname{proj}_{\mathbf{u}_1} (\mathbf{v}_3) - \operatorname{proj}_{\mathbf{u}_2} (\mathbf{v}_3), & \!\mathbf{e}_3 & = \frac{\mathbf{u}_3 }{\|\mathbf{u}_3\|} \\ \mathbf{u}_4 & = \mathbf{v}_4-\operatorname{proj}_{\mathbf{u}_1} (\mathbf{v}_4)-\operatorname{proj}_{\mathbf{u}_2} (\mathbf{v}_4)-\operatorname{proj}_{\mathbf{u}_3} (\mathbf{v}_4), & \!\mathbf{e}_4 & = {\mathbf{u}_4 \over \|\mathbf{u}_4\|} \\ & {}\ \ \vdots & & {}\ \ \vdots \\ \mathbf{u}_k & = \mathbf{v}_k - \sum_{j=1}^{k-1}\operatorname{proj}_{\mathbf{u}_j} (\mathbf{v}_k), & \!\mathbf{e}_k & = \frac{\mathbf{u}_k}{\|\mathbf{u}_k\|}. \end{align}</math> The sequence <math>\mathbf{u}_1, \ldots, \mathbf{u}_k</math> is the required system of orthogonal vectors, and the normalized vectors <math>\mathbf{e}_1, \ldots, \mathbf{e}_k</math> form an [[orthonormal set]]. The calculation of the sequence <math>\mathbf{u}_1, \ldots, \mathbf{u}_k</math> is known as ''Gram–Schmidt [[orthogonalization]]'', and the calculation of the sequence <math>\mathbf{e}_1, \ldots, \mathbf{e}_k</math> is known as ''Gram–Schmidt [[orthonormalization]]''. To check that these formulas yield an orthogonal sequence, first compute <math>\langle \mathbf{u}_1, \mathbf{u}_2 \rangle</math> by substituting the above formula for <math>\mathbf{u}_2</math>: we get zero. Then use this to compute <math>\langle \mathbf{u}_1, \mathbf{u}_3 \rangle</math> again by substituting the formula for <math>\mathbf{u}_3</math>: we get zero. 
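In finite dimensions, the recurrence above translates directly into code. The following is a minimal NumPy sketch of the classical process (function and variable names are illustrative; it assumes linearly independent inputs and omits the stability safeguards discussed elsewhere in this article):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt on a list of linearly independent vectors.

    Returns (us, es): the orthogonal vectors u_k and the
    orthonormal vectors e_k = u_k / ||u_k||.
    """
    us, es = [], []
    for v in vectors:
        u = np.array(v, dtype=float)
        # u_k = v_k - sum_j proj_{u_j}(v_k)
        for uj in us:
            u -= (np.dot(v, uj) / np.dot(uj, uj)) * uj
        us.append(u)
        es.append(u / np.linalg.norm(u))
    return us, es
```

For example, applying it to <math>\mathbf{v}_1 = (3, 1)</math> and <math>\mathbf{v}_2 = (2, 2)</math> gives <math>\mathbf{u}_2 = (-0.4, 1.2)</math>, which is orthogonal to <math>\mathbf{u}_1 = (3, 1)</math>.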
For arbitrary <math>k</math> the proof is accomplished by [[mathematical induction]]. Geometrically, this method proceeds as follows: to compute <math>\mathbf{u}_i</math>, it projects <math>\mathbf{v}_i</math> orthogonally onto the subspace <math>U</math> generated by <math>\mathbf{u}_1, \ldots, \mathbf{u}_{i-1}</math>, which is the same as the subspace generated by <math>\mathbf{v}_1, \ldots, \mathbf{v}_{i-1}</math>. The vector <math>\mathbf{u}_i</math> is then defined to be the difference between <math>\mathbf{v}_i</math> and this projection, which is guaranteed to be orthogonal to all of the vectors in the subspace <math>U</math>. The Gram–Schmidt process also applies to a linearly independent [[countably infinite]] sequence {{math|{'''v'''<sub>''i''</sub>}<sub>''i''</sub>}}. The result is an orthogonal (or orthonormal) sequence {{math|{'''u'''<sub>''i''</sub>}<sub>''i''</sub>}} such that for every natural number {{mvar|n}}, the algebraic span of <math>\mathbf{v}_1, \ldots, \mathbf{v}_{n}</math> is the same as that of <math>\mathbf{u}_1, \ldots, \mathbf{u}_{n}</math>. If the Gram–Schmidt process is applied to a linearly dependent sequence, it outputs the {{math|'''0'''}} vector on the <math>i</math>th step whenever <math>\mathbf{v}_i</math> is a linear combination of <math>\mathbf{v}_1, \ldots, \mathbf{v}_{i-1}</math>. If an orthonormal basis is to be produced, then the algorithm should test for zero vectors in the output and discard them, because no multiple of a zero vector can have a length of 1. The number of vectors output by the algorithm is then the dimension of the space spanned by the original inputs.
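The zero-discarding variant just described can be sketched as follows. Since floating-point residuals are rarely exactly zero, a tolerance parameter (<code>tol</code>, an implementation choice not part of the mathematical definition) stands in for the exact test:

```python
import numpy as np

def orthonormal_basis(vectors, tol=1e-12):
    """Gram-Schmidt that discards (numerically) zero residuals.

    The number of vectors returned equals the dimension of the
    span of the inputs.  Subtracting components against the
    already-normalized basis vectors is equivalent, in exact
    arithmetic, to the projection formula used above.
    """
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for e in basis:
            u -= np.dot(u, e) * e
        norm = np.linalg.norm(u)
        if norm > tol:  # residual is nonzero: v adds a new direction
            basis.append(u / norm)
    return basis
```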
A variant of the Gram–Schmidt process using [[transfinite recursion]] applied to a (possibly uncountably) infinite sequence of vectors <math>(v_\alpha)_{\alpha<\lambda}</math> yields a set of orthonormal vectors <math>(u_\alpha)_{\alpha<\kappa}</math> with <math>\kappa\leq\lambda</math> such that for any <math>\alpha\leq\lambda</math>, the [[Complete space#Completion|completion]] of the span of <math>\{ u_\beta : \beta<\min(\alpha,\kappa) \}</math> is the same as that of {{nowrap|<math>\{ v_\beta : \beta < \alpha\}</math>.}} In particular, when applied to an (algebraic) basis of a [[Hilbert space]] (or, more generally, a basis of any dense subspace), it yields a (functional-analytic) orthonormal basis. Note that in the general case the strict inequality <math>\kappa < \lambda</math> often holds, even if the starting set is linearly independent, and the span of <math>(u_\alpha)_{\alpha<\kappa}</math> need not be a subspace of the span of <math>(v_\alpha)_{\alpha<\lambda}</math> (rather, it is a subspace of its completion).