{{Short description|Property of two or more vectors that are orthogonal and of unit length}}
In [[linear algebra]], two [[vector space|vectors]] in an [[inner product space]] are '''orthonormal''' if they are [[orthogonality|orthogonal]] [[unit vector]]s. A unit vector is a vector of length 1, also called a normalized vector; orthogonal means that the vectors are mutually perpendicular. A set of vectors forms an '''orthonormal set''' if all vectors in the set are mutually orthogonal and all of unit length. An orthonormal set that forms a [[basis (linear algebra)|basis]] is called an ''[[orthonormal basis]]''.

== Intuitive overview ==
The construction of [[orthogonality]] of vectors is motivated by a desire to extend the intuitive notion of perpendicular vectors to higher-dimensional spaces. In the [[Cartesian coordinate system#Cartesian coordinates in two dimensions|Cartesian plane]], two [[Vector (geometry)|vectors]] are said to be ''perpendicular'' if the angle between them is 90° (i.e. if they form a [[right angle]]). This definition can be formalized in Cartesian space by defining the [[dot product]] and specifying that two vectors in the plane are orthogonal if their dot product is zero.

Similarly, the construction of the [[norm (mathematics)|norm]] of a vector is motivated by a desire to extend the intuitive notion of the [[Norm (mathematics)#Euclidean norm|length]] of a vector to higher-dimensional spaces. In Cartesian space, the ''norm'' of a vector is the square root of the vector dotted with itself. That is,
:<math>\| \mathbf{x} \| = \sqrt{ \mathbf{x} \cdot \mathbf{x}}</math>

Many important results in [[linear algebra]] deal with collections of two or more orthogonal vectors. But it is often easier to deal with vectors of [[Unit vector|unit length]]; that is, it often simplifies things to consider only vectors whose norm equals 1. The notion of restricting orthogonal pairs of vectors to only those of unit length is important enough to be given a special name: two vectors that are orthogonal and of length 1 are said to be ''orthonormal''.
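Both defining conditions, zero dot product and unit norm, are straightforward to check numerically. The following Python fragment is a minimal illustrative sketch (the helper-function names are this example's, not a standard library's) that tests whether a pair of vectors is orthonormal up to floating-point tolerance:

<syntaxhighlight lang="python">
import math

def dot(u, v):
    """Dot product of two vectors given as sequences of numbers."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    """Euclidean norm: the square root of the vector dotted with itself."""
    return math.sqrt(dot(u, u))

def is_orthonormal_pair(u, v, tol=1e-12):
    """True if u and v are orthogonal unit vectors, within tolerance."""
    return (abs(dot(u, v)) < tol
            and abs(norm(u) - 1.0) < tol
            and abs(norm(v) - 1.0) < tol)

# (cos 45°, sin 45°) and (cos 135°, sin 135°) differ in angle by 90°,
# so they form an orthonormal pair.
s = math.sqrt(2) / 2
print(is_orthonormal_pair((s, s), (-s, s)))  # True
</syntaxhighlight>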
=== Simple example ===
What does a pair of orthonormal vectors in 2-D Euclidean space look like?

Let '''u''' = (x<sub>1</sub>, y<sub>1</sub>) and '''v''' = (x<sub>2</sub>, y<sub>2</sub>). Consider the restrictions on x<sub>1</sub>, x<sub>2</sub>, y<sub>1</sub>, y<sub>2</sub> required to make '''u''' and '''v''' form an orthonormal pair.
* From the orthogonality restriction, '''u''' ⋅ '''v''' = 0.
* From the unit length restriction on '''u''', ||'''u'''|| = 1.
* From the unit length restriction on '''v''', ||'''v'''|| = 1.

Expanding these terms gives 3 equations:
#<math>x_1 x_2 + y_1 y_2 = 0 \quad</math>
#<math>\sqrt{{x_1}^2 + {y_1}^2} = 1</math>
#<math>\sqrt{{x_2}^2 + {y_2}^2} = 1</math>

Converting from Cartesian to [[polar coordinates]], and considering Equation <math>(2)</math> and Equation <math>(3)</math>, immediately gives the result r<sub>1</sub> = r<sub>2</sub> = 1. In other words, requiring that the vectors be of unit length restricts the vectors to lie on the [[unit circle]].

After substitution, Equation <math>(1)</math> becomes <math> \cos \theta _1 \cos \theta _2 + \sin \theta _1 \sin \theta _2 = 0</math>. Rearranging gives <math>\tan \theta _1 = - \cot \theta _2</math>. Using a [[List of trigonometric identities#Shifts and periodicity|trigonometric identity]] to convert the [[cotangent]] term gives
:<math> \tan ( \theta_1 ) = \tan \left( \theta_2 + \tfrac{\pi}{2} \right) </math>
:<math> \Rightarrow \theta _1 = \theta _2 + \tfrac{\pi}{2} </math>

It is clear that in the plane, orthonormal vectors are simply radii of the unit circle whose difference in angles equals 90°.

== Definition ==
Let <math>\mathcal{V}</math> be an [[inner-product space]]. A set of vectors
:<math> \left\{ u_1 , u_2 , \ldots , u_n , \ldots \right\} \subset \mathcal{V} </math>
is called '''orthonormal''' [[if and only if]]
:<math> \forall i,j : \langle u_i , u_j \rangle = \delta_{ij} </math>
where <math>\delta_{ij} \,</math> is the [[Kronecker delta]] and <math>\langle \cdot , \cdot \rangle </math> is the [[inner product]] defined over <math>\mathcal{V}</math>.

== Significance ==
Orthonormal sets are not especially significant on their own. However, they display certain features that make them fundamental in exploring the notion of [[Diagonalizable matrix|diagonalizability]] of certain [[linear map|operators]] on vector spaces.

=== Properties ===
Orthonormal sets have certain very appealing properties, which make them particularly easy to work with.
*'''Theorem'''. If {'''e'''<sub>1</sub>, '''e'''<sub>2</sub>, ..., '''e'''<sub>''n''</sub>} is an orthonormal list of vectors, then for all scalars <math>a_1, \ldots, a_n</math>, <math display="block">\|a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_n \mathbf{e}_n\|^2 = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2</math>
*'''Theorem'''. Every orthonormal list of vectors is [[linearly independent]].

=== Existence ===
*'''[[Gram-Schmidt theorem]]'''. If {'''v'''<sub>1</sub>, '''v'''<sub>2</sub>,...,'''v'''<sub>n</sub>} is a linearly independent list of vectors in an inner-product space <math>\mathcal{V}</math>, then there exists an orthonormal list {'''e'''<sub>1</sub>, '''e'''<sub>2</sub>,...,'''e'''<sub>n</sub>} of vectors in <math>\mathcal{V}</math> such that ''span''('''e'''<sub>1</sub>, '''e'''<sub>2</sub>,...,'''e'''<sub>n</sub>) = ''span''('''v'''<sub>1</sub>, '''v'''<sub>2</sub>,...,'''v'''<sub>n</sub>).

Proof of the Gram-Schmidt theorem is [[Constructive proof|constructive]], and [[Gram-Schmidt process|discussed at length]] elsewhere. The Gram-Schmidt theorem, together with the [[axiom of choice]], guarantees that every vector space admits an orthonormal basis. This is possibly the most significant use of orthonormality, as this fact permits [[linear map|operators]] on inner-product spaces to be discussed in terms of their action on the space's orthonormal basis vectors. What results is a deep relationship between the diagonalizability of an operator and how it acts on the orthonormal basis vectors. This relationship is characterized by the [[Spectral theorem|Spectral Theorem]].
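Because the proof is constructive, the process is easy to sketch in code. The following Python fragment is a minimal illustration of the classical Gram-Schmidt process for '''R'''<sup>''n''</sup> with the standard dot product (the function name is this example's own, and exact linear independence of the inputs is assumed):

<syntaxhighlight lang="python">
import math

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors in R^n
    with respect to the standard dot product (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = list(v)
        # Subtract the projection of v onto each previously built basis vector.
        for e in basis:
            coeff = sum(vi * ei for vi, ei in zip(v, e))
            w = [wi - coeff * ei for wi, ei in zip(w, e)]
        # Normalize; a (near-)zero norm here would mean the inputs were dependent.
        n = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / n for wi in w])
    return basis

# span{(3, 1), (2, 2)} = R^2, so the result is an orthonormal basis of R^2.
for e in gram_schmidt([(3.0, 1.0), (2.0, 2.0)]):
    print(e)
</syntaxhighlight>

In floating-point practice the modified Gram-Schmidt variant, which projects against the running remainder rather than the original vector, is usually preferred for numerical stability.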
== Examples ==

=== Standard basis ===
The [[standard basis]] for the [[coordinate space]] '''F'''<sup>''n''</sup> is
:{|
|-
| {'''e'''<sub>1</sub>, '''e'''<sub>2</sub>,...,'''e'''<sub>n</sub>} where
| '''e'''<sub>1</sub> = (1, 0, ..., 0)
|-
|
| '''e'''<sub>2</sub> = (0, 1, ..., 0)
|-
|
| {{center|<math>\vdots</math>}}
|-
|
| '''e'''<sub>n</sub> = (0, 0, ..., 1)
|}
Any two vectors '''e'''<sub>i</sub>, '''e'''<sub>j</sub> where i ≠ j are orthogonal, and all vectors are clearly of unit length. So {'''e'''<sub>1</sub>, '''e'''<sub>2</sub>,...,'''e'''<sub>n</sub>} forms an orthonormal basis.

=== Real-valued functions ===
When referring to [[real number|real]]-valued [[function (mathematics)|functions]], usually the [[Lp space|L²]] inner product is assumed unless otherwise stated. Two functions <math>\phi(x)</math> and <math>\psi(x)</math> are orthonormal over the [[interval (mathematics)|interval]] <math>[a,b]</math> if
:<math>(1)\quad\langle\phi(x),\psi(x)\rangle = \int_a^b\phi(x)\psi(x)dx = 0,\quad{\rm and}</math>
:<math>(2)\quad||\phi(x)||_2 = ||\psi(x)||_2 = \left[\int_a^b|\phi(x)|^2dx\right]^\frac{1}{2} = \left[\int_a^b|\psi(x)|^2dx\right]^\frac{1}{2} = 1.</math>

=== Fourier series ===
The [[Fourier series]] is a method of expressing a periodic function in terms of sinusoidal [[Schauder basis|basis]] functions. Taking '''C'''[−π,π] to be the space of all real-valued functions continuous on the interval [−π,π] and taking the inner product to be
:<math>\langle f, g \rangle = \int_{-\pi}^{\pi} f(x)g(x)dx</math>
it can be shown that
:<math>\left\{ \frac{1}{\sqrt{2\pi}}, \frac{\sin(x)}{\sqrt{\pi}}, \frac{\sin(2x)}{\sqrt{\pi}}, \ldots, \frac{\sin(nx)}{\sqrt{\pi}}, \frac{\cos(x)}{\sqrt{\pi}}, \frac{\cos(2x)}{\sqrt{\pi}}, \ldots, \frac{\cos(nx)}{\sqrt{\pi}} \right\}, \quad n \in \mathbb{N}</math>
forms an orthonormal set.

However, this is of little consequence on its own, because '''C'''[−π,π] is infinite-dimensional and a finite set of vectors cannot span it. But removing the restriction that ''n'' be finite makes the set [[Dense subset|dense]] in '''C'''[−π,π] and therefore an orthonormal basis of '''C'''[−π,π].

== See also ==
*[[Orthogonalization]]
*[[Orthonormal function system]]

== Sources ==
* {{Citation | last1=Axler | first1=Sheldon | author-link=Sheldon Axler | title=Linear Algebra Done Right | publisher=[[Springer-Verlag]] | location=Berlin, New York | edition=2nd | page=[https://books.google.com/books?id=ovIYVIlithQC&pg=PT106 106–110] | isbn=978-0-387-98258-8 | year=1997}}
* {{Citation | last1=Chen | first1=Wai-Kai | title=Fundamentals of Circuits and Filters | publisher=[[CRC Press]] | location=[[Boca Raton, Florida|Boca Raton]] | edition=3rd | page=[https://books.google.com/books?id=_UVb4cxL0c0C&pg=SA6-PA62 62] | isbn=978-1-4200-5887-1 | year=2009}}

[[Category:Linear algebra]]
[[Category:Functional analysis]]