== Algebraic dual space == Given any [[vector space]] <math>V</math> over a [[field (mathematics)|field]] <math>F</math>, the '''(algebraic) dual space''' <math>V^{*}</math><ref>{{harvtxt|Katznelson|Katznelson|2008}} p. 37, §2.1.3</ref> (alternatively denoted by <math>V^{\lor}</math><ref name=":03">{{harvtxt|Tu|2011}} p. 19, §3.1</ref> or <math>V'</math><ref>{{harvtxt|Axler|2015}} p. 101, §3.94</ref><ref>{{Harvp|Halmos|1974}} p. 20, §13</ref>)<ref group="nb">For <math>V^{\lor}</math> used in this way, see ''[[iarchive:TuL.W.AnIntroductionToManifolds2e2010Springer|An Introduction to Manifolds]]'' ({{Harvnb|Tu|2011|p=19}}). This notation is sometimes used when <math>(\cdot)^*</math> is reserved for some other meaning. For instance, in the above text, <math>F^*</math> is frequently used to denote the codifferential of ''<math>F</math>'', so that <math>F^* \omega</math> represents the pullback of the form <math>\omega</math>. {{Harvtxt|Halmos|1974|p=20}} uses <math>V'</math> to denote the algebraic dual of ''<math>V</math>''. However, other authors use <math>V'</math> for the continuous dual, while reserving <math>V^*</math> for the algebraic dual ({{Harvnb|Trèves|2006|p=35}}). </ref> is defined as the set of all [[linear map]]s ''<math>\varphi: V \to F</math>'' ([[linear functional]]s). Since linear maps are vector space [[homomorphism]]s, the dual space may be denoted <math>\hom (V, F)</math>.<ref name=":03"/> The dual space <math>V^*</math> itself becomes a vector space over ''<math>F</math>'' when equipped with an addition and scalar multiplication satisfying: : <math> \begin{align} (\varphi + \psi)(x) &= \varphi(x) + \psi(x) \\ (a \varphi)(x) &= a \left(\varphi(x)\right) \end{align}</math> for all <math>\varphi, \psi \in V^*</math>, ''<math>x \in V</math>'', and <math>a \in F</math>. Elements of the algebraic dual space <math>V^*</math> are sometimes called '''covectors''', '''one-forms''', or '''[[linear form]]s'''. 
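The pointwise operations above can be sketched concretely by modeling functionals on <math>V = \R^2</math> as plain Python callables; the particular functionals and test vector below are hypothetical examples:

```python
# Functionals on V = R^2 modeled as Python callables; the vector-space
# operations on V* are defined pointwise, as in the formulas above.
def add(phi, psi):
    # (phi + psi)(x) = phi(x) + psi(x)
    return lambda x: phi(x) + psi(x)

def scale(a, phi):
    # (a * phi)(x) = a * (phi(x))
    return lambda x: a * phi(x)

phi = lambda x: x[0]       # hypothetical example: first-coordinate functional
psi = lambda x: 2 * x[1]   # hypothetical example: twice the second coordinate

x = (3.0, 5.0)
assert add(phi, psi)(x) == phi(x) + psi(x)   # pointwise addition
assert scale(4.0, phi)(x) == 4.0 * phi(x)    # pointwise scaling
```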
The pairing of a functional ''<math>\varphi</math>'' in the dual space <math>V^*</math> and an element ''<math>x</math>'' of ''<math>V</math>'' is sometimes denoted by a bracket: ''<math>\varphi (x) = [x, \varphi]</math>''<ref>{{Harvp|Halmos|1974}} p. 21, §14</ref> or ''<math>\varphi (x) = \langle x, \varphi \rangle</math>''.<ref>{{harvnb|Misner|Thorne|Wheeler|1973}}</ref> This pairing defines a nondegenerate [[bilinear mapping]]<ref group="nb">In many areas, such as [[quantum mechanics]], {{math|{{langle}}·,·{{rangle}}}} is reserved for a [[sesquilinear form]] defined on {{math|''V'' × ''V''}}.</ref> <math>\langle \cdot, \cdot \rangle : V \times V^* \to F</math> called the [[natural pairing]]. === Finite-dimensional case === {{See also|Dual basis}} If <math>V</math> is finite-dimensional, then <math>V^*</math> has the same dimension as <math>V</math>. Given a [[basis of a vector space|basis]] <math>\{\mathbf{e}_1,\dots,\mathbf{e}_n\}</math> in <math>V</math>, it is possible to construct a specific basis in <math>V^*</math>, called the [[dual basis]]. This dual basis is a set <math>\{\mathbf{e}^1,\dots,\mathbf{e}^n\}</math> of linear functionals on <math>V</math>, defined by the relation : <math> \mathbf{e}^i(c^1 \mathbf{e}_1+\cdots+c^n\mathbf{e}_n) = c^i, \quad i=1,\ldots,n </math> for any choice of coefficients <math>c^i\in F</math>. In particular, letting in turn each one of those coefficients be equal to one and the other coefficients zero gives the system of equations : <math> \mathbf{e}^i(\mathbf{e}_j) = \delta^{i}_{j} </math> where <math>\delta^{i}_{j}</math> is the [[Kronecker delta]] symbol. This property is referred to as the ''bi-orthogonality property''. {{Collapse top | Proof}} Let <math>\{\mathbf{e}_1,\dots,\mathbf{e}_n\}</math> be the basis of <math>V</math>, and let <math>\{\mathbf{e}^1,\dots,\mathbf{e}^n\}</math> be defined by <math> \mathbf{e}^i(c^1 \mathbf{e}_1+\cdots+c^n\mathbf{e}_n) = c^i, \quad i=1,\ldots,n </math>.
These form a basis of <math>V^*</math> because: # The <math>\mathbf{e}^i, i=1, 2, \dots, n,</math> are linear functionals: they map vectors <math> x, y \in V </math>, say <math> x= \alpha_1\mathbf{e}_1 + \dots + \alpha_n\mathbf{e}_n </math> and <math> y = \beta_1\mathbf{e}_1 + \dots + \beta_n \mathbf{e}_n </math>, to the scalars <math> \mathbf{e}^i(x)=\alpha_i </math> and <math> \mathbf{e}^i(y)=\beta_i</math>. Moreover, <math> x+\lambda y=(\alpha_1+\lambda \beta_1)\mathbf{e}_1 + \dots + (\alpha_n+\lambda\beta_n)\mathbf{e}_n </math> and <math> \mathbf{e}^i(x+\lambda y)=\alpha_i+\lambda\beta_i=\mathbf{e}^i(x)+\lambda \mathbf{e}^i(y) </math>. Therefore, <math> \mathbf{e}^i \in V^* </math> for <math> i= 1, 2, \dots, n </math>. # Suppose <math> \lambda_1 \mathbf{e}^1 + \cdots + \lambda_n \mathbf{e}^n =0 \in V^*</math>. Applying this functional to the basis vectors of <math> V </math> successively leads to <math> \lambda_1=\lambda_2= \dots=\lambda_n=0 </math> (the functional applied to <math> \mathbf{e}_i </math> gives <math> \lambda_i </math>). Therefore, <math>\{\mathbf{e}^1,\dots,\mathbf{e}^n\}</math> is linearly independent in <math>V^*</math>. #Lastly, consider <math> g \in V^* </math>. Then :<math> g(x)=g(\alpha_1\mathbf{e}_1 + \dots + \alpha_n\mathbf{e}_n)=\alpha_1g(\mathbf{e}_1) + \dots + \alpha_ng(\mathbf{e}_n)=\mathbf{e}^1(x)g(\mathbf{e}_1) + \dots + \mathbf{e}^n(x)g(\mathbf{e}_n) </math> so <math>\{\mathbf{e}^1,\dots,\mathbf{e}^n\}</math> generates <math>V^*</math>. Hence, it is a basis of <math> V^*</math>. {{Collapse bottom}} For example, if <math>V</math> is <math>\R^2</math>, let its basis be chosen as <math>\{\mathbf{e}_1=(1/2,1/2),\mathbf{e}_2=(0,1)\}</math>. The basis vectors are not orthogonal to each other.
Then, <math>\mathbf{e}^1</math> and <math>\mathbf{e}^2</math> are [[one-form]]s (functions that map a vector to a scalar) such that <math>\mathbf{e}^1(\mathbf{e}_1)=1</math>, <math>\mathbf{e}^1(\mathbf{e}_2)=0</math>, <math>\mathbf{e}^2(\mathbf{e}_1)=0</math>, and <math>\mathbf{e}^2(\mathbf{e}_2)=1</math>. (Note: The superscript here is the index, not an exponent.) This system of equations can be expressed using matrix notation as :<math> \begin{bmatrix} e^{11} & e^{12} \\ e^{21} & e^{22} \end{bmatrix} \begin{bmatrix} e_{11} & e_{21} \\ e_{12} & e_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}. </math> Solving for the unknown values in the first matrix shows the dual basis to be <math>\{\mathbf{e}^1=(2,0),\mathbf{e}^2=(-1,1)\}</math>. Because <math>\mathbf{e}^1</math> and <math>\mathbf{e}^2</math> are functionals, they can be rewritten as <math>\mathbf{e}^1(x,y)=2x</math> and <math>\mathbf{e}^2(x,y)=-x+y</math>. In general, when <math>V</math> is <math>\R^n</math>, if <math>E=[\mathbf{e}_1|\cdots|\mathbf{e}_n]</math> is a matrix whose columns are the basis vectors and <math>\hat{E}=[\mathbf{e}^1|\cdots|\mathbf{e}^n]</math> is a matrix whose columns are the dual basis vectors, then :<math>\hat{E}^\textrm{T}\cdot E = I_n,</math> where <math>I_n</math> is the [[identity matrix]] of order <math>n</math>. The biorthogonality property of these two basis sets allows any point <math>\mathbf{x}\in V</math> to be represented as :<math>\mathbf{x} = \sum_i \langle\mathbf{x},\mathbf{e}^i \rangle \mathbf{e}_i = \sum_i \langle \mathbf{x}, \mathbf{e}_i \rangle \mathbf{e}^i,</math> even when the basis vectors are not orthogonal to each other. Strictly speaking, the above statement only makes sense once the inner product <math>\langle \cdot, \cdot \rangle</math> and the corresponding duality pairing are introduced, as described below in ''{{section link||Bilinear_products_and_dual_spaces}}''. 
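The matrix computation above can be reproduced numerically: with the basis vectors as the columns of a matrix <math>E</math>, the biorthogonality relation <math>\hat{E}^\textrm{T} E = I</math> gives <math>\hat{E} = (E^{-1})^\textrm{T}</math>. A minimal NumPy sketch using the same basis:

```python
import numpy as np

# Basis of R^2 from the example: e_1 = (1/2, 1/2), e_2 = (0, 1), as columns of E.
E = np.array([[0.5, 0.0],
              [0.5, 1.0]])

# Biorthogonality E_hat^T @ E = I  =>  E_hat = inv(E)^T.
E_hat = np.linalg.inv(E).T

# The columns recover the dual basis e^1 = (2, 0), e^2 = (-1, 1).
assert np.allclose(E_hat, [[2.0, -1.0],
                           [0.0,  1.0]])

# Check the Kronecker-delta relations e^i(e_j) = delta^i_j.
assert np.allclose(E_hat.T @ E, np.eye(2))
```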
If <math>\R^n</math> is interpreted as the space of columns of <math>n</math> [[real number]]s, its dual space is typically written as the space of ''rows'' of <math>n</math> real numbers. Such a row acts on <math>\R^n</math> as a linear functional by ordinary [[matrix multiplication]]. This is because a functional maps every <math>n</math>-vector <math>x</math> into a real number <math>y</math>. Seeing this functional as a matrix <math>M</math>, <math>x</math> as an <math>n\times 1</math> matrix, and <math>y</math> as a <math>1\times 1</math> matrix (trivially, a real number), if <math>Mx=y</math> then, for dimension reasons, <math>M</math> must be a <math>1\times n</math> matrix; that is, a row vector. If <math>V</math> consists of the space of geometrical [[Vector (geometric)|vector]]s in the plane, then the [[level set|level curves]] of an element of <math>V^*</math> form a family of parallel lines in <math>V</math>, because the range is 1-dimensional, so that every point in the range is a multiple of any one nonzero element. So an element of <math>V^*</math> can be intuitively thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector, it suffices to determine which of the lines the vector lies on. Informally, this "counts" how many lines the vector crosses. More generally, if <math>V</math> is a vector space of any dimension, then the [[level sets]] of a linear functional in <math>V^*</math> are parallel hyperplanes in <math>V</math>, and the action of a linear functional on a vector can be visualized in terms of these hyperplanes.<ref>{{harvnb|Misner|Thorne|Wheeler|1973|loc=§2.5}}</ref> === Infinite-dimensional case === If <math>V</math> is not finite-dimensional but has a [[basis (linear algebra)|basis]]<ref group=nb name="choice">Several assertions in this article require the [[axiom of choice]] for their justification.
The axiom of choice is needed to show that an arbitrary vector space has a basis: in particular it is needed to show that <math>\R^\N</math> has a basis. It is also needed to show that the dual of an infinite-dimensional vector space <math>V</math> is nonzero, and hence that the natural map from <math>V</math> to its double dual is injective.</ref> <math>\mathbf{e}_\alpha</math> indexed by an infinite set <math>A</math>, then the same construction as in the finite-dimensional case yields [[linearly independent]] elements <math>\mathbf{e}^\alpha</math> (<math>\alpha\in A</math>) of the dual space, but they will not form a basis. For instance, consider the space <math>\R^\infty</math>, whose elements are those [[sequence]]s of real numbers that contain only finitely many non-zero entries, which has a basis indexed by the natural numbers <math>\N</math>. For <math>i \in \N</math>, <math>\mathbf{e}_i</math> is the sequence consisting of all zeroes except in the <math>i</math>-th position, which is 1. The dual space of <math>\R^\infty</math> is (isomorphic to) <math>\R^\N</math>, the space of ''all'' sequences of real numbers: each real sequence <math>(a_n)</math> defines a function where the element <math>(x_n)</math> of <math>\R^\infty</math> is sent to the number :<math>\sum_n a_nx_n,</math> which is a finite sum because there are only finitely many nonzero <math>x_n</math>. The [[dimension (vector space)|dimension]] of <math>\R^\infty</math> is [[countably infinite]], whereas <math>\R^\N</math> does not have a countable basis. 
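This pairing is easy to sketch in code: an element of <math>\R^\infty</math> is stored by its finitely many nonzero entries, while an element of <math>\R^\N</math> is an arbitrary rule <math>n \mapsto a_n</math>. The particular sequence and rule below are hypothetical examples:

```python
# An element of R^infty: finitely many nonzero entries, stored sparsely.
x = {0: 3.0, 5: -2.0, 7: 1.0}   # x_0 = 3, x_5 = -2, x_7 = 1, all other x_n = 0

# A functional from R^N: an arbitrary rule n -> a_n (hypothetical: a_n = n + 1).
def a(n):
    return n + 1.0

# The pairing sum_n a_n x_n is a finite sum: only nonzero x_n contribute.
value = sum(a(n) * x_n for n, x_n in x.items())
assert value == 1.0 * 3.0 + 6.0 * (-2.0) + 8.0 * 1.0   # = -1.0
```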
This observation generalizes to any<ref group=nb name="choice"/> infinite-dimensional vector space <math>V</math> over any field <math>F</math>: a choice of basis <math>\{\mathbf{e}_\alpha:\alpha\in A\}</math> identifies <math>V</math> with the space <math>(F^A)_0</math> of functions <math>f:A\to F</math> such that <math>f_\alpha=f(\alpha)</math> is nonzero for only finitely many <math>\alpha\in A</math>, where such a function <math>f</math> is identified with the vector :<math>\sum_{\alpha\in A} f_\alpha\mathbf{e}_\alpha</math> in <math>V</math> (the sum is finite by the assumption on <math>f</math>, and any <math>v\in V</math> may be written uniquely in this way by the definition of the basis). The dual space of <math>V</math> may then be identified with the space <math>F^A</math> of ''all'' functions from <math>A</math> to <math>F</math>: a linear functional <math>T</math> on <math>V</math> is uniquely determined by the values <math>\theta_\alpha=T(\mathbf{e}_\alpha)</math> it takes on the basis of <math>V</math>, and any function <math>\theta:A\to F</math> (with <math>\theta(\alpha)=\theta_\alpha</math>) defines a linear functional <math>T</math> on <math>V</math> by :<math>T\left (\sum_{\alpha\in A} f_\alpha \mathbf{e}_\alpha\right) = \sum_{\alpha \in A} f_\alpha T(\mathbf{e}_\alpha) = \sum_{\alpha\in A} f_\alpha \theta_\alpha.</math> Again, the sum is finite because <math>f_\alpha</math> is nonzero for only finitely many <math>\alpha</math>. The set <math>(F^A)_0</math> may be identified (essentially by definition) with the [[Direct sum of modules|direct sum]] of infinitely many copies of <math>F</math> (viewed as a 1-dimensional vector space over itself) indexed by <math>A</math>, i.e.
there are linear isomorphisms :<math> V\cong (F^A)_0\cong\bigoplus_{\alpha\in A} F.</math> On the other hand, <math>F^A</math> is (again by definition) the [[direct product]] of infinitely many copies of <math>F</math> indexed by <math>A</math>, and so the identification :<math>V^* \cong \left (\bigoplus_{\alpha\in A}F\right )^* \cong \prod_{\alpha\in A}F^* \cong \prod_{\alpha\in A}F \cong F^A</math> is a special case of a [[Direct sum of modules#Properties|general result]] relating direct sums (of [[module (mathematics)|module]]s) to direct products. If a vector space is not finite-dimensional, then its (algebraic) dual space is ''always'' of larger dimension (as a [[cardinal number]]) than the original vector space. This is in contrast to the case of the continuous dual space, discussed below, which may be [[isomorphic]] to the original vector space even if the latter is infinite-dimensional. The proof of this inequality between dimensions results from the following. If <math>V</math> is an infinite-dimensional <math>F</math>-vector space, the arithmetical properties of [[cardinal numbers]] imply that :<math>\mathrm{dim}(V)=|A|<|F|^{|A|}=|V^\ast|=\mathrm{max}(|\mathrm{dim}(V^\ast)|, |F|),</math> where cardinalities are denoted as [[absolute value]]s. To prove that <math>\mathrm{dim}(V)< \mathrm{dim}(V^*),</math> it suffices to prove that <math>|F|\le |\mathrm{dim}(V^\ast)|,</math> which can be done with an argument similar to [[Cantor's diagonal argument]].<ref>{{cite book|title=Elements of mathematics: Algebra I, Chapters 1 - 3|author=Nicolas Bourbaki|page=400|editor=Hermann|isbn=0201006391|year=1974|publisher=Addison-Wesley Publishing Company |language=en}}</ref> The exact dimension of the dual is given by the [[Erdős–Kaplansky theorem]]. === Bilinear products and dual spaces === If ''V'' is finite-dimensional, then ''V'' is isomorphic to ''V''<sup>∗</sup>.
But there is in general no [[natural isomorphism]] between these two spaces.<ref>{{harvnb|Mac Lane|Birkhoff|1999|loc=§VI.4}}</ref> Any [[bilinear form]] {{math|{{langle}}·,·{{rangle}}}} on ''V'' gives a mapping of ''V'' into its dual space via :<math>v\mapsto \langle v, \cdot\rangle</math> where the right hand side is defined as the functional on ''V'' taking each {{math|''w'' ∈ ''V''}} to {{math|{{langle}}''v'', ''w''{{rangle}}}}. In other words, the bilinear form determines a linear mapping :<math>\Phi_{\langle\cdot,\cdot\rangle} : V\to V^*</math> defined by :<math>\left[\Phi_{\langle\cdot,\cdot\rangle}(v), w\right] = \langle v, w\rangle.</math> If the bilinear form is [[nondegenerate form|nondegenerate]], then this is an isomorphism onto a subspace of ''V''<sup>∗</sup>. If ''V'' is finite-dimensional, then this is an isomorphism onto all of ''V''<sup>∗</sup>. Conversely, any isomorphism <math>\Phi</math> from ''V'' to a subspace of ''V''<sup>∗</sup> (resp., all of ''V''<sup>∗</sup> if ''V'' is finite dimensional) defines a unique nondegenerate bilinear form {{math|<math> \langle \cdot, \cdot \rangle_{\Phi} </math>}} on ''V'' by :<math> \langle v, w \rangle_\Phi = (\Phi (v))(w) = [\Phi (v), w].\,</math> Thus there is a one-to-one correspondence between isomorphisms of ''V'' to a subspace of (resp., all of) ''V''<sup>∗</sup> and nondegenerate bilinear forms on ''V''. If the vector space ''V'' is over the [[complex numbers|complex]] field, then sometimes it is more natural to consider [[sesquilinear form]]s instead of bilinear forms. In that case, a given sesquilinear form {{math|{{langle}}·,·{{rangle}}}} determines an isomorphism of ''V'' with the [[Complex conjugate vector space|complex conjugate]] of the dual space : <math> \Phi_{\langle \cdot, \cdot \rangle} : V\to \overline{V^*}. 
</math> The conjugate of the dual space <math>\overline{V^*}</math> can be identified with the set of all additive complex-valued functionals {{math|''f'' : ''V'' → '''C'''}} such that : <math> f(\alpha v) = \overline{\alpha}f(v). </math> === Injection into the double-dual === There is a [[natural transformation|natural]] [[linear map|homomorphism]] <math>\Psi</math> from <math>V</math> into the double dual <math>V^{**}=\hom (V^*, F)</math>, defined by <math>(\Psi(v))(\varphi)=\varphi(v)</math> for all <math>v\in V, \varphi\in V^*</math>. In other words, if <math>\mathrm{ev}_v:V^*\to F</math> is the evaluation map defined by <math>\varphi \mapsto \varphi(v)</math>, then <math>\Psi: V \to V^{**}</math> is defined as the map <math>v\mapsto\mathrm{ev}_v</math>. This map <math>\Psi</math> is always [[injective]];<ref group=nb name="choice"/> and it is always an [[isomorphism]] if <math>V</math> is finite-dimensional.<ref>{{Harvp|Halmos|1974}} pp. 25, 28</ref> Indeed, the isomorphism of a finite-dimensional vector space with its double dual is an archetypal example of a [[natural isomorphism]]. Infinite-dimensional Hilbert spaces are not isomorphic to their algebraic double duals, but instead to their continuous double duals. === Transpose of a linear map === <!-- [[matrix (mathematics)]] and [[transpose]] link here --> {{Main|Transpose of a linear map}} If {{math|''f'' : ''V'' → ''W''}} is a [[linear map]], then the ''[[Transpose#Transpose of a linear map|transpose]]'' (or ''dual'') {{math|''f''{{i sup|∗}} : ''W''{{i sup|∗}} → ''V''{{i sup|∗}}}} is defined by : <math> f^*(\varphi) = \varphi \circ f \, </math> for every ''<math>\varphi \in W^*</math>''. The resulting functional ''<math>f^* (\varphi)</math>'' in ''<math>V^*</math>'' is called the ''[[pullback (differential geometry)|pullback]]'' of ''<math>\varphi</math>'' along ''<math>f</math>''. 
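The pullback can be computed directly; in the sketch below, <math>f : \R^3 \to \R^2</math> is given by a hypothetical matrix <math>A</math>, and the functional on <math>W</math> is a hypothetical row vector:

```python
import numpy as np

# A hypothetical linear map f : R^3 -> R^2 and a functional phi on W = R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
f = lambda v: A @ v
phi = np.array([2.0, -1.0])   # phi(w) = 2*w_0 - w_1

# The transpose f* sends phi to the pullback phi . f, a functional on V = R^3.
def f_star(phi):
    return lambda v: phi @ f(v)

v = np.array([1.0, 1.0, 2.0])
assert np.isclose(f_star(phi)(v), phi @ f(v))   # f*(phi) = phi composed with f
```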
The following identity holds for all ''<math>\varphi \in W^*</math>'' and ''<math>v \in V</math>'': : <math> [f^*(\varphi),\, v] = [\varphi,\, f(v)], </math> where the bracket [·,·] on the left is the natural pairing of ''V'' with its dual space, and that on the right is the natural pairing of ''W'' with its dual. This identity characterizes the transpose,<ref>{{Harvp|Halmos|1974}} §44</ref> and is formally similar to the definition of the [[adjoint of an operator|adjoint]]. The assignment {{math|''f'' ↦ ''f''{{i sup|∗}}}} produces an [[injective]] linear map between the space of linear operators from ''V'' to ''W'' and the space of linear operators from ''W''{{i sup|∗}} to ''V''{{i sup|∗}}; this homomorphism is an [[isomorphism]] if and only if ''W'' is finite-dimensional. If {{math|1=''V'' = ''W''}} then the space of linear maps is actually an [[algebra over a field|algebra]] under [[composition of maps]], and the assignment is then an [[antihomomorphism]] of algebras, meaning that {{math|1=(''fg''){{sup|∗}} = ''g''{{i sup|∗}}''f''{{i sup|∗}}}}. In the language of [[category theory]], taking the dual of vector spaces and the transpose of linear maps is therefore a [[contravariant functor]] from the category of vector spaces over ''F'' to itself. It is possible to identify (''f''{{i sup|∗}}){{sup|∗}} with ''f'' using the natural injection into the double dual. If the linear map ''f'' is represented by the [[matrix (mathematics)|matrix]] ''A'' with respect to two bases of ''V'' and ''W'', then ''f''{{i sup|∗}} is represented by the [[transpose]] matrix ''A''<sup>T</sup> with respect to the dual bases of ''W''{{i sup|∗}} and ''V''{{i sup|∗}}, hence the name. Alternatively, as ''f'' is represented by ''A'' acting on the left on column vectors, ''f''{{i sup|∗}} is represented by the same matrix acting on the right on row vectors. 
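Both matrix points of view can be checked numerically; the matrix <math>A</math>, covector, and vector below are hypothetical examples, written in the standard bases:

```python
import numpy as np

# A hypothetical f : R^3 -> R^2 with matrix A in the standard bases.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
phi = np.array([2.0, -1.0])    # a covector on R^2
v = np.array([1.0, 1.0, 2.0])  # a vector in R^3

# View 1: f* is represented by A^T acting on phi's coefficient column.
lhs = (A.T @ phi) @ v
# View 2: the same matrix A acts on the right on the row vector phi.
rhs = (phi @ A) @ v

assert np.isclose(lhs, rhs)            # the two views agree
assert np.isclose(lhs, phi @ (A @ v))  # both equal [phi, f(v)]
```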
These points of view are related by the canonical inner product on '''R'''<sup>''n''</sup>, which identifies the space of column vectors with the dual space of row vectors. === Quotient spaces and annihilators === Let <math>S</math> be a subset of <math>V</math>. The '''[[annihilator (ring theory)|annihilator]]''' of <math>S</math> in <math>V^*</math>, denoted here <math>S^0</math>, is the collection of linear functionals <math>f\in V^*</math> such that <math>[f,s]=0</math> for all <math>s\in S</math>. That is, <math>S^0</math> consists of all linear functionals <math>f:V\to F</math> such that the restriction to <math>S</math> vanishes: <math>f|_S = 0</math>. Within finite dimensional vector spaces, the annihilator is dual to (isomorphic to) the [[orthogonal complement]]. The annihilator of a subset is itself a vector space. The annihilator of the zero vector is the whole dual space: <math>\{ 0 \}^0 = V^*</math>, and the annihilator of the whole space is just the zero covector: <math>V^0 = \{ 0 \} \subseteq V^*</math>. Furthermore, the assignment of an annihilator to a subset of <math>V</math> reverses inclusions, so that if <math>\{ 0 \} \subseteq S\subseteq T\subseteq V</math>, then : <math> \{ 0 \} \subseteq T^0 \subseteq S^0 \subseteq V^* . </math> If <math>A</math> and <math>B</math> are two subsets of <math>V</math> then : <math> A^0 + B^0 \subseteq (A \cap B)^0 . </math> If <math>(A_i)_{i\in I}</math> is any family of subsets of <math>V</math> indexed by <math>i</math> belonging to some index set <math>I</math>, then : <math> \left( \bigcup_{i\in I} A_i \right)^0 = \bigcap_{i\in I} A_i^0 . </math> In particular if <math>A</math> and <math>B</math> are subspaces of <math>V</math> then : <math> (A + B)^0 = A^0 \cap B^0 </math> and<ref group=nb name="choice"/> : <math> (A \cap B)^0 = A^0 + B^0 . 
</math> If <math>V</math> is finite-dimensional and <math>W</math> is a [[vector subspace]], then : <math> W^{00} = W </math> after identifying <math>W</math> with its image in the second dual space under the double duality isomorphism <math>V\approx V^{**}</math>. In particular, forming the annihilator is a [[Galois connection]] on the lattice of subsets of a finite-dimensional vector space. If <math>W</math> is a subspace of <math>V</math> then the [[quotient space (linear algebra)|quotient space]] <math>V/W</math> is a vector space in its own right, and so has a dual. By the [[first isomorphism theorem]], a functional <math>f:V\to F</math> factors through <math>V/W</math> if and only if <math>W</math> is in the [[kernel (algebra)|kernel]] of <math>f</math>. There is thus an isomorphism : <math> (V/W)^* \cong W^0 .</math> As a particular consequence, if <math>V</math> is a [[direct sum of modules|direct sum]] of two subspaces <math>A</math> and <math>B</math>, then <math>V^*</math> is a direct sum of <math>A^0</math> and <math>B^0</math>. === Dimensional analysis === The dual space is analogous to a "negative"-dimensional space. Most simply, since a vector <math>v \in V</math> can be paired with a covector <math>\varphi \in V^*</math> by the natural pairing <math>\langle x, \varphi \rangle := \varphi (x) \in F</math> to obtain a scalar, a covector can "cancel" the dimension of a vector, similar to [[Fraction#Reduction|reducing a fraction]]. Thus while the direct sum <math>V \oplus V^*</math> is a {{tmath|2n}}-dimensional space (if {{tmath|V}} is {{tmath|n}}-dimensional), {{tmath|V^*}} behaves as an {{tmath|(-n)}}-dimensional space, in the sense that its dimensions can be canceled against the dimensions of {{tmath|V}}. This is formalized by [[tensor contraction]]. 
This arises in physics via [[dimensional analysis]], where the dual space has inverse units.<ref>{{cite web |url=https://terrytao.wordpress.com/2012/12/29/a-mathematical-formalisation-of-dimensional-analysis/ |title=A mathematical formalisation of dimensional analysis |last=Tao |first=Terence |author-link=Terence Tao |date=2012-12-29 |quote=Similarly, one can define <math>V^{T^{-1}}</math> as the dual space to <math>V^T</math> ... }}</ref> Under the natural pairing, these units cancel, and the resulting scalar value is [[dimensionless]], as expected. For example, in (continuous) [[Fourier analysis]], or more broadly [[time–frequency analysis]]:<ref group="nb">To be precise, continuous Fourier analysis studies the space of [[Functional (mathematics)|functionals]] with domain a vector space and the space of functionals on the dual vector space.</ref> given a one-dimensional vector space with a [[unit of time]] {{tmath|t}}, the dual space has units of [[frequency]]: occurrences ''per'' unit of time (units of {{tmath|1/t}}). For example, if time is measured in [[second]]s, the corresponding dual unit is the [[inverse second]]: over the course of 3 seconds, an event that occurs 2 times per second occurs a total of 6 times, corresponding to <math>3\,\mathrm{s} \cdot 2\,\mathrm{s}^{-1} = 6</math>. Similarly, if the primal space measures length, the dual space measures [[inverse length]].
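The unit bookkeeping here can be sketched by carrying each quantity's power of the second alongside its numeric value; this is a toy illustration, not a real units library:

```python
# A quantity is (value, k), meaning value * s^k: durations carry s^1,
# dual (frequency-like) quantities carry s^-1.
def pair(q, p):
    """Multiply two quantities; units multiply, so the exponents add."""
    (vq, kq), (vp, kp) = q, p
    return (vq * vp, kq + kp)

duration = (3.0, +1)   # 3 s
rate = (2.0, -1)       # 2 s^-1, i.e. 2 occurrences per second

# Pairing cancels the units: 3 s * 2 s^-1 = 6, dimensionless (exponent 0).
assert pair(rate, duration) == (6.0, 0)
```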