=== Finite-dimensional case ===
{{See also|Dual basis}}

If <math>V</math> is finite-dimensional, then <math>V^*</math> has the same dimension as <math>V</math>. Given a [[basis of a vector space|basis]] <math>\{\mathbf{e}_1,\dots,\mathbf{e}_n\}</math> in <math>V</math>, it is possible to construct a specific basis in <math>V^*</math>, called the [[dual basis]]. This dual basis is a set <math>\{\mathbf{e}^1,\dots,\mathbf{e}^n\}</math> of linear functionals on <math>V</math>, defined by the relation
: <math>\mathbf{e}^i(c^1 \mathbf{e}_1+\cdots+c^n\mathbf{e}_n) = c^i, \quad i=1,\ldots,n</math>
for any choice of coefficients <math>c^i\in F</math>. In particular, letting each of those coefficients in turn be equal to one and the others zero gives the system of equations
: <math>\mathbf{e}^i(\mathbf{e}_j) = \delta^{i}_{j},</math>
where <math>\delta^{i}_{j}</math> is the [[Kronecker delta]] symbol. This property is referred to as the ''bi-orthogonality property''.

{{Collapse top | Proof}}
Consider <math>\{\mathbf{e}_1,\dots,\mathbf{e}_n\}</math>, the basis of <math>V</math>, and let <math>\{\mathbf{e}^1,\dots,\mathbf{e}^n\}</math> be defined by
: <math>\mathbf{e}^i(c^1 \mathbf{e}_1+\cdots+c^n\mathbf{e}_n) = c^i, \quad i=1,\ldots,n.</math>
These form a basis of <math>V^*</math> because:
# Each <math>\mathbf{e}^i</math>, <math>i=1, 2, \dots, n</math>, is a linear functional: for <math>x, y \in V</math> written as <math>x = \alpha_1\mathbf{e}_1 + \dots + \alpha_n\mathbf{e}_n</math> and <math>y = \beta_1\mathbf{e}_1 + \dots + \beta_n \mathbf{e}_n</math>, we have <math>\mathbf{e}^i(x)=\alpha_i</math> and <math>\mathbf{e}^i(y)=\beta_i</math>. Then also <math>x+\lambda y=(\alpha_1+\lambda \beta_1)\mathbf{e}_1 + \dots + (\alpha_n+\lambda\beta_n)\mathbf{e}_n</math>, so <math>\mathbf{e}^i(x+\lambda y)=\alpha_i+\lambda\beta_i=\mathbf{e}^i(x)+\lambda \mathbf{e}^i(y)</math>. Therefore, <math>\mathbf{e}^i \in V^*</math> for <math>i= 1, 2, \dots, n</math>.
# Suppose <math>\lambda_1 \mathbf{e}^1 + \cdots + \lambda_n \mathbf{e}^n = 0 \in V^*</math>. Applying this functional to each basis vector of <math>V</math> in turn yields <math>\lambda_1=\lambda_2= \dots=\lambda_n=0</math> (applying it to <math>\mathbf{e}_i</math> gives <math>\lambda_i</math>). Therefore, <math>\{\mathbf{e}^1,\dots,\mathbf{e}^n\}</math> is linearly independent in <math>V^*</math>.
# Lastly, consider <math>g \in V^*</math>. Then
#: <math>g(x)=g(\alpha_1\mathbf{e}_1 + \dots + \alpha_n\mathbf{e}_n)=\alpha_1g(\mathbf{e}_1) + \dots + \alpha_ng(\mathbf{e}_n)=\mathbf{e}^1(x)g(\mathbf{e}_1) + \dots + \mathbf{e}^n(x)g(\mathbf{e}_n),</math>
#: so <math>\{\mathbf{e}^1,\dots,\mathbf{e}^n\}</math> spans <math>V^*</math>. Hence, it is a basis of <math>V^*</math>.
{{Collapse bottom}}

For example, if <math>V</math> is <math>\R^2</math>, let its basis be chosen as <math>\{\mathbf{e}_1=(1/2,1/2),\mathbf{e}_2=(0,1)\}</math>; note that the basis vectors are not orthogonal to each other. Then <math>\mathbf{e}^1</math> and <math>\mathbf{e}^2</math> are [[one-form]]s (functions that map a vector to a scalar) such that <math>\mathbf{e}^1(\mathbf{e}_1)=1</math>, <math>\mathbf{e}^1(\mathbf{e}_2)=0</math>, <math>\mathbf{e}^2(\mathbf{e}_1)=0</math>, and <math>\mathbf{e}^2(\mathbf{e}_2)=1</math>. (Note: the superscript here is an index, not an exponent.)
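These defining relations can be checked numerically for the example above. The following is a minimal sketch using NumPy (the helper name <code>dual_functional</code> is introduced here purely for illustration): it implements each <math>\mathbf{e}^i</math> as extraction of the <math>i</math>-th coordinate, obtained by solving for the coefficients of a vector in the given basis.

<syntaxhighlight lang="python">
import numpy as np

# Columns of E are the example basis vectors e_1 = (1/2, 1/2), e_2 = (0, 1).
E = np.array([[0.5, 0.0],
              [0.5, 1.0]])

def dual_functional(i, x):
    """e^i(x): the i-th coordinate of x in the basis {e_1, e_2},
    found by solving E c = x for the coefficient vector c."""
    return np.linalg.solve(E, x)[i]

# Bi-orthogonality: e^i(e_j) = 1 if i == j, else 0.
for i in range(2):
    for j in range(2):
        assert np.isclose(dual_functional(i, E[:, j]), float(i == j))
</syntaxhighlight>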
This system of equations can be expressed using matrix notation as
:<math>
\begin{bmatrix} e^{11} & e^{12} \\ e^{21} & e^{22} \end{bmatrix}
\begin{bmatrix} e_{11} & e_{21} \\ e_{12} & e_{22} \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
</math>
Solving for the unknown values in the first matrix shows the dual basis to be <math>\{\mathbf{e}^1=(2,0),\mathbf{e}^2=(-1,1)\}</math>. Because <math>\mathbf{e}^1</math> and <math>\mathbf{e}^2</math> are functionals, they can be rewritten as <math>\mathbf{e}^1(x,y)=2x</math> and <math>\mathbf{e}^2(x,y)=-x+y</math>.

In general, when <math>V</math> is <math>\R^n</math>, if <math>E=[\mathbf{e}_1|\cdots|\mathbf{e}_n]</math> is the matrix whose columns are the basis vectors and <math>\hat{E}=[\mathbf{e}^1|\cdots|\mathbf{e}^n]</math> is the matrix whose columns are the dual basis vectors, then
:<math>\hat{E}^\textrm{T}\cdot E = I_n,</math>
where <math>I_n</math> is the [[identity matrix]] of order <math>n</math>. The biorthogonality property of these two basis sets allows any point <math>\mathbf{x}\in V</math> to be represented as
:<math>\mathbf{x} = \sum_i \langle\mathbf{x},\mathbf{e}^i \rangle \mathbf{e}_i = \sum_i \langle \mathbf{x}, \mathbf{e}_i \rangle \mathbf{e}^i,</math>
even when the basis vectors are not orthogonal to each other. Strictly speaking, the above statement only makes sense once the inner product <math>\langle \cdot, \cdot \rangle</math> and the corresponding duality pairing are introduced, as described below in ''{{section link||Bilinear_products_and_dual_spaces}}''.

In particular, if <math>\R^n</math> is interpreted as the space of ''columns'' of <math>n</math> [[real number]]s, its dual space is typically written as the space of ''rows'' of <math>n</math> real numbers. Such a row acts on <math>\R^n</math> as a linear functional by ordinary [[matrix multiplication]]. This is because a functional maps every <math>n</math>-vector <math>x</math> to a real number <math>y</math>. Viewing this functional as a matrix <math>M</math>, with <math>x</math> an <math>n\times 1</math> matrix and <math>y</math> a <math>1\times 1</math> matrix (trivially, a real number), the equation <math>Mx=y</math> forces <math>M</math>, for dimensional reasons, to be a <math>1\times n</math> matrix; that is, <math>M</math> must be a row vector.

If <math>V</math> consists of the space of geometrical [[Vector (geometric)|vector]]s in the plane, then the [[level set|level curves]] of an element of <math>V^*</math> form a family of parallel lines in <math>V</math>, because the range is 1-dimensional, so that every point in the range is a multiple of any one nonzero element. So an element of <math>V^*</math> can be intuitively thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector, it suffices to determine which of the lines the vector lies on. Informally, this "counts" how many lines the vector crosses. More generally, if <math>V</math> is a vector space of any dimension, then the [[level sets]] of a linear functional in <math>V^*</math> are parallel hyperplanes in <math>V</math>, and the action of a linear functional on a vector can be visualized in terms of these hyperplanes.<ref>{{harvnb|Misner|Thorne|Wheeler|1973|loc=§2.5}}</ref>
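Returning to the matrix formulation, the identity <math>\hat{E}^\textrm{T} E = I_n</math> and the biorthogonal expansion above can likewise be verified numerically. The following sketch uses NumPy and assumes the standard inner product on <math>\R^n</math> together with a generic, randomly generated basis; the dual basis is recovered as <math>\hat{E} = (E^{-1})^\textrm{T}</math>, which is exactly what the identity prescribes.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 4
E = rng.standard_normal((n, n))   # columns: a generic basis of R^n
E_dual = np.linalg.inv(E).T       # columns: the dual basis, E_hat = (E^{-1})^T

# The matrix identity E_hat^T E = I_n.
assert np.allclose(E_dual.T @ E, np.eye(n))

# Biorthogonal expansion of an arbitrary x, using the standard inner product:
# x = sum_i <x, e^i> e_i = sum_i <x, e_i> e^i.
x = rng.standard_normal(n)
x1 = sum((x @ E_dual[:, i]) * E[:, i] for i in range(n))
x2 = sum((x @ E[:, i]) * E_dual[:, i] for i in range(n))
assert np.allclose(x, x1) and np.allclose(x, x2)
</syntaxhighlight>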