{{Short description|Linear map from a vector space to its field of scalars}} In [[mathematics]], a '''linear form''' (also known as a '''linear functional''',<ref>{{Harvard citation text|Axler|2015}} p. 101, §3.92</ref> a '''one-form''', or a '''covector''') is a [[linear map]]<ref group=nb>In some texts the roles are reversed and vectors are defined as linear maps from covectors to scalars</ref> from a [[vector space]] to its [[field (mathematics)|field]] of [[scalar (mathematics)|scalars]] (often, the [[real number]]s or the [[complex number]]s). If {{mvar|V}} is a vector space over a field {{mvar|k}}, the set of all linear functionals from {{mvar|V}} to {{mvar|k}} is itself a vector space over {{mvar|k}} with addition and scalar multiplication defined [[pointwise]]. This space is called the [[dual space]] of {{mvar|V}}, or sometimes the '''algebraic dual space''', when a [[topological dual space]] is also considered. It is often denoted {{math|Hom(''V'', ''k'')}},<ref name=":0">{{Harvard citation text|Tu|2011}} p. 19, §3.1</ref> or, when the field {{mvar|k}} is understood, <math>V^*</math>;<ref>{{Harvard citation text|Katznelson|Katznelson|2008}} p. 37, §2.1.3</ref> other notations are also used, such as <math>V'</math>,<ref>{{Harvard citation text|Axler|2015}} p. 101, §3.94</ref><ref>{{Harvtxt|Halmos|1974}} p. 20, §13</ref> <math>V^{\#}</math> or <math>V^{\vee}.</math><ref name=":0" /> When vectors are represented by [[column vector]]s (as is common when a [[basis (linear algebra)|basis]] is fixed), then linear functionals are represented as [[row vector]]s, and their values on specific vectors are given by [[matrix product]]s (with the row vector on the left). == Examples == The constant [[zero function]], mapping every vector to zero, is trivially a linear functional. Every other linear functional (such as the ones below) is [[Surjective function|surjective]] (that is, its range is all of {{mvar|k}}). 
* Indexing into a vector: The second element of a three-vector is given by the one-form <math>[0, 1, 0].</math> That is, the second element of <math>[x, y, z]</math> is <math display=block>[0, 1, 0] \cdot [x, y, z] = y.</math> * [[Mean]]: The mean element of an <math>n</math>-vector is given by the one-form <math>\left[1/n, 1/n, \ldots, 1/n\right].</math> That is, <math display=block>\operatorname{mean}(v) = \left[1/n, 1/n, \ldots, 1/n\right] \cdot v.</math> * [[Sampling (signal processing)|Sampling]]: Sampling with a [[Kernel (image processing)|kernel]] can be considered a one-form, where the one-form is the kernel shifted to the appropriate location. * [[Net present value]] of a net [[cash flow]], <math>R(t),</math> is given by the one-form <math>w(t) = (1 + i)^{-t}</math> where <math>i</math> is the [[Discount window|discount rate]]. That is, <math display=block>\mathrm{NPV}(R(t)) = \langle w, R\rangle = \int_{t=0}^\infty \frac{R(t)}{(1+i)^{t}}\,dt.</math> === Linear functionals in R<sup>''n''</sup> === Suppose that vectors in the real coordinate space <math>\R^n</math> are represented as column vectors <math display=block>\mathbf{x} = \begin{bmatrix}x_1\\ \vdots\\ x_n\end{bmatrix}.</math> For each row vector <math>\mathbf{a} = \begin{bmatrix}a_1 & \cdots & a_n\end{bmatrix}</math> there is a linear functional <math>f_{\mathbf{a}}</math> defined by <math display=block>f_{\mathbf{a}}(\mathbf{x}) = a_1 x_1 + \cdots + a_n x_n,</math> and each linear functional can be expressed in this form. 
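The row-vector examples above can be checked directly. Below is a minimal Python sketch; the helper `apply_form` and the sample vector are illustrative choices, not part of the article:

```python
def apply_form(a, x):
    """Apply the linear functional represented by row vector a to vector x."""
    return sum(ai * xi for ai, xi in zip(a, x))

v = [3.0, 7.0, 5.0]

# Indexing: the one-form [0, 1, 0] extracts the second component.
second = apply_form([0, 1, 0], v)

# Mean: the one-form [1/n, ..., 1/n] computes the average.
n = len(v)
mean = apply_form([1 / n] * n, v)

print(second, mean)  # 7.0 5.0
```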
This can be interpreted as either the matrix product or the dot product of the row vector <math>\mathbf{a}</math> and the column vector <math>\mathbf{x}</math>: <math display=block>f_{\mathbf{a}}(\mathbf{x}) = \mathbf{a} \cdot \mathbf{x} = \begin{bmatrix}a_1 & \cdots & a_n\end{bmatrix} \begin{bmatrix}x_1\\ \vdots\\ x_n\end{bmatrix}.</math> === Trace of a square matrix === The [[Trace (linear algebra)|trace]] <math>\operatorname{tr} (A)</math> of a square matrix <math>A</math> is the sum of all elements on its [[main diagonal]]. Matrices can be multiplied by scalars and two matrices of the same dimension can be added together; these operations make a [[vector space]] from the set of all <math>n \times n</math> matrices. The trace is a linear functional on this space because <math>\operatorname{tr} (s A) = s \operatorname{tr} (A)</math> and <math>\operatorname{tr} (A + B) = \operatorname{tr} (A) + \operatorname{tr} (B)</math> for all scalars <math>s</math> and all <math>n \times n</math> matrices <math>A \text{ and } B.</math> === (Definite) Integration === Linear functionals first appeared in [[functional analysis]], the study of [[Function space|vector spaces of functions]]. A typical example of a linear functional is [[Integral|integration]]: the linear transformation defined by the [[Riemann integral]] <math display=block>I(f) = \int_a^b f(x)\, dx</math> is a linear functional from the vector space <math>C[a, b]</math> of continuous functions on the interval <math>[a, b]</math> to the real numbers. The linearity of <math>I</math> follows from the standard facts about the integral: <math display=block>\begin{align} I(f + g) &= \int_a^b[f(x) + g(x)]\, dx = \int_a^b f(x)\, dx + \int_a^b g(x)\, dx = I(f) + I(g) \\ I(\alpha f) &= \int_a^b \alpha f(x)\, dx = \alpha\int_a^b f(x)\, dx = \alpha I(f). 
\end{align}</math> === Evaluation === Let <math>P_n</math> denote the vector space of real-valued polynomial functions of degree <math>\leq n</math> defined on an interval <math>[a, b].</math> If <math>c \in [a, b],</math> then let <math>\operatorname{ev}_c : P_n \to \R</math> be the '''evaluation functional''' <math display=block>\operatorname{ev}_c f = f(c).</math> The mapping <math>f \mapsto f(c)</math> is linear since <math display=block>\begin{align} (f + g)(c) &= f(c) + g(c) \\ (\alpha f)(c) &= \alpha f(c). \end{align}</math> If <math>x_0, \ldots, x_n</math> are <math>n + 1</math> distinct points in <math>[a, b],</math> then the evaluation functionals <math>\operatorname{ev}_{x_i},</math> <math>i = 0, \ldots, n</math> form a [[Basis of a vector space|basis]] of the dual space of <math>P_n</math> ({{harvtxt|Lax|1996}} proves this last fact using [[Lagrange interpolation]]). === Non-example === A function <math>f</math> having the [[equation of a line]] <math>f(x) = a + r x</math> with <math>a \neq 0</math> (for example, <math>f(x) = 1 + 2 x</math>) is {{em|not}} a linear functional on <math>\R</math>, since it is not [[Linear function|linear]].<ref group="nb">For instance, <math>f(1 + 1) = a + 2 r \neq 2 a + 2 r = f(1) + f(1).</math></ref> It is, however, [[Affine-linear function|affine-linear]]. == Visualization == [[File:Gradient 1-form.svg|thumb|200px|Geometric interpretation of a 1-form '''α''' as a stack of [[hyperplane]]s of constant value, each corresponding to those vectors that '''α''' maps to a given scalar value shown next to it along with the "sense" of increase. The {{color box|purple}} zero plane is through the origin.]] In finite dimensions, a linear functional can be visualized in terms of its [[level set]]s, the sets of vectors which map to a given value. In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallel [[hyperplane]]s. 
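The level-set picture can also be checked numerically: two vectors lie on the same hyperplane of a linear functional exactly when their difference lies in its kernel. A short Python sketch, with an illustrative functional and sample vectors:

```python
def f(x):
    # a linear functional on R^3, represented by the row vector a
    a = [2.0, -1.0, 3.0]
    return sum(ai * xi for ai, xi in zip(a, x))

u = [1.0, 2.0, 1.0]
w = [0.0, 0.0, 1.0]

# u and w lie on the same level set {x : f(x) = 3} ...
print(f(u), f(w))  # 3.0 3.0

# ... so their difference lies in the kernel hyperplane {x : f(x) = 0}.
diff = [ui - wi for ui, wi in zip(u, w)]
print(f(diff))  # 0.0
```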
This method of visualizing linear functionals is sometimes introduced in [[general relativity]] texts, such as [[Gravitation (book)|''Gravitation'']] by {{harvtxt|Misner|Thorne|Wheeler|1973}}. == Applications == === Application to quadrature === If <math>x_0, \ldots, x_n</math> are <math>n + 1</math> distinct points in {{closed-closed|''a'', ''b''}}, then the linear functionals <math>\operatorname{ev}_{x_i} : f \mapsto f\left(x_i\right)</math> defined above form a [[Basis of a vector space|basis]] of the dual space of {{math|''P<sub>n</sub>''}}, the space of polynomials of degree <math>\leq n.</math> The integration functional {{math|''I''}} is also a linear functional on {{math|''P<sub>n</sub>''}}, and so can be expressed as a linear combination of these basis elements. In symbols, there are coefficients <math>a_0, \ldots, a_n</math> for which <math display="block">I(f) = a_0 f(x_0) + a_1 f(x_1) + \dots + a_n f(x_n)</math> for all <math>f \in P_n.</math> This forms the foundation of the theory of [[numerical quadrature]].<ref>{{harvnb|Lax|1996}}</ref> === In quantum mechanics === Linear functionals are particularly important in [[quantum mechanics]]. Quantum mechanical systems are represented by [[Hilbert space]]s, which are [[Antilinear|anti]]–[[Linear isomorphism|isomorphic]] to their own dual spaces. A state of a quantum mechanical system can be identified with a linear functional. For more information see [[bra–ket notation]]. === Distributions === In the theory of [[generalized function]]s, certain kinds of generalized functions called [[distribution (mathematics)|distributions]] can be realized as linear functionals on spaces of [[test function]]s. ==Dual vectors and bilinear forms== [[File:1-form linear functional.svg|thumb|400px|Linear functionals (1-forms) '''α''', '''β''' and their sum '''σ''' and vectors '''u''', '''v''', '''w''', in [[three-dimensional space|3d]] [[Euclidean space]]. 
The number of (1-form) [[hyperplane]]s intersected by a vector equals the [[inner product]].<ref>{{Harvard citation text|Misner|Thorne|Wheeler|1973}} p. 57</ref>]] Every non-degenerate [[bilinear form]] on a finite-dimensional vector space {{mvar|V}} induces an [[isomorphism]] {{math|''V'' → ''V''{{i sup|∗}} : ''v'' ↦ ''v''{{sup|∗}}}} such that <math display="block"> v^*(w) := \langle v, w\rangle \quad \forall w \in V ,</math> where the bilinear form on {{mvar|V}} is denoted <math>\langle \,\cdot\, , \,\cdot\, \rangle</math> (for instance, in [[Euclidean space]], <math>\langle v, w \rangle = v \cdot w</math> is the [[dot product]] of {{mvar|v}} and {{mvar|w}}). The inverse isomorphism is {{nowrap|''V''{{i sup|∗}} → ''V'' : ''v''{{sup|∗}} ↦ ''v''}}, where {{mvar|v}} is the unique element of {{mvar|V}} such that <math display="block"> \langle v, w\rangle = v^*(w)</math> for all <math>w \in V.</math> The above defined vector {{nowrap|''v''{{sup|∗}} ∈ ''V''{{i sup|∗}}}} is said to be the '''dual vector''' of <math>v \in V.</math> In an infinite dimensional [[Hilbert space]], analogous results hold by the [[Riesz representation theorem]]. There is a mapping {{nowrap|''V'' ↦ ''V''{{i sup|∗}}}} from {{mvar|V}} into its {{em|[[continuous dual space]]}} ''V''{{i sup|∗}}. ==Relationship to bases== {{hatnote|Below, we assume that the dimension is finite. For a discussion of analogous results in infinite dimensions, see [[Schauder basis]].}} ===Basis of the dual space=== Let the vector space {{mvar|V}} have a basis <math>\mathbf{e}_1, \mathbf{e}_2,\dots,\mathbf{e}_n</math>, not necessarily [[orthogonal]]. Then the [[dual space]] <math>V^*</math> has a basis <math>\tilde{\omega}^1,\tilde{\omega}^2,\dots,\tilde{\omega}^n</math> called the [[dual basis]] defined by the special property that <math display="block"> \tilde{\omega}^i (\mathbf e_j) = \begin{cases} 1 &\text{if}\ i = j\\ 0 &\text{if}\ i \neq j. 
\end{cases} </math> Or, more succinctly, <math display="block"> \tilde{\omega}^i (\mathbf e_j) = \delta_{ij} </math> where <math>\delta_{ij}</math> is the [[Kronecker delta]]. Here the superscripts of the basis functionals are not exponents but are instead [[Covariance and contravariance of vectors|contravariant]] indices. A linear functional <math>\tilde{u}</math> belonging to the dual space <math>V^*</math> can be expressed as a [[linear combination]] of basis functionals, with coefficients ("components") {{math|''u<sub>i</sub>''}}, <math display="block">\tilde{u} = \sum_{i=1}^n u_i \, \tilde{\omega}^i. </math> Then, applying the functional <math>\tilde{u}</math> to a basis vector <math>\mathbf{e}_j</math> yields <math display="block">\tilde{u}(\mathbf e_j) = \sum_{i=1}^n \left(u_i \, \tilde{\omega}^i\right) \mathbf e_j = \sum_i u_i \left[\tilde{\omega}^i \left(\mathbf e_j\right)\right] </math> by the pointwise definitions of addition and scalar multiplication of functionals. Then <math display="block">\begin{align} \tilde{u}({\mathbf e}_j) &= \sum_i u_i \left[\tilde{\omega}^i \left({\mathbf e}_j\right)\right] \\& = \sum_i u_i {\delta}_{ij} \\ &= u_j. \end{align}</math> So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector. === The dual basis and inner product === When the space {{mvar|V}} carries an [[inner product]], it is possible to write an explicit formula for the dual basis of a given basis.
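The Kronecker-delta property and the component-extraction identity above can be verified concretely. The following Python sketch uses an illustrative non-orthogonal basis of the plane, with the dual basis taken as the rows of the inverse of the matrix whose columns are the basis vectors (computed by hand for this small case):

```python
# Basis of R^2 (not orthogonal): e1 = (1, 0), e2 = (1, 1).
e = [(1.0, 0.0), (1.0, 1.0)]

# Dual basis as row vectors: rows of the inverse of [[1, 1], [0, 1]].
omega = [(1.0, -1.0), (0.0, 1.0)]

def apply(w, v):
    """Apply the functional with row vector w to the vector v."""
    return w[0] * v[0] + w[1] * v[1]

# Kronecker-delta property: omega^i(e_j) = delta_ij.
for i in range(2):
    for j in range(2):
        assert apply(omega[i], e[j]) == (1.0 if i == j else 0.0)

# Component extraction: for u = 4*omega^1 + 9*omega^2, u(e_j) = u_j.
u = [4 * a + 9 * b for a, b in zip(omega[0], omega[1])]
print(apply(u, e[0]), apply(u, e[1]))  # 4.0 9.0
```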
Let {{mvar|V}} have (not necessarily orthogonal) basis <math>\mathbf{e}_1,\dots, \mathbf{e}_n.</math> In three dimensions ({{math|1=''n'' = 3}}), the dual basis can be written explicitly <math display="block"> \tilde{\omega}^i(\mathbf{v}) = \frac{1}{2} \left\langle \frac { \sum_{j=1}^3\sum_{k=1}^3\varepsilon^{ijk} \, (\mathbf e_j \times \mathbf e_k)} {\mathbf e_1 \cdot \mathbf e_2 \times \mathbf e_3} , \mathbf{v} \right\rangle ,</math> for <math>i = 1, 2, 3,</math> where ''ε'' is the [[Levi-Civita symbol]] and <math>\langle \cdot , \cdot \rangle</math> the inner product (or [[dot product]]) on {{mvar|V}}. In higher dimensions, this generalizes as follows: <math display="block"> \tilde{\omega}^i(\mathbf{v}) = \left\langle \frac{\sum_{1 \le i_2 < i_3 < \dots < i_n \le n} \varepsilon^{ii_2\dots i_n}(\star \mathbf{e}_{i_2} \wedge \cdots \wedge \mathbf{e}_{i_n})}{\star(\mathbf{e}_1\wedge\cdots\wedge\mathbf{e}_n)}, \mathbf{v} \right\rangle ,</math> where <math>\star</math> is the [[Hodge star operator]]. == Over a ring == [[Module (mathematics)|Modules]] over a [[Ring (mathematics)|ring]] are generalizations of vector spaces that remove the restriction that coefficients belong to a [[Field (mathematics)|field]]. Given a module {{mvar|M}} over a ring {{mvar|R}}, a linear form on {{mvar|M}} is a linear map from {{mvar|M}} to {{mvar|R}}, where the latter is considered as a module over itself. The space of linear forms is always denoted {{math|Hom<sub>''R''</sub>(''M'', ''R'')}}, whether {{mvar|R}} is a field or not. It is a [[right module]] if {{mvar|M}} is a left module.
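As a concrete instance over a ring that is not a field, consider the '''Z'''-module '''Z'''<sup>2</sup>. The following Python sketch, with an illustrative form φ(''a'', ''b'') = 3''a'' + 5''b'', checks additivity and homogeneity over '''Z''' on sample elements:

```python
# A linear form on the Z-module Z^2: phi(a, b) = 3a + 5b (illustrative choice).
def phi(v):
    a, b = v
    return 3 * a + 5 * b

x, y = (2, -1), (4, 7)

s = tuple(xi + yi for xi, yi in zip(x, y))
assert phi(s) == phi(x) + phi(y)               # additive
assert phi((6 * 2, 6 * -1)) == 6 * phi(x)      # homogeneous over Z
print(phi(x))  # 1
```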
The existence of "enough" linear forms on a module is equivalent to [[Projective module|projectivity]].<ref>{{Cite book|last=Clark|first=Pete L.|url=http://alpha.math.uga.edu/~pete/integral2015.pdf|title=Commutative Algebra|publisher=Unpublished|at=Lemma 3.12}}</ref> {{math theorem|math_statement=An {{mvar|R}}-[[Module (mathematics)|module]] {{mvar|M}} is [[projective module|projective]] if and only if there exists a subset <math>A\subset M</math> and linear forms <math>\{f_a\mid a \in A\}</math> such that, for every <math>x\in M,</math> only finitely many <math>f_a(x)</math> are nonzero, and <math display="block">x=\sum_{a\in A}{f_a(x)a}</math> |name=Dual Basis Lemma }} == Change of field == {{anchor|Real and complex linear functionals|Real and imaginary parts of a linear functional}} {{See also|Linear complex structure|Complexification}} Suppose that <math>X</math> is a vector space over <math>\Complex.</math> Restricting scalar multiplication to <math>\R</math> gives rise to a real vector space{{sfn|Rudin|1991|pp=57}} <math>X_{\R}</math> called the {{em|[[realification]]}} of <math>X.</math> Any vector space <math>X</math> over <math>\Complex</math> is also a vector space over <math>\R,</math> endowed with a [[Linear complex structure|complex structure]]; that is, there exists a real [[vector subspace]] <math>X_{\R}</math> such that we can (formally) write <math>X = X_{\R} \oplus X_{\R}i</math> as <math>\R</math>-vector spaces. === Real versus complex linear functionals === Every linear functional on <math>X</math> is complex-valued while every linear functional on <math>X_{\R}</math> is real-valued. 
If <math>\dim X \neq 0</math> then a linear functional on either one of <math>X</math> or <math>X_{\R}</math> is non-trivial (meaning not identically <math>0</math>) if and only if it is surjective (because if <math>\varphi(x) \neq 0</math> then for any scalar <math>s,</math> <math>\varphi\left((s/\varphi(x)) x\right) = s</math>), where the [[Image of a function|image]] of a linear functional on <math>X</math> is <math>\C</math> while the image of a linear functional on <math>X_{\R}</math> is <math>\R.</math> Consequently, the only function on <math>X</math> that is both a linear functional on <math>X</math> and a linear function on <math>X_{\R}</math> is the trivial functional; in other words, <math>X^{\#} \cap X_{\R}^{\#} = \{ 0 \},</math> where <math>\,{\cdot}^{\#}</math> denotes the space's [[algebraic dual space]]. However, every <math>\Complex</math>-linear functional on <math>X</math> is an [[Linear operator|<math>\R</math>-linear {{em|operator}}]] (meaning that it is [[Additive function|additive]] and [[Real homogeneous|homogeneous over <math>\R</math>]]), but unless it is identically <math>0,</math> it is not an <math>\R</math>-linear {{em|functional}} on <math>X</math> because its range (which is <math>\Complex</math>) is 2-dimensional over <math>\R.</math> Conversely, a non-zero <math>\R</math>-linear functional has range too small to be a <math>\Complex</math>-linear functional as well. 
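The distinction can be seen concretely: a <math>\Complex</math>-linear functional on <math>\Complex^2</math>, viewed over <math>\R</math>, is additive and homogeneous over <math>\R</math>, yet takes non-real values, so it is an <math>\R</math>-linear operator but not an <math>\R</math>-linear functional. A Python sketch with an illustrative choice of coefficients:

```python
# A C-linear functional on C^2: phi(x) = a1*x1 + a2*x2 (illustrative a).
a = (2 + 1j, -1j)

def phi(x):
    return a[0] * x[0] + a[1] * x[1]

x, y = (1 + 2j, 3), (0.5, 1j)

# phi is additive and homogeneous over R (an R-linear operator) ...
s = tuple(xi + yi for xi, yi in zip(x, y))
assert phi(s) == phi(x) + phi(y)
assert phi((2.5 * x[0], 2.5 * x[1])) == 2.5 * phi(x)

# ... but not an R-linear *functional*: its values need not be real.
print(phi(x))  # 2j
```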
=== Real and imaginary parts === If <math>\varphi \in X^{\#}</math> then denote its [[real part]] by <math>\varphi_{\R} := \operatorname{Re} \varphi</math> and its [[imaginary part]] by <math>\varphi_i := \operatorname{Im} \varphi.</math> Then <math>\varphi_{\R} : X \to \R</math> and <math>\varphi_i : X \to \R</math> are linear functionals on <math>X_{\R}</math> and <math>\varphi = \varphi_{\R} + i \varphi_i.</math> The fact that <math>z = \operatorname{Re} z - i \operatorname{Re} (i z) = \operatorname{Im} (i z) + i \operatorname{Im} z</math> for all <math>z \in \Complex</math> implies that for all <math>x \in X,</math>{{sfn|Rudin|1991|pp=57}} <math display=block>\begin{alignat}{4}\varphi(x) &= \varphi_{\R}(x) - i \varphi_{\R}(i x) \\ &= \varphi_i(i x) + i \varphi_i(x)\\ \end{alignat}</math> and consequently, that <math>\varphi_i(x) = - \varphi_{\R}(i x)</math> and <math>\varphi_{\R}(x) = \varphi_i(ix).</math>{{sfn|Narici|Beckenstein|2011|pp=9-11}} The assignment <math>\varphi \mapsto \varphi_{\R}</math> defines a [[Bijection|bijective]]{{sfn|Narici|Beckenstein|2011|pp=9-11}} <math>\R</math>-linear operator <math>X^{\#} \to X_{\R}^{\#}</math> whose inverse is the map <math>L_{\bull} : X_{\R}^{\#} \to X^{\#}</math> defined by the assignment <math>g \mapsto L_g</math> that sends <math>g : X_{\R} \to \R</math> to the linear functional <math>L_g : X \to \Complex</math> defined by <math display=block>L_g(x) := g(x) - i g(ix) \quad \text{ for all } x \in X.</math> The real part of <math>L_g</math> is <math>g</math> and the bijection <math>L_{\bull} : X_{\R}^{\#} \to X^{\#}</math> is an <math>\R</math>-linear operator, meaning that <math>L_{g+h} = L_g + L_h</math> and <math>L_{rg} = r L_g</math> for all <math>r \in \R</math> and <math>g, h \in X_\R^{\#}.</math>{{sfn|Narici|Beckenstein|2011|pp=9-11}} Similarly for the imaginary part, the assignment <math>\varphi \mapsto \varphi_i</math> induces an <math>\R</math>-linear bijection <math>X^{\#} \to X_{\R}^{\#}</math> whose 
inverse is the map <math>X_{\R}^{\#} \to X^{\#}</math> defined by sending <math>I \in X_{\R}^{\#}</math> to the linear functional on <math>X</math> defined by <math>x \mapsto I(i x) + i I(x).</math> This relationship was discovered by [[Henry Löwig]] in 1934 (although it is usually credited to F. Murray),{{sfn|Narici|Beckenstein|2011|pp=10-11}} and can be generalized to arbitrary [[Finite field extension|finite extensions of a field]] in the natural way. It has many important consequences, some of which will now be described. === Properties and relationships === Suppose <math>\varphi : X \to \Complex</math> is a linear functional on <math>X</math> with real part <math>\varphi_{\R} := \operatorname{Re} \varphi</math> and imaginary part <math>\varphi_i := \operatorname{Im} \varphi.</math> Then <math>\varphi = 0</math> if and only if <math>\varphi_{\R} = 0</math> if and only if <math>\varphi_i = 0.</math> Assume that <math>X</math> is a [[topological vector space]]. Then <math>\varphi</math> is continuous if and only if its real part <math>\varphi_{\R}</math> is continuous, if and only if <math>\varphi</math>'s imaginary part <math>\varphi_i</math> is continuous. That is, either all three of <math>\varphi, \varphi_{\R},</math> and <math>\varphi_i</math> are continuous or none are continuous. This remains true if the word "continuous" is replaced with the word "[[Bounded linear functional|bounded]]". In particular, <math>\varphi \in X^{\prime}</math> if and only if <math>\varphi_{\R} \in X_{\R}^{\prime}</math> where the prime denotes the space's [[continuous dual space]].{{sfn|Rudin|1991|pp=57}} Let <math>B \subseteq X.</math> If <math>u B \subseteq B</math> for all scalars <math>u \in \Complex</math> of [[unit length]] (meaning <math>|u| = 1</math>) then<ref group=proof>It is true if <math>B = \varnothing</math> so assume otherwise. 
Since <math>\left|\operatorname{Re} z\right| \leq |z|</math> for all scalars <math>z \in \Complex,</math> it follows that <math display=inline>\sup_{x \in B} \left|\varphi_{\R}(x)\right| \leq \sup_{x \in B} |\varphi(x)|.</math> If <math>b \in B</math> then let <math>r_b \geq 0</math> and <math>u_b \in \Complex</math> be such that <math>\left|u_b\right| = 1</math> and <math>\varphi(b) = r_b u_b,</math> where if <math>r_b = 0</math> then take <math>u_b := 1.</math>Then <math>|\varphi(b)| = r_b</math> and because <math display=inline>\varphi\left(\frac{1}{u_b} b\right) = r_b</math> is a real number, <math display=inline>\varphi_{\R}\left(\frac{1}{u_b} b\right) = \varphi\left(\frac{1}{u_b} b\right) = r_b.</math> By assumption <math display=inline>\frac{1}{u_b} b \in B</math> so <math display=inline>|\varphi(b)| = r_b \leq \sup_{x \in B} \left|\varphi_{\R}(x)\right|.</math> Since <math>b \in B</math> was arbitrary, it follows that <math display=inline>\sup_{x \in B} |\varphi(x)| \leq \sup_{x \in B} \left|\varphi_{\R}(x)\right|.</math> <math>\blacksquare</math></ref>{{sfn|Narici|Beckenstein|2011|pp=126-128}} <math display=block>\sup_{b \in B} |\varphi(b)| = \sup_{b \in B} \left|\varphi_{\R}(b)\right|.</math> Similarly, if <math>\varphi_i := \operatorname{Im} \varphi : X \to \R</math> denotes the complex part of <math>\varphi</math> then <math>i B \subseteq B</math> implies <math display=block>\sup_{b \in B} \left|\varphi_{\R}(b)\right| = \sup_{b \in B} \left|\varphi_i(b)\right|.</math> If <math>X</math> is a [[normed space]] with norm <math>\|\cdot\|</math> and if <math>B = \{x \in X : \| x \| \leq 1\}</math> is the closed unit ball then the [[supremum]]s above are the [[operator norm]]s (defined in the usual way) of <math>\varphi, \varphi_{\R},</math> and <math>\varphi_i</math> so that{{sfn|Narici|Beckenstein|2011|pp=126-128}} <math display=block>\|\varphi\| = \left\|\varphi_{\R}\right\| = \left\|\varphi_i \right\|.</math> This conclusion extends to the analogous 
statement for [[Polar set|polars]] of [[balanced set]]s in general [[topological vector space]]s. * If <math>X</math> is a complex [[Hilbert space]] with a (complex) [[inner product]] <math>\langle \,\cdot\,| \,\cdot\, \rangle</math> that is [[Antilinear map|antilinear]] in its first coordinate (and linear in the second) then <math>X_{\R}</math> becomes a real Hilbert space when endowed with the real part of <math>\langle \,\cdot\,| \,\cdot\, \rangle.</math> Explicitly, this real inner product on <math>X_{\R}</math> is defined by <math>\langle x | y \rangle_{\R} := \operatorname{Re} \langle x | y \rangle</math> for all <math>x, y \in X</math> and it induces the same norm on <math>X</math> as <math>\langle \,\cdot\,| \,\cdot\, \rangle</math> because <math>\sqrt{\langle x | x \rangle_{\R}} = \sqrt{\langle x | x \rangle}</math> for all vectors <math>x.</math> Applying the [[Riesz representation theorem]] to <math>\varphi \in X^{\prime}</math> (resp. to <math>\varphi_{\R} \in X_{\R}^{\prime}</math>) guarantees the existence of a unique vector <math>f_{\varphi} \in X</math> (resp. <math>f_{\varphi_{\R}} \in X_{\R}</math>) such that <math>\varphi(x) = \left\langle f_{\varphi} | \, x \right\rangle</math> (resp. <math>\varphi_{\R}(x) = \left\langle f_{\varphi_{\R}} | \, x \right\rangle_{\R}</math>) for all vectors <math>x.</math> The theorem also guarantees that <math>\left\|f_{\varphi}\right\| = \|\varphi\|_{X^{\prime}}</math> and <math>\left\|f_{\varphi_{\R}}\right\| = \left\|\varphi_{\R}\right\|_{X_{\R}^{\prime}}.</math> It is readily verified that <math>f_{\varphi} = f_{\varphi_{\R}}.</math> Now <math>\left\|f_{\varphi}\right\| = \left\|f_{\varphi_{\R}}\right\|</math> and the previous equalities imply that <math>\|\varphi\|_{X^{\prime}} = \left\|\varphi_{\R}\right\|_{X_{\R}^{\prime}},</math> which is the same conclusion that was reached above. 
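The norm equality can be seen numerically in <math>\Complex^n</math>: the functional <math>\varphi(x) = \langle f | x \rangle</math> attains <math>|\varphi(x)| = \|f\|</math> at the unit vector <math>x = f/\|f\|</math>, and since <math>\varphi(f) = \|f\|^2</math> is real, its real part attains the same supremum there. A Python sketch with an illustrative Riesz vector <math>f</math>:

```python
import math

f = [1 + 1j, 2 - 1j]                       # illustrative Riesz vector in C^2
norm_f = math.sqrt(sum(abs(c) ** 2 for c in f))

def phi(x):
    # phi(x) = <f | x>, with the inner product antilinear in its first slot
    return sum(fc.conjugate() * xc for fc, xc in zip(f, x))

x = [c / norm_f for c in f]                # unit vector in the direction of f
val = phi(x)                               # equals ||f||, a real number

assert abs(val - norm_f) < 1e-12           # |phi| attains ||f|| on the unit ball
assert abs(val.imag) < 1e-12               # phi(x) is real, so Re(phi) attains it too
```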
== In infinite dimensions == {{see also|Continuous linear operator}} Below, all [[vector space]]s are over either the [[real number]]s <math>\R</math> or the [[complex number]]s <math>\Complex.</math> If <math>V</math> is a [[topological vector space]], the space of [[Continuous function|continuous]] linear functionals — the {{em|[[Continuous dual space|continuous dual]]}} — is often simply called the dual space. If <math>V</math> is a [[Banach space]], then so is its (continuous) dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the {{em|algebraic dual space}}. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual, but in infinite dimensions the continuous dual is generally a proper subspace of the algebraic dual. A linear functional {{mvar|f}} on a (not necessarily [[locally convex]]) [[topological vector space]] {{mvar|X}} is continuous if and only if there exists a continuous seminorm {{mvar|p}} on {{mvar|X}} such that <math>|f| \leq p.</math>{{sfn|Narici|Beckenstein|2011|p=126}} === Characterizing closed subspaces === Continuous linear functionals have nice properties for [[Real analysis|analysis]]: a linear functional is continuous if and only if its [[Kernel (linear operator)|kernel]] is closed,<ref>{{harvnb|Rudin|1991|loc=Theorem 1.18}}</ref> and a non-trivial continuous linear functional is an [[open map]], even if the (topological) vector space is not complete.{{sfn|Narici|Beckenstein|2011 |p=128}} ==== Hyperplanes and maximal subspaces ==== A vector subspace <math>M</math> of <math>X</math> is called '''maximal''' if <math>M \subsetneq X</math> (meaning <math>M \subseteq X</math> and <math>M \neq X</math>) and there does not exist a vector subspace <math>N</math> of <math>X</math> such that <math>M \subsetneq N \subsetneq X.</math> A vector subspace <math>M</math> of <math>X</math> is maximal if and only if it is the kernel of some non-trivial linear
functional on <math>X</math> (that is, <math>M = \ker f</math> for some linear functional <math>f</math> on <math>X</math> that is not identically {{math|0}}). An '''affine hyperplane''' in <math>X</math> is a translate of a maximal vector subspace. By linearity, a subset <math>H</math> of <math>X</math> is an affine hyperplane if and only if there exists some non-trivial linear functional <math>f</math> on <math>X</math> such that <math>H = f^{-1}(1) = \{ x \in X : f(x) = 1 \}.</math>{{sfn|Narici|Beckenstein|2011|pp=10-11}} If <math>f</math> is a linear functional and <math>s \neq 0</math> is a scalar then <math>f^{-1}(s) = s \left(f^{-1}(1)\right) = \left(\frac{1}{s} f\right)^{-1}(1).</math> This equality can be used to relate different level sets of <math>f.</math> Moreover, if <math>f \neq 0</math> then the kernel of <math>f</math> can be reconstructed from the affine hyperplane <math>H := f^{-1}(1)</math> by <math>\ker f = H - H.</math> ==== Relationships between multiple linear functionals ==== Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other). This fact can be generalized to the following theorem.
{{math theorem|name=Theorem{{sfn|Rudin|1991|pp=63-64}}{{sfn|Narici|Beckenstein|2011|pp=1-18}}|math_statement= If <math>f, g_1, \ldots, g_n</math> are linear functionals on {{mvar|X}}, then the following are equivalent: #{{mvar|f}} can be written as a [[linear combination]] of <math>g_1, \ldots, g_n</math>; that is, there exist scalars <math>s_1, \ldots, s_n</math> such that <math>f = s_1 g_1 + \cdots + s_n g_n</math>; #<math>\bigcap_{i=1}^{n} \ker g_i \subseteq \ker f</math>; #there exists a real number <math>r \geq 0</math> such that <math>|f(x)| \leq r \max_{1 \leq i \leq n} \left|g_i(x)\right|</math> for all <math>x \in X.</math> }} If {{mvar|f}} is a non-trivial linear functional on {{mvar|X}} with kernel {{mvar|N}}, <math>x \in X</math> satisfies <math>f(x) = 1,</math> and {{mvar|U}} is a [[Balanced set|balanced]] subset of {{mvar|X}}, then <math>N \cap (x + U) = \varnothing</math> if and only if <math>|f(u)| < 1</math> for all <math>u \in U.</math>{{sfn |Narici|Beckenstein|2011|p=128}} === Hahn–Banach theorem === {{Main|Hahn–Banach theorem}} Any (algebraic) linear functional on a [[vector subspace]] can be extended to the whole space; for example, the evaluation functionals described above can be extended to the vector space of polynomials on all of <math>\R.</math> However, this extension cannot always be done while keeping the linear functional continuous. The Hahn–Banach family of theorems gives conditions under which this extension can be done. For example, {{math theorem|name=Hahn–Banach dominated extension theorem{{sfn|Narici|Beckenstein|2011|pp=177-220}}{{harv|Rudin|1991|loc=Th.
3.2}}|math_statement= If <math>p : X \to \R</math> is a [[sublinear function]], and <math>f : M \to \R</math> is a [[linear functional]] on a [[linear subspace]] <math>M \subseteq X</math> which is dominated by {{mvar|p}} on {{mvar|M}}, then there exists a linear extension <math>F : X \to \R</math> of {{mvar|f}} to the whole space {{mvar|X}} that is dominated by {{mvar|p}}, i.e., there exists a linear functional {{mvar|F}} such that <math display="block">F(m) = f(m)</math> for all <math>m \in M,</math> and <math display="block">|F(x)| \leq p(x)</math> for all <math>x \in X.</math> }} === Equicontinuity of families of linear functionals === Let {{mvar|X}} be a [[topological vector space]] (TVS) with [[continuous dual space]] <math>X'.</math> For any subset {{math|''H''}} of <math>X',</math> the following are equivalent:{{sfn|Narici|Beckenstein|2011|pp=225-273}} # {{math|''H''}} is [[Equicontinuity|equicontinuous]]; # {{math|''H''}} is contained in the [[Polar set|polar]] of some neighborhood of <math>0</math> in {{mvar|X}}; # the [[Polar set|(pre)polar]] of {{math|''H''}} is a neighborhood of <math>0</math> in {{mvar|X}}. If {{math|''H''}} is an equicontinuous subset of <math>X'</math> then the following sets are also equicontinuous: the [[Weak-* topology|weak-*]] closure, the [[Balanced set|balanced hull]], the [[convex hull]], and the [[Absolutely convex set|convex balanced hull]].{{sfn|Narici|Beckenstein|2011|pp=225-273}} Moreover, [[Alaoglu's theorem]] implies that the weak-* closure of an equicontinuous subset of <math>X'</math> is weak-* compact (and thus that every equicontinuous subset is weak-* relatively compact).{{sfn|Schaefer|Wolff|1999|loc=Corollary 4.3}}{{sfn|Narici|Beckenstein|2011|pp=225-273}} == See also == * {{annotated link|Discontinuous linear map}} * {{annotated link|Locally convex topological vector space}} * {{annotated link|Positive linear functional}} * {{annotated link|Multilinear form}} * {{annotated link|Topological vector space}} ==Notes==
=== Footnotes ===
{{reflist|group=nb}}

=== Proofs ===
{{reflist|group=proof}}

== References ==
{{reflist}}

== Bibliography ==

* {{Citation|last=Axler|first=Sheldon|title=Linear Algebra Done Right|publication-date=2015|series=[[Undergraduate Texts in Mathematics]]|edition=3rd|publisher=[[Springer Science+Business Media|Springer]]|isbn=978-3-319-11079-0|author-link=Sheldon Axler}}
* {{citation|last1=Bishop|first1=Richard|author1-link=Richard L. Bishop|first2=Samuel|last2=Goldberg|year=1980|title=Tensor Analysis on Manifolds|publisher=Dover Publications|chapter=Chapter 4|isbn=0-486-64039-6|url=https://archive.org/details/tensoranalysison00bish}}
* {{Conway A Course in Functional Analysis}} <!--{{sfn|Conway|1990|p=}}-->
* {{cite book|last=Dunford|first=Nelson|title=Linear operators|publisher=Interscience Publishers|location=New York|year=1988|isbn=0-471-60848-3|oclc=18412261}} <!--{{sfn|Dunford|1988|p=}} -->
* {{citation|last=Halmos|first=Paul Richard|title=Finite-Dimensional Vector Spaces|year=1974|series=[[Undergraduate Texts in Mathematics]]|edition=2nd|publisher=[[Springer Science+Business Media|Springer]]|isbn=0-387-90093-4|author-link=Paul Halmos}}
* {{Citation|last=Katznelson|first=Yitzhak|title=A (Terse) Introduction to Linear Algebra|publication-date=2008|publisher=[[American Mathematical Society]]|isbn=978-0-8218-4419-9|last2=Katznelson|first2=Yonatan R.|author-link=Yitzhak Katznelson}}
* {{citation|last=Lax|first=Peter|author-link=Peter Lax|title=Linear algebra|year=1996|publisher=Wiley-Interscience|isbn=978-0-471-11111-5}}
* {{citation|last1=Misner|first1=Charles W.|author1-link=Charles W. Misner|first2=Kip S.|last2=Thorne|author2-link=Kip Thorne|first3=John A.|last3=Wheeler|author3-link=John Archibald Wheeler|title=Gravitation|publisher=W. H. Freeman|year=1973|isbn=0-7167-0344-0}}
* {{Narici Beckenstein Topological Vector Spaces|edition=2}} <!--{{sfn|Narici|Beckenstein|2011|p=}}-->
* {{Rudin Walter Functional Analysis|edition=2}} <!--{{sfn|Rudin|1991|p=}}-->
* {{Schaefer Wolff Topological Vector Spaces|edition=2}} <!--{{sfn|Schaefer|1999|p=}}-->
* {{citation|last=Schutz|first=Bernard|author1-link=Bernard F. Schutz|year=1985|title=A first course in general relativity|publisher=Cambridge University Press|location=Cambridge, UK|chapter=Chapter 3|isbn=0-521-27703-5}}
* {{Trèves François Topological vector spaces, distributions and kernels}} <!--{{sfn|Trèves|2006|p=}}-->
* {{Citation|last=Tu|first=Loring W.|title=An Introduction to Manifolds|publication-date=2011|series=Universitext|edition=2nd|publisher=[[Springer Science+Business Media|Springer]]|author-link=Loring W. Tu}}
* {{Wilansky Modern Methods in Topological Vector Spaces}} <!--{{sfn|Wilansky|2013|p=}}-->

{{Functional Analysis}}
{{TopologicalVectorSpaces}}
{{Authority control}}

[[Category:Functional analysis]]
[[Category:Linear algebra]]
[[Category:Linear operators]]
[[Category:Linear functionals]]