Tensor product
{{Short description|Mathematical operation on vector spaces}} {{For|generalizations of this concept|Tensor product of modules|Tensor product (disambiguation)}} In [[mathematics]], the '''tensor product''' <math>V \otimes W</math> of two [[vector space]]s <math>V</math> and <math>W</math> (over the same [[Field (mathematics)|field]]) is a vector space to which is associated a [[bilinear map]] <math>V\times W \rightarrow V\otimes W</math> that maps a pair <math>(v,w),\ v\in V, w\in W</math> to an element of <math>V \otimes W</math> denoted {{tmath|1= v \otimes w }}. An element of the form <math>v \otimes w</math> is called the '''tensor product''' of <math>v</math> and <math>w</math>. An element of <math>V \otimes W</math> is a [[tensor]], and the tensor product of two vectors is sometimes called an ''elementary tensor'' or a ''decomposable tensor''. The elementary tensors [[linear span|span]] <math>V \otimes W</math> in the sense that every element of <math>V \otimes W</math> is a sum of elementary tensors. If [[basis (linear algebra)|bases]] are given for <math>V</math> and <math>W</math>, a basis of <math>V \otimes W</math> is formed by all tensor products of a basis element of <math>V</math> and a basis element of <math>W</math>. The tensor product of two vector spaces captures the properties of all bilinear maps in the sense that a bilinear map from <math>V\times W</math> into another vector space <math>Z</math> factors uniquely through a [[linear map]] <math>V\otimes W\to Z</math> (see the section below titled 'Universal property'), i.e. the bilinear map is associated to a unique linear map from the tensor product <math>V \otimes W</math> to <math>Z</math>. Tensor products are used in many application areas, including physics and engineering. For example, in [[general relativity]], the [[gravitational field]] is described through the [[metric tensor]], which is a [[tensor field]] with one tensor at each point of the [[space-time]] [[manifold]], and each belonging to the tensor product of the [[cotangent space]] at the point with itself. == Definitions and constructions == The ''tensor product'' of two vector spaces is a vector space that is defined [[up to]] an [[isomorphism]]. There are several equivalent ways to define it. Most consist of defining explicitly a vector space that is called a tensor product, and, generally, the equivalence proof results almost immediately from the basic properties of the vector spaces that are so defined. The tensor product can also be defined through a [[universal property]]; see {{slink||Universal_property}}, below. As for every universal property, all [[object (category theory)|objects]] that satisfy the property are isomorphic through a unique isomorphism that is compatible with the universal property. When this definition is used, the other definitions may be viewed as constructions of objects satisfying the universal property and as proofs that there are objects satisfying the universal property, that is that tensor products exist. === From bases === Let {{mvar|V}} and {{mvar|W}} be two [[vector space]]s over a [[field (mathematics)|field]] {{mvar|F}}, with respective [[basis (linear algebra)|bases]] <math>B_V</math> and {{tmath|1= B_W }}. The ''tensor product'' <math>V \otimes W</math> of {{mvar|V}} and {{mvar|W}} is a vector space that has as a basis the set of all <math>v\otimes w</math> with <math>v\in B_V</math> and {{tmath|1= w \in B_W }}. 
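For example, if <math>V</math> has basis <math>B_V = \{e_1, e_2\}</math> and <math>W</math> has basis <math>B_W = \{f_1, f_2, f_3\}</math>, then <math>V \otimes W</math> has the six-element basis <math>\{e_i \otimes f_j \mid 1 \le i \le 2,\ 1 \le j \le 3\}</math>, and its dimension is the product <math>2 \times 3 = 6</math>.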
This definition can be formalized in the following way (this formalization is rarely used in practice, as the preceding informal definition is generally sufficient): <math>V \otimes W</math> is the set of the [[function (mathematics)|functions]] from the [[Cartesian product]] <math>B_V \times B_W</math> to {{mvar|F}} that have a finite number of nonzero values. The [[pointwise operation]]s make <math>V \otimes W</math> a vector space. The function that maps <math>(v,w)</math> to {{math|1}} and the other elements of <math>B_V \times B_W</math> to {{math|0}} is denoted {{tmath|1= v\otimes w }}. The set <math>\{v\otimes w\mid v\in B_V, w\in B_W\}</math> is then straightforwardly a basis of {{tmath|1= V \otimes W }}, which is called the ''tensor product'' of the bases <math>B_V</math> and {{tmath|1= B_W }}. We can equivalently define <math>V \otimes W</math> to be the set of [[Bilinear form|bilinear forms]] on <math>V \times W</math> that are nonzero at only a finite number of elements of {{tmath|1= B_V \times B_W }}. To see this, given <math>(x,y)\in V \times W</math> and a bilinear form {{tmath|1= B : V \times W \to F }}, we can decompose <math>x</math> and <math>y</math> in the bases <math>B_V</math> and <math>B_W</math> as: <math display="block">x=\sum_{v\in B_V} x_v\, v \quad \text{and}\quad y=\sum_{w\in B_W} y_w\, w, </math> where only a finite number of <math>x_v</math>'s and <math>y_w</math>'s are nonzero, and find by the bilinearity of <math>B</math> that: <math display="block">B(x, y) =\sum_{v\in B_V}\sum_{w\in B_W} x_v y_w\, B(v, w)</math> Hence, we see that the value of <math>B</math> for any <math>(x,y)\in V \times W</math> is uniquely and totally determined by the values that it takes on {{tmath|1= B_V \times B_W }}. This lets us extend the maps <math>v\otimes w</math> defined on <math>B_V \times B_W</math> as before into bilinear maps <math>v \otimes w : V\times W \to F</math>, by letting: <math display="block">(v \otimes w)(x, y) :=\sum_{v'\in B_V}\sum_{w'\in B_W} x_{v'} y_{w'}\, (v \otimes w)(v', w') = x_v \, y_w .</math> Then we can express any bilinear form <math>B</math> as a (potentially infinite) formal linear combination of the <math>v\otimes w</math> maps according to: <math display="block">B = \sum_{v\in B_V}\sum_{w\in B_W} B(v, w)(v \otimes w)</math> making these maps similar to a [[Schauder basis]] for the vector space <math>\text{Hom}(V, W; F)</math> of all bilinear forms on {{tmath|1= V \times W }}. To instead have it be a proper Hamel [[Basis (linear algebra)|basis]], it only remains to add the requirement that <math>B</math> is nonzero at only a finite number of elements of {{tmath|1= B_V \times B_W }}, and consider the subspace of such maps instead. In either construction, the ''tensor product of two vectors'' is defined from their decomposition on the bases. More precisely, taking the basis decompositions of <math>x\in V </math> and <math>y \in W</math> as before: <math display="block">\begin{align} x\otimes y&=\biggl(\sum_{v\in B_V} x_v\, v\biggr) \otimes \biggl(\sum_{w\in B_W} y_w\, w\biggr)\\[5mu] &=\sum_{v\in B_V}\sum_{w\in B_W} x_v y_w\, v\otimes w. \end{align}</math> This definition is quite clearly derived from the coefficients of <math>B(v, w)</math> in the expansion by bilinearity of <math>B(x, y)</math> using the bases <math>B_V</math> and {{tmath|1= B_W }}, as done above.
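For finite-dimensional spaces, the coefficients <math>x_v y_w</math> of <math>x \otimes y</math> can be collected in an array indexed by <math>B_V \times B_W</math>, which is the outer product of the coefficient arrays of <math>x</math> and <math>y</math>. A minimal NumPy sketch of this computation (the bases are the standard ones and the sample vectors are arbitrary choices of the example):

<syntaxhighlight lang="python">
import numpy as np

# Coefficients of x and y with respect to bases B_V (size 2) and B_W (size 3).
x = np.array([2.0, -1.0])        # x = 2*e1 - e2
y = np.array([1.0, 0.0, 3.0])    # y = f1 + 3*f3

# Coefficients of x ⊗ y in the basis {e_i ⊗ f_j}: the (i, j) entry is x_i * y_j.
xy = np.outer(x, y)              # shape (2, 3)

# Bilinearity at the level of coefficient arrays: (x1 + x2) ⊗ y = x1 ⊗ y + x2 ⊗ y.
x1, x2 = np.array([1.0, 1.0]), np.array([1.0, -2.0])
assert np.allclose(np.outer(x1 + x2, y), np.outer(x1, y) + np.outer(x2, y))
</syntaxhighlight>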
It is then straightforward to verify that with this definition, the map <math>{\otimes} : (x,y)\mapsto x\otimes y</math> is a bilinear map from <math>V\times W</math> to <math>V\otimes W</math> satisfying the [[universal property]] that any construction of the tensor product satisfies (see below). If arranged into a rectangular array, the [[coordinate vector]] of <math>x\otimes y</math> is the [[outer product]] of the coordinate vectors of <math>x</math> and {{tmath|1= y }}. Therefore, the tensor product is a generalization of the outer product, that is, an abstraction of it beyond coordinate vectors. A limitation of this definition of the tensor product is that, if one changes bases, a different tensor product is defined. However, the decomposition on one basis of the elements of the other basis defines a [[canonical isomorphism]] between the two tensor products of vector spaces, which allows identifying them. Also, contrarily to the two following alternative definitions, this definition cannot be extended into a definition of the [[tensor product of modules]] over a [[ring (mathematics)|ring]]. === As a quotient space === A construction of the tensor product that is basis independent can be obtained in the following way. Let {{mvar|V}} and {{mvar|W}} be two [[vector space]]s over a [[field (mathematics)|field {{mvar|F}}]]. One considers first a vector space {{mvar|L}} that has the [[Cartesian product]] <math>V\times W</math> as a [[basis (linear algebra)|basis]]. That is, the basis elements of {{mvar|L}} are the [[ordered pair|pairs]] <math>(v,w)</math> with <math>v\in V</math> and {{tmath|1= w\in W }}. To get such a vector space, one can define it as the vector space of the [[function (mathematics)|functions]] <math>V\times W \to F</math> that have a finite number of nonzero values and identifying <math>(v,w)</math> with the function that takes the value {{math|1}} on <math>(v,w)</math> and {{math|0}} otherwise. Let {{mvar|R}} be the [[linear subspace]] of {{mvar|L}} that is spanned by the relations that the tensor product must satisfy. More precisely, {{mvar|R}} is [[Linear span|spanned by]] the elements of one of the forms: : <math>\begin{align} (v_1 + v_2, w)&-(v_1, w)-(v_2, w),\\ (v, w_1+w_2)&-(v, w_1)-(v, w_2),\\ (sv,w)&-s(v,w),\\ (v,sw)&-s(v,w), \end{align}</math> where {{tmath|1= v, v_1, v_2\in V }}, <math>w, w_1, w_2 \in W</math> and {{tmath|1= s\in F }}. Then, the tensor product is defined as the [[quotient space (linear algebra)|quotient space]]: : <math>V\otimes W=L/R,</math> and the image of <math>(v,w)</math> in this quotient is denoted {{tmath|1= v\otimes w }}. It is straightforward to prove that the result of this construction satisfies the [[universal property]] considered below. (A very similar construction can be used to define the [[tensor product of modules]].) === Universal property === [[File:Another universal tensor prod.svg|class=skin-invert-image|right|thumb|200px|Universal property of tensor product: if {{math|''h''}} is bilinear, there is a unique linear map {{math|{{overset|lh=0.7|~|''h''}}}} that makes the diagram [[commutative diagram|commutative]] (that is, {{math|1=''h'' = {{overset|lh=0.7|~|''h''}} ∘ ''φ''}}).]] In this section, the [[universal property]] satisfied by the tensor product is described. As for every universal property, two objects that satisfy the property are related by a unique [[isomorphism]]. It follows that this is a (non-constructive) way to define the tensor product of two vector spaces. 
In this context, the preceding constructions of tensor products may be viewed as proofs of existence of the tensor product so defined. A consequence of this approach is that every property of the tensor product can be deduced from the universal property, and that, in practice, one may forget the method that has been used to prove its existence. The "universal-property definition" of the tensor product of two vector spaces is the following (recall that a [[bilinear map]] is a function that is ''separately'' [[linear map|linear]] in each of its arguments): :The ''tensor product'' of two vector spaces {{mvar|V}} and {{mvar|W}} is a vector space denoted as {{tmath|1= V\otimes W }}, together with a bilinear map <math>{\otimes} : (v,w)\mapsto v\otimes w</math> from <math>V\times W</math> to {{tmath|1= V\otimes W }}, such that, for every bilinear map {{tmath|1= h : V\times W\to Z }}, there is a ''unique'' linear map {{tmath|1= \tilde h : V\otimes W\to Z }}, such that <math>h=\tilde h \circ {\otimes}</math> (that is, <math>h(v, w)= \tilde h(v\otimes w)</math> for every <math>v\in V</math> and {{tmath|1= w\in W }}).<!-- Commenting out considerations that are misplaced here; kept for a possible use elsewhere. This characterization can simplify proofs about the tensor product. For example, the tensor product is symmetric, meaning there is a [[canonical isomorphism]]: <math display="block">V \otimes W \cong W \otimes V.</math> To construct, say, a map from <math>V \otimes W</math> to {{tmath|1= W \otimes V }}, it suffices to give a bilinear map <math>h: V \times W \to W \otimes V</math> that maps <math>(v,w)</math> to {{tmath|1= w \otimes v }}. Then the universal property of <math>V \otimes W</math> means <math>h</math> factors into a map {{tmath|1= \tilde{h}:V \otimes W \to W \otimes V }}. A map <math>\tilde{g}:W \otimes V \to V \otimes W</math> in the opposite direction is similarly defined, and one checks that the two linear maps <math>\tilde{h}</math> and <math>\tilde{g}</math> are [[Inverse function|inverse]] to one another by again using their universal properties. The universal property is extremely useful in showing that a map to a tensor product is injective. For example, suppose we want to show that <math>\R \otimes \R</math> is isomorphic to {{tmath|1= \R }}. Since all simple tensors are of the form {{tmath|1= a \otimes b = (ab) \otimes 1 }}, and hence all elements of the tensor product are of the form <math>x \otimes 1</math> by additivity in the first coordinate, we have a natural candidate for an isomorphism <math>\R \rightarrow \R \otimes \R</math> given by mapping <math>x</math> to {{tmath|1= x \otimes 1 }}, and this map is trivially surjective. Showing injectivity directly would involve somehow showing that there are no non-trivial relationships between <math>x \otimes 1</math> and <math>y \otimes 1</math> for {{tmath|1= x \neq y }}, which seems daunting. However, we know that there is a bilinear map <math>\R \times \R \rightarrow \R</math> given by multiplying the coordinates together, and the universal property of the tensor product then furnishes a map of vector spaces <math>\R \otimes \R \rightarrow \R</math> which maps <math>x \otimes 1</math> to {{tmath|1= x }}, and hence is an inverse of the previously constructed homomorphism, immediately implying the desired result. A priori, it is not even clear that this inverse map is well-defined, but the universal property and associated bilinear map together imply this is the case. 
Similar reasoning can be used to show that the tensor product is associative, that is, there are natural isomorphisms <math display="block">V_1 \otimes \left(V_2\otimes V_3\right) \cong \left(V_1\otimes V_2\right) \otimes V_3.</math> Therefore, it is customary to omit the parentheses and write {{tmath|1= V_1 \otimes V_2 \otimes V_3 }}, so the ''ijk-th'' component of <math> \mathbf{u} \otimes \mathbf{v} \otimes \mathbf{w} </math> is <math display="block">(\mathbf{u} \otimes \mathbf{v}\otimes \mathbf{w})_{ijk} = u_i v_j w_k,</math> similar to the first [[#Tensors in finite dimensions, and the outer product|example]] on this page. The category of vector spaces with tensor product is an example of a [[symmetric monoidal category]]. --> === Linearly disjoint === Like the universal property above, the following characterization may also be used to determine whether or not a given vector space and given bilinear map form a tensor product.{{sfn|Trèves|2006|pp=403-404}} {{math theorem|math_statement= Let {{tmath|1= X, Y }}, and <math>Z</math> be complex vector spaces and let <math>T : X \times Y \to Z</math> be a bilinear map. Then <math>(Z, T)</math> is a tensor product of <math>X</math> and <math>Y</math> if and only if{{sfn|Trèves|2006|pp=403-404}} the image of <math>T</math> spans all of <math>Z</math> (that is, {{tmath|1= \operatorname{span} \; T(X \times Y) = Z }}), and also <math>X</math> and <math>Y</math> are {{em|<math>T</math>-linearly disjoint}}, which by definition means that for all positive integers <math>n</math> and all elements <math>x_1, \ldots, x_n \in X</math> and <math>y_1, \ldots, y_n \in Y</math> such that {{tmath|1= \sum_{i=1}^n T\left(x_i, y_i\right) = 0 }}, # if all <math>x_1, \ldots, x_n</math> are [[linearly independent]] then all <math>y_i</math> are {{tmath|1= 0 }}, and # if all <math>y_1, \ldots, y_n</math> are linearly independent then all <math>x_i</math> are {{tmath|1= 0 }}. Equivalently, <math>X</math> and <math>Y</math> are <math>T</math>-linearly disjoint if and only if for all linearly independent sequences <math>x_1, \ldots, x_m</math> in <math>X</math> and all linearly independent sequences <math>y_1, \ldots, y_n</math> in {{tmath|1= Y }}, the vectors <math>\left\{T\left(x_i, y_j\right) : 1 \leq i \leq m, 1 \leq j \leq n\right\}</math> are linearly independent. }} For example, it follows immediately that if {{tmath|1=X=\C^m}} and {{tmath|1=Y=\C^n}}, where <math>m</math> and <math>n</math> are positive integers, then one may set <math>Z = \Complex^{mn}</math> and define the bilinear map as <math display=block>\begin{align} T : \Complex^m \times \Complex^n &\to \Complex^{mn}\\ (x, y) = ((x_1, \ldots, x_m), (y_1, \ldots, y_n)) &\mapsto (x_i y_j)_{\stackrel{i=1,\ldots,m}{j=1,\ldots,n}}\end{align}</math> to form the tensor product of <math>X </math> and {{tmath|1= Y }}.{{sfn|Trèves|2006|pp=407}} Often, this map <math>T</math> is denoted by <math>\,\otimes\,</math> so that <math>x \otimes y = T(x, y).</math> As another example, suppose that <math>\Complex^S</math> is the vector space of all complex-valued functions on a set <math>S</math> with addition and scalar multiplication defined pointwise (meaning that <math>f + g</math> is the map <math>s \mapsto f(s) + g(s)</math> and <math>c f</math> is the map {{tmath|1= s \mapsto c f(s) }}). 
Let <math>S</math> and <math>T</math> be any sets and for any <math>f \in \Complex^S</math> and {{tmath|1= g \in \Complex^T }}, let <math>f \otimes g \in \Complex^{S \times T}</math> denote the function defined by {{tmath|1= (s, t) \mapsto f(s) g(t) }}. If <math>X \subseteq \Complex^S</math> and <math>Y \subseteq \Complex^T</math> are vector subspaces then the vector subspace <math>Z := \operatorname{span} \left\{f \otimes g : f \in X, g \in Y\right\}</math> of <math>\Complex^{S \times T}</math> together with the bilinear map: <math display=block>\begin{alignat}{4} \;&& X \times Y &&\;\to \;& Z \\[0.3ex] && (f, g) &&\;\mapsto\;& f \otimes g \\ \end{alignat}</math> form a tensor product of <math>X</math> and {{tmath|1= Y }}.{{sfn|Trèves|2006|pp=407}} == Properties == === Dimension === If {{math|''V''}} and {{math|''W''}} are vector spaces of finite [[dimension (linear algebra)|dimension]], then <math>V\otimes W</math> is finite-dimensional, and its dimension is the product of the dimensions of {{math|''V''}} and {{math|''W''}}. This results from the fact that a basis of <math>V\otimes W</math> is formed by taking all tensor products of a basis element of {{math|''V''}} and a basis element of {{math|''W''}}. === Associativity === The tensor product is [[associative]] in the sense that, given three vector spaces {{tmath|1= U, V, W }}, there is a canonical isomorphism: : <math>(U\otimes V)\otimes W\cong U\otimes (V\otimes W),</math> that maps <math>(u\otimes v)\otimes w</math> to {{tmath|1= u\otimes (v \otimes w) }}. This allows omitting parentheses in the tensor product of more than two vector spaces or vectors. === Commutativity as vector space operation === The tensor product of two vector spaces <math>V</math> and <math>W</math> is [[commutative]] in the sense that there is a canonical isomorphism: : <math> V \otimes W \cong W\otimes V,</math> that maps <math>v \otimes w</math> to {{tmath|1= w \otimes v }}. On the other hand, even when {{tmath|1= V=W }}, the tensor product of vectors is not commutative; that is, {{tmath|1= v\otimes w \neq w \otimes v }} in general. {{anchor|Tensor powers and braiding}} The map <math>x\otimes y \mapsto y\otimes x</math> from <math>V\otimes V</math> to itself induces a linear [[automorphism]] that is called a '''{{vanchor|braiding map}}'''. More generally and as usual (see [[tensor algebra]]), let <math>V^{\otimes n}</math> denote the tensor product of {{mvar|n}} copies of the vector space {{mvar|V}}. For every [[permutation]] {{mvar|s}} of the first {{mvar|n}} positive integers, the map: : <math>x_1\otimes \cdots \otimes x_n \mapsto x_{s(1)}\otimes \cdots \otimes x_{s(n)}</math> induces a linear automorphism of {{tmath|1= V^{\otimes n} }}, which is called a braiding map. == Tensor product of linear maps == {{redirect|Tensor product of linear maps|the generalization for modules|Tensor product of modules#Tensor product of linear maps and a change of base ring}} Given a linear map {{tmath|1= f : U\to V }}, and a vector space {{mvar|W}}, the ''tensor product'': : <math>f\otimes W : U\otimes W\to V\otimes W</math> is the unique linear map such that: : <math>(f\otimes W)(u\otimes w)=f(u)\otimes w.</math> The tensor product <math>W\otimes f</math> is defined similarly.
Given two linear maps <math>f : U\to V</math> and {{tmath|1= g : W\to Z }}, their tensor product: : <math>f\otimes g : U\otimes W\to V\otimes Z</math> is the unique linear map that satisfies: : <math>(f\otimes g)(u\otimes w)=f(u)\otimes g(w).</math> One has: : <math>f\otimes g= (f\otimes Z)\circ (U\otimes g) = (V\otimes g)\circ (f\otimes W).</math> In terms of [[category theory]], this means that the tensor product is a [[bifunctor]] from the [[category (mathematics)|category]] of vector spaces to itself.<ref>{{cite book| last1=Hazewinkel|first1=Michiel|last2=Gubareni|first2=Nadezhda Mikhaĭlovna| last3=Gubareni|first3=Nadiya|last4=Kirichenko|first4=Vladimir V.|title=Algebras, rings and modules|page=100| publisher=Springer|year=2004|isbn=978-1-4020-2690-4}}</ref> If {{mvar|f}} and {{mvar|g}} are both [[injective]] or [[surjective]], then the same is true for all above defined linear maps. In particular, the tensor product with a vector space is an [[exact functor]]; this means that every [[exact sequence]] is mapped to an exact sequence ([[tensor product of modules|tensor products of modules]] do not transform injections into injections, but they are [[right exact functor]]s). By choosing bases of all vector spaces involved, the linear maps {{math|''f''}} and {{math|''g''}} can be represented by [[matrix (mathematics)|matrices]]. Then, depending on how the tensor <math>v \otimes w</math> is vectorized, the matrix describing the tensor product <math>f \otimes g</math> is the [[Kronecker product]] of the two matrices. For example, if {{math|''U''}}, {{math|''V''}}, {{math|''W''}}, and {{math|''Z''}} above are all two-dimensional and bases have been fixed for all of them, and {{math|''f''}} and {{math|''g''}} are given by the matrices: <math display="block">A=\begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \\ \end{bmatrix}, \qquad B=\begin{bmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \\ \end{bmatrix},</math> respectively, then the tensor product of these two matrices is: <math> \begin{align} \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \\ \end{bmatrix} \otimes \begin{bmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \\ \end{bmatrix} &= \begin{bmatrix} a_{1,1} \begin{bmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \\ \end{bmatrix} & a_{1,2} \begin{bmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \\ \end{bmatrix} \\[3pt] a_{2,1} \begin{bmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \\ \end{bmatrix} & a_{2,2} \begin{bmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \\ \end{bmatrix} \\ \end{bmatrix} \\ &= \begin{bmatrix} a_{1,1} b_{1,1} & a_{1,1} b_{1,2} & a_{1,2} b_{1,1} & a_{1,2} b_{1,2} \\ a_{1,1} b_{2,1} & a_{1,1} b_{2,2} & a_{1,2} b_{2,1} & a_{1,2} b_{2,2} \\ a_{2,1} b_{1,1} & a_{2,1} b_{1,2} & a_{2,2} b_{1,1} & a_{2,2} b_{1,2} \\ a_{2,1} b_{2,1} & a_{2,1} b_{2,2} & a_{2,2} b_{2,1} & a_{2,2} b_{2,2} \\ \end{bmatrix}. \end{align} </math> The resultant rank is at most 4, and thus the resultant dimension is 4. {{em|rank}} here denotes the [[tensor rank]] i.e. the number of requisite indices (while the [[matrix rank]] counts the number of degrees of freedom in the resulting array). The trace is multiplicative with respect to the Kronecker product: {{tmath|1= \operatorname{Tr}(A \otimes B) = \operatorname{Tr} A \times \operatorname{Tr} B }}. A [[dyadic product]] is the special case of the tensor product between two vectors of the same dimension.
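The Kronecker-product description can be checked numerically. A short NumPy sketch (the matrices and vectors are arbitrary, and the row-major vectorization implicit in <code>numpy.kron</code> is an assumption of the example):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # matrix of f in fixed bases
B = np.array([[0.0, 1.0], [5.0, -1.0]])  # matrix of g in fixed bases

K = np.kron(A, B)                        # 4x4 matrix representing f ⊗ g

u = np.array([1.0, -2.0])
w = np.array([3.0, 0.5])

# With this vectorization, u ⊗ w is np.kron(u, w), and (f ⊗ g)(u ⊗ w) = f(u) ⊗ g(w).
assert np.allclose(K @ np.kron(u, w), np.kron(A @ u, B @ w))

# The trace is multiplicative: Tr(A ⊗ B) = Tr(A) * Tr(B).
assert np.isclose(np.trace(K), np.trace(A) * np.trace(B))
</syntaxhighlight>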
== General tensors == {{See also|Tensor}} For non-negative integers {{math|''r''}} and {{math|''s''}} a type <math>(r, s)</math> [[tensor]] on a vector space {{math|''V''}} is an element of: <math display="block">T^r_s(V) = \underbrace{ V \otimes \cdots \otimes V}_r \otimes \underbrace{ V^* \otimes \cdots \otimes V^*}_s = V^{\otimes r} \otimes \left(V^*\right)^{\otimes s}.</math> Here <math>V^*</math> is the [[dual vector space]] (which consists of all [[linear map]]s {{math|''f''}} from {{math|''V''}} to the ground field {{math|''K''}}). There is a product map, called the {{em|(tensor) product of tensors}}:{{refn|{{harvp|Bourbaki|1989|p=244}} defines the usage "tensor product of ''x'' and ''y''", elements of the respective modules.}} <math display="block">T^r_s (V) \otimes_K T^{r'}_{s'} (V) \to T^{r+r'}_{s+s'}(V).</math> It is defined by grouping all occurring "factors" {{math|''V''}} together: writing <math>v_i</math> for an element of {{math|''V''}} and <math>f_i</math> for an element of the dual space: <math display="block">(v_1 \otimes f_1) \otimes (v'_1) = v_1 \otimes v'_1 \otimes f_1.</math> If {{math|''V''}} is finite dimensional, then picking a basis of {{math|''V''}} and the corresponding [[dual basis]] of <math>V^*</math> naturally induces a basis of <math>T_s^r(V)</math> (this basis is described in the [[Kronecker product#Abstract properties|article on Kronecker products]]). In terms of these bases, the [[Coordinate vector|components]] of a (tensor) product of two (or more) [[tensor]]s can be computed. For example, if {{math|''F''}} and {{math|''G''}} are two [[covariance and contravariance of vectors|covariant]] tensors of orders {{math|''m''}} and {{math|''n''}} respectively (i.e. <math>F \in T_m^0</math> and {{tmath|1= G \in T_n^0 }}), then the components of their tensor product are given by:<ref>Analogous formulas also hold for [[covariance and contravariance of vectors|contravariant]] tensors, as well as tensors of mixed variance. Although in many cases such as when there is an [[inner product]] defined, the distinction is irrelevant.</ref> <math display="block">(F \otimes G)_{i_1 i_2 \cdots i_{m+n}} = F_{i_1 i_2 \cdots i_m} G_{i_{m+1} i_{m+2} i_{m+3} \cdots i_{m+n}}.</math> Thus, the components of the tensor product of two tensors are the ordinary product of the components of each tensor. Another example: let {{math|'''U'''}} be a tensor of type {{math|(1, 1)}} with components {{tmath|1= U^{\alpha}_{\beta} }}, and let {{math|'''V'''}} be a tensor of type <math>(1, 0)</math> with components {{tmath|1= V^{\gamma} }}. Then: <math display="block">\left(U \otimes V\right)^\alpha {}_\beta {}^\gamma = U^\alpha {}_\beta V^\gamma</math> and: <math display="block">(V \otimes U)^{\mu\nu} {}_\sigma = V^\mu U^\nu {}_\sigma.</math> Tensors equipped with their product operation form an [[algebra over a field|algebra]], called the [[tensor algebra]]. === Evaluation map and tensor contraction === For tensors of type {{math|(1, 1)}} there is a canonical '''evaluation map:''' <math display="block">V \otimes V^* \to K</math> defined by its action on pure tensors: <math display="block">v \otimes f \mapsto f(v).</math> More generally, for tensors of type {{tmath|1= (r, s) }}, with {{math|''r'', ''s'' > 0}}, there is a map, called [[tensor contraction]]: <math display="block">T^r_s (V) \to T^{r-1}_{s-1}(V).</math> (The copies of <math>V</math> and <math>V^*</math> on which this map is to be applied must be specified.) 
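In components, the product of tensors multiplies component arrays over separate indices, while evaluation and contraction sum over a paired upper and lower index. A brief NumPy illustration with <code>numpy.einsum</code> (the tensors and index labels are arbitrary choices of the example):

<syntaxhighlight lang="python">
import numpy as np

# Components of two covariant tensors on a 3-dimensional space: F of order 2, G of order 1.
F = np.random.rand(3, 3)
G = np.random.rand(3)

# Components of F ⊗ G: (F ⊗ G)_{ijk} = F_{ij} G_k.
FG = np.einsum('ij,k->ijk', F, G)
assert np.allclose(FG, F[:, :, None] * G[None, None, :])

# For a type-(1, 1) tensor T^i_j, contraction pairs V with V*; in components it is the
# trace T^i_i, matching the evaluation map v ⊗ f ↦ f(v).
T = np.random.rand(3, 3)
assert np.isclose(np.einsum('ii->', T), np.trace(T))
</syntaxhighlight>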
On the other hand, if <math>V</math> is {{em|finite-dimensional}}, there is a canonical map in the other direction (called the '''coevaluation map'''): <math display="block">\begin{cases} K \to V \otimes V^* \\ \lambda \mapsto \sum_i \lambda v_i \otimes v^*_i \end{cases}</math> where <math>v_1, \ldots, v_n</math> is any basis of {{tmath|1= V }}, and <math>v_i^*</math> is its [[dual basis]]. This map does not depend on the choice of basis.<ref>{{Cite web| url= https://unapologetic.wordpress.com/2008/11/13/the-coevaluation-on-vector-spaces/|title=The Coevaluation on Vector Spaces|date=2008-11-13| website=The Unapologetic Mathematician|access-date=2017-01-26| url-status=live |archive-url =https://web.archive.org/web/20170202080439/https://unapologetic.wordpress.com/2008/11/13/the-coevaluation-on-vector-spaces/| archive-date =2017-02-02}}</ref> The interplay of evaluation and coevaluation can be used to characterize finite-dimensional vector spaces without referring to bases.<ref>See [[Compact closed category]].</ref> === Adjoint representation === The tensor product <math>T^r_s(V)</math> may be naturally viewed as a module for the [[Lie algebra]] <math>\mathrm{End}(V)</math> by means of the diagonal action: for simplicity let us assume {{tmath|1= r = s = 1 }}, then, for each {{tmath|1= u \in \mathrm{End}(V) }}, <math display="block">u(a \otimes b) = u(a) \otimes b - a \otimes u^*(b),</math> where <math>u^* \in \mathrm{End}\left(V^*\right)</math> is the [[transpose]] of {{math|''u''}}, that is, in terms of the obvious pairing on {{tmath|1= V \otimes V^* }}, <math display="block">\langle u(a), b \rangle = \langle a, u^*(b) \rangle.</math> There is a canonical isomorphism <math>T^1_1(V) \to \mathrm{End}(V)</math> given by: <math display="block">(a \otimes b)(x) = \langle x, b \rangle a.</math> Under this isomorphism, every {{math|''u''}} in <math>\mathrm{End}(V)</math> may be first viewed as an endomorphism of <math>T^1_1(V)</math> and then viewed as an endomorphism of {{tmath|1= \mathrm{End}(V) }}. In fact it is the [[Adjoint representation of a Lie algebra|adjoint representation]] {{math|ad(''u'')}} of {{tmath|1= \mathrm{End}(V) }}. == Linear maps as tensors == Given two finite dimensional vector spaces {{math|''U''}}, {{math|''V''}} over the same field {{math|''K''}}, denote the [[dual space]] of {{math|''U''}} as {{math|''U*''}}, and the {{math|''K''}}-vector space of all linear maps from {{math|''U''}} to {{math|''V''}} as {{math|Hom(''U'',''V'')}}. There is an isomorphism: <math display="block">U^* \otimes V \cong \mathrm{Hom}(U, V),</math> defined by an action of the pure tensor <math>f \otimes v \in U^*\otimes V</math> on an element of {{tmath|1= U }}, <math display="block">(f \otimes v)(u) = f(u) v.</math> Its "inverse" can be defined using a basis <math>\{u_i\}</math> and its dual basis <math>\{u^*_i\}</math> as in the section "[[#Evaluation map and tensor contraction|Evaluation map and tensor contraction]]" above: <math display="block">\begin{cases} \mathrm{Hom} (U,V) \to U^* \otimes V \\ F \mapsto \sum_i u^*_i \otimes F(u_i). \end{cases}</math> This result implies: <math display="block">\dim(U \otimes V) = \dim(U)\dim(V),</math> which automatically gives the important fact that <math>\{u_i\otimes v_j\}</math> forms a basis of <math>U \otimes V</math> where <math>\{u_i\}, \{v_j\}</math> are bases of {{math|''U''}} and {{math|''V''}}. 
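In coordinates, with the standard bases playing the role of <math>\{u_i\}</math> and its dual basis, the pure tensor <math>u_i^* \otimes F(u_i)</math> corresponds to the rank-one matrix <math>F(u_i)\,u_i^{\mathsf T}</math>, and summing over <math>i</math> recovers the matrix of <math>F</math>. A small NumPy sketch of this check (the matrix is an arbitrary example):

<syntaxhighlight lang="python">
import numpy as np

# A linear map F : U -> V given by its matrix M in the standard bases (dim U = 3, dim V = 2).
M = np.array([[1.0, 2.0, 0.0],
              [-1.0, 4.0, 3.0]])

n = M.shape[1]
E = np.eye(n)                    # columns are the standard basis u_1, ..., u_n of U

# u_i^* ⊗ F(u_i) acts as u ↦ u_i^*(u) F(u_i); its matrix is the outer product F(u_i) u_i^T.
reconstructed = sum(np.outer(M @ E[:, i], E[:, i]) for i in range(n))
assert np.allclose(reconstructed, M)
</syntaxhighlight>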
Furthermore, given three vector spaces {{math|''U''}}, {{math|''V''}}, {{math|''W''}} the tensor product is linked to the vector space of ''all'' linear maps, as follows: <math display="block">\mathrm{Hom} (U \otimes V, W) \cong \mathrm{Hom} (U, \mathrm{Hom}(V, W)).</math> This is an example of [[adjoint functor]]s: the tensor product is "left adjoint" to Hom. == Tensor products of modules over a ring == {{main|Tensor product of modules}} The tensor product of two [[module (mathematics)|modules]] {{math|''A''}} and {{math|''B''}} over a ''[[commutative ring|commutative]]'' [[ring (mathematics)|ring]] {{math|''R''}} is defined in exactly the same way as the tensor product of vector spaces over a field: <math display="block">A \otimes_R B := F (A \times B) / G ,</math> where now <math>F(A \times B)</math> is the [[Free module|free {{math|''R''}}-module]] generated by the cartesian product and {{math|''G''}} is the {{math|''R''}}-module generated by [[Tensor_product_of_modules#Balanced_product|these relations]]. More generally, the tensor product can be defined even if the ring is [[Noncommutative ring|non-commutative]]. In this case {{math|''A''}} has to be a right-{{math|''R''}}-module and {{math|''B''}} is a left-{{math|''R''}}-module, and instead of the last two relations above, the relation: <math display="block">(ar,b)\sim (a,rb)</math> is imposed. If {{math|''R''}} is non-commutative, this is no longer an {{math|''R''}}-module, but just an [[abelian group]]. The universal property also carries over, slightly modified: the map <math>\varphi : A \times B \to A \otimes_R B</math> defined by <math>(a, b) \mapsto a \otimes b</math> is a [[Tensor product of modules#Balanced product|middle linear map]] (referred to as "the canonical middle linear map"<ref> {{cite book|last=Hungerford|first=Thomas W.|title=Algebra| publisher=Springer|year=1974|isbn=0-387-90518-9}}</ref>); that is, it satisfies:<ref name=chen> {{citation|last=Chen|first=Jungkai Alfred|title=Advanced Algebra II|chapter=Tensor product|chapter-url=http://www.math.ntu.edu.tw/~jkchen/S04AA/S04AAL10.pdf|type=lecture notes|date=Spring 2004|place=National Taiwan University|url-status=live|archive-url=https://web.archive.org/web/20160304040639/http://www.math.ntu.edu.tw/~jkchen/S04AA/S04AAL10.pdf|archive-date=2016-03-04}}</ref> <math display="block">\begin{align} \varphi(a + a', b) &= \varphi(a, b) + \varphi(a', b) \\ \varphi(a, b + b') &= \varphi(a, b) + \varphi(a, b') \\ \varphi(ar, b) &= \varphi(a, rb) \end{align}</math> The first two properties make {{math|''φ''}} a bilinear map of the [[abelian group]] {{tmath|1= A \times B }}. For any middle linear map <math>\psi</math> of {{tmath|1= A \times B }}, a unique group homomorphism {{math|''f''}} of <math>A \otimes_R B</math> satisfies {{tmath|1= \psi = f \circ \varphi }}, and this property determines <math>\varphi</math> within group isomorphism. See the [[tensor product of modules|main article]] for details. === Tensor product of modules over a non-commutative ring === Let ''A'' be a right ''R''-module and ''B'' be a left ''R''-module. 
Then the tensor product of ''A'' and ''B'' is an abelian group defined by: <math display="block">A \otimes_R B := F (A \times B) / G</math> where <math>F (A \times B)</math> is a [[free abelian group]] over <math>A \times B</math> and G is the subgroup of <math>F (A \times B)</math> generated by relations: <math display="block">\begin{align} &\forall a, a_1, a_2 \in A, \forall b, b_1, b_2 \in B, \text{ for all } r \in R:\\ &(a_1,b) + (a_2,b) - (a_1 + a_2,b),\\ &(a,b_1) + (a,b_2) - (a,b_1+b_2),\\ &(ar,b) - (a,rb).\\ \end{align}</math> The universal property can be stated as follows. Let ''G'' be an abelian group with a map <math>q:A\times B \to G</math> that is bilinear, in the sense that: <math display="block">\begin{align} q(a_1 + a_2, b) &= q(a_1, b) + q(a_2, b),\\ q(a, b_1 + b_2) &= q(a, b_1) + q(a, b_2),\\ q(ar, b) &= q(a, rb). \end{align}</math> Then there is a unique map <math>\overline{q}:A\otimes B \to G</math> such that <math>\overline{q}(a\otimes b) = q(a,b)</math> for all <math>a \in A</math> and {{tmath|1= b \in B }}. Furthermore, we can give <math>A \otimes_R B</math> a module structure under some extra conditions: # If ''A'' is an (''S'',''R'')-bimodule, then <math>A \otimes_R B</math> is a left ''S''-module, where {{tmath|1= s(a\otimes b):=(sa)\otimes b }}. # If ''B'' is an (''R'',''S'')-bimodule, then <math>A \otimes_R B</math> is a right ''S''-module, where {{tmath|1= (a\otimes b)s:=a\otimes (bs) }}. # If ''A'' is an (''S'',''R'')-bimodule and ''B'' is an (''R'',''T'')-bimodule, then <math>A \otimes_R B</math> is an (''S'',''T'')-bimodule, where the left and right actions are defined in the same way as the previous two examples. # If ''R'' is a commutative ring, then ''A'' and ''B'' are (''R'',''R'')-bimodules where <math>ra:=ar</math> and {{tmath|1= br:=rb }}. By 3), we can conclude <math>A \otimes_R B</math> is an (''R'',''R'')-bimodule. === Computing the tensor product === For vector spaces, the tensor product <math>V \otimes W</math> is quickly computed since bases of {{math|''V''}} and {{math|''W''}} immediately determine a basis of {{tmath|1= V \otimes W }}, as was mentioned above. For modules over a general (commutative) ring, not every module is free. For example, {{math|'''Z'''/''n'''''Z'''}} is not a free abelian group ({{math|'''Z'''}}-module). The tensor product with {{math|'''Z'''/''n'''''Z'''}} is given by: <math display="block">M \otimes_\mathbf{Z} \mathbf{Z}/n\mathbf{Z} = M/nM.</math> More generally, given a [[presentation of a module|presentation]] of some {{math|''R''}}-module {{math|''M''}}, that is, a number of generators <math>m_i \in M, i \in I</math> together with relations: <math display="block">\sum_{i \in I} a_{ji} m_i = 0 \quad (j \in J),\qquad a_{ji} \in R,</math> the tensor product can be computed as the following [[cokernel]]: <math display="block">M \otimes_R N = \operatorname{coker} \left(N^J \to N^I\right)</math> Here {{tmath|1= N^J = \oplus_{j \in J} N }}, and the map <math>N^J \to N^I</math> is determined by sending some <math>n \in N</math> in the {{math|''j''}}th copy of <math>N^J</math> to the family <math>(a_{ji} n)_{i \in I}</math> (in {{tmath|1= N^I }}). Colloquially, this may be rephrased by saying that a presentation of {{math|''M''}} gives rise to a presentation of {{tmath|1= M \otimes_R N }}. This is referred to by saying that the tensor product is a [[right exact functor]].
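For example, the formula <math>M \otimes_\mathbf{Z} \mathbf{Z}/n\mathbf{Z} = M/nM</math> with <math>M = \mathbf{Z}/6\mathbf{Z}</math> and <math>n = 4</math> gives <math display="block">\mathbf{Z}/6\mathbf{Z} \otimes_\mathbf{Z} \mathbf{Z}/4\mathbf{Z} \cong (\mathbf{Z}/6\mathbf{Z})/4(\mathbf{Z}/6\mathbf{Z}) \cong \mathbf{Z}/2\mathbf{Z},</math> since <math>4(\mathbf{Z}/6\mathbf{Z}) = 2(\mathbf{Z}/6\mathbf{Z})</math>; more generally, <math>\mathbf{Z}/m\mathbf{Z} \otimes_\mathbf{Z} \mathbf{Z}/n\mathbf{Z} \cong \mathbf{Z}/\gcd(m,n)\mathbf{Z}</math>.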
It is not in general left exact, that is, given an injective map of {{math|''R''}}-modules {{tmath|1= M_1 \to M_2 }}, the tensor product: <math display="block">M_1 \otimes_R N \to M_2 \otimes_R N</math> is not usually injective. For example, tensoring the (injective) map given by multiplication with {{math|''n''}}, {{math|''n'' : '''Z''' → '''Z'''}} with {{math|'''Z'''/''n'''''Z'''}} yields the zero map {{math|0 : '''Z'''/''n'''''Z''' → '''Z'''/''n'''''Z'''}}, which is not injective. Higher [[Tor functor]]s measure the defect of the tensor product not being left exact. All higher Tor functors are assembled in the [[derived tensor product]]. == Tensor product of algebras == {{main|Tensor product of algebras}} Let {{math|''R''}} be a commutative ring. The tensor product of {{math|''R''}}-modules applies, in particular, if {{math|''A''}} and {{math|''B''}} are [[Algebra (ring theory)|{{math|''R''}}-algebras]]. In this case, the tensor product <math>A \otimes_R B</math> is an {{math|''R''}}-algebra itself by putting: <math display="block">(a_1 \otimes b_1) \cdot (a_2 \otimes b_2) = (a_1 \cdot a_2) \otimes (b_1 \cdot b_2).</math> For example: <math display="block">R[x] \otimes_R R[y] \cong R[x, y].</math> A particular example is when {{math|''A''}} and {{math|''B''}} are fields containing a common subfield {{math|''R''}}. The [[tensor product of fields]] is closely related to [[Galois theory]]: if, say, {{math|1=''A'' = ''R''[''x''] / ''f''(''x'')}}, where {{math|''f''}} is some [[irreducible polynomial]] with coefficients in {{math|''R''}}, the tensor product can be calculated as: <math display="block">A \otimes_R B \cong B[x] / f(x)</math> where now {{math|''f''}} is interpreted as the same polynomial, but with its coefficients regarded as elements of {{math|''B''}}. In the larger field {{math|''B''}}, the polynomial may become reducible, which brings in Galois theory. For example, if {{math|1=''A'' = ''B''}} is a [[Galois extension]] of {{math|''R''}}, then: <math display="block">A \otimes_R A \cong A[x] / f(x)</math> is isomorphic (as an {{math|''A''}}-algebra) to {{tmath|1= A^{\operatorname{deg}(f)} }}. == Eigenconfigurations of tensors == Square [[Matrix (mathematics)|matrices]] <math>A</math> with entries in a [[Field (mathematics)|field]] <math>K</math> represent [[linear map]]s of [[vector space]]s, say {{tmath|1= K^n \to K^n }}, and thus linear maps <math>\psi : \mathbb{P}^{n-1} \to \mathbb{P}^{n-1}</math> of [[projective spaces]] over {{tmath|1= K }}. If <math>A</math> is [[Invertible matrix|nonsingular]] then <math>\psi</math> is [[well-defined]] everywhere, and the [[Eigenvalues and eigenvectors|eigenvectors]] of <math>A</math> correspond to the fixed points of {{tmath|1= \psi }}. The ''eigenconfiguration'' of <math>A</math> consists of <math>n</math> points in {{tmath|1= \mathbb{P}^{n-1} }}, provided <math>A</math> is generic and <math>K</math> is [[Algebraically closed field|algebraically closed]]. The fixed points of nonlinear maps are the eigenvectors of tensors. Let <math>A = (a_{i_1 i_2 \cdots i_d})</math> be a <math>d</math>-dimensional tensor of format <math>n \times n \times \cdots \times n</math> with entries <math>(a_{i_1 i_2 \cdots i_d})</math> lying in an algebraically closed field <math>K</math> of [[Characteristic (algebra)|characteristic]] zero.
Such a tensor <math>A \in (K^{n})^{\otimes d}</math> defines [[Morphism of algebraic varieties|polynomial maps]] <math>K^n \to K^n</math> and <math>\mathbb{P}^{n-1} \to \mathbb{P}^{n-1}</math> with coordinates: <math display="block">\psi_i(x_1, \ldots, x_n) = \sum_{j_2=1}^n \sum_{j_3=1}^n \cdots \sum_{j_d = 1}^n a_{i j_2 j_3 \cdots j_d} x_{j_2} x_{j_3}\cdots x_{j_d} \;\; \mbox{for } i = 1, \ldots, n</math> Thus each of the <math>n</math> coordinates of <math>\psi</math> is a [[homogeneous polynomial]] <math>\psi_i</math> of degree <math>d-1</math> in {{tmath|1= \mathbf{x} = \left(x_1, \ldots, x_n\right) }}. The eigenvectors of <math>A</math> are the solutions of the constraint: <math display="block">\mbox{rank} \begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ \psi_1(\mathbf{x}) & \psi_2(\mathbf{x}) & \cdots & \psi_n(\mathbf{x}) \end{pmatrix} \leq 1 </math> and the eigenconfiguration is given by the [[Algebraic variety|variety]] of the <math>2 \times 2</math> [[Minor (linear algebra)|minors]] of this matrix.<ref>{{cite arXiv |last1=Abo |first1=H. |last2=Seigal |first2=A. |author2-link=Anna Seigal |last3=Sturmfels |first3=B. |author3-link=Bernd Sturmfels |title=Eigenconfigurations of Tensors |date=2015 |class=math.AG |eprint=1505.05729 }}</ref> == Other examples of tensor products == === Topological tensor products === {{main|Topological tensor product|Tensor product of Hilbert spaces}} [[Hilbert space]]s generalize finite-dimensional vector spaces to arbitrary dimensions. There is [[tensor product of Hilbert spaces|an analogous operation]], also called the "tensor product," that makes Hilbert spaces a [[symmetric monoidal category]]. It is essentially constructed as the [[Complete_metric_space#Completion|metric space completion]] of the algebraic tensor product discussed above. However, it does not satisfy the obvious analogue of the universal property defining tensor products;<ref>{{cite web|url=https://www-users.cse.umn.edu/~garrett/m/v/nonexistence_tensors.pdf|date=July 22, 2010|title=Non-existence of tensor products of Hilbert spaces|first=Paul|last=Garrett}}</ref> the morphisms for that property must be restricted to [[Hilbert–Schmidt operator]]s.<ref>{{Cite book | last1=Kadison | first1=Richard V. | last2=Ringrose | first2=John R. | title=Fundamentals of the theory of operator algebras | volume=I | publisher=[[American Mathematical Society]] | location=Providence, R.I. | series=[[Graduate Studies in Mathematics]] | isbn=978-0-8218-0819-1 | mr= 1468229 | year=1997 | at=Thm. 2.6.4.}}</ref> In situations where the imposition of an inner product is inappropriate, one can still attempt to complete the algebraic tensor product, as a [[topological tensor product]]. However, such a construction is no longer uniquely specified: in many cases, there are multiple natural topologies on the algebraic tensor product. === Tensor product of graded vector spaces === {{main|Graded vector space#Operations on graded vector spaces}} Some vector spaces can be decomposed into [[direct sum]]s of subspaces. In such cases, the tensor product of two spaces can be decomposed into sums of products of the subspaces (in analogy to the way that multiplication distributes over addition). === Tensor product of representations === {{main|Tensor product of representations}} Vector spaces endowed with an additional multiplicative structure are called [[algebra over a field|algebras]]. The tensor product of such algebras is described by the [[Littlewood–Richardson rule]]. 
=== Tensor product of quadratic forms === {{main|Tensor product of quadratic forms}} === Tensor product of multilinear forms === Given two [[multilinear form]]s <math>f(x_1,\dots,x_k)</math> and <math>g (x_1,\dots, x_m)</math> on a vector space <math>V</math> over the field <math>K</math> their tensor product is the multilinear form: <math display="block">(f \otimes g) (x_1,\dots,x_{k+m}) = f(x_1,\dots,x_k) g(x_{k+1},\dots,x_{k+m}).</math><ref name="An Introduction to Manifolds">{{cite book |title=An Introduction to Manifolds | first=L. W. | last=Tu |series=Universitext |publisher=Springer | page=25 | isbn=978-1-4419-7399-3 | year=2010}}</ref> This is a special case of the [[#General tensors|product of tensors]] if they are seen as multilinear maps (see also [[Tensor#As multilinear maps|tensors as multilinear maps]]). Thus the components of the tensor product of multilinear forms can be computed by the [[Kronecker product]]. === Tensor product of sheaves of modules === {{main|Sheaf of modules}} === Tensor product of line bundles === {{main|Vector bundle#Operations on vector bundles}} {{See also|Tensor product bundle}} === Tensor product of fields === {{main|Tensor product of fields}} === Tensor product of graphs === {{main|Tensor product of graphs}} It should be mentioned that, though called "tensor product", this is not a tensor product of graphs in the above sense; actually it is the [[Product (category theory)|category-theoretic product]] in the category of graphs and [[graph homomorphism]]s. However it is actually the [[Kronecker product|Kronecker tensor product]] of the [[adjacency matrix|adjacency matrices]] of the graphs. Compare also the section [[#Tensor product of linear maps|Tensor product of linear maps]] above. === Monoidal categories === The most general setting for the tensor product is the [[monoidal category]]. It captures the algebraic essence of tensoring, without making any specific reference to what is being tensored. Thus, all tensor products can be expressed as an application of the monoidal category to some particular setting, acting on some particular objects. == Quotient algebras == A number of important subspaces of the [[tensor algebra]] can be constructed as [[quotient space (linear algebra)|quotients]]: these include the [[exterior algebra]], the [[symmetric algebra]], the [[Clifford algebra]], the [[Weyl algebra]], and the [[universal enveloping algebra]] in general. The exterior algebra is constructed from the [[exterior product]]. Given a vector space {{math|''V''}}, the exterior product <math>V \wedge V</math> is defined as: <math display="block">V \wedge V := V \otimes V \big/ \{v\otimes v \mid v\in V\}.</math> When the underlying field of {{math|''V''}} does not have characteristic 2, then this definition is equivalent to: <math display="block">V \wedge V := V \otimes V \big/ \bigl\{v_1 \otimes v_2 + v_2 \otimes v_1 \mid (v_1, v_2) \in V^2\bigr\}.</math> The image of <math>v_1 \otimes v_2</math> in the exterior product is usually denoted <math>v_1 \wedge v_2</math> and satisfies, by construction, {{tmath|1= v_1 \wedge v_2 = - v_2 \wedge v_1}}. Similar constructions are possible for <math>V \otimes \dots \otimes V</math> ({{math|''n''}} factors), giving rise to {{tmath|1= \Lambda^n V }}, the {{math|''n''}}th [[exterior power]] of {{math|''V''}}. The latter notion is the basis of [[differential form|differential {{math|''n''}}-forms]]. 
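Over the real numbers (characteristic zero), the image of <math>v_1 \otimes v_2</math> in <math>V \wedge V</math> can be modelled by the antisymmetric part of the corresponding coefficient array. A minimal NumPy sketch of this identification (the normalization factor <code>0.5</code> and the sample vectors are choices of the example):

<syntaxhighlight lang="python">
import numpy as np

def wedge(v1, v2):
    """Antisymmetric part of v1 ⊗ v2, one common coordinate model of v1 ∧ v2 over the reals."""
    return 0.5 * (np.outer(v1, v2) - np.outer(v2, v1))

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, -1.0, 3.0])

# Antisymmetry v1 ∧ v2 = -(v2 ∧ v1) and the defining relation v ∧ v = 0.
assert np.allclose(wedge(v1, v2), -wedge(v2, v1))
assert np.allclose(wedge(v1, v1), 0)
</syntaxhighlight>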
The symmetric algebra is constructed in a similar manner, from the [[symmetric tensor#symmetric product|symmetric product]]: <math display="block">V \odot V := V \otimes V \big/ \bigl\{ v_1 \otimes v_2 - v_2 \otimes v_1 \mid (v_1, v_2) \in V^2\bigr\}.</math> More generally: <math display="block">\operatorname{Sym}^n V := \underbrace{V \otimes \dots \otimes V}_n \big/ (\dots \otimes v_i \otimes v_{i+1} \otimes \dots - \dots \otimes v_{i+1} \otimes v_{i} \otimes \dots)</math> That is, in the symmetric algebra two adjacent vectors (and therefore all of them) can be interchanged. The resulting objects are called [[symmetric tensor]]s. == Tensor product in programming == === Array programming languages === [[Array programming languages]] may have this pattern built in. For example, in [[APL programming language|APL]] the tensor product is expressed as <code>○.×</code> (for example <code>A ○.× B</code> or <code>A ○.× B ○.× C</code>). In [[J programming language|J]] the tensor product is the dyadic form of <code>*/</code> (for example <code>a */ b</code> or <code>a */ b */ c</code>). J's treatment also allows the representation of some tensor fields, as <code>a</code> and <code>b</code> may be functions instead of constants. This product of two functions is a derived function, and if <code>a</code> and <code>b</code> are [[Differentiable function|differentiable]], then <code>a */ b</code> is differentiable. However, these kinds of notation are not universally present in array languages. Other array languages may require explicit treatment of indices (for example, [[MATLAB]]), and/or may not support [[higher-order function]]s such as the [[Jacobian matrix and determinant|Jacobian derivative]] (for example, [[Fortran]]/APL). == See also == {{wiktionary}} * {{annotated link|Dyadics}} * {{annotated link|Extension of scalars}} * {{annotated link|Monoidal category}} * {{annotated link|Tensor algebra}} * {{annotated link|Tensor contraction}} * {{annotated link|Topological tensor product}} == Notes == {{reflist}} == References == * {{cite book |first = Nicolas|last=Bourbaki|author-link=Nicolas Bourbaki | title = Elements of mathematics, Algebra I| publisher = Springer-Verlag | year = 1989|isbn=3-540-64243-9}} * {{cite web|last=Gowers|first=Timothy|author-link=Tim Gowers|url=https://www.dpmms.cam.ac.uk/~wtg10/tensors3.html|title=How to lose your fear of tensor products|url-status=live|archive-url=https://web.archive.org/web/20210507014709/https://www.dpmms.cam.ac.uk/~wtg10/tensors3.html|archive-date=7 May 2021}} * {{cite book |first=Pierre A.|last=Grillet|title=Abstract Algebra|year=2007|publisher=Springer Science+Business Media, LLC| isbn= 978-0387715674}} * {{cite book |author-link=Paul Halmos|first=Paul|last=Halmos|title=Finite dimensional vector spaces| year=1974 |publisher= Springer |isbn= 0-387-90093-4}} * {{cite book |first=Thomas W.|last=Hungerford|author-link=Thomas W. Hungerford| title=Algebra |year=2003 |publisher= Springer |isbn= 0387905189}} * {{Lang Algebra|edition=3r}} * {{cite book |first1=S.|last1=Mac Lane|author-link1=Saunders Mac Lane|author-link2=Garrett Birkhoff |last2= Birkhoff |first2= G. 
|title= Algebra|publisher=AMS Chelsea|year=1999|isbn=0-8218-1646-2}} * {{cite book |first1=M.|last1=Aguiar|first2=S.|last2=Mahajan| title = Monoidal functors, species and Hopf algebras|publisher = CRM Monograph Series Vol 29 |year=2010|isbn=978-0-8218-4776-3}} * {{Trèves François Topological vector spaces, distributions and kernels}} <!--{{sfn|Trèves|2006|p=}}--> * {{cite web |url=http://pages.bangor.ac.uk/~mas010/nonabtens.html |title=Bibliography on the nonabelian tensor product of groups }} {{tensors}} {{DEFAULTSORT:Tensor Product}} [[Category:Operations on vectors]] [[Category:Operations on structures]] [[Category:Bilinear maps]] [[Category:Functors]]