=== As multidimensional arrays ===
A tensor may be represented as a (potentially multidimensional) array. Just as a [[Vector space|vector]] in an {{mvar|n}}-[[dimension (vector space)|dimensional]] space is represented by a [[multidimensional array|one-dimensional]] array with {{mvar|n}} components with respect to a given [[Basis (linear algebra)#Ordered bases and coordinates|basis]], any tensor with respect to a basis is represented by a multidimensional array. For example, a [[linear operator]] is represented in a basis as a two-dimensional square {{math|''n'' × ''n''}} array. The numbers in the multidimensional array are known as the ''components'' of the tensor. They are denoted by indices giving their position in the array, as [[subscript and superscript|subscripts and superscripts]], following the symbolic name of the tensor. For example, the components of an order-{{math|2}} tensor {{mvar|T}} could be denoted {{math|''T''<sub>''ij''</sub>}}, where {{mvar|i}} and {{mvar|j}} are indices running from {{math|1}} to {{mvar|n}}, or also by {{math|''T''{{thinsp}}{{su|lh=0.8|b=''j''|p=''i''}}}}. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while {{math|''T''<sub>''ij''</sub>}} and {{math|''T''{{thinsp}}{{su|lh=0.8|b=''j''|p=''i''}}}} can both be expressed as ''n''-by-''n'' matrices, and are numerically related via [[Raising and lowering indices|index juggling]], the difference in their transformation laws indicates it would be improper to add them together. The total number of indices ({{mvar|m}}) required to identify each component uniquely is equal to the ''dimension'' or the number of ''ways'' of an array, which is why a tensor is sometimes referred to as an {{mvar|m}}-dimensional array or an {{mvar|m}}-way array.
The total number of indices is also called the ''order'', ''degree'' or ''rank'' of a tensor,<ref name=DeLathauwerEtAl2000 >{{cite journal| last1= De Lathauwer |first1= Lieven| last2= De Moor |first2= Bart| last3= Vandewalle |first3= Joos| date=2000|title=A Multilinear Singular Value Decomposition |journal= [[SIAM J. Matrix Anal. Appl.]]|volume=21|issue= 4|pages=1253–1278|doi= 10.1137/S0895479896305696|s2cid= 14344372|url= https://alterlab.org/teaching/BME6780/papers+patents/De_Lathauwer_2000.pdf}}</ref><ref name=Vasilescu2002Tensorfaces >{{cite book |first1=M.A.O. |last1=Vasilescu |first2=D. |last2=Terzopoulos |title=Computer Vision – ECCV 2002 |chapter=Multilinear Analysis of Image Ensembles: TensorFaces |series=Lecture Notes in Computer Science |volume=2350 |pages=447–460 |doi=10.1007/3-540-47969-4_30 |date=2002 |isbn=978-3-540-43745-1 |s2cid=12793247 |chapter-url=http://www.cs.toronto.edu/~maov/tensorfaces/Springer%20ECCV%202002_files/eccv02proceeding_23500447.pdf |access-date=2022-12-29 |archive-date=2022-12-29 |archive-url=https://web.archive.org/web/20221229090931/http://www.cs.toronto.edu/~maov/tensorfaces/Springer%20ECCV%202002_files/eccv02proceeding_23500447.pdf |url-status=dead }}</ref><ref name=KoldaBader2009 >{{cite journal| last1= Kolda |first1= Tamara| last2= Bader |first2= Brett| date=2009|title=Tensor Decompositions and Applications |journal= [[SIAM Review]]|volume=51|issue= 3|pages=455–500|doi= 10.1137/07070111X|bibcode= 2009SIAMR..51..455K|s2cid= 16074195|url= https://www.kolda.net/publication/TensorReview.pdf}}</ref> although the term "rank" generally has [[tensor rank|another meaning]] in the context of matrices and tensors. Just as the components of a vector change when we change the [[basis (linear algebra)|basis]] of the vector space, the components of a tensor also change under such a transformation.
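As an illustrative aside (not part of the article's formal development; all variable names here are ad hoc), the identification of tensors with multidimensional arrays can be made concrete with NumPy, where the number of indices equals the number of array dimensions ("ways"):

```python
# Sketch: tensors of a given order represented as NumPy arrays over an
# n-dimensional space. The order is the number of indices needed to
# address one component.
import numpy as np

n = 3
v = np.zeros(n)           # order 1: one index, n components (a vector)
T = np.zeros((n, n))      # order 2: two indices, an n-by-n array
S = np.zeros((n, n, n))   # order 3: three indices, an n-way array

# A component is addressed by its indices; NumPy indices are 0-based,
# while the article's indices run from 1 to n.
T[0, 1] = 5.0
assert v.ndim == 1 and T.ndim == 2 and S.ndim == 3
```

Note that the array alone does not determine the tensor: as the article explains next, the same n-by-n array can hold components of tensors with different transformation laws.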
Each type of tensor comes equipped with a ''transformation law'' that details how the components of the tensor respond to a [[change of basis]]. The components of a vector can respond in two distinct ways to a change of basis (see ''[[Covariance and contravariance of vectors]]''), where the new [[basis vectors]] <math>\mathbf{\hat{e}}_i</math> are expressed in terms of the old basis vectors <math>\mathbf{e}_j</math> as,
:<math>\mathbf{\hat{e}}_i = \sum_{j=1}^n \mathbf{e}_j R^j_i = \mathbf{e}_j R^j_i .</math>
Here ''R''<sup>'' j''</sup><sub>''i''</sub> are the entries of the change of basis matrix, and in the rightmost expression the [[summation]] sign was suppressed: this is the [[Einstein summation convention]], which will be used throughout this article.<ref group="Note">The Einstein summation convention, in brief, requires the sum to be taken over all values of the index whenever the same symbol appears as a subscript and superscript in the same term. For example, under this convention <math>B_i C^i = B_1 C^1 + B_2 C^2 + \cdots + B_n C^n</math>.</ref> The components ''v''<sup>''i''</sup> of a column vector '''v''' transform with the [[matrix inverse|inverse]] of the matrix ''R'',
:<math>\hat{v}^i = \left(R^{-1}\right)^i_j v^j,</math>
where the hat denotes the components in the new basis. This is called a ''contravariant'' transformation law, because the vector components transform by the ''inverse'' of the change of basis. In contrast, the components, ''w''<sub>''i''</sub>, of a covector (or row vector), '''w''', transform with the matrix ''R'' itself,
:<math>\hat{w}_i = w_j R^j_i .</math>
This is called a ''covariant'' transformation law, because the covector components transform by the ''same matrix'' as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index.
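The two transformation laws can be checked numerically; the following NumPy sketch (names `R`, `v`, `w` are ad hoc, not from the article) also shows why the distinction matters: pairing a covariant with a contravariant index makes the factors of ''R'' and ''R''<sup>−1</sup> cancel, leaving a basis-independent scalar:

```python
# Sketch: contravariant vs. covariant transformation under a random
# invertible change of basis R.
import numpy as np

rng = np.random.default_rng(0)
n = 3
R = rng.standard_normal((n, n)) + n * np.eye(n)  # invertible change of basis
Rinv = np.linalg.inv(R)

v = rng.standard_normal(n)   # contravariant components v^i
w = rng.standard_normal(n)   # covariant components w_i

v_hat = Rinv @ v             # contravariant law: v^i -> (R^-1)^i_j v^j
w_hat = w @ R                # covariant law:     w_i -> w_j R^j_i

# The contraction w_i v^i is the same in both bases, since
# w R R^-1 v = w v.
assert np.isclose(w @ v, w_hat @ v_hat)
```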
If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called ''contravariant'' and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called ''covariant'' and is denoted with a lower index (subscript). As a simple example, the matrix of a linear operator with respect to a basis is a square array <math>T</math> that transforms under a change of basis matrix <math>R = \left(R^j_i\right)</math> by <math>\hat{T} = R^{-1}TR</math>. For the individual matrix entries, this transformation law has the form <math>\hat{T}^{i'}_{j'} = \left(R^{-1}\right)^{i'}_i T^i_j R^j_{j'}</math>, so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1). Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:
:<math>\mathbf{v} = \hat{v}^i \,\mathbf{\hat{e}}_i = \left( \left(R^{-1}\right)^i_j {v}^j \right) \left( \mathbf{{e}}_k R^k_i \right) = \left( \left(R^{-1}\right)^i_j R^k_i \right) {v}^j \mathbf{{e}}_k = \delta_j^k {v}^j \mathbf{{e}}_k = {v}^k \,\mathbf{{e}}_k = {v}^i \,\mathbf{{e}}_i ,</math>
where <math>\delta^k_j</math> is the [[Kronecker delta]], which functions similarly to the [[identity matrix]], and has the effect of renaming indices (''j'' into ''k'' in this example).
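The invariance of the vector itself can be verified with a small NumPy sketch (an illustrative aside; storing basis vectors as matrix columns is a convention chosen here, and all names are ad hoc): the factor of ''R'' picked up by the basis cancels the factor of ''R''<sup>−1</sup> picked up by the components.

```python
# Sketch: v^i e_i is the same geometric object in every basis.
import numpy as np

rng = np.random.default_rng(1)
n = 3
E = rng.standard_normal((n, n)) + n * np.eye(n)  # old basis vectors as columns
R = rng.standard_normal((n, n)) + n * np.eye(n)  # invertible change of basis

E_hat = E @ R                  # new basis: e^_i = e_j R^j_i (column i of E_hat)
v = rng.standard_normal(n)     # components v^j in the old basis
v_hat = np.linalg.inv(R) @ v   # contravariant transformation of the components

# v^i e_i (old) equals v^^i e^_i (new), since E R R^-1 v = E v.
assert np.allclose(E @ v, E_hat @ v_hat)
```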
This shows several features of the component notation: the ability to re-arrange terms at will ([[commutativity]]), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like <math>{v}^i \,\mathbf{{e}}_i</math> can immediately be seen to be geometrically identical in all coordinate systems. Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for the matrix of components of a linear operator is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components <math>(Tv)^i</math> are given by <math>(Tv)^i = T^i_j v^j</math>. These components transform contravariantly, since
:<math>\left(\widehat{Tv}\right)^{i'} = \hat{T}^{i'}_{j'} \hat{v}^{j'} = \left[ \left(R^{-1}\right)^{i'}_i T^i_j R^j_{j'} \right] \left[ \left(R^{-1}\right)^{j'}_k v^k \right] = \left(R^{-1}\right)^{i'}_i (Tv)^i .</math>
The transformation law for an order {{math|''p'' + ''q''}} tensor with ''p'' contravariant indices and ''q'' covariant indices is thus given as,
:<math> \hat{T}^{i'_1, \ldots, i'_p}_{j'_1, \ldots, j'_q} = \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} T^{i_1, \ldots, i_p}_{j_1, \ldots, j_q} R^{j_1}_{j'_1}\cdots R^{j_q}_{j'_q}. </math>
Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates.
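The general {{math|(''p'', ''q'')}} law amounts to contracting one factor of ''R''<sup>−1</sup> onto each contravariant index and one factor of ''R'' onto each covariant index. As a hedged sketch (not from the article; `np.einsum` index letters are ad hoc), this can be written with `np.einsum` and cross-checked against the matrix formulas above for the types (1, 1) and (0, 2):

```python
# Sketch: the (p, q) transformation law via einsum, verified for two cases.
import numpy as np

rng = np.random.default_rng(2)
n = 3
R = rng.standard_normal((n, n)) + n * np.eye(n)  # invertible change of basis
Rinv = np.linalg.inv(R)

# Type (1, 1): one R^-1 on the upper index, one R on the lower index.
T = rng.standard_normal((n, n))                   # components T^i_j
T_hat = np.einsum('ai,ij,jb->ab', Rinv, T, R)
assert np.allclose(T_hat, Rinv @ T @ R)           # matches T^ = R^-1 T R

# Type (0, 2): two covariant indices, hence two factors of R.
g = rng.standard_normal((n, n))                   # components g_ij
g_hat = np.einsum('ij,ia,jb->ab', g, R, R)
assert np.allclose(g_hat, R.T @ g @ R)
```

The same einsum pattern extends to any {{math|(''p'', ''q'')}}: one `Rinv` operand per upper index and one `R` operand per lower index.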
Such a tensor is said to be of order or ''type'' {{math|(''p'', ''q'')}}. The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), {{math|''p'' + ''q''}} in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type {{math|(''p'', ''q'')}} is also called a {{math|(''p'', ''q'')}}-tensor for short. This discussion motivates the following formal definition:<ref name="Sharpe2000">{{cite book|first=R.W. |last=Sharpe|title=Differential Geometry: Cartan's Generalization of Klein's Erlangen Program|url={{google books |plainurl=y |id=Ytqs4xU5QKAC| page=194}}|date=2000|publisher=Springer |isbn=978-0-387-94732-7| page=194}}</ref><ref>{{citation|chapter-url={{google books |plainurl=y |id=WROiC9st58gC}}|first=Jan Arnoldus|last=Schouten|author-link=Jan Arnoldus Schouten|title=Tensor analysis for physicists|year=1954|publisher=Courier Corporation|isbn=978-0-486-65582-6|chapter=Chapter II|url=https://archive.org/details/isbn_9780486655826}}</ref>
{{blockquote|'''Definition.''' A tensor of type (''p'', ''q'') is an assignment of a multidimensional array
:<math>T^{i_1\dots i_p}_{j_{1}\dots j_{q}}[\mathbf{f}]</math>
to each basis {{math|'''f''' {{=}} ('''e'''<sub>1</sub>, ..., '''e'''<sub>''n''</sub>)}} of an ''n''-dimensional vector space such that, if we apply the change of basis
:<math>\mathbf{f}\mapsto \mathbf{f}\cdot R = \left( \mathbf{e}_i R^i_1, \dots, \mathbf{e}_i R^i_n \right)</math>
then the multidimensional array obeys the transformation law
:<math> T^{i'_1\dots i'_p}_{j'_1\dots j'_q}[\mathbf{f} \cdot R] = \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} T^{i_1, \ldots, i_p}_{j_1, \ldots, j_q}[\mathbf{f}] R^{j_1}_{j'_1}\cdots R^{j_q}_{j'_q} .
</math>}}
The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.<ref name="Kline" />

An equivalent definition of a tensor uses the [[representation theory|representations]] of the [[general linear group]]. There is an [[Group action (mathematics)|action]] of the general linear group on the set of all [[ordered basis|ordered bases]] of an ''n''-dimensional vector space. If <math>\mathbf f = (\mathbf f_1, \dots, \mathbf f_n)</math> is an ordered basis, and <math>R = \left(R^i_j\right)</math> is an invertible <math>n\times n</math> matrix, then the action is given by
:<math>\mathbf fR = \left(\mathbf f_i R^i_1, \dots, \mathbf f_i R^i_n\right).</math>
Let ''F'' be the set of all ordered bases. Then ''F'' is a [[principal homogeneous space]] for GL(''n''). Let ''W'' be a vector space and let <math>\rho</math> be a representation of GL(''n'') on ''W'' (that is, a [[group homomorphism]] <math>\rho: \text{GL}(n) \to \text{GL}(W)</math>). Then a tensor of type <math>\rho</math> is an [[equivariant map]] <math>T: F \to W</math>. Equivariance here means that
:<math>T(FR) = \rho\left(R^{-1}\right)T(F).</math>
When <math>\rho</math> is a [[tensor representation]] of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds,<ref>{{citation | last1=Kobayashi|first1=Shoshichi|last2=Nomizu|first2=Katsumi | title = Foundations of Differential Geometry|volume=1| publisher=[[Wiley Interscience]] | year=1996|edition=New|isbn=978-0-471-15733-5|title-link=Foundations of Differential Geometry}}</ref> and readily generalizes to other groups.<ref name="Sharpe2000" />