Einstein notation
{{Short description|Shorthand notation for tensor operations}}
In [[mathematics]], especially the usage of [[linear algebra]] in [[mathematical physics]] and [[differential geometry]], '''Einstein notation''' (also known as the '''Einstein summation convention''' or '''Einstein summation notation''') is a notational convention that implies [[summation]] over a set of indexed terms in a formula, thus achieving brevity. As part of mathematics it is a notational subset of [[Ricci calculus]]; however, it is often used in physics applications that do not distinguish between [[Tangent space|tangent]] and [[cotangent space]]s. It was introduced to physics by [[Albert Einstein]] in 1916.<ref name=Ein1916>{{cite journal|last=Einstein |first=Albert |author-link=Albert Einstein |title=The Foundation of the General Theory of Relativity |journal=Annalen der Physik |year=1916 |volume=354 |issue=7 |page=769 |url=http://www.alberteinstein.info/gallery/gtext3.html |doi=10.1002/andp.19163540702 |format=[[PDF]] |access-date=2006-09-03 |archive-url=https://web.archive.org/web/20060829045130/http://www.alberteinstein.info/gallery/gtext3.html |archive-date=2006-08-29 | bibcode=1916AnP...354..769E |url-status = dead }}</ref>

== Introduction ==

=== Statement of convention ===
According to this convention, when an index variable appears twice in a single [[Addend|term]] and is not otherwise defined (see [[Free and bound variables]]), it implies summation of that term over all the values of the index. So where the indices can range over the [[Set (mathematics)|set]] {{math|{1, 2, 3}<nowiki/>}}, <math display="block">y = \sum_{i = 1}^3 x^i e_i = x^1 e_1 + x^2 e_2 + x^3 e_3 </math> is simplified by the convention to: <math display="block">y = x^i e_i </math> The upper indices are not [[Exponentiation|exponents]] but are indices of coordinates, [[coefficient]]s or [[basis vector]]s.
That is, in this context {{math|''x''<sup>2</sup>}} should be understood as the second component of {{math|''x''}} rather than the square of {{math|''x''}} (this can occasionally lead to ambiguity). The upper index position in {{math|''x''<sup>''i''</sup>}} is because, typically, an index occurs once in an upper (superscript) and once in a lower (subscript) position in a term (see ''{{section link|#Application}}'' below). Typically, {{math|(''x''<sup>1</sup> ''x''<sup>2</sup> ''x''<sup>3</sup>)}} would be equivalent to the traditional {{math|(''x'' ''y'' ''z'')}}.

In [[general relativity]], a common convention is that
* the [[Greek alphabet]] is used for space and time components, where indices take on values 0, 1, 2, or 3 (frequently used letters are {{math|''μ'', ''ν'', ...}}),
* the [[Latin alphabet]] is used for spatial components only, where indices take on values 1, 2, or 3 (frequently used letters are {{math|''i'', ''j'', ...}}).

In general, indices can range over any [[Indexed family|indexing set]], including an [[infinite set]]. This should not be confused with a typographically similar convention used to distinguish between [[tensor index notation]] and the closely related but distinct basis-independent [[abstract index notation]].

An index that is summed over is a ''summation index'', in this case "{{math|''i'' }}". It is also called a [[bound variable|dummy index]] since any symbol can replace "{{math|''i'' }}" without changing the meaning of the expression (provided that it does not collide with other index symbols in the same term).

An index that is not summed over is a [[free variable|''free index'']] and should appear only once per term. If such an index does appear, it usually also appears in every other term in an equation. An example of a free index is the "{{math|''i'' }}" in the equation <math>v_i = a_i b_j x^j</math>, which is equivalent to the equation <math display="inline">v_i = \sum_j(a_{i} b_{j} x^{j})</math>.
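The convention can be tried out numerically. A minimal sketch using NumPy's <code>einsum</code> (which the External links below discuss), where the subscript string spells out which indices repeat and are therefore summed; the values here are purely illustrative:

```python
import numpy as np

# Implied summation over the repeated index i: y = x^i e_i.
x = np.array([2.0, 3.0, 5.0])       # components x^1, x^2, x^3
e = np.eye(3)                       # basis vectors e_1, e_2, e_3 as rows
y = np.einsum('i,ij->j', x, e)      # i repeats, so it is summed over

# Free index i, dummy index j: v_i = a_i b_j x^j.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
v = np.einsum('i,j,j->i', a, b, x)  # i stays free; j is summed
```

With the standard basis, `y` recovers the components of `x`, and each `v[i]` equals `a[i]` times the sum over `j` of `b[j] * x[j]`, matching the expanded formula above.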
=== Application ===
Einstein notation can be applied in slightly different ways. Typically, each index occurs once in an upper (superscript) and once in a lower (subscript) position in a term; however, the convention can be applied more generally to any repeated indices within a term.<ref name="wolfram">{{cite web |url=http://mathworld.wolfram.com/EinsteinSummation.html |title=Einstein Summation |access-date=13 April 2011 |publisher=Wolfram Mathworld }}</ref> When dealing with [[Covariance and contravariance of vectors|covariant and contravariant]] vectors, where the position of an index indicates the type of vector, the first case usually applies; a covariant vector can only be contracted with a contravariant vector, corresponding to summation of the products of coefficients. On the other hand, when there is a fixed coordinate basis (or when not considering coordinate vectors), one may choose to use only subscripts; see ''{{section link||Superscripts and subscripts versus only subscripts}}'' below.

== Vector representations ==

=== Superscripts and subscripts versus only subscripts ===
In terms of [[covariance and contravariance of vectors]],
* upper indices represent components of [[Covariance and contravariance of vectors|contravariant vectors]] ([[coordinate vector|vector]]s),
* lower indices represent components of [[covariant vector|covariant]] vectors ([[covector]]s).
They transform contravariantly or covariantly, respectively, with respect to [[change of basis]].
In recognition of this fact, the following notation uses the same symbol both for a vector or covector and its ''components'', as in: <math display="block">\begin{align} v = v^i e_i = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix} \begin{bmatrix} v^1 \\ v^2 \\ \vdots \\ v^n \end{bmatrix} \\ w = w_i e^i = \begin{bmatrix} w_1 & w_2 & \cdots & w_n \end{bmatrix} \begin{bmatrix} e^1 \\ e^2 \\ \vdots \\ e^n \end{bmatrix} \end{align}</math> where <math> v </math> is the vector and <math> v^i </math> are its components (not the <math> i </math>th covector <math> v </math>), <math> w </math> is the covector and <math> w_i </math> are its components. The basis vector elements <math>e_i</math> are each column vectors, and the covector basis elements <math>e^i</math> are each row covectors. (See also {{slink|#Abstract description}}; [[dual basis|duality]], below and the [[Dual basis#Examples|examples]])

In the presence of a [[Degenerate bilinear form|non-degenerate form]] (an [[isomorphism]] {{math|''V'' → ''V''{{i sup|∗}}}}, for instance a [[Riemannian metric]] or [[Minkowski metric]]), one can [[raising and lowering indices|raise and lower indices]]. A basis gives such a form (via the [[dual basis]]), hence when working on {{math|'''R'''<sup>''n''</sup>}} with a [[Euclidean metric]] and a fixed [[orthonormal basis]], one has the option to work with only subscripts. However, if one changes coordinates, the way that coefficients change depends on the variance of the object, and one cannot ignore the distinction; see [[Covariance and contravariance of vectors]].

=== Mnemonics ===
In the above example, vectors are represented as {{math|''n'' × 1}} [[matrix (mathematics)|matrices]] (column vectors), while covectors are represented as {{math|1 × ''n''}} matrices (row covectors). When using the column vector convention:
* "'''Up'''per indices go '''up''' to down; '''l'''ower indices go '''l'''eft to right."
* "'''Co'''variant tensors are '''row''' vectors that have indices that are '''below''' ('''co-row-below''')."
* Covectors are row vectors: <math display="block">\begin{bmatrix} w_1 & \cdots & w_k \end{bmatrix}.</math> Hence the lower index indicates which ''column'' you are in.
* Contravariant vectors are column vectors: <math display="block">\begin{bmatrix} v^1 \\ \vdots \\ v^k \end{bmatrix}</math> Hence the upper index indicates which ''row'' you are in.

=== Abstract description ===
The virtue of Einstein notation is that it represents the [[Invariant (mathematics)|invariant]] quantities with a simple notation. In physics, a [[Scalar (physics)|scalar]] is invariant under transformations of basis. In particular, a [[Lorentz scalar]] is invariant under a [[Lorentz transformation]]. The individual terms in the sum are not. When the basis is changed, the ''components'' of a vector change by a [[linear transformation]] described by a matrix. This led Einstein to propose the convention that repeated indices imply the summation is to be done.

As for covectors, they change by the [[inverse matrix]]. This is designed to guarantee that the linear function associated with the covector, the sum above, is the same no matter what the basis is.

The value of the Einstein convention is that it applies to other [[vector space]]s built from {{math|''V''}} using the [[tensor product]] and [[dual space|duality]]. For example, {{math|''V'' ⊗ ''V''}}, the tensor product of {{math|''V''}} with itself, has a basis consisting of tensors of the form {{math|1='''e'''<sub>''ij''</sub> = '''e'''<sub>''i''</sub> ⊗ '''e'''<sub>''j''</sub>}}.
Any tensor {{math|'''T'''}} in {{math|''V'' ⊗ ''V''}} can be written as: <math display="block">\mathbf{T} = T^{ij}\mathbf{e}_{ij}.</math>

{{math|''V'' *}}, the dual of {{math|''V''}}, has a basis {{math|'''e'''<sup>1</sup>}}, {{math|'''e'''<sup>2</sup>}}, ..., {{math|'''e'''<sup>''n''</sup>}} which obeys the rule <math display="block">\mathbf{e}^i (\mathbf{e}_j) = \delta^i_j,</math> where {{math|''δ''}} is the [[Kronecker delta]]. As <math display="block">\operatorname{Hom}(V, W) = V^* \otimes W,</math> the row/column coordinates on a matrix correspond to the upper/lower indices on the tensor product.

== Common operations in this notation ==
In Einstein notation, the usual element reference <math>A_{mn}</math> for the <math>m</math>-th row and <math>n</math>-th column of matrix <math>A</math> becomes <math>{A^m}_{n}</math>. We can then write the following operations in Einstein notation as follows.

=== Inner product ===
The [[inner product]] of two vectors is the sum of the products of their corresponding components, with the indices of one vector lowered (see [[#Raising and lowering indices]]): <math display="block">\langle\mathbf u,\mathbf v\rangle = \langle\mathbf e_i, \mathbf e_j\rangle u^i v^j = u_j v^j</math> In the case of an [[orthonormal basis]], we have <math>u^j = u_j</math>, and the expression simplifies to: <math display="block">\langle\mathbf u,\mathbf v\rangle = \sum_j u^j v^j = u_j v^j</math>

=== Vector cross product ===
In three dimensions, the [[cross product]] of two vectors with respect to a [[Orientation (vector space)|positively oriented]] orthonormal basis, meaning that <math>\mathbf e_1\times\mathbf e_2=\mathbf e_3</math>, can be expressed as: <math display="block">\mathbf{u} \times \mathbf{v} = \varepsilon^i_{\,jk} u^j v^k \mathbf{e}_i</math> Here, <math>\varepsilon^i_{\,jk} = \varepsilon_{ijk}</math> is the [[Levi-Civita symbol]].
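Both contractions translate directly into <code>einsum</code> subscript strings. A minimal NumPy sketch with illustrative values, building the Levi-Civita symbol by hand:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Inner product u_j v^j: the repeated index j is summed.
inner = np.einsum('j,j->', u, v)

# Levi-Civita symbol eps_ijk: +1 on even permutations of (1,2,3),
# -1 on odd permutations, 0 when any index repeats.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Cross product components: (u x v)^i = eps^i_jk u^j v^k.
cross = np.einsum('ijk,j,k->i', eps, u, v)
```

Here `inner` agrees with `np.dot(u, v)` and `cross` with `np.cross(u, v)`, confirming that both formulas are ordinary contractions over repeated indices.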
Since the basis is orthonormal, raising the index <math>i</math> does not alter the value of <math>\varepsilon_{ijk}</math>, when treated as a tensor.

=== Matrix-vector multiplication ===
The product of a matrix {{math|''A<sub>ij</sub>''}} with a column vector {{math|''v<sub>j</sub>''}} is: <math display="block">\mathbf{u}_{i} = (\mathbf{A} \mathbf{v})_{i} = \sum_{j=1}^N A_{ij} v_{j}</math> equivalent to <math display="block">u^i = {A^i}_j v^j </math> This is a special case of matrix multiplication.

=== Matrix multiplication ===
The [[matrix multiplication|matrix product]] of two matrices {{math|''A<sub>ij</sub>''}} and {{math|''B<sub>jk</sub>''}} is: <math display="block">\mathbf{C}_{ik} = (\mathbf{A} \mathbf{B})_{ik} =\sum_{j=1}^N A_{ij} B_{jk}</math> equivalent to <math display="block">{C^i}_k = {A^i}_j {B^j}_k</math>

=== Trace ===
For a [[square matrix]] {{math|''A<sup>i</sup><sub>j</sub>''}}, the [[Trace (linear algebra)|trace]] is the sum of the diagonal elements, hence the sum over a common index {{math|''A<sup>i</sup><sub>i</sub>''}}.

=== Outer product ===
The [[outer product]] of the column vector {{math|''u<sup>i</sup>''}} by the row vector {{math|''v<sub>j</sub>''}} yields an {{math|''m'' × ''n''}} matrix {{math|'''A'''}}: <math display="block">{A^i}_j = u^i v_j = {(u v)^i}_j</math> Since {{math|''i''}} and {{math|''j''}} represent two ''different'' indices, there is no summation and the indices are not eliminated by the multiplication.

=== Raising and lowering indices ===
Given a [[tensor]], one can [[Raising and lowering indices|raise an index or lower an index]] by contracting the tensor with the [[metric tensor]], {{math|''g<sub>μν</sub>''}}.
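The matrix operations above, and contraction with a metric, can all be sketched with NumPy's <code>einsum</code> (a sketch with illustrative values; the Minkowski signature chosen here is one common convention):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
B = np.ones((3, 3))
v = np.array([1.0, 2.0, 3.0])

u = np.einsum('ij,j->i', A, v)      # matrix-vector product u^i = A^i_j v^j
C = np.einsum('ij,jk->ik', A, B)    # matrix product C^i_k = A^i_j B^j_k
tr = np.einsum('ii->', A)           # trace A^i_i (repeated index on one tensor)
outer = np.einsum('i,j->ij', v, v)  # outer product: i, j distinct, so no sum

# Lowering an index, T_mu_beta = g_mu_sigma T^sigma_beta, with a
# Minkowski metric of signature (+, -, -, -) on illustrative components.
g = np.diag([1.0, -1.0, -1.0, -1.0])
T = np.arange(16.0).reshape(4, 4)   # components T^sigma_beta
T_lowered = np.einsum('ms,sb->mb', g, T)
```

Each subscript string mirrors the index expression: repeated letters are contracted, and letters appearing on only one operand (as in the outer product) survive as free indices of the result.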
For example, taking the tensor {{math|''T<sup>α</sup><sub>β</sub>''}}, one can lower an index: <math display="block">g_{\mu\sigma} {T^\sigma}_\beta = T_{\mu\beta}</math> Or one can raise an index: <math display="block">g^{\mu\sigma} {T_\sigma}^\alpha = T^{\mu\alpha}</math>

== See also ==
* [[Tensor]]
* [[Abstract index notation]]
* [[Bra–ket notation]]
* [[Penrose graphical notation]]
* [[Levi-Civita symbol]]
* [[DeWitt notation]]

==Notes==
#{{note label|a}}This applies only for numerical indices. The situation is the opposite for [[abstract indices]]. Then, vectors themselves carry upper abstract indices and covectors carry lower abstract indices, as per the example in the [[#Introduction|introduction]] of this article. Elements of a basis of vectors may carry a lower ''numerical'' index and an upper ''abstract'' index.

==References==
{{reflist}}

==Bibliography==
* {{springer|id=E/e035220|first=L. P.|last=Kuptsov|title=Einstein rule}}.

==External links==
{{wikibooks|General Relativity|Einstein Summation Notation}}
* {{cite news |last=Rawlings |first=Steve |url=http://www-astro.physics.ox.ac.uk/~sr/lectures/vectors/lecture10final.pdfc |title=Lecture 10 – Einstein Summation Convention and Vector Identities |publisher=Oxford University |date=2007-02-01 |access-date=2008-07-02 |archive-url=https://web.archive.org/web/20170106185911/http://www-astro.physics.ox.ac.uk/~sr/lectures/vectors/lecture10final.pdfc |archive-date=2017-01-06 |url-status=dead }}
* {{cite web |title=Vector Calculation in Index Notation (Einstein's Summation Convention) |url=https://www.goldsilberglitzer.at/Rezepte/Rezept004E.pdf}}
* {{cite web |title=Understanding NumPy's einsum |url=https://stackoverflow.com/a/33641428 |website=Stack Overflow}}

{{tensors}}

[[Category:Mathematical notation]]
[[Category:Multilinear algebra]]
[[Category:Tensors]]
[[Category:Riemannian geometry]]
[[Category:Mathematical physics]]
[[Category:Albert Einstein]]