==Properties and operations{{anchor|Properties|Operations}}==
{{see also|Vector notation#Operations}}

The following section uses the [[Cartesian coordinate system]] with basis vectors
<math display=block>{\mathbf e}_1 = (1,0,0),\ {\mathbf e}_2 = (0,1,0),\ {\mathbf e}_3 = (0,0,1)</math>
and assumes that all vectors have the origin as a common base point. A vector '''a''' will be written as
<math display=block>{\mathbf a} = a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3.</math>

===Equality===
Two vectors are said to be equal if they have the same magnitude and direction. Equivalently, they are equal if their coordinates are equal. So two vectors
<math display=block>{\mathbf a} = a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3</math>
and
<math display=block>{\mathbf b} = b_1{\mathbf e}_1 + b_2{\mathbf e}_2 + b_3{\mathbf e}_3</math>
are equal if
<math display=block>a_1 = b_1,\quad a_2=b_2,\quad a_3=b_3.</math>

===Opposite, parallel, and antiparallel vectors {{anchor|antiparallel|opposite|parallel}}===
Two vectors are ''opposite'' if they have the same magnitude but [[opposite direction (geometry)|opposite direction]];<ref name=HMCS/> so two vectors
<math display=block>{\mathbf a} = a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3</math>
and
<math display=block>{\mathbf b} = b_1{\mathbf e}_1 + b_2{\mathbf e}_2 + b_3{\mathbf e}_3</math>
are opposite if
<math display=block>a_1 = -b_1,\quad a_2=-b_2,\quad a_3=-b_3.</math>
Two vectors are ''[[equidirectional]]'' (or ''codirectional'') if they have the same direction but not necessarily the same magnitude.<ref name=HMCS/> Two vectors are ''parallel'' if they have either the same or opposite direction, but not necessarily the same magnitude; two vectors are ''antiparallel'' if they have strictly opposite direction, but not necessarily the same magnitude.{{efn|"Can be brought to the same straight line by means of parallel displacement".<ref name=HMCS>{{cite book |last1=Harris |first1=John W. |last2=Stöcker |first2=Horst |year=1998 |title=Handbook of mathematics and computational science |publisher=Birkhäuser |isbn=0-387-94746-9 |at=Chapter 6, p. 332 |url=https://books.google.com/books?id=DnKLkOb_YfIC&pg=PA332 }}</ref>}}

===Addition and subtraction===
{{Further|Vector space}}
The sum of two vectors '''a''' and '''b''' may be defined as
<math display=block>\mathbf{a}+\mathbf{b} =(a_1+b_1)\mathbf{e}_1 +(a_2+b_2)\mathbf{e}_2 +(a_3+b_3)\mathbf{e}_3.</math>
The resulting vector is sometimes called the '''resultant vector''' of '''a''' and '''b'''.

The addition may be represented graphically by placing the tail of the arrow '''b''' at the head of the arrow '''a''', and then drawing an arrow from the tail of '''a''' to the head of '''b'''. The new arrow drawn represents the vector '''a''' + '''b''', as illustrated below:<ref name=":2" />

[[Image:Vector addition.svg|class=skin-invert-image|250px|center|The addition of two vectors '''a''' and '''b''']]

This addition method is sometimes called the ''parallelogram rule'' because '''a''' and '''b''' form the sides of a [[parallelogram]] and '''a''' + '''b''' is one of the diagonals. If '''a''' and '''b''' are bound vectors that have the same base point, this point will also be the base point of '''a''' + '''b'''. One can check geometrically that '''a''' + '''b''' = '''b''' + '''a''' and ('''a''' + '''b''') + '''c''' = '''a''' + ('''b''' + '''c''').
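The component-wise definition of addition can also be checked numerically. The following sketch is purely illustrative and not drawn from the cited sources; it assumes the NumPy library and represents each vector by its coordinate triple in the ''e'' basis.

<syntaxhighlight lang="python">
import numpy as np

# Coordinates of a and b in the basis e1, e2, e3 (illustrative values).
a = np.array([2.0, 0.0, 1.0])
b = np.array([-1.0, 3.0, 0.5])

# Component-wise addition gives the resultant vector a + b.
resultant = a + b                       # (a1+b1, a2+b2, a3+b3)

# Addition is commutative and associative.
c = np.array([0.0, 1.0, 4.0])
assert np.allclose(a + b, b + a)
assert np.allclose((a + b) + c, a + (b + c))
</syntaxhighlight>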
The difference of '''a''' and '''b''' is
<math display=block>\mathbf{a}-\mathbf{b} =(a_1-b_1)\mathbf{e}_1 +(a_2-b_2)\mathbf{e}_2 +(a_3-b_3)\mathbf{e}_3.</math>
Subtraction of two vectors can be geometrically illustrated as follows: to subtract '''b''' from '''a''', place the tails of '''a''' and '''b''' at the same point, and then draw an arrow from the head of '''b''' to the head of '''a'''. This new arrow represents the vector (−'''b''') + '''a''' = '''a''' − '''b''', where (−'''b''') is the opposite of '''b''' (see the figure).

[[Image:Vector subtraction.svg|class=skin-invert-image|125px|center|The subtraction of two vectors '''a''' and '''b''']]

===Scalar multiplication===
{{main|Scalar multiplication}}
[[Image:Scalar multiplication by r=3.svg|class=skin-invert-image|250px|thumb|right|Scalar multiplication of a vector by a factor of 3 stretches the vector out.]]
A vector may also be multiplied, or re-''scaled'', by any [[real number]] ''r''. In the context of [[vector analysis|conventional vector algebra]], these real numbers are often called '''scalars''' (from ''scale'') to distinguish them from vectors. The operation of multiplying a vector by a scalar is called ''scalar multiplication''. The resulting vector is
<math display=block>r\mathbf{a}=(ra_1)\mathbf{e}_1 +(ra_2)\mathbf{e}_2 +(ra_3)\mathbf{e}_3.</math>

Intuitively, multiplying by a scalar ''r'' stretches a vector out by a factor of ''r''. Geometrically, this can be visualized (at least in the case when ''r'' is an integer) as placing ''r'' copies of the vector in a line where the endpoint of one vector is the initial point of the next vector. If ''r'' is negative, then the vector changes direction: it flips around by an angle of 180°. Two examples (''r'' = −1 and ''r'' = 2) are given below:

[[Image:Scalar multiplication of vectors2.svg|class=skin-invert-image|250px|thumb|left|The scalar multiplications −'''a''' and 2'''a''' of a vector '''a''']]

Scalar multiplication is [[Distributivity|distributive]] over vector addition in the following sense: ''r''('''a''' + '''b''') = ''r'''''a''' + ''r'''''b''' for all vectors '''a''' and '''b''' and all scalars ''r''. One can also show that '''a''' − '''b''' = '''a''' + (−1)'''b'''.
<!-- The set of all geometrical vectors, together with the operations of vector addition and scalar multiplication, satisfies all the axioms of a [[vector space]]. Similarly, the set of all bound vectors with a common base point forms a vector space. This is where the term "vector space" originated. -->
{{clear}}

===Length===<!-- This section is linked from [[Law of cosines]] -->
The ''[[length]]'', ''[[Magnitude (mathematics)|magnitude]]'' or ''[[Norm (mathematics)|norm]]'' of the vector '''a''' is denoted by ‖'''a'''‖ or, less commonly, |'''a'''|, which is not to be confused with the [[absolute value]] (a scalar "norm"). The length of the vector '''a''' can be computed with the ''[[Euclidean norm]]''
<math display=block>\left\|\mathbf{a}\right\|=\sqrt{a_1^2+a_2^2+a_3^2},</math>
which is a consequence of the [[Pythagorean theorem]], since the basis vectors '''e'''<sub>1</sub>, '''e'''<sub>2</sub>, '''e'''<sub>3</sub> are orthogonal unit vectors.
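Continuing the illustrative NumPy sketch from above (an assumption of this illustration, not part of the cited material), subtraction, scalar multiplication, and the Euclidean norm all act on the coordinates directly.

<syntaxhighlight lang="python">
import numpy as np

a = np.array([2.0, 0.0, 1.0])
b = np.array([-1.0, 3.0, 0.5])

# Subtraction is component-wise and equals (-b) + a, i.e. a + (-1)*b.
difference = a - b
assert np.allclose(difference, (-b) + a)
assert np.allclose(difference, a + (-1) * b)

# Scalar multiplication rescales every component; a negative scalar
# reverses the direction of the vector.
assert np.allclose(2 * a, np.array([4.0, 0.0, 2.0]))

# The Euclidean norm (length) follows the Pythagorean formula.
length = np.sqrt(a[0]**2 + a[1]**2 + a[2]**2)
assert np.isclose(length, np.linalg.norm(a))
</syntaxhighlight>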
The length of '''a''' is also equal to the square root of the [[dot product]] (discussed below) of the vector with itself:
<math display=block>\left\|\mathbf{a}\right\|=\sqrt{\mathbf{a}\cdot\mathbf{a}}.</math>

====Unit vector====
[[Image:Vector normalization.svg|class=skin-invert-image|thumb|right|The normalization of a vector '''a''' into a unit vector '''â''']]
{{main|Unit vector}}
A ''unit vector'' is any vector with a length of one; normally unit vectors are used simply to indicate direction. Any nonzero vector can be divided by its length to create a unit vector.<ref name="1.1: Vectors"/> This is known as ''normalizing'' a vector. A unit vector is often indicated with a hat as in '''â'''. To normalize a vector {{nowrap|1='''a''' = (''a''<sub>1</sub>, ''a''<sub>2</sub>, ''a''<sub>3</sub>)}}, scale the vector by the reciprocal of its length ‖'''a'''‖. That is:
<math display=block>\mathbf{\hat{a}} = \frac{\mathbf{a}}{\left\|\mathbf{a}\right\|} = \frac{a_1}{\left\|\mathbf{a}\right\|}\mathbf{e}_1 + \frac{a_2}{\left\|\mathbf{a}\right\|}\mathbf{e}_2 + \frac{a_3}{\left\|\mathbf{a}\right\|}\mathbf{e}_3</math>

====Zero vector====
{{main|Zero vector}}
The ''zero vector'' is the vector with length zero. Written out in coordinates, the vector is {{nowrap|(0, 0, 0)}}, and it is commonly denoted <math>\vec{0}</math>, '''0''', or simply 0. Unlike any other vector, it has an arbitrary or indeterminate direction, and cannot be normalized (that is, there is no unit vector that is a multiple of the zero vector). The sum of the zero vector with any vector '''a''' is '''a''' (that is, {{nowrap|1='''0''' + '''a''' = '''a'''}}).

===Dot product===
{{main|Dot product}}
The ''dot product'' of two vectors '''a''' and '''b''' (sometimes called the ''[[inner product space|inner product]]'', or, since its result is a scalar, the ''scalar product'') is denoted by '''a''' ∙ '''b''' and is defined as
<math display=block>\mathbf{a}\cdot\mathbf{b} =\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\cos\theta,</math>
where ''θ'' is the measure of the [[angle]] between '''a''' and '''b''' (see [[trigonometric function]] for an explanation of cosine). Geometrically, this means that '''a''' and '''b''' are drawn with a common start point, and then the length of '''a''' is multiplied with the length of the component of '''b''' that points in the same direction as '''a'''.

The dot product can also be defined as the sum of the products of the components of each vector,
<math display=block>\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3.</math>
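Normalization and the two forms of the dot product also have direct numerical counterparts. The sketch below is again an illustrative assumption (NumPy, with arbitrary example vectors) rather than anything prescribed by the sources.

<syntaxhighlight lang="python">
import numpy as np

a = np.array([2.0, 0.0, 1.0])
b = np.array([-1.0, 3.0, 0.5])

# Normalization: divide a nonzero vector by its length to obtain a unit vector.
a_hat = a / np.linalg.norm(a)
assert np.isclose(np.linalg.norm(a_hat), 1.0)

# Dot product as the sum of component products ...
dot_ab = a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
assert np.isclose(dot_ab, np.dot(a, b))

# ... from which the angle between a and b can be recovered,
# since a . b = |a| |b| cos(theta).
theta = np.arccos(dot_ab / (np.linalg.norm(a) * np.linalg.norm(b)))
</syntaxhighlight>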
===Cross product===
{{main|Cross product}}
The ''cross product'' (also called the ''vector product'' or ''outer product'') is only meaningful in three or [[Seven-dimensional cross product|seven]] dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted '''a''' × '''b''', is a vector perpendicular to both '''a''' and '''b''' and is defined as
<math display=block>\mathbf{a}\times\mathbf{b} =\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\sin(\theta)\,\mathbf{n}</math>
where ''θ'' is the measure of the angle between '''a''' and '''b''', and '''n''' is a unit vector [[perpendicular]] to both '''a''' and '''b''' which completes a [[Right-hand rule|right-handed]] system. The right-handedness constraint is necessary because there exist ''two'' unit vectors that are perpendicular to both '''a''' and '''b''', namely, '''n''' and (−'''n''').

[[Image:Cross product vector.svg|class=skin-invert-image|thumb|right|An illustration of the cross product]]

The cross product '''a''' × '''b''' is defined so that '''a''', '''b''', and '''a''' × '''b''' also become a right-handed system (although '''a''' and '''b''' are not necessarily [[orthogonal]]). This is the [[right-hand rule]]. The length of '''a''' × '''b''' can be interpreted as the area of the parallelogram having '''a''' and '''b''' as sides.

The cross product can be written as
<math display=block>{\mathbf a}\times{\mathbf b} = (a_2 b_3 - a_3 b_2) {\mathbf e}_1 + (a_3 b_1 - a_1 b_3) {\mathbf e}_2 + (a_1 b_2 - a_2 b_1) {\mathbf e}_3.</math>
For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a [[pseudovector]] instead of a vector (see below).

===Scalar triple product===
{{main|Triple product#Scalar triple product|l1=Scalar triple product}}
The ''scalar triple product'' (also called the ''box product'' or ''mixed triple product'') is not really a new operator, but a way of applying the other two multiplication operators to three vectors. The scalar triple product is sometimes denoted by ('''a''' '''b''' '''c''') and defined as
<math display=block>(\mathbf{a}\ \mathbf{b}\ \mathbf{c}) =\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}).</math>

It has three primary uses. First, the absolute value of the box product is the volume of the [[parallelepiped]] which has edges that are defined by the three vectors. Second, the scalar triple product is zero if and only if the three vectors are [[linear independence|linearly dependent]], which can be easily proved by considering that for the three vectors to enclose no volume, they must all lie in the same plane. Third, the box product is positive if and only if the three vectors '''a''', '''b''' and '''c''' are right-handed.

In components (''with respect to a right-handed orthonormal basis''), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the [[determinant]] of the 3-by-3 [[Matrix (mathematics)|matrix]] having the three vectors as rows:
<math display=block>(\mathbf{a}\ \mathbf{b}\ \mathbf{c})=\begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \\ \end{vmatrix}</math>

The scalar triple product is linear in all three entries and anti-symmetric in the following sense:
<math display=block> (\mathbf{a}\ \mathbf{b}\ \mathbf{c}) = (\mathbf{c}\ \mathbf{a}\ \mathbf{b}) = (\mathbf{b}\ \mathbf{c}\ \mathbf{a})= -(\mathbf{a}\ \mathbf{c}\ \mathbf{b}) = -(\mathbf{b}\ \mathbf{a}\ \mathbf{c}) = -(\mathbf{c}\ \mathbf{b}\ \mathbf{a}).</math>
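Both products can be checked numerically as well. The sketch below, once more an illustrative NumPy assumption, verifies the perpendicularity of the cross product, the determinant form of the scalar triple product, and its anti-symmetry.

<syntaxhighlight lang="python">
import numpy as np

a = np.array([2.0, 0.0, 1.0])
b = np.array([-1.0, 3.0, 0.5])
c = np.array([0.0, 1.0, 4.0])

# The cross product is perpendicular to both of its arguments.
cross_ab = np.cross(a, b)
assert np.isclose(np.dot(cross_ab, a), 0.0)
assert np.isclose(np.dot(cross_ab, b), 0.0)

# The scalar triple product a . (b x c) equals the determinant of the
# 3-by-3 matrix whose rows are a, b and c.
box = np.dot(a, np.cross(b, c))
assert np.isclose(box, np.linalg.det(np.array([a, b, c])))

# Anti-symmetry: swapping two of the vectors flips the sign.
assert np.isclose(box, -np.dot(a, np.cross(c, b)))
</syntaxhighlight>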
===Conversion between multiple Cartesian bases===
All examples thus far have dealt with vectors expressed in terms of the same basis, namely, the ''e'' basis {'''e'''<sub>1</sub>, '''e'''<sub>2</sub>, '''e'''<sub>3</sub>}. However, a vector can be expressed in terms of any number of different bases that are not necessarily aligned with each other, and still remain the same vector. In the ''e'' basis, a vector '''a''' is expressed, by definition, as
<math display=block>\mathbf{a} = p\mathbf{e}_1 + q\mathbf{e}_2 + r\mathbf{e}_3.</math>
The scalar components in the ''e'' basis are, by definition,
<math display=block>\begin{align} p &= \mathbf{a}\cdot\mathbf{e}_1, \\ q &= \mathbf{a}\cdot\mathbf{e}_2, \\ r &= \mathbf{a}\cdot\mathbf{e}_3. \end{align}</math>

In another orthonormal basis ''n'' = {'''n'''<sub>1</sub>, '''n'''<sub>2</sub>, '''n'''<sub>3</sub>} that is not necessarily aligned with ''e'', the vector '''a''' is expressed as
<math display=block>\mathbf{a} = u\mathbf{n}_1 + v\mathbf{n}_2 + w\mathbf{n}_3</math>
and the scalar components in the ''n'' basis are, by definition,
<math display=block>\begin{align} u &= \mathbf{a}\cdot\mathbf{n}_1, \\ v &= \mathbf{a}\cdot\mathbf{n}_2, \\ w &= \mathbf{a}\cdot\mathbf{n}_3. \end{align}</math>

The values of ''p'', ''q'', ''r'', and ''u'', ''v'', ''w'' relate to the unit vectors in such a way that the resulting vector sum is exactly the same physical vector '''a''' in both cases. It is common to encounter vectors known in terms of different bases (for example, one basis fixed to the Earth and a second basis fixed to a moving vehicle). In such a case it is necessary to develop a method to convert between bases so the basic vector operations such as addition and subtraction can be performed.

One way to express ''u'', ''v'', ''w'' in terms of ''p'', ''q'', ''r'' is to use column matrices along with a [[direction cosine matrix]] containing the information that relates the two bases. Such an expression can be formed by substitution of the above equations to form
<math display=block>\begin{align} u &= (p\mathbf{e}_1 + q\mathbf{e}_2 + r\mathbf{e}_3)\cdot\mathbf{n}_1, \\ v &= (p\mathbf{e}_1 + q\mathbf{e}_2 + r\mathbf{e}_3)\cdot\mathbf{n}_2, \\ w &= (p\mathbf{e}_1 + q\mathbf{e}_2 + r\mathbf{e}_3)\cdot\mathbf{n}_3. \end{align}</math>
Distributing the dot-multiplication gives
<math display=block>\begin{align} u &= p\mathbf{e}_1\cdot\mathbf{n}_1 + q\mathbf{e}_2\cdot\mathbf{n}_1 + r\mathbf{e}_3\cdot\mathbf{n}_1, \\ v &= p\mathbf{e}_1\cdot\mathbf{n}_2 + q\mathbf{e}_2\cdot\mathbf{n}_2 + r\mathbf{e}_3\cdot\mathbf{n}_2, \\ w &= p\mathbf{e}_1\cdot\mathbf{n}_3 + q\mathbf{e}_2\cdot\mathbf{n}_3 + r\mathbf{e}_3\cdot\mathbf{n}_3. \end{align}</math>
Replacing each dot product with a unique scalar gives
<math display=block>\begin{align} u &= c_{11}p + c_{12}q + c_{13}r, \\ v &= c_{21}p + c_{22}q + c_{23}r, \\ w &= c_{31}p + c_{32}q + c_{33}r, \end{align}</math>
and these equations can be expressed as the single matrix equation
<math display=block>\begin{bmatrix} u \\ v \\ w \\ \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix} \begin{bmatrix} p \\ q \\ r \end{bmatrix}.</math>
This matrix equation relates the scalar components of '''a''' in the ''n'' basis (''u'', ''v'', and ''w'') with those in the ''e'' basis (''p'', ''q'', and ''r'').
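The matrix form lends itself directly to computation. The sketch below is an illustrative assumption (NumPy, with an arbitrarily chosen ''n'' basis obtained by rotating the ''e'' basis about '''e'''<sub>3</sub>); it builds the matrix entry by entry from the dot products shown above and applies it to a vector's components.

<syntaxhighlight lang="python">
import numpy as np

# Standard e basis, and an n basis obtained by rotating e about e3 by 30
# degrees (the particular rotation is illustrative only).
e = np.eye(3)                                  # rows are e1, e2, e3
t = np.radians(30.0)
n = np.array([[ np.cos(t), np.sin(t), 0.0],    # rows are n1, n2, n3
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])

# Each matrix entry is the dot product of one n basis vector with one e
# basis vector, as in the distributed equations above.
C = np.array([[np.dot(n[j], e[k]) for k in range(3)] for j in range(3)])

# Components (p, q, r) of a vector a in the e basis ...
pqr = np.array([2.0, 0.0, 1.0])
a = pqr                                        # a written out in the e basis

# ... become components (u, v, w) in the n basis via the matrix equation.
uvw = C @ pqr
assert np.allclose(uvw, [np.dot(a, n[j]) for j in range(3)])
</syntaxhighlight>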
Each matrix element ''c''<sub>''jk''</sub> is the [[Direction cosine#Cartesian coordinates|direction cosine]] relating '''n'''<sub>''j''</sub> to '''e'''<sub>''k''</sub>.<ref name="dynon16">{{harvnb|Kane|Levinson|1996|pp=20–22}}</ref> The term ''direction cosine'' refers to the [[cosine]] of the angle between two unit vectors, which is also equal to their [[#Dot product|dot product]].<ref name="dynon16"/> Therefore,
<math display=block>\begin{align} c_{11} &= \mathbf{n}_1\cdot\mathbf{e}_1 \\ c_{12} &= \mathbf{n}_1\cdot\mathbf{e}_2 \\ c_{13} &= \mathbf{n}_1\cdot\mathbf{e}_3 \\ c_{21} &= \mathbf{n}_2\cdot\mathbf{e}_1 \\ c_{22} &= \mathbf{n}_2\cdot\mathbf{e}_2 \\ c_{23} &= \mathbf{n}_2\cdot\mathbf{e}_3 \\ c_{31} &= \mathbf{n}_3\cdot\mathbf{e}_1 \\ c_{32} &= \mathbf{n}_3\cdot\mathbf{e}_2 \\ c_{33} &= \mathbf{n}_3\cdot\mathbf{e}_3 \end{align}</math>

By referring collectively to '''e'''<sub>1</sub>, '''e'''<sub>2</sub>, '''e'''<sub>3</sub> as the ''e'' basis and to '''n'''<sub>1</sub>, '''n'''<sub>2</sub>, '''n'''<sub>3</sub> as the ''n'' basis, the matrix containing all the ''c''<sub>''jk''</sub> is known as the "[[transformation matrix]] from ''e'' to ''n''", or the "[[rotation matrix]] from ''e'' to ''n''" (because it can be imagined as the "rotation" of a vector from one basis to another), or the "direction cosine matrix from ''e'' to ''n''"<ref name="dynon16"/> (because it contains direction cosines). The properties of a rotation matrix are such that its [[matrix inverse|inverse]] is equal to its [[matrix transpose|transpose]]. This means that the "rotation matrix from ''e'' to ''n''" is the transpose of the "rotation matrix from ''n'' to ''e''".

The properties of a direction cosine matrix ''C'' are:<ref>{{Cite book|title=Applied mathematics in integrated navigation systems|last=Rogers |first=Robert M. |date=2007|publisher=American Institute of Aeronautics and Astronautics|isbn=9781563479274|edition=3rd|location=Reston, Va.|oclc=652389481}}</ref>
* the determinant is unity, |''C''| = 1;
* the inverse is equal to the transpose;
* the rows and columns are orthogonal unit vectors, so the dot product of any two distinct rows (or columns) is zero.

The advantage of this method is that a direction cosine matrix can usually be obtained independently by using [[Euler angles]] or a [[quaternion]] to relate the two vector bases, so the basis conversions can be performed directly, without having to work out all the dot products described above. By applying several matrix multiplications in succession, any vector can be expressed in any basis so long as the direction cosines relating the successive bases are known.<ref name="dynon16"/>
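A short numerical check of these properties, and of chaining conversions through successive bases, is sketched below. The helper functions and angles are hypothetical and purely illustrative, and NumPy is again an assumption of this illustration.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical helpers: direction cosine matrices for rotating the basis
# about its third and first axes (angles are illustrative).
def rot_3(t):
    return np.array([[ np.cos(t), np.sin(t), 0.0],
                     [-np.sin(t), np.cos(t), 0.0],
                     [ 0.0,       0.0,       1.0]])

def rot_1(t):
    return np.array([[1.0,  0.0,        0.0      ],
                     [0.0,  np.cos(t),  np.sin(t)],
                     [0.0, -np.sin(t),  np.cos(t)]])

C_en = rot_3(np.radians(30.0))            # from the e basis to an n basis

# Properties of a direction cosine matrix.
assert np.isclose(np.linalg.det(C_en), 1.0)       # determinant is unity
assert np.allclose(np.linalg.inv(C_en), C_en.T)   # inverse equals transpose

# Chaining successive bases e -> n -> m is a matrix product.
C_nm = rot_1(np.radians(45.0))            # from the n basis to an m basis
C_em = C_nm @ C_en
a_e = np.array([2.0, 0.0, 1.0])           # components of a in the e basis
assert np.allclose(C_em @ a_e, C_nm @ (C_en @ a_e))
</syntaxhighlight>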
===Other dimensions===
With the exception of the cross and triple products, the above formulae generalise to two dimensions and higher dimensions. For example, addition generalises to two dimensions as
<math display=block>(a_1{\mathbf e}_1 + a_2{\mathbf e}_2)+(b_1{\mathbf e}_1 + b_2{\mathbf e}_2) = (a_1+b_1){\mathbf e}_1 + (a_2+b_2){\mathbf e}_2,</math>
and in four dimensions as
<math display=block>\begin{align} (a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3 + a_4{\mathbf e}_4) &+ (b_1{\mathbf e}_1 + b_2{\mathbf e}_2 + b_3{\mathbf e}_3 + b_4{\mathbf e}_4) =\\ (a_1+b_1){\mathbf e}_1 + (a_2+b_2){\mathbf e}_2 &+ (a_3+b_3){\mathbf e}_3 + (a_4+b_4){\mathbf e}_4. \end{align}</math>

The cross product does not readily generalise to other dimensions, though the closely related [[Exterior algebra#Areas in the plane|exterior product]] does; its result is a [[bivector]]. In two dimensions this is simply a [[pseudoscalar]],
<math display=block>(a_1{\mathbf e}_1 + a_2{\mathbf e}_2)\wedge(b_1{\mathbf e}_1 + b_2{\mathbf e}_2) = (a_1 b_2 - a_2 b_1)\mathbf{e}_1 \mathbf{e}_2.</math>

A [[seven-dimensional cross product]] is similar to the cross product in that its result is a vector orthogonal to the two arguments; there is, however, no natural way of selecting one of the possible such products.
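The component-wise operations carry over to any number of dimensions, and in two dimensions the exterior product reduces to a single number. A brief NumPy sketch (again an assumption of this illustration, not part of the cited material) follows.

<syntaxhighlight lang="python">
import numpy as np

# Component-wise addition and the dot product work in any dimension,
# here in four dimensions.
a4 = np.array([1.0, 2.0, 3.0, 4.0])
b4 = np.array([0.5, -1.0, 2.0, 0.0])
sum4 = a4 + b4
dot4 = np.dot(a4, b4)

# In two dimensions the exterior (wedge) product of a and b reduces to the
# pseudoscalar a1*b2 - a2*b1, the signed area of the parallelogram they span,
# which equals the determinant of the 2-by-2 matrix with rows a and b.
a2 = np.array([2.0, 1.0])
b2 = np.array([0.5, 3.0])
pseudoscalar = a2[0]*b2[1] - a2[1]*b2[0]
assert np.isclose(pseudoscalar, np.linalg.det(np.array([a2, b2])))
</syntaxhighlight>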