{{Short description|Geometric object that has length and direction}} {{For-text|mathematical vectors in general|[[Vector (mathematics and physics)]]|other uses|[[Vector (disambiguation)]]}} [[File:Vector from A to B.svg|class=skin-invert-image|thumb|A vector <math display=inline>\stackrel \rightarrow{a}</math> pointing from point ''A'' to point ''B'']] In [[mathematics]], [[physics]], and [[engineering]], a '''Euclidean vector''' or simply a '''vector''' (sometimes called a '''geometric vector'''<ref>{{harvnb|Ivanov|2001}}</ref> or '''spatial vector'''<ref>{{harvnb|Heinbockel|2001}}</ref>) is a geometric object that has [[Magnitude (mathematics)|magnitude]] (or [[Euclidean norm|length]]) and [[Direction (geometry)|direction]]. Euclidean vectors can be added and scaled to form a [[vector space]]. A ''[[vector quantity]]'' is a vector-valued [[physical quantity]], including [[units of measurement]] and possibly a [[support (mathematics)|support]], formulated as a ''[[directed line segment]]''. A vector is frequently depicted graphically as an arrow connecting an ''initial point'' ''A'' with a ''terminal point'' ''B'',<ref>{{harvnb|Itô|1993|p=1678}}; {{harvnb|Pedoe|1988}}</ref> and denoted by <math display=inline>\stackrel \longrightarrow{AB}.</math> A vector is what is needed to "carry" the point ''A'' to the point ''B''; the Latin word {{lang|la|vector}} means 'carrier'.<ref>Latin: {{lang|la|vectus}}, [[perfect participle]] of {{lang|la|vehere}}, 'to carry', {{lang|la|veho}} = 'I carry'. For historical development of the word ''vector'', see {{OED|vector ''n.''}} and {{cite web|author = Jeff Miller| url = http://jeff560.tripod.com/v.html | title = Earliest Known Uses of Some of the Words of Mathematics | access-date = 2007-05-25}}</ref> It was first used by 18th century [[astronomers]] investigating planetary revolution around the Sun.<ref>{{cite book|title=The Oxford English Dictionary.|year=2001|publisher=Clarendon Press|location=London|isbn=9780195219425|edition=2nd.}}</ref> The magnitude of the vector is the distance between the two points, and the direction refers to the direction of [[Displacement (geometry)|displacement]] from ''A'' to ''B''. Many [[algebraic operation]]s on [[real number]]s such as [[addition]], [[subtraction]], [[multiplication]], and [[Additive inverse|negation]] have close analogues for vectors,<ref name=":1">{{Cite web|title=vector {{!}} Definition & Facts|url=https://www.britannica.com/science/vector-mathematics|access-date=2020-08-19|website=Encyclopedia Britannica|language=en}}</ref> operations which obey the familiar algebraic laws of [[commutativity]], [[associativity]], and [[distributivity]]. These operations and associated laws qualify [[Euclidean space|Euclidean]] vectors as an example of the more generalized concept of vectors defined simply as elements of a [[vector space]]. Vectors play an important role in [[physics]]: the [[velocity]] and [[acceleration]] of a moving object and the [[force]]s acting on it can all be described with vectors.<ref name=":2">{{Cite web|title=Vectors|url=https://www.mathsisfun.com/algebra/vectors.html|access-date=2020-08-19|website=www.mathsisfun.com}}</ref> Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances (except, for example, [[position (vector)|position]] or [[displacement (vector)|displacement]]), their magnitude and direction can still be represented by the length and direction of an arrow. 
The mathematical representation of a physical vector depends on the [[coordinate system]] used to describe it. Other vector-like objects that describe [[physical quantities]] and transform in a similar way under changes of the coordinate system include [[pseudovector]]s and [[tensor]]s.<ref>{{Cite web|last=Weisstein|first=Eric W.|title=Vector|url=https://mathworld.wolfram.com/Vector.html|access-date=2020-08-19|website=mathworld.wolfram.com|language=en}}</ref> ==History== The vector concept, as it is known today, is the result of a gradual development over a period of more than 200 years. About a dozen people contributed significantly to its development.<ref name="Crowe">[[Michael J. Crowe]], [[A History of Vector Analysis]]; see also his {{cite web |url=http://www.nku.edu/~curtin/crowe_oresme.pdf |title=lecture notes |access-date=2010-09-04 |url-status=dead |archive-url=https://web.archive.org/web/20040126161844/http://www.nku.edu/~curtin/crowe_oresme.pdf |archive-date=January 26, 2004 }} on the subject.</ref> In 1835, [[Giusto Bellavitis]] abstracted the basic idea when he established the concept of [[equipollence (geometry)|equipollence]]. Working in a Euclidean plane, he made equipollent any pair of [[parallel (geometry)|parallel]] line segments of the same length and orientation. Essentially, he realized an [[equivalence relation]] on the pairs of points (bipoints) in the plane, and thus erected the first space of vectors in the plane.<ref name="Crowe"/>{{rp|52–4}} The term ''vector'' was introduced by [[William Rowan Hamilton]] as part of a [[quaternion]], which is a sum {{math|1=''q'' = ''s'' + ''v''}} of a [[real number]] {{math|''s''}} (also called ''scalar'') and a 3-dimensional ''vector''. Like Bellavitis, Hamilton viewed vectors as representative of [[equivalence class|classes]] of equipollent directed segments. As [[complex number]]s use an [[imaginary unit]] to complement the [[real line]], Hamilton considered the vector {{math|''v''}} to be the ''imaginary part'' of a quaternion:<ref>W. R. Hamilton (1846) ''London, Edinburgh & Dublin Philosophical Magazine'' 3rd series 29 27</ref> {{blockquote|The algebraically imaginary part, being geometrically constructed by a straight line, or radius vector, which has, in general, for each determined quaternion, a determined length and determined direction in space, may be called the vector part, or simply the vector of the quaternion.}} Several other mathematicians developed vector-like systems in the middle of the nineteenth century, including [[Augustin Cauchy]], [[Hermann Grassmann]], [[August Möbius]], [[Comte de Saint-Venant]], and [[Matthew O'Brien (mathematician)|Matthew O'Brien]]. Grassmann's 1840 work ''Theorie der Ebbe und Flut'' (Theory of the Ebb and Flow) was the first system of spatial analysis that is similar to today's system, and had ideas corresponding to the cross product, scalar product and vector differentiation. Grassmann's work was largely neglected until the 1870s.<ref name="Crowe"/> [[Peter Guthrie Tait]] carried the quaternion standard after Hamilton. His 1867 ''Elementary Treatise of Quaternions'' included extensive treatment of the nabla or [[del|del operator]] ∇. In 1878, ''[[Elements of Dynamic]]'' was published by [[William Kingdon Clifford]]. Clifford simplified the quaternion study by isolating the [[dot product]] and [[cross product]] of two vectors from the complete quaternion product. 
This approach made vector calculations available to engineers—and others working in three dimensions and skeptical of the fourth. [[Josiah Willard Gibbs]], who was exposed to quaternions through [[James Clerk Maxwell]]'s ''Treatise on Electricity and Magnetism'', separated off their vector part for independent treatment. The first half of Gibbs's ''Elements of Vector Analysis'', published in 1881, presents what is essentially the modern system of vector analysis.<ref name="Crowe" /><ref name=":1" /> In 1901, [[Edwin Bidwell Wilson]] published ''[[Vector Analysis]]'', adapted from Gibbs's lectures, which banished any mention of quaternions in the development of vector calculus. ==Overview== In [[physics]] and [[engineering]], a vector is typically regarded as a geometric entity characterized by a [[magnitude (mathematics)|magnitude]] and a [[relative direction]]. It is formally defined as a [[directed line segment]], or arrow, in a [[Euclidean space]].<ref>{{harvnb|Itô|1993|p=1678}}</ref> In [[pure mathematics]], a [[vector (mathematics)|vector]] is defined more generally as any element of a [[vector space]]. In this context, vectors are abstract entities which may or may not be characterized by a magnitude and a direction. This generalized definition implies that the above-mentioned geometric entities are a special kind of abstract vectors, as they are elements of a special kind of vector space called [[Euclidean space]]. This particular article is about vectors strictly defined as arrows in Euclidean space. When it becomes necessary to distinguish these special vectors from vectors as defined in pure mathematics, they are sometimes referred to as '''''geometric''''', '''''spatial''''', or '''''Euclidean''''' vectors. A Euclidean vector may possess a definite ''initial point'' and ''terminal point''; such a condition may be emphasized calling the result a '''''bound vector'''''.<ref>Formerly known as ''located vector''. See {{harvnb|Lang|1986|page=9}}.</ref> When only the magnitude and direction of the vector matter, and the particular initial or terminal points are of no importance, the vector is called a '''''free vector'''''. The distinction between bound and free vectors is especially relevant in mechanics, where a [[force]] applied to a body has a point of contact (see [[resultant force]] and [[Couple (mechanics)|couple]]). Two arrows <math>\stackrel {\,\longrightarrow}{AB}</math> and <math>\stackrel {\,\longrightarrow}{A'B'}</math> in space represent the same free vector if they have the same magnitude and direction: that is, they are [[equipollence (geometry)|equipollent]] if the quadrilateral ''ABB′A′'' is a [[parallelogram]]. If the Euclidean space is equipped with a choice of [[origin (mathematics)|origin]], then a free vector is equivalent to the bound vector of the same magnitude and direction whose initial point is the origin. The term ''vector'' also has generalizations to higher dimensions, and to more formal approaches with much wider applications. ===Further information=== In classical [[Euclidean geometry]] (i.e., [[synthetic geometry]]), vectors were introduced (during the 19th century) as [[equivalence class]]es under [[Equipollence (geometry)|equipollence]], of [[ordered pair]]s of points; two pairs {{math|(''A'', ''B'')}} and {{math|(''C'', ''D'')}} being equipollent if the points {{math|''A'', ''B'', ''D'', ''C''}}, in this order, form a [[parallelogram]]. 
Such an equivalence class is called a ''vector'', more precisely, a Euclidean vector.<ref>In some old texts, the pair {{math|(''A'', ''B'')}} is called a ''bound vector'', and its equivalence class is called a ''free vector''.</ref> The equivalence class of {{math|(''A'', ''B'')}} is often denoted <math>\overrightarrow{AB}.</math> A Euclidean vector is thus an equivalence class of directed segments with the same magnitude (e.g., the length of the [[line segment]] {{math|(''A'', ''B'')}}) and same direction (e.g., the direction from {{mvar|A}} to {{mvar|B}}).<ref name="1.1: Vectors">{{Cite web|date=2013-11-07|title=1.1: Vectors|url=https://math.libretexts.org/Bookshelves/Calculus/Supplemental_Modules_(Calculus)/Vector_Calculus/1%3A_Vector_Basics/1.1%3A_Vectors|access-date=2020-08-19|website=Mathematics LibreTexts|language=en}}</ref> In physics, Euclidean vectors are used to represent physical quantities that have both magnitude and direction, but are not located at a specific place, in contrast to [[Scalar (mathematics)|scalar]]s, which have no direction.<ref name=":2"/> For example, [[velocity]], [[force]]s and [[acceleration]] are represented by vectors. In modern geometry, Euclidean spaces are often defined from [[linear algebra]]. More precisely, a Euclidean space {{mvar|E}} is defined as a set to which is associated an [[inner product space]] of finite dimension over the reals <math>\overrightarrow{E},</math> and a [[Group action (mathematics)|group action]] of the [[additive group]] of <math>\overrightarrow{E},</math> which is [[free action|free]] and [[transitive action|transitive]] (See [[Affine space]] for details of this construction). The elements of <math>\overrightarrow{E}</math> are called [[translation (geometry)|translations]]. It has been proven that the two definitions of Euclidean spaces are equivalent, and that the equivalence classes under equipollence may be identified with translations. Sometimes, Euclidean vectors are considered without reference to a Euclidean space. In this case, a Euclidean vector is an element of a normed vector space of finite dimension over the reals, or, typically, an element of the [[real coordinate space]] <math>\mathbb R^n</math> equipped with the [[dot product]]. This makes sense, as the addition in such a vector space acts freely and transitively on the vector space itself. That is, <math>\mathbb R^n</math> is a Euclidean space, with itself as an associated vector space, and the dot product as an inner product. The Euclidean space <math>\mathbb R^n</math> is often presented as ''the'' [[standard Euclidean space]] of dimension {{mvar|n}}. This is motivated by the fact that every Euclidean space of dimension {{mvar|n}} is [[isomorphism|isomorphic]] to the Euclidean space <math>\mathbb R^n.</math> More precisely, given such a Euclidean space, one may choose any point {{mvar|O}} as an [[origin (geometry)|origin]]. By [[Gram–Schmidt process]], one may also find an [[orthonormal basis]] of the associated vector space (a basis such that the inner product of two basis vectors is 0 if they are different and 1 if they are equal). This defines [[Cartesian coordinates]] of any point {{mvar|P}} of the space, as the coordinates on this basis of the vector <math>\overrightarrow{OP}.</math> These choices define an isomorphism of the given Euclidean space onto <math>\mathbb R^n,</math> by mapping any point to the [[tuple|{{mvar|n}}-tuple]] of its Cartesian coordinates, and every vector to its [[coordinate vector]]. 
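The construction just described can be illustrated with a short, self-contained Python sketch (written for illustration only; the helper names <code>dot</code>, <code>scale</code>, <code>sub</code> and <code>gram_schmidt</code> are ad hoc, not a standard library): it orthonormalises three linearly independent vectors with the Gram–Schmidt process and then reads off the Cartesian coordinates of a point relative to a chosen origin as the inner products of the vector <math>\overrightarrow{OP}</math> with the resulting basis.
<syntaxhighlight lang="python">
# A minimal sketch of the construction described above: orthonormalise a basis
# with the Gram-Schmidt process, then express the vector OP in that basis.
# Plain Python lists are used; the helper names are illustrative only.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def scale(c, u):
    return [c * ui for ui in u]

def sub(u, v):
    return [ui - vi for ui, vi in zip(u, v)]

def gram_schmidt(vectors):
    """Return an orthonormal basis spanning the same space as `vectors`."""
    basis = []
    for v in vectors:
        # Remove the components of v along the basis vectors found so far.
        for b in basis:
            v = sub(v, scale(dot(v, b), b))
        norm = dot(v, v) ** 0.5
        basis.append(scale(1.0 / norm, v))
    return basis

# Three linearly independent (but not orthonormal) vectors of R^3.
e = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])

# Cartesian coordinates of P relative to the origin O are the coefficients
# of the vector OP on the orthonormal basis, i.e. the inner products.
O, P = [0.0, 0.0, 0.0], [2.0, 3.0, 5.0]
OP = sub(P, O)
coords = [dot(OP, b) for b in e]
print(coords)
</syntaxhighlight>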
===Examples in one dimension=== Since the physicist's concept of [[force (physics)|force]] has a direction and a magnitude, it may be seen as a vector. As an example, consider a rightward force ''F'' of 15 [[Newton (unit)|newtons]]. If the positive [[Cartesian coordinate system|axis]] is also directed rightward, then ''F'' is represented by the vector 15 N, and if the positive axis points leftward, then the vector for ''F'' is −15 N. In either case, the magnitude of the vector is 15 N. Likewise, the vector representation of a displacement Δ''s'' of 4 [[meter (unit)|meters]] would be 4 m or −4 m, depending on its direction, and its magnitude would be 4 m regardless. ===In physics and engineering=== Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has magnitude, has direction, and which adheres to the rules of vector addition. An example is [[velocity]], the magnitude of which is [[speed]]. For instance, the velocity ''5 meters per second upward'' could be represented by the vector (0, 5) (in 2 dimensions with the positive ''y''-axis as 'up'). Another quantity represented by a vector is [[force]], since it has a magnitude and direction and follows the rules of vector addition.<ref name=":2" /> Vectors also describe many other physical quantities, such as [[displacement (vector)|displacement]], linear acceleration, [[angular acceleration]], [[linear momentum]], and [[angular momentum]]. Other physical vectors, such as the [[electric field|electric]] and [[magnetic field]], are represented as a system of vectors at each point of a physical space; that is, a [[vector field]]. Examples of quantities that have magnitude and direction, but fail to follow the rules of vector addition, are angular displacement and electric current. Consequently, these are not vectors. ===In Cartesian space=== In the [[Cartesian coordinate system]], a bound vector can be represented by identifying the coordinates of its initial and terminal point. For instance, the points {{math|''A'' {{=}} (1, 0, 0)}} and {{math|''B'' {{=}} (0, 1, 0)}} in space determine the bound vector <math>\overrightarrow{AB}</math> pointing from the point {{math|''x'' {{=}} 1}} on the ''x''-axis to the point {{math|''y'' {{=}} 1}} on the ''y''-axis. In Cartesian coordinates, a free vector may be thought of in terms of a corresponding bound vector, in this sense, whose initial point has the coordinates of the origin {{math|''O'' {{=}} (0, 0, 0)}}. It is then determined by the coordinates of that bound vector's terminal point. Thus the free vector represented by (1, 0, 0) is a vector of unit length—pointing along the direction of the positive ''x''-axis. This coordinate representation of free vectors allows their algebraic features to be expressed in a convenient numerical fashion. For example, the sum of the two (free) vectors (1, 2, 3) and (−2, 0, 4) is the (free) vector <math display=block>(1, 2, 3) + (-2, 0, 4) = (1-2, 2+0, 3+4) = (-1, 2, 7)\,.</math> ===Euclidean and affine vectors=== In the geometrical and physical settings, it is sometimes possible to associate, in a natural way, a ''length'' or magnitude and a direction to vectors. In addition, the notion of direction is strictly associated with the notion of an ''angle'' between two vectors.
If the [[dot product]] of two vectors is defined—a scalar-valued product of two vectors—then it is also possible to define a length; the dot product gives a convenient algebraic characterization of both angle (a function of the dot product between any two non-zero vectors) and length (the square root of the dot product of a vector by itself). In three dimensions, it is further possible to define the [[cross product]], which supplies an algebraic characterization of the [[area]] and [[orientation (geometry)|orientation]] in space of the [[parallelogram]] defined by two vectors (used as sides of the parallelogram). In any dimension (and, in particular, higher dimensions), it is possible to define the [[exterior product]], which (among other things) supplies an algebraic characterization of the area and orientation in space of the ''n''-dimensional [[parallelepiped#Parallelotope|parallelotope]] defined by ''n'' vectors. In a [[pseudo-Euclidean space]], a vector's squared length can be positive, negative, or zero. An important example is [[Minkowski space]] (which is important to our understanding of [[special relativity]]). However, it is not always possible or desirable to define the length of a vector. This more general type of spatial vector is the subject of [[vector space]]s (for free vectors) and [[affine space]]s (for bound vectors, as each represented by an ordered pair of "points"). One physical example comes from [[thermodynamics]], where many quantities of interest can be considered vectors in a space with no notion of length or angle.<ref name="thermo-forms" >[http://www.av8n.com/physics/thermo-forms.htm Thermodynamics and Differential Forms]</ref> ===Generalizations=== In physics, as well as mathematics, a vector is often identified with a [[tuple]] of components, or list of numbers, that act as scalar coefficients for a set of [[basis vector]]s. When the basis is transformed, for example by rotation or stretching, then the components of any vector in terms of that basis also transform in an opposite sense. The vector itself has not changed, but the basis has, so the components of the vector must change to compensate. The vector is called ''covariant'' or ''contravariant'', depending on how the transformation of the vector's components is related to the transformation of the basis. In general, contravariant vectors are "regular vectors" with units of distance (such as a displacement), or distance times some other unit (such as velocity or acceleration); covariant vectors, on the other hand, have units of one-over-distance such as [[gradient]]. If you change units (a special case of a [[change of basis]]) from meters to millimeters, a scale factor of 1/1000, a displacement of 1 m becomes 1000 mm—a contravariant change in numerical value. In contrast, a gradient of 1 [[Kelvin|K]]/m becomes 0.001 K/mm—a covariant change in value (for more, see [[covariance and contravariance of vectors]]). [[Tensor]]s are another type of quantity that behave in this way; a vector is one type of [[tensor]]. In pure [[mathematics]], a vector is any element of a [[vector space]] over some [[field (mathematics)|field]] and is often represented as a [[coordinate vector]]. The vectors described in this article are a very special case of this general definition, because they are contravariant with respect to the ambient space. Contravariance captures the physical intuition behind the idea that a vector has "magnitude and direction". 
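The metre-to-millimetre example above can be checked numerically; the following one-dimensional Python sketch (illustrative only, with ad hoc variable names) shows the component of a displacement scaling contravariantly and the component of a gradient scaling covariantly under the same change of basis.
<syntaxhighlight lang="python">
# Checking the unit-change example above: rescaling the basis vector from
# metres to millimetres (basis scale factor 1/1000) multiplies the component
# of a displacement by 1000 (contravariant) and the component of a gradient
# by 1/1000 (covariant).  Illustrative one-dimensional sketch.

basis_scale = 1.0 / 1000.0        # 1 mm = (1/1000) m

displacement_m = 1.0              # 1 m
displacement_mm = displacement_m / basis_scale      # contravariant: 1000 mm

gradient_K_per_m = 1.0            # 1 K/m
gradient_K_per_mm = gradient_K_per_m * basis_scale  # covariant: 0.001 K/mm

print(displacement_mm)    # 1000.0
print(gradient_K_per_mm)  # 0.001
</syntaxhighlight>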
==Representations== {{further|Vector representation}} [[Image:vector from A to B.svg|class=skin-invert-image|right|200px|Vector arrow pointing from ''A'' to ''B'']] Vectors are usually denoted in [[lowercase]] boldface, as in <math>\mathbf{u}</math>, <math>\mathbf{v}</math> and <math>\mathbf{w}</math>, or in lowercase italic boldface, as in '''''a'''''. ([[Uppercase]] letters are typically used to represent [[matrix (mathematics)|matrices]].) Other conventions include <math>\vec{a}</math> or <u>''a''</u>, especially in handwriting. Alternatively, some use a [[tilde]] (~) or a wavy underline drawn beneath the symbol, e.g. <math>\underset{^\sim}a</math>, which is a convention for indicating boldface type. If the vector represents a directed [[distance]] or [[displacement (vector)|displacement]] from a point ''A'' to a point ''B'' (see figure), it can also be denoted as <math>\stackrel{\longrightarrow}{AB}</math> or <u>''AB''</u>. In [[German language|German]] literature, it was especially common to represent vectors with small [[fraktur]] letters such as <math>\mathfrak{a}</math>. Vectors are usually shown in graphs or other diagrams as arrows (directed [[line segment]]s), as illustrated in the figure. Here, the point ''A'' is called the ''origin'', ''tail'', ''base'', or ''initial point'', and the point ''B'' is called the ''head'', ''tip'', ''endpoint'', ''terminal point'' or ''final point''. The length of the arrow is proportional to the vector's [[magnitude (mathematics)|magnitude]], while the direction in which the arrow points indicates the vector's direction. [[Image:Notation for vectors in or out of a plane.svg|class=skin-invert-image|right|200px]] On a two-dimensional diagram, a vector [[perpendicular]] to the [[plane (mathematics)|plane]] of the diagram is sometimes desired. These vectors are commonly shown as small circles. A circle with a dot at its centre (Unicode U+2299 ⊙) indicates a vector pointing out of the front of the diagram, toward the viewer. A circle with a cross inscribed in it (Unicode U+2297 ⊗) indicates a vector pointing into and behind the diagram. These can be thought of as viewing the tip of an [[arrow (weapon)|arrow]] head on and viewing the flights of an arrow from the back. [[Image:Position vector.svg|class=skin-invert-image|thumb|right|A vector in the Cartesian plane, showing the position of a point ''A'' with coordinates (2, 3).]] [[Image:3D Vector.svg|class=skin-invert-image|300px|right]] In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an ''n''-dimensional Euclidean space can be represented as [[coordinate vector]]s in a [[Cartesian coordinate system]]. The endpoint of a vector can be identified with an ordered list of ''n'' real numbers (''n''-[[tuple]]). These numbers are the [[Cartesian coordinate|coordinates]] of the endpoint of the vector, with respect to a given [[Cartesian coordinate system]], and are typically called the ''[[scalar component]]s'' (or ''scalar projections'') of the vector on the axes of the coordinate system. As an example in two dimensions (see figure), the vector from the origin ''O'' = (0, 0) to the point ''A'' = (2, 3) is simply written as <math display=block>\mathbf{a} = (2,3).</math> The notion that the tail of the vector coincides with the origin is implicit and easily understood. Thus, the more explicit notation <math>\overrightarrow{OA}</math> is usually deemed not necessary (and is indeed rarely used).
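As a small illustration of this coordinate representation (an ad hoc Python sketch written for this purpose, not drawn from any particular source), a free vector can be stored as the tuple of coordinates of its endpoint, and the free vector corresponding to a bound vector from ''A'' to ''B'' is obtained by subtracting the coordinates of the initial point from those of the terminal point.
<syntaxhighlight lang="python">
# Coordinate representation of vectors as tuples of components (illustrative).
# A free vector is identified with the coordinates of its endpoint when its
# tail is placed at the origin; the free vector of a bound vector AB is B - A.

def free_vector(initial, terminal):
    """Components of the free vector pointing from `initial` to `terminal`."""
    return tuple(t - i for i, t in zip(initial, terminal))

a = (2, 3)                      # the vector from O = (0, 0) to A = (2, 3)
print(a)

A = (1, 0, 0)
B = (0, 1, 0)
print(free_vector(A, B))        # (-1, 1, 0)
</syntaxhighlight>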
In ''three dimensional'' Euclidean space (or {{math|'''R'''<sup>3</sup>}}), vectors are identified with triples of scalar components: <math display=block>\mathbf{a} = (a_1, a_2, a_3).</math> also written, <math display=block>\mathbf{a} = (a_x, a_y, a_z).</math> This can be generalised to ''n-dimensional'' Euclidean space (or {{math|'''R'''<sup>''n''</sup>}}). <math display=block>\mathbf{a} = (a_1, a_2, a_3, \cdots, a_{n-1}, a_n).</math> These numbers are often arranged into a [[column vector]] or [[row vector]], particularly when dealing with [[matrix (mathematics)|matrices]], as follows: <math display=block>\mathbf{a} = \begin{bmatrix} a_1\\ a_2\\ a_3\\ \end{bmatrix} = [ a_1\ a_2\ a_3 ]^{\operatorname{T}}. </math> Another way to represent a vector in ''n''-dimensions is to introduce the [[standard basis]] vectors. For instance, in three dimensions, there are three of them: <math display=block>{\mathbf e}_1 = (1,0,0),\ {\mathbf e}_2 = (0,1,0),\ {\mathbf e}_3 = (0,0,1).</math> These have the intuitive interpretation as vectors of unit length pointing up the ''x''-, ''y''-, and ''z''-axis of a [[Cartesian coordinate system]], respectively. In terms of these, any vector '''a''' in {{math|'''R'''<sup>3</sup>}} can be expressed in the form: <math display=block>\mathbf{a} = (a_1,a_2,a_3) = a_1(1,0,0) + a_2(0,1,0) + a_3(0,0,1), \ </math> or <math display=block>\mathbf{a} = \mathbf{a}_1 + \mathbf{a}_2 + \mathbf{a}_3 = a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3,</math> where '''a'''<sub>1</sub>, '''a'''<sub>2</sub>, '''a'''<sub>3</sub> are called the '''[[vector component]]s''' (or '''vector projections''') of '''a''' on the basis vectors or, equivalently, on the corresponding Cartesian axes ''x'', ''y'', and ''z'' (see figure), while ''a''<sub>1</sub>, ''a''<sub>2</sub>, ''a''<sub>3</sub> are the respective [[scalar component]]s (or scalar projections). In introductory physics textbooks, the standard basis vectors are often denoted <math>\mathbf{i},\mathbf{j},\mathbf{k}</math> instead (or <math>\mathbf{\hat{x}}, \mathbf{\hat{y}}, \mathbf{\hat{z}}</math>, in which the [[hat symbol]] <math>\mathbf{\hat{}}</math> typically denotes [[unit vector]]s). In this case, the scalar and vector components are denoted respectively ''a<sub>x</sub>'', ''a<sub>y</sub>'', ''a<sub>z</sub>'', and '''a'''<sub>''x''</sub>, '''a'''<sub>''y''</sub>, '''a'''<sub>''z''</sub> (note the difference in boldface). Thus, <math display=block>\mathbf{a} = \mathbf{a}_x + \mathbf{a}_y + \mathbf{a}_z = a_x{\mathbf i} + a_y{\mathbf j} + a_z{\mathbf k}.</math> The notation '''e'''<sub>''i''</sub> is compatible with the [[index notation]] and the [[summation convention]] commonly used in higher level mathematics, physics, and engineering. === {{anchor|Vector component|Decomposition}} Decomposition or resolution=== {{Further|Basis (linear algebra)}} As explained [[#Representations|above]], a vector is often described by a set of vector components that [[#Addition and subtraction|add up]] to form the given vector. Typically, these components are the [[Vector projection|projections]] of the vector on a set of mutually perpendicular reference axes (basis vectors). The vector is said to be ''decomposed'' or ''resolved with respect to'' that set. [[Image:Surface normal tangent.svg|class=skin-invert-image|right|thumb|Illustration of tangential and normal components of a vector to a surface.]] The decomposition or resolution<ref>[[Josiah Willard Gibbs|Gibbs, J.W.]] (1901). 
''Vector Analysis: A Text-book for the Use of Students of Mathematics and Physics, Founded upon the Lectures of J. Willard Gibbs'', by E.B. Wilson, Charles Scribner's Sons, New York, p. 15: "Any vector {{math|'''r'''}} coplanar with two non-collinear vectors {{math|'''a'''}} and {{math|'''b'''}} may be resolved into two components parallel to {{math|'''a'''}} and {{math|'''b'''}} respectively. This resolution may be accomplished by constructing the parallelogram ..."</ref> of a vector into components is not unique, because it depends on the choice of the axes on which the vector is projected. Moreover, the use of Cartesian unit vectors such as <math>\mathbf{\hat{x}}, \mathbf{\hat{y}}, \mathbf{\hat{z}}</math> as a [[Basis (linear algebra)|basis]] in which to represent a vector is not mandated. Vectors can also be expressed in terms of an arbitrary basis, including the unit vectors of a [[cylindrical coordinate system]] (<math>\boldsymbol{\hat{\rho}}, \boldsymbol{\hat{\phi}}, \mathbf{\hat{z}}</math>) or [[spherical coordinate system]] (<math>\mathbf{\hat{r}}, \boldsymbol{\hat{\theta}}, \boldsymbol{\hat{\phi}}</math>). The latter two choices are more convenient for solving problems which possess cylindrical or spherical symmetry, respectively. The choice of a basis does not affect the properties of a vector or its behaviour under transformations. A vector can also be broken up with respect to "non-fixed" basis vectors that change their [[orientation (geometry)|orientation]] as a function of time or space. For example, a vector in three-dimensional space can be decomposed with respect to two axes, respectively ''normal'' and ''tangent'' to a surface (see figure). Moreover, the ''radial'' and ''[[tangential component]]s'' of a vector relate to the ''[[radius]] of [[rotation]]'' of an object. The former is [[Parallel (geometry)|parallel]] to the radius and the latter is [[Perpendicular|orthogonal]] to it.<ref>{{Cite web |url=http://www.physics.uoguelph.ca/tutorials/torque/Q.torque.intro.angacc.html |title=U. Guelph Physics Dept., "Torque and Angular Acceleration" |access-date=2007-01-05 |archive-date=2007-01-22 |archive-url=https://web.archive.org/web/20070122155954/http://www.physics.uoguelph.ca/tutorials/torque/Q.torque.intro.angacc.html |url-status=dead }}</ref> In these cases, each of the components may be in turn decomposed with respect to a fixed coordinate system or basis set (e.g., a ''global'' coordinate system, or [[inertial reference frame]]). ==Properties and operations{{anchor|Properties|Operations}}== {{see also|Vector notation#Operations}} The following section uses the [[Cartesian coordinate system]] with basis vectors <math display=block>{\mathbf e}_1 = (1,0,0),\ {\mathbf e}_2 = (0,1,0),\ {\mathbf e}_3 = (0,0,1)</math> and assumes that all vectors have the origin as a common base point. A vector '''a''' will be written as <math display=block>{\mathbf a} = a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3.</math> ===Equality=== Two vectors are said to be equal if they have the same magnitude and direction. Equivalently, they will be equal if their coordinates are equal.
So two vectors <math display=block>{\mathbf a} = a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3</math> and <math display=block>{\mathbf b} = b_1{\mathbf e}_1 + b_2{\mathbf e}_2 + b_3{\mathbf e}_3</math> are equal if <math display=block>a_1 = b_1,\quad a_2=b_2,\quad a_3=b_3.\,</math> === Opposite, parallel, and antiparallel vectors {{anchor|antiparallel|opposite|parallel}}=== Two vectors are ''opposite'' if they have the same magnitude but [[opposite direction (geometry)|opposite direction]];<ref name=HMCS/> so two vectors <math display=block>{\mathbf a} = a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3</math> and <math display=block>{\mathbf b} = b_1{\mathbf e}_1 + b_2{\mathbf e}_2 + b_3{\mathbf e}_3</math> are opposite if <math display=block>a_1 = -b_1,\quad a_2=-b_2,\quad a_3=-b_3.\,</math> Two vectors are ''[[equidirectional]]'' (or ''codirectional'') if they have the same direction but not necessarily the same magnitude.<ref name=HMCS/> Two vectors are ''parallel'' if they have either the same or opposite direction, but not necessarily the same magnitude; two vectors are ''antiparallel'' if they have strictly opposite direction, but not necessarily the same magnitude.{{efn|"Can be brought to the same straight line by means of parallel displacement".<ref name=HMCS>{{cite book |last1=Harris |first1=John W. |last2=Stöcker |first2=Horst |year=1998 |title=Handbook of mathematics and computational science |publisher=Birkhäuser |isbn=0-387-94746-9 |at=Chapter 6, p. 332 |url=https://books.google.com/books?id=DnKLkOb_YfIC&pg=PA332 }}</ref>}} ===Addition and subtraction=== {{Further|Vector space}} The sum of two vectors '''a''' and '''b''' may be defined as <math display=block>\mathbf{a}+\mathbf{b} =(a_1+b_1)\mathbf{e}_1 +(a_2+b_2)\mathbf{e}_2 +(a_3+b_3)\mathbf{e}_3.</math> The resulting vector is sometimes called the '''resultant vector''' of '''a''' and '''b'''. The addition may be represented graphically by placing the tail of the arrow '''b''' at the head of the arrow '''a''', and then drawing an arrow from the tail of '''a''' to the head of '''b'''. The new arrow drawn represents the vector '''a''' + '''b''', as illustrated below:<ref name=":2" /> [[Image:Vector addition.svg|class=skin-invert-image|250px|center|The addition of two vectors '''a''' and '''b''']] This addition method is sometimes called the ''parallelogram rule'' because '''a''' and '''b''' form the sides of a [[parallelogram]] and '''a''' + '''b''' is one of the diagonals. If '''a''' and '''b''' are bound vectors that have the same base point, this point will also be the base point of '''a''' + '''b'''. One can check geometrically that '''a''' + '''b''' = '''b''' + '''a''' and ('''a''' + '''b''') + '''c''' = '''a''' + ('''b''' + '''c'''). The difference of '''a''' and '''b''' is <math display=block>\mathbf{a}-\mathbf{b} =(a_1-b_1)\mathbf{e}_1 +(a_2-b_2)\mathbf{e}_2 +(a_3-b_3)\mathbf{e}_3.</math> Subtraction of two vectors can be geometrically illustrated as follows: to subtract '''b''' from '''a''', place the tails of '''a''' and '''b''' at the same point, and then draw an arrow from the head of '''b''' to the head of '''a'''. This new arrow represents the vector '''(-b)''' + '''a''', with '''(-b)''' being the opposite of '''b''' (see drawing), and '''(-b)''' + '''a''' = '''a''' − '''b'''.
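The componentwise definitions above translate directly into code; the following Python sketch (illustrative only, with ad hoc function names) adds and subtracts two vectors given by their scalar components, and can be checked against the example in the section on Cartesian space.
<syntaxhighlight lang="python">
# Componentwise vector addition and subtraction, as defined above (illustrative).

def add(a, b):
    return tuple(ai + bi for ai, bi in zip(a, b))

def subtract(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

a = (1, 2, 3)
b = (-2, 0, 4)

print(add(a, b))       # (-1, 2, 7)
print(subtract(a, b))  # (3, 2, -1)
</syntaxhighlight>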
[[Image:Vector subtraction.svg|class=skin-invert-image|125px|center|The subtraction of two vectors '''a''' and '''b''']] ===Scalar multiplication=== {{main|Scalar multiplication}} [[Image:Scalar multiplication by r=3.svg|class=skin-invert-image|250px|thumb|right|Scalar multiplication of a vector by a factor of 3 stretches the vector out.]] A vector may also be multiplied, or re-''scaled'', by any [[real number]] ''r''. In the context of [[vector analysis|conventional vector algebra]], these real numbers are often called '''scalars''' (from ''scale'') to distinguish them from vectors. The operation of multiplying a vector by a scalar is called ''scalar multiplication''. The resulting vector is <math display=block>r\mathbf{a}=(ra_1)\mathbf{e}_1 +(ra_2)\mathbf{e}_2 +(ra_3)\mathbf{e}_3.</math> Intuitively, multiplying by a scalar ''r'' stretches a vector out by a factor of ''r''. Geometrically, this can be visualized (at least in the case when ''r'' is an integer) as placing ''r'' copies of the vector in a line where the endpoint of one vector is the initial point of the next vector. If ''r'' is negative, then the vector changes direction: it flips around by an angle of 180°. Two examples (''r'' = −1 and ''r'' = 2) are given below: [[Image:Scalar multiplication of vectors2.svg|class=skin-invert-image|250px|thumb|left|The scalar multiplications −'''a''' and 2'''a''' of a vector '''a''']] Scalar multiplication is [[Distributivity|distributive]] over vector addition in the following sense: ''r''('''a''' + '''b''') = ''r'''''a''' + ''r'''''b''' for all vectors '''a''' and '''b''' and all scalars ''r''. One can also show that '''a''' − '''b''' = '''a''' + (−1)'''b'''. <!-- The set of all geometrical vectors, together with the operations of vector addition and scalar multiplication, satisfies all the axioms of a [[vector space]]. Similarly, the set of all bound vectors with a common base point forms a vector space. This is where the term "vector space" originated. --> {{clear}} ===Length===<!-- This section is linked from [[Law of cosines]] --> The ''[[length]]'', ''[[Magnitude (mathematics)|magnitude]]'' or ''[[Norm (mathematics)|norm]]'' of the vector '''a''' is denoted by ‖'''a'''‖ or, less commonly, |'''a'''|, which is not to be confused with the [[absolute value]] (a scalar "norm"). The length of the vector '''a''' can be computed with the ''[[Euclidean norm]]'', <math display=block>\left\|\mathbf{a}\right\|=\sqrt{a_1^2+a_2^2+a_3^2},</math> which is a consequence of the [[Pythagorean theorem]] since the basis vectors '''e'''<sub>1</sub>, '''e'''<sub>2</sub>, '''e'''<sub>3</sub> are orthogonal unit vectors. This happens to be equal to the square root of the [[dot product]], discussed below, of the vector with itself: <math display=block>\left\|\mathbf{a}\right\|=\sqrt{\mathbf{a}\cdot\mathbf{a}}.</math> ====Unit vector==== [[Image:Vector normalization.svg|class=skin-invert-image|thumb|right|The normalization of a vector '''a''' into a unit vector '''â''']] {{main|Unit vector}} A ''unit vector'' is any vector with a length of one; normally unit vectors are used simply to indicate direction. A vector of arbitrary length can be divided by its length to create a unit vector.<ref name="1.1: Vectors"/> This is known as ''normalizing'' a vector. A unit vector is often indicated with a hat as in '''â'''. To normalize a vector {{nowrap|1='''a''' = (''a''<sub>1</sub>, ''a''<sub>2</sub>, ''a''<sub>3</sub>)}}, scale the vector by the reciprocal of its length ‖'''a'''‖. 
That is: <math display=block>\mathbf{\hat{a}} = \frac{\mathbf{a}}{\left\|\mathbf{a}\right\|} = \frac{a_1}{\left\|\mathbf{a}\right\|}\mathbf{e}_1 + \frac{a_2}{\left\|\mathbf{a}\right\|}\mathbf{e}_2 + \frac{a_3}{\left\|\mathbf{a}\right\|}\mathbf{e}_3</math> ====Zero vector==== {{main|Zero vector}} The ''zero vector'' is the vector with length zero. Written out in coordinates, the vector is {{nowrap|(0, 0, 0)}}, and it is commonly denoted <math>\vec{0}</math>, '''0''', or simply 0. Unlike any other vector, it has an arbitrary or indeterminate direction, and cannot be normalized (that is, there is no unit vector that is a multiple of the zero vector). The sum of the zero vector with any vector '''a''' is '''a''' (that is, {{nowrap|1='''0''' + '''a''' = '''a'''}}). ===Dot product=== {{main|Dot product}} The ''dot product'' of two vectors '''a''' and '''b''' (sometimes called the ''[[inner product space|inner product]]'', or, since its result is a scalar, the ''scalar product'') is denoted by '''a''' ∙ '''b,''' and is defined as: <math display=block>\mathbf{a}\cdot\mathbf{b} =\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\cos\theta,</math> where ''θ'' is the measure of the [[angle]] between '''a''' and '''b''' (see [[trigonometric function]] for an explanation of cosine). Geometrically, this means that '''a''' and '''b''' are drawn with a common start point, and then the length of '''a''' is multiplied with the length of the component of '''b''' that points in the same direction as '''a'''. The dot product can also be defined as the sum of the products of the components of each vector as <math display=block>\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3.</math> ===Cross product=== {{main|Cross product}} The ''cross product'' (also called the ''vector product'' or ''outer product'') is only meaningful in three or [[Seven-dimensional cross product|seven]] dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted '''a''' × '''b''', is a vector perpendicular to both '''a''' and '''b''' and is defined as <math display=block>\mathbf{a}\times\mathbf{b} =\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\sin(\theta)\,\mathbf{n}</math> where ''θ'' is the measure of the angle between '''a''' and '''b''', and '''n''' is a unit vector [[perpendicular]] to both '''a''' and '''b''' which completes a [[Right-hand rule|right-handed]] system. The right-handedness constraint is necessary because there exist ''two'' unit vectors that are perpendicular to both '''a''' and '''b''', namely, '''n''' and (−'''n'''). [[Image:Cross product vector.svg|class=skin-invert-image|thumb|right|An illustration of the cross product]] The cross product '''a''' × '''b''' is defined so that '''a''', '''b''', and '''a''' × '''b''' also becomes a right-handed system (although '''a''' and '''b''' are not necessarily [[orthogonal]]). This is the [[right-hand rule]]. The length of '''a''' × '''b''' can be interpreted as the area of the parallelogram having '''a''' and '''b''' as sides. The cross product can be written as <math display=block>{\mathbf a}\times{\mathbf b} = (a_2 b_3 - a_3 b_2) {\mathbf e}_1 + (a_3 b_1 - a_1 b_3) {\mathbf e}_2 + (a_1 b_2 - a_2 b_1) {\mathbf e}_3.</math> For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a [[pseudovector]] instead of a vector (see below). 
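The coordinate formulas above for the norm, normalisation, dot product and cross product can be collected in a short illustrative Python sketch (the function names are ad hoc); the angle between two vectors is recovered from the dot product, and the cross product is computed from the component formula.
<syntaxhighlight lang="python">
# Length, normalisation, dot product and cross product in components,
# following the formulas above (illustrative sketch).
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def normalize(a):
    n = norm(a)
    return tuple(ai / n for ai in a)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a = (1.0, 0.0, 0.0)
b = (1.0, 1.0, 0.0)

theta = math.acos(dot(a, b) / (norm(a) * norm(b)))   # angle between a and b
print(math.degrees(theta))      # 45.0
print(normalize(b))             # (0.707..., 0.707..., 0.0)
print(cross(a, b))              # (0.0, 0.0, 1.0), perpendicular to a and b
</syntaxhighlight>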
===Scalar triple product=== {{main|Triple product#Scalar triple product|l1=Scalar triple product}} The ''scalar triple product'' (also called the ''box product'' or ''mixed triple product'') is not really a new operator, but a way of applying the other two multiplication operators to three vectors. The scalar triple product is sometimes denoted by ('''a''' '''b''' '''c''') and defined as: <math display=block>(\mathbf{a}\ \mathbf{b}\ \mathbf{c}) =\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}).</math> It has three primary uses. First, the absolute value of the box product is the volume of the [[parallelepiped]] which has edges that are defined by the three vectors. Second, the scalar triple product is zero if and only if the three vectors are [[linear independence|linearly dependent]], which can be easily proved by considering that in order for the three vectors to not make a volume, they must all lie in the same plane. Third, the box product is positive if and only if the three vectors '''a''', '''b''' and '''c''' are right-handed. In components (''with respect to a right-handed orthonormal basis''), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the [[determinant]] of the 3-by-3 [[Matrix (mathematics)|matrix]] having the three vectors as rows <math display=block>(\mathbf{a}\ \mathbf{b}\ \mathbf{c})=\begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \\ \end{vmatrix}</math> The scalar triple product is linear in all three entries and anti-symmetric in the following sense: <math display=block> (\mathbf{a}\ \mathbf{b}\ \mathbf{c}) = (\mathbf{c}\ \mathbf{a}\ \mathbf{b}) = (\mathbf{b}\ \mathbf{c}\ \mathbf{a})= -(\mathbf{a}\ \mathbf{c}\ \mathbf{b}) = -(\mathbf{b}\ \mathbf{a}\ \mathbf{c}) = -(\mathbf{c}\ \mathbf{b}\ \mathbf{a}).</math> ===Conversion between multiple Cartesian bases=== All examples thus far have dealt with vectors expressed in terms of the same basis, namely, the ''e'' basis {'''e'''<sub>1</sub>, '''e'''<sub>2</sub>, '''e'''<sub>3</sub>}. However, a vector can be expressed in terms of any number of different bases that are not necessarily aligned with each other, and still remain the same vector. In the ''e'' basis, a vector '''a''' is expressed, by definition, as <math display=block>\mathbf{a} = p\mathbf{e}_1 + q\mathbf{e}_2 + r\mathbf{e}_3.</math> The scalar components in the ''e'' basis are, by definition, <math display=block>\begin{align} p &= \mathbf{a}\cdot\mathbf{e}_1, \\ q &= \mathbf{a}\cdot\mathbf{e}_2, \\ r &= \mathbf{a}\cdot\mathbf{e}_3. \end{align}</math> In another orthonormal basis ''n'' = {'''n'''<sub>1</sub>, '''n'''<sub>2</sub>, '''n'''<sub>3</sub>} that is not necessarily aligned with ''e'', the vector '''a''' is expressed as <math display=block>\mathbf{a} = u\mathbf{n}_1 + v\mathbf{n}_2 + w\mathbf{n}_3</math> and the scalar components in the ''n'' basis are, by definition, <math display=block>\begin{align} u &= \mathbf{a}\cdot\mathbf{n}_1, \\ v &= \mathbf{a}\cdot\mathbf{n}_2, \\ w &= \mathbf{a}\cdot\mathbf{n}_3. \end{align}</math> The values of ''p'', ''q'', ''r'', and ''u'', ''v'', ''w'' relate to the unit vectors in such a way that the resulting vector sum is exactly the same physical vector '''a''' in both cases. It is common to encounter vectors known in terms of different bases (for example, one basis fixed to the Earth and a second basis fixed to a moving vehicle). 
In such a case it is necessary to develop a method to convert between bases so the basic vector operations such as addition and subtraction can be performed. One way to express ''u'', ''v'', ''w'' in terms of ''p'', ''q'', ''r'' is to use column matrices along with a [[direction cosine matrix]] containing the information that relates the two bases. Such an expression can be formed by substitution of the above equations to form <math display=block>\begin{align} u &= (p\mathbf{e}_1 + q\mathbf{e}_2 + r\mathbf{e}_3)\cdot\mathbf{n}_1, \\ v &= (p\mathbf{e}_1 + q\mathbf{e}_2 + r\mathbf{e}_3)\cdot\mathbf{n}_2, \\ w &= (p\mathbf{e}_1 + q\mathbf{e}_2 + r\mathbf{e}_3)\cdot\mathbf{n}_3. \end{align}</math> Distributing the dot-multiplication gives <math display=block>\begin{align} u &= p\mathbf{e}_1\cdot\mathbf{n}_1 + q\mathbf{e}_2\cdot\mathbf{n}_1 + r\mathbf{e}_3\cdot\mathbf{n}_1, \\ v &= p\mathbf{e}_1\cdot\mathbf{n}_2 + q\mathbf{e}_2\cdot\mathbf{n}_2 + r\mathbf{e}_3\cdot\mathbf{n}_2, \\ w &= p\mathbf{e}_1\cdot\mathbf{n}_3 + q\mathbf{e}_2\cdot\mathbf{n}_3 + r\mathbf{e}_3\cdot\mathbf{n}_3. \end{align}</math> Replacing each dot product with a unique scalar gives <math display=block>\begin{align} u &= c_{11}p + c_{12}q + c_{13}r, \\ v &= c_{21}p + c_{22}q + c_{23}r, \\ w &= c_{31}p + c_{32}q + c_{33}r, \end{align}</math> and these equations can be expressed as the single matrix equation <math display=block>\begin{bmatrix} u \\ v \\ w \\ \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix} \begin{bmatrix} p \\ q \\ r \end{bmatrix}.</math> This matrix equation relates the scalar components of '''a''' in the ''n'' basis (''u'',''v'', and ''w'') with those in the ''e'' basis (''p'', ''q'', and ''r''). Each matrix element ''c''<sub>''jk''</sub> is the [[Direction cosine#Cartesian coordinates|direction cosine]] relating '''n'''<sub>''j''</sub> to '''e'''<sub>''k''</sub>.<ref name="dynon16">{{harvnb|Kane|Levinson|1996|pp=20–22}}</ref> The term ''direction cosine'' refers to the [[cosine]] of the angle between two unit vectors, which is also equal to their [[#Dot product|dot product]].<ref name="dynon16"/> Therefore, <math display=block>\begin{align} c_{11} &= \mathbf{n}_1\cdot\mathbf{e}_1 \\ c_{12} &= \mathbf{n}_1\cdot\mathbf{e}_2 \\ c_{13} &= \mathbf{n}_1\cdot\mathbf{e}_3 \\ c_{21} &= \mathbf{n}_2\cdot\mathbf{e}_1 \\ c_{22} &= \mathbf{n}_2\cdot\mathbf{e}_2 \\ c_{23} &= \mathbf{n}_2\cdot\mathbf{e}_3 \\ c_{31} &= \mathbf{n}_3\cdot\mathbf{e}_1 \\ c_{32} &= \mathbf{n}_3\cdot\mathbf{e}_2 \\ c_{33} &= \mathbf{n}_3\cdot\mathbf{e}_3 \end{align}</math> By referring collectively to '''e'''<sub>1</sub>, '''e'''<sub>2</sub>, '''e'''<sub>3</sub> as the ''e'' basis and to '''n'''<sub>1</sub>, '''n'''<sub>2</sub>, '''n'''<sub>3</sub> as the ''n'' basis, the matrix containing all the ''c''<sub>''jk''</sub> is known as the "[[transformation matrix]] from ''e'' to ''n''", or the "[[rotation matrix]] from ''e'' to ''n''" (because it can be imagined as the "rotation" of a vector from one basis to another), or the "direction cosine matrix from ''e'' to ''n''"<ref name="dynon16"/> (because it contains direction cosines). The properties of a rotation matrix are such that its [[matrix inverse|inverse]] is equal to its [[matrix transpose|transpose]]. This means that the "rotation matrix from ''e'' to ''n''" is the transpose of "rotation matrix from ''n'' to ''e''". 
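The conversion just described can be carried out numerically; in the following illustrative Python sketch (with ad hoc names), the ''n'' basis is a rotation of the ''e'' basis about the third axis, the direction cosine matrix is assembled from the dot products ''c''<sub>''jk''</sub> = '''n'''<sub>''j''</sub> · '''e'''<sub>''k''</sub>, and multiplying it by the column of components (''p'', ''q'', ''r'') gives the components (''u'', ''v'', ''w'') of the same vector in the ''n'' basis.
<syntaxhighlight lang="python">
# Converting the components of a vector from the e basis to an n basis
# using the direction cosine matrix c_jk = n_j . e_k (illustrative sketch).
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

# An n basis obtained by rotating the e basis by 30 degrees about e3.
t = math.radians(30.0)
n = [(math.cos(t), math.sin(t), 0.0),
     (-math.sin(t), math.cos(t), 0.0),
     (0.0, 0.0, 1.0)]

# Direction cosine matrix from e to n: row j, column k holds n_j . e_k.
C = [[dot(nj, ek) for ek in e] for nj in n]

pqr = (1.0, 2.0, 3.0)   # components p, q, r of the vector a in the e basis
uvw = [sum(C[j][k] * pqr[k] for k in range(3)) for j in range(3)]
print(uvw)              # components u, v, w of the same vector in the n basis
</syntaxhighlight>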
The properties of a direction cosine matrix '''C''' are:<ref>{{Cite book|title=Applied mathematics in integrated navigation systems|last=Rogers |first=Robert M. |date=2007|publisher=American Institute of Aeronautics and Astronautics|isbn=9781563479274|edition=3rd|location=Reston, Va.|oclc=652389481}}</ref> * the determinant is unity, |C| = 1; * the inverse is equal to the transpose; * the rows and columns are orthogonal unit vectors; therefore, the dot product of any two distinct rows (or columns) is zero. The advantage of this method is that a direction cosine matrix can usually be obtained independently by using [[Euler angles]] or a [[quaternion]] to relate the two vector bases, so the basis conversions can be performed directly, without having to work out all the dot products described above. By applying several matrix multiplications in succession, any vector can be expressed in any basis so long as the set of direction cosines is known relating the successive bases.<ref name="dynon16"/> ===Other dimensions=== With the exception of the cross and triple products, the above formulae generalise to two dimensions and higher dimensions. For example, addition generalises to two dimensions as <math display=block>(a_1{\mathbf e}_1 + a_2{\mathbf e}_2)+(b_1{\mathbf e}_1 + b_2{\mathbf e}_2) = (a_1+b_1){\mathbf e}_1 + (a_2+b_2){\mathbf e}_2,</math> and in four dimensions as <math display=block>\begin{align} (a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3 + a_4{\mathbf e}_4) &+ (b_1{\mathbf e}_1 + b_2{\mathbf e}_2 + b_3{\mathbf e}_3 + b_4{\mathbf e}_4) =\\ (a_1+b_1){\mathbf e}_1 + (a_2+b_2){\mathbf e}_2 &+ (a_3+b_3){\mathbf e}_3 + (a_4+b_4){\mathbf e}_4. \end{align}</math> The cross product does not readily generalise to other dimensions, though the closely related [[Exterior algebra#Areas in the plane|exterior product]] does, whose result is a [[bivector]]. In two dimensions this is simply a [[pseudoscalar]] <math display=block>(a_1{\mathbf e}_1 + a_2{\mathbf e}_2)\wedge(b_1{\mathbf e}_1 + b_2{\mathbf e}_2) = (a_1 b_2 - a_2 b_1)\mathbf{e}_1 \mathbf{e}_2.</math> A [[seven-dimensional cross product]] is similar to the cross product in that its result is a vector orthogonal to the two arguments; there is however no natural way of selecting one of the possible such products. ==Physics== {{main|Vector quantity}} Vectors have many uses in physics and other sciences. ===Length and units=== In abstract vector spaces, the length of the arrow depends on a [[Dimensionless number|dimensionless]] [[Scale (measurement)|scale]]. If it represents, for example, a force, the "scale" is of [[Dimensional analysis|physical dimension]] length/force. Thus there is typically consistency in scale among quantities of the same dimension, but otherwise scale ratios may vary; for example, if "1 newton" and "5 m" are both represented with an arrow of 2 cm, the scales are 1 m:50 N and 1:250 respectively. Equal length of vectors of different dimension has no particular significance unless there is some [[proportionality constant]] inherent in the system that the diagram represents. Also, the length of a unit vector (of dimension length, not length/force, etc.) has no coordinate-system-invariant significance. ===Vector-valued functions=== {{main|Vector-valued function}} Often in areas of physics and mathematics, a vector evolves in time, meaning that it depends on a time parameter ''t''.
For instance, if '''r''' represents the position vector of a particle, then '''r'''(''t'') gives a [[parametric equation|parametric]] representation of the trajectory of the particle. Vector-valued functions can be [[derivative|differentiated]] and [[integral|integrated]] by differentiating or integrating the components of the vector, and many of the familiar rules from [[calculus]] continue to hold for the derivative and integral of vector-valued functions. ===Position, velocity and acceleration=== The position of a point '''x''' = (''x''<sub>1</sub>, ''x''<sub>2</sub>, ''x''<sub>3</sub>) in three-dimensional space can be represented as a [[position vector]] whose base point is the origin <math display=block>{\mathbf x} = x_1 {\mathbf e}_1 + x_2{\mathbf e}_2 + x_3{\mathbf e}_3.</math> The position vector has dimensions of [[length]]. Given two points '''x''' = (''x''<sub>1</sub>, ''x''<sub>2</sub>, ''x''<sub>3</sub>), '''y''' = (''y''<sub>1</sub>, ''y''<sub>2</sub>, ''y''<sub>3</sub>), their [[Displacement (vector)|displacement]] is a vector <math display=block>{\mathbf y}-{\mathbf x}=(y_1-x_1){\mathbf e}_1 + (y_2-x_2){\mathbf e}_2 + (y_3-x_3){\mathbf e}_3,</math> which specifies the position of ''y'' relative to ''x''. The length of this vector gives the straight-line distance from ''x'' to ''y''. Displacement has the dimensions of length. The [[velocity]] '''v''' of a point or particle is a vector; its length gives the [[speed]]. For constant velocity, the position at time ''t'' will be <math display=block>{\mathbf x}_t= t {\mathbf v} + {\mathbf x}_0,</math> where '''x'''<sub>0</sub> is the position at time ''t'' = 0. Velocity is the [[#Ordinary derivative|time derivative]] of position. Its dimensions are length/time. The [[acceleration]] '''a''' of a point is a vector which is the [[#Ordinary derivative|time derivative]] of velocity. Its dimensions are length/time<sup>2</sup>. ===Force, energy, work=== [[Force]] is a vector with dimensions of mass×length/time<sup>2</sup> (kg⋅m⋅s<sup>−2</sup>) and [[Newton's second law]] is the scalar multiplication <math display=block>{\mathbf F} = m{\mathbf a}.</math> Work is the dot product of [[force]] and [[displacement (vector)|displacement]] <math display=block>W = {\mathbf F} \cdot ({\mathbf x}_2 - {\mathbf x}_1).</math> <!-- In physics, scalars may also have a unit of measurement associated with them. For instance, [[Newton's second law]] is :<math>{\mathbf F} = m{\mathbf a}</math> where '''F''' has units of force, '''a''' has units of acceleration, and the scalar ''m'' has units of mass. In one possible physical interpretation of the above diagram, the scale of acceleration is, for instance, 2 m/s<sup>2</sup> : cm, and that of force 5 N : cm. Thus a scale ratio of 2.5 kg : 1 is used for mass. Similarly, if displacement has a scale of 1:1000 and velocity of 0.2 cm : 1 m/s, or equivalently, 2 ms : 1, a scale ratio of 0.5 : s is used for time. --> ==Vectors, pseudovectors, and transformations== {{Multiple issues|section=yes| {{Unreferenced section|date=December 2021}} {{Technical|section|date=December 2021}} }} An alternative characterization of Euclidean vectors, especially in physics, describes them as lists of quantities which behave in a certain way under a [[coordinate system|coordinate transformation]]. A ''contravariant vector'' is required to have components that "transform opposite to the basis" under changes of [[Basis (linear algebra)|basis]].
The vector itself does not change when the basis is transformed; instead, the components of the vector make a change that cancels the change in the basis. In other words, if the reference axes (and the basis derived from them) were rotated in one direction, the component representation of the vector would rotate in the opposite way to generate the same final vector. Similarly, if the reference axes were stretched in one direction, the components of the vector would reduce in an exactly compensating way. Mathematically, if the basis undergoes a transformation described by an [[invertible matrix]] ''M'', so that a coordinate vector '''x''' is transformed to {{nowrap|1='''x'''′ = ''M'''''x'''}}, then a contravariant vector '''v''' must be similarly transformed via {{nowrap|1='''v'''′ = ''M'''''v'''}}. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities. For example, if ''v'' consists of the ''x'', ''y'', and ''z''-components of [[velocity]], then ''v'' is a contravariant vector: if the coordinates of space are stretched, rotated, or twisted, then the components of the velocity transform in the same way. On the other hand, for instance, a triple consisting of the length, width, and height of a rectangular box could make up the three components of an abstract [[vector space|vector]], but this vector would not be contravariant, since rotating the box does not change the box's length, width, and height. Examples of contravariant vectors include [[displacement (vector)|displacement]], [[velocity]], [[electric field]], [[momentum]], [[force]], and [[acceleration]]. In the language of [[differential geometry]], the requirement that the components of a vector transform according to the same matrix as the coordinate transition is equivalent to defining a ''contravariant vector'' to be a [[tensor]] of [[Covariance and contravariance of vectors|contravariant]] rank one. Alternatively, a contravariant vector is defined to be a [[tangent space|tangent vector]], and the rules for transforming a contravariant vector follow from the [[chain rule]]. Some vectors transform like contravariant vectors, except that when they are reflected through a mirror, they flip {{em|and}} gain a minus sign. A transformation that switches right-handedness to left-handedness and vice versa like a mirror does is said to change the ''[[orientation (space)|orientation]]'' of space. A vector which gains a minus sign when the orientation of space changes is called a ''[[pseudovector]]'' or an ''axial vector''. Ordinary vectors are sometimes called ''true vectors'' or ''polar vectors'' to distinguish them from pseudovectors. Pseudovectors occur most frequently as the [[cross product]] of two ordinary vectors. One example of a pseudovector is [[angular velocity]]. For a person driving in a [[car]] and looking forward, each of the [[wheel]]s has an angular velocity vector pointing to the left. If the world is reflected in a mirror which switches the left and right side of the car, the ''reflection'' of this angular velocity vector points to the right, but the {{em|actual}} angular velocity vector of the wheel still points to the left, corresponding to the minus sign. Other examples of pseudovectors include [[magnetic field]], [[torque]], or more generally any cross product of two (true) vectors. This distinction between vectors and pseudovectors is often ignored, but it becomes important in studying [[symmetry]] properties.
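The extra sign picked up by a pseudovector can be checked numerically; in the following illustrative Python sketch (ad hoc names, written for this purpose), a mirror reflection of the first axis is applied to two true vectors, and the cross product of the reflected vectors is found to equal the ''negative'' of the reflected cross product, which is exactly the sign flip that characterises a pseudovector.
<syntaxhighlight lang="python">
# A pseudovector picks up an extra sign under an improper transformation:
# for a reflection R (det R = -1), (Ra) x (Rb) = -R(a x b).  Illustrative check.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def reflect_x(v):
    """Mirror reflection through the y-z plane (flips the x component)."""
    return (-v[0], v[1], v[2])

a = (1.0, 2.0, 0.0)
b = (0.0, 1.0, 3.0)

lhs = cross(reflect_x(a), reflect_x(b))          # cross product of reflected vectors
rhs = tuple(-c for c in reflect_x(cross(a, b)))  # minus the reflected cross product
print(lhs == rhs)                                # True
</syntaxhighlight>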
==See also==
{{Div col|colwidth=25em}}
* [[Affine space]], which distinguishes between vectors and [[Point (geometry)|points]]
* [[Banach space]]
* [[Clifford algebra]]
* [[Complex number]]
* [[Coordinate system]]
* [[Covariance and contravariance of vectors]]
* [[Four-vector]], a non-Euclidean vector in Minkowski space (i.e. four-dimensional spacetime), important in [[theory of relativity|relativity]]
* [[Function space]]
* [[Grassmann]]'s ''Ausdehnungslehre''
* [[Hilbert space]]
* [[Normal vector]]
* [[Null vector]]
* [[Parity (physics)]]
* [[Position (geometry)]]
* [[Pseudovector]]
* [[Quaternion]]
* [[Tangential and normal components]] (of a vector)
* [[Tensor]]
* [[Unit vector]]
* [[Vector bundle]]
* [[Vector calculus]]
* [[Vector notation]]
* [[Vector-valued function]]
{{div col end}}

==Notes==
{{Notelist}}
{{Reflist}}

==References==
{{sfn whitelist |CITEREFIvanov2001}}

===Mathematical treatments===
*{{Cite book | first = Tom | last = Apostol | author-link = Tom Apostol | title = Calculus | volume = 1: One-Variable Calculus with an Introduction to Linear Algebra | publisher = Wiley | year = 1967 | isbn = 978-0-471-00005-1 | url = https://archive.org/details/calculus01apos }}
*{{Cite book | first = Tom | last = Apostol | author-link = Tom Apostol | title = Calculus | volume = 2: Multi-Variable Calculus and Linear Algebra with Applications | publisher = Wiley | year = 1969 | isbn = 978-0-471-00007-5 | url-access = registration | url = https://archive.org/details/calculus01apos }}
*{{Citation | title = Introduction to Tensor Calculus and Continuum Mechanics | first = J. H. | last = Heinbockel | publisher = Trafford Publishing | year = 2001 | isbn = 1-55369-133-4 | url = http://www.math.odu.edu/~jhh/counter2.html }}
*{{Citation | last = Itô | first = Kiyosi | title = Encyclopedic Dictionary of Mathematics | publisher = [[MIT Press]] | edition = 2nd | isbn = 978-0-262-59020-4 | year = 1993 }}
*{{springer|id=V/v096340|title=Vector|first=A.B.|last=Ivanov}}
*{{Citation | last1 = Kane | first1 = Thomas R. | last2 = Levinson | first2 = David A. | title = Dynamics Online | publisher = OnLine Dynamics | location = Sunnyvale, California | year = 1996 }}
*{{Cite book | first = Serge | last = Lang | author-link = Serge Lang | title = Introduction to Linear Algebra | edition = 2nd | publisher = Springer | year = 1986 | isbn = 0-387-96205-0 }}
*{{Cite book | first = Daniel | last = Pedoe | author-link = Daniel Pedoe | title = Geometry: A comprehensive course | url = https://archive.org/details/geometrycomprehe0000pedo | url-access = registration | publisher = Dover | year = 1988 | isbn = 0-486-65812-0 }}

===Physical treatments===
*{{Cite book | last = Aris | first = R. | title = Vectors, Tensors and the Basic Equations of Fluid Mechanics | publisher = Dover | year = 1990 | isbn = 978-0-486-66110-0 | url = https://archive.org/details/vectorstensorsba00aris }}
*{{Cite book | first1 = Richard | last1 = Feynman | author1-link = Richard Feynman | first2 = R. | last2 = Leighton | first3 = M. | last3 = Sands | year = 2005 | edition = 2nd | title = The Feynman Lectures on Physics | volume = I | publisher = Addison Wesley | isbn = 978-0-8053-9046-9 | chapter = Chapter 11 | title-link = The Feynman Lectures on Physics }}

==External links==
{{Wikiquote}}
{{Commons category|Vectors}}
{{Wikibooks|Waves|Vectors}}
* {{springer|title=Vector|id=p/v096340}}
* [https://web.archive.org/web/20120801005307/http://wwwppd.nrl.navy.mil/nrlformulary/vector_identities.pdf Online vector identities] ([[Portable Document Format|PDF]])
* [http://www.marco-learningsystems.com/pages/roche/introvectors.htm Introducing Vectors] A conceptual introduction ([[applied mathematics]])

{{Linear algebra}}
{{Authority control}}

[[Category:Kinematics]]
[[Category:Abstract algebra]]
[[Category:Vector calculus]]
[[Category:Linear algebra]]
[[Category:Concepts in physics]]
[[Category:Vectors (mathematics and physics)]]
[[Category:Analytic geometry]]
[[Category:Euclidean geometry]]