==Fundamental applications==
Historically, matrix multiplication has been introduced for facilitating and clarifying computations in [[linear algebra]]. This strong relationship between matrix multiplication and linear algebra remains fundamental in all mathematics, as well as in [[physics]], [[chemistry]], [[engineering]] and [[computer science]].

===Linear maps===
If a [[vector space]] has a finite [[basis (linear algebra)|basis]], its vectors are each uniquely represented by a finite [[sequence (mathematics)|sequence]] of scalars, called a [[coordinate vector]], whose elements are the [[coordinates]] of the vector on the basis. These coordinate vectors form another vector space, which is [[isomorphism|isomorphic]] to the original vector space. A coordinate vector is commonly organized as a [[column matrix]] (also called a ''column vector''), which is a matrix with only one column. So, a column vector represents both a coordinate vector, and a vector of the original vector space.

A [[linear map]] {{mvar|A}} from a vector space of dimension {{mvar|n}} into a vector space of dimension {{mvar|m}} maps a column vector
:<math>\mathbf x=\begin{pmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{pmatrix}</math>
onto the column vector
:<math>\mathbf y= A(\mathbf x)= \begin{pmatrix}a_{11}x_1+\cdots + a_{1n}x_n\\ a_{21}x_1+\cdots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1+\cdots + a_{mn}x_n\end{pmatrix}.</math>

The linear map {{mvar|A}} is thus defined by the matrix
:<math>\mathbf{A}=\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix},</math>
and maps the column vector <math>\mathbf x</math> to the matrix product
:<math>\mathbf y = \mathbf {Ax}.</math>

If {{mvar|B}} is another linear map from the preceding vector space of dimension {{mvar|m}} into a vector space of dimension {{mvar|p}}, it is represented by a {{tmath|p\times m}} matrix <math>\mathbf B.</math> A straightforward computation shows that the matrix of the [[function composition|composite map]] {{tmath|B\circ A}} is the matrix product <math>\mathbf {BA}.</math> The general formula {{tmath|1=(B\circ A)(\mathbf x) = B(A(\mathbf x))}} that defines the function composition is instantiated here as a specific case of associativity of matrix product (see {{slink||Associativity}} below):
:<math>(\mathbf{BA})\mathbf x = \mathbf{B}(\mathbf {Ax}) = \mathbf{BAx}.</math>

====Geometric rotations====
{{See also|Rotation matrix}}
Using a [[Cartesian coordinate]] system in a Euclidean plane, the [[rotation (mathematics)|rotation]] by an angle <math>\alpha</math> around the [[origin (mathematics)|origin]] is a linear map. More precisely,
<math display="block"> \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos \alpha & - \sin \alpha \\ \sin \alpha & \cos \alpha \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix},</math>
where the source point <math>(x,y)</math> and its image <math>(x',y')</math> are written as column vectors.

The composition of the rotation by <math>\alpha</math> and that by <math>\beta</math> then corresponds to the matrix product
<math display="block">\begin{bmatrix} \cos \beta & - \sin \beta \\ \sin \beta & \cos \beta \end{bmatrix} \begin{bmatrix} \cos \alpha & - \sin \alpha \\ \sin \alpha & \cos \alpha \end{bmatrix} = \begin{bmatrix} \cos \beta \cos \alpha - \sin \beta \sin \alpha & - \cos \beta \sin \alpha - \sin \beta \cos \alpha \\ \sin \beta \cos \alpha + \cos \beta \sin \alpha & - \sin \beta \sin \alpha + \cos \beta \cos \alpha \end{bmatrix} = \begin{bmatrix} \cos (\alpha+\beta) & - \sin(\alpha+\beta) \\ \sin(\alpha+\beta) & \cos(\alpha+\beta) \end{bmatrix},</math>
where appropriate [[List of trigonometric identities#Angle sum and difference identities|trigonometric identities]] are employed for the second equality. That is, the composition corresponds to the rotation by angle <math>\alpha+\beta</math>, as expected.
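The rotation-composition identity above can be checked numerically. The following NumPy sketch (illustrative only; the function name `rotation` is our own, not part of the article) builds the rotation matrix and verifies that composing two rotations equals multiplying their matrices:

```python
import numpy as np

def rotation(theta):
    """2x2 matrix of the rotation by angle theta about the origin."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

alpha, beta = 0.3, 0.5

# Composing the rotation by alpha with the rotation by beta corresponds
# to the matrix product: rotation(beta) @ rotation(alpha).
composed = rotation(beta) @ rotation(alpha)

# By the angle-sum identities, this equals the rotation by alpha + beta.
assert np.allclose(composed, rotation(alpha + beta))

# Applying the composed map to the point (1, 0), written as a column vector,
# lands on (cos(alpha + beta), sin(alpha + beta)).
p = np.array([1.0, 0.0])
assert np.allclose(composed @ p, [np.cos(alpha + beta), np.sin(alpha + beta)])
```

The `@` operator performs matrix multiplication, so `rotation(beta) @ rotation(alpha)` is exactly the product <math>\mathbf{BA}</math> representing the composite map.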
====Resource allocation in economics====
[[File:Mmult factory svg.svg|thumb|400px|The computation of the bottom left entry of <math>\mathbf{AB}</math> corresponds to the consideration of all paths (highlighted) from basic commodity <math>b_4</math> to final product <math>f_1</math> in the production flow graph.]]
As an example, a fictitious factory uses 4 kinds of [[primary commodity|basic commodities]], <math>b_1, b_2, b_3, b_4</math> to produce 3 kinds of [[intermediate good]]s, <math>m_1, m_2, m_3</math>, which in turn are used to produce 3 kinds of [[final product]]s, <math>f_1, f_2, f_3</math>. The matrices
:<math>\mathbf{A} = \begin{pmatrix} 1 & 0 & 1 \\ 2 & 1 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 2 \end{pmatrix}</math> and <math>\mathbf{B} = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 1 \\ 4 & 2 & 2 \end{pmatrix}</math>
provide the amount of basic commodities needed for a given amount of intermediate goods, and the amount of intermediate goods needed for a given amount of final products, respectively. For example, to produce one unit of intermediate good <math>m_1</math>, one unit of basic commodity <math>b_1</math>, two units of <math>b_2</math>, no units of <math>b_3</math>, and one unit of <math>b_4</math> are needed, corresponding to the first column of <math>\mathbf{A}</math>.

Using matrix multiplication, compute
:<math>\mathbf{AB} = \begin{pmatrix} 5 & 4 & 3 \\ 8 & 9 & 5 \\ 6 & 5 & 3 \\ 11 & 9 & 6 \end{pmatrix};</math>
this matrix directly provides the amounts of basic commodities needed for given amounts of final goods. For example, the bottom left entry of <math>\mathbf{AB}</math> is computed as <math>1 \cdot 1 + 1 \cdot 2 + 2 \cdot 4 = 11</math>, reflecting that <math>11</math> units of <math>b_4</math> are needed to produce one unit of <math>f_1</math>. Indeed, one <math>b_4</math> unit is needed for the one <math>m_1</math> unit, one for each of the two <math>m_2</math> units, and two for each of the four <math>m_3</math> units that go into the <math>f_1</math> unit; see picture.

In order to produce e.g. 100 units of the final product <math>f_1</math>, 80 units of <math>f_2</math>, and 60 units of <math>f_3</math>, the necessary amounts of basic goods can be computed as
:<math>(\mathbf{AB}) \begin{pmatrix} 100 \\ 80 \\ 60 \end{pmatrix} = \begin{pmatrix} 1000 \\ 1820 \\ 1180 \\ 2180 \end{pmatrix},</math>
that is, <math>1000</math> units of <math>b_1</math>, <math>1820</math> units of <math>b_2</math>, <math>1180</math> units of <math>b_3</math>, and <math>2180</math> units of <math>b_4</math> are needed. Similarly, the product matrix <math>\mathbf{AB}</math> can be used to compute the needed amounts of basic goods for other final-good amount data.<ref>{{cite book | isbn=3-446-18668-9 | author=Peter Stingl | title=Mathematik für Fachhochschulen – Technik und Informatik | location=[[Munich]] | publisher=[[Carl Hanser Verlag]] | edition=5th | year=1996 | language=German }} Here: Exm. 5.4.10, pp. 205–206</ref>

===System of linear equations===
The general form of a [[system of linear equations]] is
:<math>\begin{matrix}a_{11}x_1+\cdots + a_{1n}x_n=b_1, \\ a_{21}x_1+\cdots + a_{2n}x_n =b_2, \\ \vdots \\ a_{m1}x_1+\cdots + a_{mn}x_n =b_m. \end{matrix}</math>
Using the same notation as above, such a system is equivalent to the single matrix [[equation]]
:<math>\mathbf{Ax}=\mathbf b.</math>

===Dot product, bilinear form and sesquilinear form===
The [[dot product]] of two column vectors is the unique entry of the matrix product
:<math>\mathbf x^\mathsf T \mathbf y,</math>
where <math>\mathbf x^\mathsf T</math> is the [[row vector]] obtained by [[transpose|transposing]] <math>\mathbf x</math>. (As usual, a 1×1 matrix is identified with its unique entry.)
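The factory computation can be reproduced directly; this NumPy sketch (illustrative only) forms <math>\mathbf{AB}</math> and checks the bottom left entry discussed above:

```python
import numpy as np

# Basic commodities needed per unit of intermediate good
# (rows b1..b4, columns m1..m3).
A = np.array([[1, 0, 1],
              [2, 1, 1],
              [0, 1, 1],
              [1, 1, 2]])

# Intermediate goods needed per unit of final product
# (rows m1..m3, columns f1..f3).
B = np.array([[1, 2, 1],
              [2, 3, 1],
              [4, 2, 2]])

# AB gives basic commodities needed per unit of final product.
AB = A @ B

# Bottom left entry: units of b4 per unit of f1,
# summed over the three intermediate goods.
assert AB[3, 0] == 1 * 1 + 1 * 2 + 2 * 4 == 11
```

Each entry of `AB` sums, over all intermediate goods, the per-unit demand along every path from a basic commodity to a final product, matching the highlighted paths in the figure.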
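Both uses of the matrix product described above, the dot product as <math>\mathbf x^\mathsf T \mathbf y</math> and the matrix form <math>\mathbf{Ax}=\mathbf b</math> of a linear system, can be illustrated in NumPy. The coefficients below are hypothetical, chosen only for the sketch:

```python
import numpy as np

# Dot product as the unique entry of the matrix product x^T y
# (a 1x1 matrix identified with its entry).
x = np.array([[1.0], [2.0], [3.0]])   # column vectors
y = np.array([[4.0], [5.0], [6.0]])
assert (x.T @ y).item() == np.dot(x.ravel(), y.ravel()) == 32.0

# A linear system Ax = b with hypothetical coefficients:
#   2x + y  = 5
#    x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
sol = np.linalg.solve(A, b)

# The solution satisfies the single matrix equation Ax = b.
assert np.allclose(A @ sol, b)
```

`np.linalg.solve` solves the square system directly; for the dot product, `x.T @ y` is literally the row-vector-times-column-vector product described in the text.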
More generally, any [[bilinear form]] over a vector space of finite dimension may be expressed as a matrix product
:<math>\mathbf x^\mathsf T \mathbf {Ay},</math>
and any [[sesquilinear form]] may be expressed as
:<math>\mathbf x^\dagger \mathbf {Ay},</math>
where <math>\mathbf x^\dagger</math> denotes the [[conjugate transpose]] of <math>\mathbf x</math> (conjugate of the transpose, or equivalently transpose of the conjugate).
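The two form expressions can be evaluated the same way; in this NumPy sketch, the matrices and vectors are hypothetical, and the conjugate transpose <math>\mathbf x^\dagger</math> is written `x.conj().T`:

```python
import numpy as np

# A bilinear form evaluated as x^T A y (hypothetical A, x, y).
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
x = np.array([[1.0], [2.0]])
y = np.array([[3.0], [1.0]])
value = (x.T @ A @ y).item()
# A @ y = (5, 1)^T, then x^T (5, 1)^T = 1*5 + 2*1 = 7.
assert value == 7.0

# A sesquilinear form evaluated as x^dagger A y with complex vectors,
# here with A the identity so it reduces to x^dagger y.
xc = np.array([[1.0 + 1.0j], [2.0]])
yc = np.array([[1.0], [1.0j]])
sesq = (xc.conj().T @ np.eye(2) @ yc).item()
# conj(1+1j)*1 + conj(2)*1j = (1 - 1j) + 2j = 1 + 1j.
assert sesq == 1 + 1j
```

Note that `x.T` alone would compute a bilinear rather than sesquilinear pairing on complex vectors; the conjugation in `x.conj().T` is what distinguishes the two.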