==Finding the matrix of a transformation==
If one has a linear transformation <math>T(x)</math> in functional form, it is easy to determine the transformation matrix ''A'' by transforming each of the vectors of the [[standard basis]] by ''T'', then inserting the results into the columns of a matrix. In other words,
<math display="block">A = \begin{bmatrix} T( \mathbf e_1 ) & T( \mathbf e_2 ) & \cdots & T( \mathbf e_n ) \end{bmatrix}</math>

For example, the function <math>T(x) = 5x</math> is a linear transformation. Applying the above process (suppose that ''n'' = 2 in this case) reveals that:
<math display="block">T( \mathbf{x} ) = 5 \mathbf{x} = 5I\mathbf{x} = \begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix} \mathbf{x}</math>

The matrix representation of vectors and operators depends on the chosen basis; a [[matrix similarity|similar]] matrix will result from an alternate basis. Nevertheless, the method to find the components remains the same.

To elaborate, vector <math>\mathbf v</math> [[linear combination|can be represented]] in basis vectors, <math>E = \begin{bmatrix}\mathbf e_1 & \mathbf e_2 & \cdots & \mathbf e_n\end{bmatrix}</math>, with coordinates <math> [\mathbf v]_E = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}^\mathrm{T}</math>:
<math display="block">\mathbf v = v_1 \mathbf e_1 + v_2 \mathbf e_2 + \cdots + v_n \mathbf e_n = \sum_i v_i \mathbf e_i = E [\mathbf v]_E</math>

Now, express the result of the transformation matrix ''A'' upon <math>\mathbf v</math>, in the given basis:
<math display="block">\begin{align} A(\mathbf v) &= A \left(\sum_i v_i \mathbf e_i \right) = \sum_i {v_i A(\mathbf e_i)} \\ &= \begin{bmatrix}A(\mathbf e_1) & A(\mathbf e_2) & \cdots & A(\mathbf e_n)\end{bmatrix} [\mathbf v]_E = A \cdot [\mathbf v]_E \\[3pt] &= \begin{bmatrix}\mathbf e_1 & \mathbf e_2 & \cdots & \mathbf e_n \end{bmatrix} \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \\ \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n\end{bmatrix} \end{align}</math>

The <math>a_{i,j}</math> elements of matrix ''A'' are determined for a given basis ''E'' by applying ''A'' to every <math>\mathbf e_j = \begin{bmatrix} 0 & 0 & \cdots & (v_j=1) & \cdots & 0 \end{bmatrix}^\mathrm{T}</math>, and observing the response vector
<math display="block">A \mathbf e_j = a_{1,j} \mathbf e_1 + a_{2,j} \mathbf e_2 + \cdots + a_{n,j} \mathbf e_n = \sum_i a_{i,j} \mathbf e_i.</math>
This equation defines the wanted elements, <math>a_{i,j}</math>, of the ''j''-th column of the matrix ''A''.<ref>{{cite book |last= Nearing |first= James |year=2010 |title= Mathematical Tools for Physics |url = http://www.physics.miami.edu/nearing/mathmethods |chapter = Chapter 7.3 Examples of Operators |chapter-url = http://www.physics.miami.edu/~nearing/mathmethods/operators.pdf |access-date = January 1, 2012 |isbn = 978-0486482125 }}</ref>
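A minimal numerical sketch of this construction, here using Python with NumPy and the example map ''T''('''x''') = 5'''x''' from above, builds ''A'' column by column from the images of the standard basis vectors:

<syntaxhighlight lang="python">
import numpy as np

# Example linear map used for illustration: T(x) = 5x on R^2.
# Any linear T : R^n -> R^n could be substituted here.
def T(x):
    return 5 * x

n = 2
# Apply T to each standard basis vector e_j and use the results as columns of A.
A = np.column_stack([T(np.eye(n)[:, j]) for j in range(n)])
print(A)
# [[5. 0.]
#  [0. 5.]]

# The matrix reproduces the transformation: A v = T(v) for every v.
v = np.array([1.0, 2.0])
assert np.allclose(A @ v, T(v))
</syntaxhighlight>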
===Eigenbasis and diagonal matrix===
{{Main|Diagonal matrix|Eigenvalues and eigenvectors}}
There is, however, a special basis for an operator in which the components form a [[diagonal matrix]] and, thus, the cost of applying the operator reduces to {{mvar|n}} scalar multiplications. Being diagonal means that all coefficients <math>a_{i,j}</math> except <math>a_{i,i}</math> are zero, leaving only one term in the sum <math display="inline">\sum a_{i,j} \mathbf e_i</math> above. The surviving diagonal elements, <math>a_{i,i}</math>, are known as '''eigenvalues''' and designated with <math>\lambda_i</math> in the defining equation, which reduces to <math>A \mathbf e_i = \lambda_i \mathbf e_i</math>. The resulting equation is known as the '''eigenvalue equation'''.<ref>{{cite book |last = Nearing |first = James |year = 2010 |title = Mathematical Tools for Physics |url = http://www.physics.miami.edu/nearing/mathmethods |chapter = Chapter 7.9: Eigenvalues and Eigenvectors |chapter-url = http://www.physics.miami.edu/~nearing/mathmethods/operators.pdf |access-date = January 1, 2012 |isbn = 978-0486482125 }}</ref> The [[Eigenvalues and eigenvectors|eigenvectors and eigenvalues are derived from it via the '''characteristic polynomial''']]. With [[Diagonalizable matrix#Diagonalization|diagonalization]], it is [[diagonalizability|often possible]] to [[change of basis|translate]] to and from eigenbases.
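A short NumPy sketch, using an arbitrary symmetric matrix chosen only as an example, recovers the eigenvalue equation numerically and translates ''A'' to and from its eigenbasis:

<syntaxhighlight lang="python">
import numpy as np

# A sample diagonalizable matrix; its eigenvectors will form the columns of E.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, E = np.linalg.eig(A)

# Each column satisfies the eigenvalue equation A e_i = lambda_i e_i.
for i, lam in enumerate(eigenvalues):
    assert np.allclose(A @ E[:, i], lam * E[:, i])

# In the eigenbasis the operator is diagonal: D = E^{-1} A E.
D = np.linalg.inv(E) @ A @ E
assert np.allclose(D, np.diag(eigenvalues))

# Translating back from the eigenbasis: A = E D E^{-1}.
assert np.allclose(A, E @ np.diag(eigenvalues) @ np.linalg.inv(E))
</syntaxhighlight>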