
[Figure: Rank–nullity theorem]

The rank–nullity theorem is a theorem in linear algebra which asserts that the dimension of the domain of a linear map is the sum of its rank (the dimension of its image) and its nullity (the dimension of its kernel).

It follows that for linear transformations of vector spaces of equal finite dimension, either injectivity or surjectivity implies bijectivity.

Stating the theorem

Linear transformations

Let <math>T : V \to W</math> be a linear transformation between two vector spaces where <math>T</math>'s domain <math>V</math> is finite-dimensional. Then <math display=block>\operatorname{rank}(T) ~+~ \operatorname{nullity}(T) ~=~ \dim V,</math> where <math display=inline>\operatorname{rank}(T)</math> is the rank of <math>T</math> (the dimension of its image) and <math>\operatorname{nullity}(T)</math> is the nullity of <math>T</math> (the dimension of its kernel). In other words, <math display=block>\dim (\operatorname{Im} T) + \dim (\operatorname{Ker} T) = \dim (\operatorname{Domain}(T)).</math> This theorem can be refined via the splitting lemma to be a statement about an isomorphism of spaces, not just dimensions. Explicitly, since <math> T </math> induces an isomorphism from <math>V / \operatorname{Ker} (T)</math> to <math>\operatorname{Im} (T),</math> the existence of a basis for <math> V </math> that extends any given basis of <math>\operatorname{Ker}(T)</math> implies, via the splitting lemma, that <math>\operatorname{Im}(T) \oplus \operatorname{Ker}(T) \cong V.</math> Taking dimensions, the rank–nullity theorem follows.
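For example, consider the differentiation map <math>D : P_3 \to P_3</math> on the space <math>P_3</math> of real polynomials of degree at most three, so that <math>\dim P_3 = 4</math>. The kernel of <math>D</math> consists of the constant polynomials and its image of the polynomials of degree at most two, so <math display=block>\operatorname{rank}(D) + \operatorname{nullity}(D) = 3 + 1 = 4 = \dim P_3.</math>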

Matrices

Linear maps can be represented with matrices. More precisely, an <math>m\times n</math> matrix <math>M</math> represents a linear map <math>f:F^n\to F^m,</math> where <math>F</math> is the underlying field. So, the dimension of the domain of <math>f</math> is <math>n</math>, the number of columns of <math>M</math>, and the rank–nullity theorem for an <math>m\times n</math> matrix <math>M</math> is <math display=block>\operatorname{rank}(M) + \operatorname{nullity}(M) = n.</math>
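This identity is easy to check numerically. Below is a minimal sketch using NumPy; the matrix <math>M</math> is an arbitrary illustrative choice, the rank is computed directly, and the nullity is obtained by counting the numerically zero singular values (whose right-singular vectors span the null space).

<syntaxhighlight lang="python">
import numpy as np

# An illustrative 3x4 matrix; any real matrix works here.
M = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],
              [0., 1., 1., 0.]])

rank = np.linalg.matrix_rank(M)      # dim Im(M), the column space

# Right-singular vectors with (numerically) zero singular values
# span the null space, so counting them gives the nullity.
_, s, vt = np.linalg.svd(M)
tol = s.max() * max(M.shape) * np.finfo(float).eps
nullity = M.shape[1] - np.sum(s > tol)

assert rank + nullity == M.shape[1]  # rank(M) + nullity(M) = n
print(rank, nullity)                 # 2 2
</syntaxhighlight>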

Proofs

Here we provide two proofs. The first<ref name=":0" /> operates in the general case, using linear maps. The second proof looks at the homogeneous system <math>\mathbf{Ax} = \mathbf{0},</math> where <math>\mathbf{A}</math> is an <math>m\times n</math> matrix of rank <math>r,</math> and shows explicitly that there exists a set of <math>n-r</math> linearly independent solutions that span the null space of <math>\mathbf{A}</math>.

While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map from its domain to its image by a matrix, prove the theorem for that matrix, then compose with the inclusion of the image into the full codomain.

First proof

Let <math>V, W</math> be vector spaces over some field <math>F,</math> and <math>T</math> defined as in the statement of the theorem with <math>\dim V = n</math>.

As <math>\operatorname{Ker}T \subset V</math> is a subspace, there exists a basis for it. Suppose <math>\dim\operatorname{Ker}T = k</math> and let <math display="block">\mathcal{K} := \{v_1, \ldots, v_k\} \subset \operatorname{Ker}(T)</math> be such a basis.

We may now, by the Steinitz exchange lemma, extend <math>\mathcal{K}</math> with <math>n-k</math> linearly independent vectors <math>w_1, \ldots, w_{n-k}</math> to form a full basis of <math>V</math>.

Let <math display="block"> \mathcal{S} := \{w_1, \ldots, w_{n-k}\} \subset V \setminus \operatorname{Ker}(T) </math> such that <math display="block"> \mathcal{B} := \mathcal{K} \cup \mathcal{S} = \{v_1, \ldots, v_k, w_1, \ldots, w_{n-k}\} \subset V </math> is a basis for <math>V</math>. From this, we know that <math display="block">\operatorname{Im} T = \operatorname{Span}T(\mathcal{B}) = \operatorname{Span}\{T(v_1), \ldots, T(v_k), T(w_1), \ldots, T(w_{n-k})\} = \operatorname{Span}\{T(w_1), \ldots, T(w_{n-k})\} = \operatorname{Span}T(\mathcal{S}),</math> where the last equality holds because each <math>T(v_i) = 0_W</math>.

We now claim that <math>T(\mathcal{S})</math> is a basis for <math>\operatorname{Im} T</math>. The above equality already states that <math>T(\mathcal{S})</math> is a generating set for <math>\operatorname{Im} T</math>; it remains to be shown that it is also linearly independent to conclude that it is a basis.

Suppose <math>T(\mathcal{S})</math> is not linearly independent, so that <math display="block"> \sum_{j=1}^{n-k} \alpha _j T(w_j) = 0_W </math> for some <math>\alpha _j \in F</math>, not all zero.

Thus, owing to the linearity of <math>T</math>, it follows that <math display="block"> T \left(\sum_{j=1}^{n-k} \alpha _j w_j \right) = 0_W, \quad\text{so}\quad \sum_{j=1}^{n-k} \alpha _j w_j \in \operatorname{Ker} T = \operatorname{Span} \mathcal{K} \subset V .</math> This contradicts <math>\mathcal{B}</math> being a basis: a nonzero linear combination of the <math>w_j</math> would then also be a linear combination of the <math>v_i</math>, violating the linear independence of <math>\mathcal{B}</math>. Hence all the <math>\alpha _j</math> must be zero, which shows that <math>T(\mathcal{S})</math> is linearly independent, and more specifically that it is a basis for <math>\operatorname{Im}T</math>.

To summarize, we have <math>\mathcal{K}</math>, a basis for <math>\operatorname{Ker}T</math>, and <math>T(\mathcal{S})</math>, a basis for <math>\operatorname{Im}T</math>.

Finally we may state that <math display="block"> \operatorname{Rank}(T) + \operatorname{Nullity}(T) = \dim \operatorname{Im} T + \dim \operatorname{Ker}T = |T(\mathcal{S})| + |\mathcal{K}| = (n-k) + k = n = \dim V .</math>

This concludes our proof.
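The steps of this proof can be replayed numerically for a concrete map. The sketch below assumes <math>T</math> is given by a small illustrative matrix; the kernel basis <math>\mathcal{K}</math> is read off from the SVD, and the extension to a full basis (the Steinitz exchange step) is done greedily over the standard basis vectors.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative choice: T given by a 2x3 matrix A with rank 2, so k = 1.
A = np.array([[1., 2., 0.],
              [0., 0., 1.]])
n = A.shape[1]
k = n - np.linalg.matrix_rank(A)

# K: a basis of Ker(T), read off from the SVD (the rows of vt with
# zero singular value span the null space).
_, _, vt = np.linalg.svd(A)
K = vt[n - k:].T                     # n x k, columns form the basis K

# Extend K to a basis B of V = R^n by greedily adding standard basis
# vectors that increase the rank (the Steinitz exchange step).
B = K
for e in np.eye(n):
    cand = np.column_stack([B, e])
    if np.linalg.matrix_rank(cand) > np.linalg.matrix_rank(B):
        B = cand
S = B[:, k:]                         # the added vectors w_1, ..., w_{n-k}

# T(S) spans Im(T) and is linearly independent, hence a basis:
assert np.linalg.matrix_rank(A @ S) == n - k   # rank(T) = n - k
</syntaxhighlight>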

Second proof

Let <math>\mathbf{A}</math> be an <math>m\times n</math> matrix with <math>r</math> linearly independent columns (i.e. <math>\operatorname{Rank}(\mathbf{A}) = r</math>). We will show that:
  1. there exists a set of <math>n-r</math> linearly independent solutions of <math>\mathbf{Ax} = \mathbf{0}</math>, and
  2. every other solution of <math>\mathbf{Ax} = \mathbf{0}</math> is a linear combination of them.

To do this, we will produce an <math>n \times (n-r)</math> matrix <math>\mathbf{X}</math> whose columns form a basis of the null space of <math>\mathbf{A}</math>.

Without loss of generality, assume that the first <math>r</math> columns of <math>\mathbf{A}</math> are linearly independent. So, we can write <math display="block">\mathbf{A} = \begin{pmatrix} \mathbf{A}_1 & \mathbf{A}_2\end{pmatrix} ,</math> where

  • <math>\mathbf{A}_1</math> is an <math>m \times r</math> matrix with <math>r</math> linearly independent column vectors, and
  • <math>\mathbf{A}_2</math> is an <math>m\times (n-r)</math> matrix such that each of its <math>n-r</math> columns is a linear combination of the columns of <math>\mathbf{A}_1</math>.

This means that <math>\mathbf{A}_2 = \mathbf{A}_1\mathbf{B}</math> for some <math>r \times (n-r)</math> matrix <math>\mathbf{B}</math> (see rank factorization) and, hence, <math display="block">\mathbf{A} = \begin{pmatrix} \mathbf{A}_1 & \mathbf{A}_1\mathbf{B}\end{pmatrix} .</math>

Let <math display="block">\mathbf{X} = \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix} , </math> where <math>\mathbf{I}_{n-r}</math> is the <math>(n-r)\times (n-r)</math> identity matrix. So, <math>\mathbf{X}</math> is an <math>n \times (n-r)</math> matrix such that <math display="block"> \mathbf{A}\mathbf{X} = \begin{pmatrix}\mathbf{A}_1 & \mathbf{A}_1\mathbf{B} \end{pmatrix}\begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix} = -\mathbf{A}_1\mathbf{B} + \mathbf{A}_1\mathbf{B} = \mathbf{0}_{m \times (n-r)}. </math>

Therefore, each of the <math>n-r</math> columns of <math>\mathbf{X}</math> is a particular solution of <math>\mathbf{Ax} = \mathbf{0}_{F^{m}}</math>.

Furthermore, the <math>n-r</math> columns of <math>\mathbf{X}</math> are linearly independent because <math>\mathbf{Xu} = \mathbf{0}_{F^{n}}</math> implies <math>\mathbf{u} = \mathbf{0}_{F^{n-r}}</math> for <math>\mathbf{u} \in F^{n-r}</math>: <math display="block"> \mathbf{X}\mathbf{u} = \mathbf{0}_{F^{n}} \implies \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix}\mathbf{u} = \mathbf{0}_{F^{n}} \implies \begin{pmatrix} -\mathbf{B}\mathbf{u} \\ \mathbf{u} \end{pmatrix} = \begin{pmatrix} \mathbf{0}_{F^{r}} \\ \mathbf{0}_{F^{n-r}} \end{pmatrix} \implies \mathbf{u} = \mathbf{0}_{F^{n-r}}.</math> Therefore, the column vectors of <math>\mathbf{X}</math> constitute a set of <math>n-r</math> linearly independent solutions of <math>\mathbf{Ax} = \mathbf{0}_{F^{m}}</math>.

We next prove that any solution of <math>\mathbf{Ax} = \mathbf{0}_{{F}^{m}}</math> must be a linear combination of the columns of <math>\mathbf{X}</math>.

For this, let <math display="block">\mathbf{u} = \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \end{pmatrix} \in F^{n}</math> be any vector such that <math>\mathbf{Au} = \mathbf{0}_{F^{m}}</math>. Since the columns of <math>\mathbf{A}_1</math> are linearly independent, <math>\mathbf{A}_1\mathbf{x} = \mathbf{0}_{F^{m}}</math> implies <math>\mathbf{x} = \mathbf{0}_{F^{r}}</math>.

Therefore, <math display="block">\begin{align} \mathbf{A}\mathbf{u} = \mathbf{0}_{F^{m}} &\implies \begin{pmatrix}\mathbf{A}_1 & \mathbf{A}_1\mathbf{B}\end{pmatrix} \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \end{pmatrix} = \mathbf{A}_1\mathbf{u}_1 + \mathbf{A}_1\mathbf{B}\mathbf{u}_2 = \mathbf{A}_1(\mathbf{u}_1 + \mathbf{B}\mathbf{u}_2) = \mathbf{0}_{F^{m}} \\ &\implies \mathbf{u}_1 + \mathbf{B}\mathbf{u}_2 = \mathbf{0}_{F^{r}} \\ &\implies \mathbf{u}_1 = -\mathbf{B}\mathbf{u}_2 \end{align}</math> <math display="block"> \implies \mathbf{u} = \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \end{pmatrix} = \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix}\mathbf{u}_2 = \mathbf{X}\mathbf{u}_2. </math>

This proves that any vector <math>\mathbf{u}</math> that is a solution of <math>\mathbf{Ax} = \mathbf{0}</math> must be a linear combination of the <math>n-r</math> special solutions given by the columns of <math>\mathbf{X}</math>. We have already seen that the columns of <math>\mathbf{X}</math> are linearly independent, so they constitute a basis for the null space of <math>\mathbf{A}</math>. Therefore, the nullity of <math>\mathbf{A}</math> is <math>n - r</math>. Since <math>r</math> equals the rank of <math>\mathbf{A}</math>, it follows that <math>\operatorname{Rank}(\mathbf{A}) + \operatorname{Nullity}(\mathbf{A}) = n</math>. This concludes our proof.
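The construction in this proof translates directly into a short computation. The sketch below uses an illustrative matrix whose first <math>r</math> columns are linearly independent; <math>\mathbf{B}</math> is recovered by least squares (exact here, since <math>\mathbf{A}_2</math> lies in the span of <math>\mathbf{A}_1</math>), and both claims about <math>\mathbf{X}</math> are verified.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative A with r = 2: columns 3 and 4 are combinations of the
# first two (c3 = c1 + c2, c4 = 2*c1 + 3*c2).
A = np.array([[1., 0., 1., 2.],
              [0., 1., 1., 3.],
              [1., 1., 2., 5.]])
n = A.shape[1]
r = np.linalg.matrix_rank(A)
A1, A2 = A[:, :r], A[:, r:]

# A2 = A1 @ B; least squares recovers B exactly since A2 is in span(A1).
B, *_ = np.linalg.lstsq(A1, A2, rcond=None)

# X = [[-B], [I_{n-r}]], exactly as in the proof.
X = np.vstack([-B, np.eye(n - r)])

assert np.allclose(A @ X, 0)                # every column of X solves Ax = 0
assert np.linalg.matrix_rank(X) == n - r    # the columns are independent
# Hence nullity(A) = n - r, i.e. rank(A) + nullity(A) = n.
</syntaxhighlight>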

A third fundamental subspace

When <math>T: V \to W</math> is a linear transformation between two finite-dimensional vector spaces, with <math> n = \dim(V)</math> and <math> m = \dim(W)</math> (so it can be represented by an <math>m \times n</math> matrix <math>M</math>), the rank–nullity theorem asserts that if <math>T</math> has rank <math>r</math>, then <math>n - r</math> is the dimension of the null space of <math>M</math>, which represents the kernel of <math>T</math>. In some texts, a third fundamental subspace associated to <math>T</math> is considered alongside its image and kernel: the cokernel of <math>T</math> is the quotient space <math>W / \operatorname{Im}(T)</math>, and its dimension is <math>m - r</math>. This dimension formula (which might also be rendered <math>\dim \operatorname{Im}(T) + \dim\operatorname{Coker}(T) = \dim(W)</math>) together with the rank–nullity theorem is sometimes called the fundamental theorem of linear algebra.<ref>Strang, Gilbert. Linear Algebra and Its Applications. 3rd ed. Orlando: Saunders, 1988.</ref>
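Numerically, over the real numbers the cokernel <math>W / \operatorname{Im}(T)</math> can be identified with the left null space of <math>M</math>, so its dimension <math>m - r</math> can be checked by counting the zero singular values of <math>M^\mathsf{T}</math>; a small illustrative sketch:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative 3x2 matrix of rank 1: m = 3, n = 2, r = 1.
M = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])
m, n = M.shape
r = np.linalg.matrix_rank(M)

# Over R, the cokernel R^m / Im(M) can be identified with the left
# null space of M, i.e. the null space of M^T.
_, s, _ = np.linalg.svd(M.T)
tol = s.max() * max(M.shape) * np.finfo(float).eps
coker_dim = m - np.sum(s > tol)

assert r + (n - r) == n          # rank–nullity on the domain side
assert coker_dim == m - r        # dim Coker(M) = m - r (= 2 here)
</syntaxhighlight>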

Reformulations and generalizations

This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma.

In more modern language, the theorem can also be phrased as saying that each short exact sequence of vector spaces splits. Explicitly, if <math display="block"> 0 \rightarrow U \rightarrow V \mathbin{\overset{T}{\rightarrow}} R \rightarrow 0 </math> is a short exact sequence of vector spaces, then <math> U \oplus R \cong V </math>, hence <math display="block">\dim(U) + \dim(R) = \dim(V) .</math> Here <math>R</math> plays the role of <math>\operatorname{Im} T</math> and <math>U</math> is <math>\operatorname{Ker}T</math>, i.e. <math display="block"> 0 \rightarrow \ker T \mathbin{\hookrightarrow} V \mathbin{\overset{T}{\rightarrow}} \operatorname{im} T \rightarrow 0 .</math>

In the finite-dimensional case, this formulation is susceptible to a generalization: if <math display="block">0 \rightarrow V_1 \rightarrow V_2 \rightarrow \cdots \rightarrow V_r \rightarrow 0</math> is an exact sequence of finite-dimensional vector spaces, then <math display="block">\sum_{i=1}^r (-1)^i\dim(V_i) = 0.</math> The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map <math>T \in \operatorname{Hom}(V,W)</math>, where <math>V</math> and <math>W</math> are finite-dimensional, is defined by <math display="block"> \operatorname{index} T = \dim \operatorname{Ker}(T) - \dim \operatorname{Coker} T .</math>

Intuitively, <math>\dim \operatorname{Ker} T</math> is the number of independent solutions <math>v</math> of the equation <math>Tv = 0</math>, and <math>\dim \operatorname{Coker} T </math> is the number of independent restrictions that have to be put on <math>w</math> to make <math>Tv = w </math> solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement <math display="block"> \operatorname{index} T = \dim V - \dim W . </math>
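This equivalence is easy to test numerically: for any matrix, <math>(n - r) - (m - r) = n - m</math> regardless of <math>r</math>. A brief sketch with a randomly drawn (hypothetical) map:

<syntaxhighlight lang="python">
import numpy as np

# A generic map T : R^5 -> R^3, drawn at random for illustration.
rng = np.random.default_rng(0)
m, n = 3, 5
T = rng.standard_normal((m, n))

r = np.linalg.matrix_rank(T)      # generically r = min(m, n) = 3
index = (n - r) - (m - r)         # dim Ker T - dim Coker T

assert index == n - m             # the index is 2, independent of T
</syntaxhighlight>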

We see that we can easily read off the index of the linear map <math>T</math> from the involved spaces, without any need to analyze <math>T</math> in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces.

Citations

Template:Reflist
