{{Short description|Vectors mapped to 0 by a linear map}}
{{other uses|Kernel (disambiguation)}}
[[File:Projection-on-diagonal.gif|thumb|300x300px|An example of a kernel: the linear operator <math>L : (x,y) \mapsto (x, x)</math> maps every point on the line <math>x = 0</math> to the zero point <math>(0,0)</math>, so these points form the kernel of the operator.]]
In [[mathematics]], the '''kernel''' of a [[linear map]], also known as the '''null space''' or '''nullspace''', is the part of the [[Domain of a function|domain]] that is mapped to the [[Zero element#Additive identities|zero vector]] of the [[codomain]]; the kernel is always a [[linear subspace]] of the domain.<ref>{{Cite web|url=http://mathworld.wolfram.com/Kernel.html|title=Kernel|last=Weisstein|first=Eric W.|website=mathworld.wolfram.com|language=en|access-date=2019-12-09}}</ref> That is, given a linear map {{math|''L'' : ''V'' → ''W''}} between two [[vector space]]s {{mvar|V}} and {{mvar|W}}, the kernel of {{mvar|L}} is the vector space of all elements {{math|'''v'''}} of {{mvar|V}} such that {{math|1=''L''('''v''') = '''0'''}}, where {{math|'''0'''}} denotes the [[zero vector]] in {{mvar|W}},<ref name=":0">{{Cite web|url=https://brilliant.org/wiki/kernel/|title=Kernel (Nullspace) {{!}} Brilliant Math & Science Wiki|website=brilliant.org|language=en-us|access-date=2019-12-09}}</ref> or, more symbolically:
<math display="block">\ker(L) = \left\{ \mathbf{v} \in V \mid L(\mathbf{v})=\mathbf{0} \right\} = L^{-1}(\mathbf{0}).</math>

==Properties==
[[File:Kernel and image of linear map.svg|thumb|300px|Kernel and image of a linear map {{mvar|L}} from {{mvar|V}} to {{mvar|W}}]]
The kernel of {{mvar|L}} is a [[linear subspace]] of the domain {{mvar|V}}.<ref name="textbooks">Linear algebra, as discussed in this article, is a very well-established mathematical discipline for which there are many sources.
Almost all of the material in this article can be found in {{harvnb|Lay|2005}}, {{harvnb|Meyer|2001}}, and Strang's lectures.</ref><ref name=":0" /> In the linear map <math>L : V \to W,</math> two elements of {{mvar|V}} have the same [[Image (mathematics)|image]] in {{mvar|W}} if and only if their difference lies in the kernel of {{mvar|L}}, that is,
<math display=block>L\left(\mathbf{v}_1\right) = L\left(\mathbf{v}_2\right) \quad \text{ if and only if } \quad L\left(\mathbf{v}_1-\mathbf{v}_2\right) = \mathbf{0}.</math>
From this, it follows by the [[Isomorphism_theorems#First_isomorphism_theorem|first isomorphism theorem]] that the image of {{mvar|L}} is [[Vector space isomorphism|isomorphic]] to the [[Quotient space (linear algebra)|quotient]] of {{mvar|V}} by the kernel:
<math display=block>\operatorname{im}(L) \cong V / \ker(L).</math>
{{anchor|nullity}}In the case where {{mvar|V}} is [[finite-dimensional]], this implies the [[rank–nullity theorem]]:
<math display=block>\dim(\ker L) + \dim(\operatorname{im} L) = \dim(V),</math>
where the term {{em|{{visible anchor|rank}}}} refers to the dimension of the image of {{mvar|L}}, <math>\dim(\operatorname{im} L),</math> while ''{{em|{{visible anchor|nullity}}}}'' refers to the dimension of the kernel of {{mvar|L}}, <math>\dim(\ker L).</math><ref name=":1">{{Cite web|last=Weisstein|first=Eric W.|url=http://mathworld.wolfram.com/Rank-NullityTheorem.html|title=Rank-Nullity Theorem|website=mathworld.wolfram.com|language=en|access-date=2019-12-09}}</ref> That is,
<math display=block>\operatorname{Rank}(L) = \dim(\operatorname{im} L) \qquad \text{ and } \qquad \operatorname{Nullity}(L) = \dim(\ker L),</math>
so that the rank–nullity theorem can be restated as
<math display=block>\operatorname{Rank}(L) + \operatorname{Nullity}(L) = \dim \left(\operatorname{domain} L\right).</math>
When {{mvar|V}} is an [[inner product space]], the quotient <math>V / \ker(L)</math> can be identified with the [[orthogonal complement]] in {{mvar|V}} of <math>\ker(L)</math>. This is the generalization to linear operators of the [[row space]], or coimage, of a matrix.

==Generalization to modules==
{{main|Module (mathematics)}}
The notion of kernel also makes sense for [[homomorphism]]s of [[Module (mathematics)|modules]], which are generalizations of vector spaces where the scalars are elements of a [[Ring (mathematics)|ring]], rather than a [[Field (mathematics)|field]]. The domain of the mapping is a module, with the kernel constituting a [[submodule]]. Here, the concepts of rank and nullity do not necessarily apply.

==In functional analysis==
{{main|Topological vector space}}
If {{mvar|V}} and {{mvar|W}} are [[topological vector space]]s such that {{mvar|W}} is finite-dimensional, then a linear operator {{math|''L'': ''V'' → ''W''}} is [[continuous linear operator|continuous]] if and only if the kernel of {{mvar|L}} is a [[closed set|closed]] subspace of {{mvar|V}}.

==Representation as matrix multiplication==
Consider a linear map represented as an {{math|''m'' × ''n''}} matrix {{mvar|A}} with coefficients in a [[field (mathematics)|field]] {{mvar|K}} (typically <math>\mathbb{R}</math> or <math>\mathbb{C}</math>), operating on column vectors {{math|'''x'''}} with {{mvar|n}} components over {{mvar|K}}. The kernel of this linear map is the set of solutions to the equation {{math|1=''A'''''x''' = '''0'''}}, where {{math|'''0'''}} is understood as the [[zero vector]].
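
As a brief computational sketch (the choice of Python with the [[SymPy]] library is illustrative, not part of the mathematics), such a kernel can be computed exactly when the entries are rational; the matrix used here is the one worked out by hand in {{slink||Illustration}} below:

<syntaxhighlight lang="python">
# Illustrative sketch: computing ker(A) exactly with SymPy's rational arithmetic.
from sympy import Matrix

A = Matrix([[ 2, 3, 5],
            [-4, 2, 3]])

basis = A.nullspace()   # list of column vectors spanning the kernel
print(basis)            # [Matrix([[-1/16], [-13/8], [1]])]

# Each basis vector indeed satisfies A x = 0:
for v in basis:
    assert A * v == Matrix([0, 0])
</syntaxhighlight>
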
The [[dimension (vector space)|dimension]] of the kernel of {{mvar|A}} is called the '''nullity''' of {{mvar|A}}. In [[set-builder notation]],
<math display="block">\operatorname{N}(A) = \operatorname{Null}(A) = \operatorname{ker}(A) = \left\{ \mathbf{x}\in K^n \mid A\mathbf{x} = \mathbf{0} \right\}.</math>
The matrix equation is equivalent to a homogeneous [[system of linear equations]]:
<math display="block">A\mathbf{x}=\mathbf{0} \;\;\Leftrightarrow\;\; \begin{alignat}{7} a_{11} x_1 &&\; + \;&& a_{12} x_2 &&\; + \;\cdots\; + \;&& a_{1n} x_n &&\; = \;&&& 0 \\ a_{21} x_1 &&\; + \;&& a_{22} x_2 &&\; + \;\cdots\; + \;&& a_{2n} x_n &&\; = \;&&& 0 \\ && && && && &&\vdots\ \;&&& \\ a_{m1} x_1 &&\; + \;&& a_{m2} x_2 &&\; + \;\cdots\; + \;&& a_{mn} x_n &&\; = \;&&& 0\text{.} \\ \end{alignat}</math>
Thus the kernel of {{mvar|A}} is the same as the solution set to the above homogeneous equations.

===Subspace properties===
The kernel of an {{math|''m'' × ''n''}} matrix {{mvar|A}} over a field {{mvar|K}} is a [[linear subspace]] of {{math|''K''<sup>''n''</sup>}}. That is, the kernel of {{mvar|A}}, the set {{math|Null(''A'')}}, has the following three properties:
# {{math|Null(''A'')}} always contains the [[zero vector]], since {{math|1=''A'''''0''' = '''0'''}}.
# If {{math|'''x''' ∈ Null(''A'')}} and {{math|'''y''' ∈ Null(''A'')}}, then {{math|'''x''' + '''y''' ∈ Null(''A'')}}. This follows from the distributivity of matrix multiplication over addition.
# If {{math|'''x''' ∈ Null(''A'')}} and {{mvar|c}} is a [[scalar (mathematics)|scalar]] {{math|''c'' ∈ ''K''}}, then {{math|''c'''''x''' ∈ Null(''A'')}}, since {{math|1=''A''(''c'''''x''') = ''c''(''A'''''x''') = ''c'''''0''' = '''0'''}}.

===The row space of a matrix===
{{main|Rank–nullity theorem}}
The product {{math|''A'''''x'''}} can be written in terms of the [[dot product]] of vectors as follows:
<math display="block">A\mathbf{x} = \begin{bmatrix} \mathbf{a}_1 \cdot \mathbf{x} \\ \mathbf{a}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{a}_m \cdot \mathbf{x} \end{bmatrix}.</math>
Here, {{math|'''a'''<sub>1</sub>, ... , '''a'''<sub>''m''</sub>}} denote the rows of the matrix {{mvar|A}}. It follows that {{math|'''x'''}} is in the kernel of {{mvar|A}} if and only if {{math|'''x'''}} is [[orthogonality|orthogonal]] (or perpendicular) to each of the row vectors of {{mvar|A}} (since orthogonality is defined as having a dot product of 0).

The [[row space]], or coimage, of a matrix {{mvar|A}} is the [[linear span|span]] of the row vectors of {{mvar|A}}. By the above reasoning, the kernel of {{mvar|A}} is the [[orthogonal complement]] to the row space. That is, a vector {{math|'''x'''}} lies in the kernel of {{mvar|A}} if and only if it is perpendicular to every vector in the row space of {{mvar|A}}.

The dimension of the row space of {{mvar|A}} is called the [[rank (linear algebra)|rank]] of {{mvar|A}}, and the dimension of the kernel of {{mvar|A}} is called the '''nullity''' of {{mvar|A}}. These quantities are related by the [[rank–nullity theorem]]<ref name=":1" />
<math display="block">\operatorname{rank}(A) + \operatorname{nullity}(A) = n.</math>

===Left null space===
The '''left null space''', or [[cokernel]], of a matrix {{mvar|A}} consists of all column vectors {{math|'''x'''}} such that {{math|1='''x'''<sup>T</sup>''A'' = '''0'''<sup>T</sup>}}, where T denotes the [[transpose]] of a matrix. The left null space of {{mvar|A}} is the same as the kernel of {{math|''A''<sup>T</sup>}}.
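
As a short sketch (again Python with SymPy, one possible tool among many), the identity just stated can be checked on a small example by computing a basis of the kernel of the transpose:

<syntaxhighlight lang="python">
# Illustrative sketch: the left null space of A is ker(A^T).
from sympy import Matrix

A = Matrix([[1, 2],
            [2, 4],
            [3, 6]])            # rank 1, so the left null space has dimension m - rank = 2

left_null = A.T.nullspace()     # basis of ker(A^T)
for x in left_null:
    assert x.T * A == Matrix([[0, 0]])   # x^T A is the zero row vector
print(len(left_null))           # 2
</syntaxhighlight>
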
The left null space of {{mvar|A}} is the orthogonal complement to the [[column space]] of {{mvar|A}}, and is dual to the [[cokernel]] of the associated linear transformation. The kernel, the row space, the column space, and the left null space of {{mvar|A}} are the '''four fundamental subspaces''' associated with the matrix {{mvar|A}}.

===Nonhomogeneous systems of linear equations===
The kernel also plays a role in the solution to a nonhomogeneous system of linear equations:
<math display="block">A\mathbf{x} = \mathbf{b}\quad \text{or} \quad \begin{alignat}{7} a_{11} x_1 &&\; + \;&& a_{12} x_2 &&\; + \;\cdots\; + \;&& a_{1n} x_n &&\; = \;&&& b_1 \\ a_{21} x_1 &&\; + \;&& a_{22} x_2 &&\; + \;\cdots\; + \;&& a_{2n} x_n &&\; = \;&&& b_2 \\ && && && && &&\vdots\ \;&&& \\ a_{m1} x_1 &&\; + \;&& a_{m2} x_2 &&\; + \;\cdots\; + \;&& a_{mn} x_n &&\; = \;&&& b_m \\ \end{alignat}</math>
If {{math|'''u'''}} and {{math|'''v'''}} are two possible solutions to the above equation, then
<math display="block">A(\mathbf{u} - \mathbf{v}) = A\mathbf{u} - A\mathbf{v} = \mathbf{b} - \mathbf{b} = \mathbf{0}.</math>
Thus, the difference of any two solutions to the equation {{math|1=''A'''''x''' = '''b'''}} lies in the kernel of {{mvar|A}}. It follows that any solution to the equation {{math|1=''A'''''x''' = '''b'''}} can be expressed as the sum of a fixed solution {{math|'''v'''}} and an arbitrary element of the kernel. That is, the solution set to the equation {{math|1=''A'''''x''' = '''b'''}} is
<math display="block">\left\{ \mathbf{v}+\mathbf{x} \mid A \mathbf{v}=\mathbf{b} \land \mathbf{x}\in\operatorname{Null}(A) \right\}.</math>
Geometrically, this says that the solution set to {{math|1=''A'''''x''' = '''b'''}} is the [[translation (geometry)|translation]] of the kernel of {{mvar|A}} by the vector {{math|'''v'''}}. See also [[Fredholm alternative]] and [[flat (geometry)]].

==Illustration==
The following is a simple illustration of the computation of the kernel of a matrix (see {{slink||Computation by Gaussian elimination}} below for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to the kernel.

Consider the matrix
<math display="block">A = \begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix}.</math>
The kernel of this matrix consists of all vectors {{math|(''x'', ''y'', ''z'') ∈ [[real coordinate space|'''R'''<sup>3</sup>]]}} for which
<math display="block">\begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},</math>
which can be expressed as a homogeneous [[system of linear equations]] involving {{mvar|x}}, {{mvar|y}}, and {{mvar|z}}:
<math display="block">\begin{align} 2x + 3y + 5z &= 0, \\ -4x + 2y + 3z &= 0. \end{align}</math>
The same linear equations can also be written in matrix form as:
<math display="block"> \left[\begin{array}{ccc|c} 2 & 3 & 5 & 0 \\ -4 & 2 & 3 & 0 \end{array}\right]. </math>
Through [[Gauss–Jordan elimination]], the matrix can be reduced to:
<math display="block"> \left[\begin{array}{ccc|c} 1 & 0 & 1/16 & 0 \\ 0 & 1 & 13/8 & 0 \end{array}\right]. </math>
Rewriting the matrix in equation form yields:
<math display="block">\begin{align} x &= -\frac{1}{16}z \\ y &= -\frac{13}{8}z.
\end{align}</math>
The elements of the kernel can be further expressed in [[Parametric equations|parametric vector form]], as follows:
<math display="block">\begin{bmatrix} x \\ y \\ z\end{bmatrix} = c \begin{bmatrix} -1/16 \\ -13/8 \\ 1\end{bmatrix}\quad (\text{where }c \in \mathbb{R}).</math>
Since {{mvar|c}} is a [[Free variables (system of linear equations)|free variable]] ranging over all real numbers, this can be expressed equally well as:
<math display="block"> \begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix}. </math>
The kernel of {{mvar|A}} is precisely the solution set to these equations (in this case, a [[line (geometry)|line]] through the origin in {{math|'''R'''<sup>3</sup>}}). Here, the vector {{math|(−1,−26,16)<sup>T</sup>}} constitutes a [[Basis (linear algebra)|basis]] of the kernel of {{mvar|A}}. The nullity of {{mvar|A}} is therefore 1, as the kernel is spanned by a single vector.

The following dot products are zero:
<math display="block"> \begin{bmatrix} 2 & 3 & 5 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0 \quad\mathrm{and}\quad \begin{bmatrix} -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0, </math>
which illustrates that vectors in the kernel of {{mvar|A}} are orthogonal to each of the row vectors of {{mvar|A}}. These two (linearly independent) row vectors span the row space of {{mvar|A}}: a plane orthogonal to the vector {{math|(−1,−26,16)<sup>T</sup>}}. Since the rank of {{mvar|A}} is 2, the nullity of {{mvar|A}} is 1, and the dimension of the domain of {{mvar|A}} is 3, we have an illustration of the rank–nullity theorem.

==Examples==
*If {{math|''L'': '''R'''<sup>''m''</sup> → '''R'''<sup>''n''</sup>}}, then the kernel of {{math|''L''}} is the solution set to a homogeneous [[system of linear equations]]. As in the above illustration, if {{math|''L''}} is the operator <math display="block"> L(x_1, x_2, x_3) = (2 x_1 + 3 x_2 + 5 x_3,\; - 4 x_1 + 2 x_2 + 3 x_3),</math> then the kernel of {{math|''L''}} is the set of solutions to the equations <math display="block"> \begin{alignat}{7} 2x_1 &\;+\;& 3x_2 &\;+\;& 5x_3 &\;=\;& 0 \\ -4x_1 &\;+\;& 2x_2 &\;+\;& 3x_3 &\;=\;& 0 \end{alignat}</math>
*Let {{math|''C''[0,1]}} denote the [[vector space]] of all continuous real-valued functions on the interval {{math|[0,1]}}, and define {{math|''L'': ''C''[0,1] → '''R'''}} by the rule <math display="block">L(f) = f(0.3).</math> Then the kernel of {{math|''L''}} consists of all functions {{math|1=''f'' ∈ ''C''[0,1]}} for which {{math|1=''f''(0.3) = 0}}.
*Let {{math|''C''<sup>∞</sup>('''R''')}} be the vector space of all infinitely differentiable functions {{math|'''R''' → '''R'''}}, and let {{math|''D'': ''C''<sup>∞</sup>('''R''') → ''C''<sup>∞</sup>('''R''')}} be the [[differential operator|differentiation operator]] <math display="block">D(f) = \frac{df}{dx}.</math> Then the kernel of {{math|''D''}} consists of all functions in {{math|''C''<sup>∞</sup>('''R''')}} whose derivatives are zero, i.e. the set of all [[constant function]]s.
*Let {{math|'''R'''<sup>∞</sup>}} be the [[direct product]] of infinitely many copies of {{math|'''R'''}}, and let {{math|''s'': '''R'''<sup>∞</sup> → '''R'''<sup>∞</sup>}} be the [[shift operator]] <math display="block"> s(x_1, x_2, x_3, x_4, \ldots) = (x_2, x_3, x_4, \ldots).</math> Then the kernel of {{math|''s''}} is the one-dimensional subspace consisting of all vectors {{math|(''x''<sub>1</sub>, 0, 0, 0, ...)}}.
*If {{mvar|V}} is an [[inner product space]] and {{mvar|W}} is a subspace, the kernel of the [[projection (linear algebra)|orthogonal projection]] {{math|''V'' → ''W''}} is the [[orthogonal complement]] to {{mvar|W}} in {{mvar|V}}.

==Computation by Gaussian elimination==
A [[Basis (linear algebra)|basis]] of the kernel of a matrix may be computed by [[Gaussian elimination]]. For this purpose, given an {{math|''m'' × ''n''}} matrix {{mvar|A}}, we first construct the row-[[augmented matrix]] <math> \begin{bmatrix}A \\ \hline I \end{bmatrix},</math> where {{math|''I''}}<!-- necessary to ensure a serif font --> is the {{math|''n'' × ''n''}} [[identity matrix]].

Computing its [[column echelon form]] by Gaussian elimination (or any other suitable method), we get a matrix <math> \begin{bmatrix} B \\\hline C \end{bmatrix}.</math> A basis of the kernel of {{mvar|A}} consists of the nonzero columns of {{mvar|C}} such that the corresponding column of {{mvar|B}} is a [[zero matrix|zero column]].

In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero.

For example, suppose that
<math display="block">A = \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}. </math>
Then
<math display="block"> \begin{bmatrix} A \\ \hline I \end{bmatrix} = \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}. </math>
Putting the upper part in column echelon form by column operations on the whole matrix gives
<math display="block"> \begin{bmatrix} B \\ \hline C \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 3 & -2 & 8 \\ 0 & 1 & 0 & -5 & 1 & -4 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & -7 & 9 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}. </math>
The last three columns of {{mvar|B}} are zero columns. Therefore, the last three vectors of {{mvar|C}},
<math display="block">\left[\!\! \begin{array}{r} 3 \\ -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right] ,\; \left[\!\! \begin{array}{r} -2 \\ 1 \\ 0 \\ -7 \\ 1 \\ 0 \end{array} \right],\; \left[\!\! \begin{array}{r} 8 \\ -4 \\ 0 \\ 9 \\ 0 \\ 1 \end{array} \right] </math>
are a basis of the kernel of {{mvar|A}}.

Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that <math>\begin{bmatrix} A \\ \hline I \end{bmatrix}</math> reduces to <math>\begin{bmatrix} B \\ \hline C \end{bmatrix}</math> means that there exists an invertible matrix <math>P</math> such that <math> \begin{bmatrix} A \\ \hline I \end{bmatrix} P = \begin{bmatrix} B \\ \hline C \end{bmatrix}, </math> with <math>B</math> in column echelon form.
Thus {{nowrap|<math>AP = B</math>,}} {{nowrap|<math>IP = C</math>,}} and {{nowrap|<math>AC = B</math>.}} A column vector <math>\mathbf v</math> belongs to the kernel of <math>A</math> (that is, <math>A \mathbf v = \mathbf 0</math>) if and only if <math>B \mathbf w = \mathbf 0,</math> where {{nowrap|<math>\mathbf w = P^{-1} \mathbf v = C^{-1} \mathbf v</math>.}} As <math>B</math> is in column echelon form, <math>B \mathbf w = \mathbf 0</math> if and only if the nonzero entries of <math>\mathbf w</math> correspond to the zero columns of {{nowrap|<math>B</math>.}} By multiplying by {{nowrap|<math>C</math>,}} one may deduce that this is the case if and only if <math>\mathbf v = C \mathbf w</math> is a linear combination of the corresponding columns of {{nowrap|<math>C</math>.}}

==Numerical computation==
The problem of computing the kernel on a computer depends on the nature of the coefficients.

===Exact coefficients===
If the coefficients of the matrix are exactly given numbers, the [[column echelon form]] of the matrix may be computed more efficiently with the [[Bareiss algorithm]] than with Gaussian elimination. It is even more efficient to use [[modular arithmetic]] and the [[Chinese remainder theorem]], which reduce the problem to several similar ones over [[finite field]]s (this avoids the overhead induced by the non-linearity of the [[computational complexity]] of integer multiplication).{{Citation needed|date=October 2014}}

For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in [[cryptography]] and [[Gröbner basis]] computation, better algorithms are known, which have roughly the same [[Analysis of algorithms|computational complexity]] but are faster and behave better with modern [[computer hardware]].{{Citation needed|date=October 2014}}

===Floating point computation===
For matrices whose entries are [[floating-point number]]s, the problem of computing the kernel makes sense only for matrices whose number of rows equals their rank: because of [[rounding error]]s, a floating-point matrix almost always has [[full rank]], even when it is an approximation of a matrix of much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is [[well-conditioned problem|well conditioned]], i.e. it has a low [[condition number]].<ref>{{Cite web |url=https://www.math.ohiou.edu/courses/math3600/lecture11.pdf |title=Archived copy |access-date=2015-04-14 |archive-url=https://web.archive.org/web/20170829031912/http://www.math.ohiou.edu/courses/math3600/lecture11.pdf |archive-date=2017-08-29 |url-status=dead }}</ref>{{Citation needed|date=December 2019}}

Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large for getting a significant result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems.
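
As a sketch of a standard floating-point approach (written here in Python with NumPy, whose <code>svd</code> routine delegates to LAPACK; the tolerance below is a customary heuristic, not a universal rule), one computes a [[singular value decomposition]] and takes the right-singular vectors whose singular values fall below a threshold as an orthonormal basis of the approximate kernel:

<syntaxhighlight lang="python">
# Illustrative sketch: an SVD-based numerical kernel. The numerical rank is
# estimated by thresholding singular values against a tolerance.
import numpy as np

def null_space(A, rcond=None):
    u, s, vh = np.linalg.svd(A)       # LAPACK-backed SVD
    m, n = A.shape
    if rcond is None:
        rcond = np.finfo(A.dtype).eps * max(m, n)
    tol = s.max() * rcond if s.size else 0.0
    rank = int(np.sum(s > tol))       # numerical rank of A
    return vh[rank:].conj().T         # columns: orthonormal basis of the kernel

A = np.array([[ 2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
K = null_space(A)                     # one column, proportional to (-1, -26, 16)
print(np.allclose(A @ K, 0.0))        # True, up to rounding error
</syntaxhighlight>
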
State-of-the-art software for this purpose is the [[Lapack]] library.{{Citation needed|date=October 2014}}

==See also==
{{Div col|colwidth=20em}}
* [[Kernel (algebra)]]
* [[Zero set]]
* [[System of linear equations]]
* [[Row and column spaces]]
* [[Row reduction]]
* [[Four fundamental subspaces]]
* [[Vector space]]
* [[Linear subspace]]
* [[Linear operator]]
* [[Function space]]
* [[Fredholm alternative]]
{{div col end}}

==Notes and references==
{{reflist}}

==Bibliography==
{{see also|Linear algebra#Further reading}}
{{refbegin}}
* {{Citation | last = Axler | first = Sheldon Jay | year = 1997 | title = Linear Algebra Done Right | publisher = Springer-Verlag | edition = 2nd | isbn = 0-387-98259-0 | postscript = . }}
* {{Citation | last = Lay | first = David C. | year = 2005 | title = Linear Algebra and Its Applications | publisher = Addison Wesley | edition = 3rd | isbn = 978-0-321-28713-7 | postscript = . }}
* {{Citation |last=Meyer |first=Carl D. |year=2001 |title=Matrix Analysis and Applied Linear Algebra |publisher=Society for Industrial and Applied Mathematics (SIAM) |isbn=978-0-89871-454-8 |url=http://www.matrixanalysis.com/DownloadChapters.html |postscript=. |url-status=dead |archive-url=https://web.archive.org/web/20091031193126/http://matrixanalysis.com/DownloadChapters.html |archive-date=2009-10-31 }}
* {{Citation | last = Poole | first = David | year = 2006 | title = Linear Algebra: A Modern Introduction | publisher = Brooks/Cole | edition = 2nd | isbn = 0-534-99845-3 | postscript = . }}
* {{Citation | last = Anton | first = Howard | year = 2005 | title = Elementary Linear Algebra (Applications Version) | publisher = Wiley International | edition = 9th | postscript = . }}
* {{Citation | last = Leon | first = Steven J. | year = 2006 | title = Linear Algebra With Applications | publisher = Pearson Prentice Hall | edition = 7th | postscript = . }}
* {{Cite book | first = Serge | last = Lang | author-link = Serge Lang | title = Linear Algebra | publisher = Springer | year = 1987 | isbn = 9780387964126 }}
* {{Citation | first1 = Lloyd N. | last1 = Trefethen | first2 = David | last2 = Bau III | title = Numerical Linear Algebra | publisher = SIAM | year = 1997 | isbn = 978-0-89871-361-9 | url = http://web.comlab.ox.ac.uk/oucl/work/nick.trefethen/text.html | postscript = . }}
{{refend}}

==External links==
{{wikibooks|Linear Algebra/Null Spaces}}
* {{springer|title=Kernel of a matrix|id=p/k110090}}
* [[Khan Academy]], [http://www.khanacademy.org/video/introduction-to-the-null-space-of-a-matrix Introduction to the Null Space of a Matrix]

{{linear algebra}}

{{DEFAULTSORT:Kernel (linear algebra)}}
[[Category:Linear algebra]]
[[Category:Functional analysis]]
[[Category:Matrices (mathematics)]]
[[Category:Numerical linear algebra]]