==Numerical computation==
The problem of computing the kernel on a computer depends on the nature of the coefficients.

===Exact coefficients===
If the coefficients of the matrix are exactly given numbers, the [[column echelon form]] of the matrix may be computed with the [[Bareiss algorithm]] more efficiently than with Gaussian elimination. It is even more efficient to use [[modular arithmetic]] and the [[Chinese remainder theorem]], which reduce the problem to several similar ones over [[finite field]]s (this avoids the overhead induced by the non-linearity of the [[computational complexity]] of integer multiplication).{{Citation needed|date=October 2014}}

For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in [[cryptography]] and [[Gröbner basis]] computation, better algorithms are known, which have roughly the same [[Analysis of algorithms|computational complexity]] but are faster and behave better with modern [[computer hardware]].{{Citation needed|date=October 2014}}

===Floating-point computation===
For matrices whose entries are [[floating-point number]]s, the problem of computing the kernel makes sense only for matrices whose number of rows equals their rank: because of [[rounding error]]s, a floating-point matrix almost always has [[full rank]], even when it is an approximation of a matrix of much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is [[well-conditioned problem|well conditioned]], i.e. it has a low [[condition number]].<ref>{{Cite web |url=https://www.math.ohiou.edu/courses/math3600/lecture11.pdf |title=Archived copy |access-date=2015-04-14 |archive-url=https://web.archive.org/web/20170829031912/http://www.math.ohiou.edu/courses/math3600/lecture11.pdf |archive-date=2017-08-29 |url-status=dead }}</ref>{{Citation needed|date=December 2019}}

Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large to obtain a significant result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is the [[LAPACK]] library.{{Citation needed|date=October 2014}}
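As an illustrative sketch (not a prescribed method), the numerical kernel of a floating-point matrix can be obtained from a [[singular value decomposition]], which NumPy computes via LAPACK routines; singular values below a relative tolerance are treated as zero, and the tolerance <code>rtol</code> used here is an assumption of the example rather than a universal choice.

<syntaxhighlight lang="python">
import numpy as np

def null_space(A, rtol=1e-10):
    """Return an orthonormal basis of the numerical kernel of A.

    Singular values below rtol * (largest singular value) are treated
    as zero; the choice of rtol is an assumption of this sketch.
    """
    # Full SVD: A = U @ diag(s) @ Vt, computed by LAPACK.
    U, s, Vt = np.linalg.svd(A)
    tol = rtol * (s[0] if s.size else 0.0)
    rank = int(np.sum(s > tol))
    # The rows of Vt beyond the numerical rank span the kernel;
    # transpose so that the basis vectors are columns.
    return Vt[rank:].T

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
K = null_space(A)                 # columns of K span ker(A)
print(np.allclose(A @ K, 0.0))    # True up to rounding error
</syntaxhighlight>

The same idea underlies library routines such as <code>scipy.linalg.null_space</code>, which likewise determine the numerical rank from the singular values rather than from Gaussian elimination.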