{{Short description|Matrix in which most of the elements are zero}}
{{redirects|Sparsity|other uses|Sparse (disambiguation)}}
{| class=wikitable align=right width=240px style="margin: 3px 0 5px 14px;"
| {{center|'''Example of sparse matrix'''}} {{center|<math>\left(\begin{smallmatrix} 11 & 22 & 0 & 0 & 0 & 0 & 0 \\ 0 & 33 & 44 & 0 & 0 & 0 & 0 \\ 0 & 0 & 55 & 66 & 77 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 88 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 99 \\ \end{smallmatrix}\right)</math>}}
|-
| {{center|{{resize|The above sparse matrix contains only 9 non-zero elements, with 26 zero elements. Its sparsity is 74%, and its density is 26%.}}}}
|}
[[Image:Finite element sparse matrix.png|right|thumb|A sparse matrix obtained when solving a [[finite element method|finite element problem]] in two dimensions. The non-zero elements are shown in black.]]
In [[numerical analysis]] and [[scientific computing]], a '''sparse matrix''' or '''sparse array''' is a [[matrix (mathematics)|matrix]] in which most of the elements are zero.<ref name="Yan Wu Liu Gao 2017 p. ">{{cite conference | last1=Yan | first1=Di | last2=Wu | first2=Tao | last3=Liu | first3=Ying | last4=Gao | first4=Yang | title=2017 IEEE 17th International Conference on Communication Technology (ICCT) | chapter=An efficient sparse-dense matrix multiplication on a multicore system | publisher=IEEE | year=2017 | pages=1880–3 | isbn=978-1-5090-3944-9 | doi=10.1109/icct.2017.8359956 | quote=The computation kernel of DNN is large sparse-dense matrix multiplication. In the field of numerical analysis, a sparse matrix is a matrix populated primarily with zeros as elements of the table. By contrast, if the number of non-zero elements in a matrix is relatively large, then it is commonly considered a dense matrix. The fraction of zero elements (non-zero elements) in a matrix is called the sparsity (density). Operations using standard dense-matrix structures and algorithms are relatively slow and consume large amounts of memory when applied to large sparse matrices. }}</ref> There is no strict definition regarding the proportion of zero-valued elements for a matrix to qualify as '''sparse''', but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered '''dense'''.<ref name="Yan Wu Liu Gao 2017 p. "/> The number of zero-valued elements divided by the total number of elements (e.g., ''m'' × ''n'' for an ''m'' × ''n'' matrix) is sometimes referred to as the '''sparsity''' of the matrix.

Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system, as only adjacent balls are coupled. By contrast, if the same line of balls were to have springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in [[combinatorics]] and application areas such as [[network theory]] and [[numerical analysis]], which typically have a low density of significant data or connections. Large sparse matrices often appear in [[scientific]] or [[engineering]] applications when solving [[partial differential equation]]s.

When storing and manipulating sparse matrices on a [[computer]], it is beneficial and often necessary to use specialized [[algorithm]]s and [[data structure]]s that take advantage of the sparse structure of the matrix.
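As a concrete illustration, the sparsity and density quoted for the example matrix above follow directly from the definition. The following sketch uses [[NumPy]] purely for convenience; the calculation is elementary and library-independent:

<syntaxhighlight lang="python">
import numpy as np

# The example sparse matrix from the table above (5 x 7, 9 non-zeros).
A = np.array([
    [11, 22,  0,  0,  0,  0,  0],
    [ 0, 33, 44,  0,  0,  0,  0],
    [ 0,  0, 55, 66, 77,  0,  0],
    [ 0,  0,  0,  0,  0, 88,  0],
    [ 0,  0,  0,  0,  0,  0, 99],
])

density = np.count_nonzero(A) / A.size   # fraction of non-zero elements
sparsity = 1.0 - density                 # fraction of zero elements
print(f"sparsity = {sparsity:.0%}, density = {density:.0%}")
# sparsity = 74%, density = 26%
</syntaxhighlight>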
Specialized computers have been made for sparse matrices,<ref>{{Cite web|url=https://www.businesswire.com/news/home/20190819005148/en/Cerebras-Systems-Unveils-Industry%E2%80%99s-Trillion-Transistor-Chip|title=Cerebras Systems Unveils the Industry's First Trillion Transistor Chip| quote=The WSE contains 400,000 AI-optimized compute cores. Called SLAC™ for Sparse Linear Algebra Cores, the compute cores are flexible, programmable, and optimized for the sparse linear algebra that underpins all neural network computation|date=2019-08-19 |website=www.businesswire.com|language=en|access-date=2019-12-02}}</ref> as they are common in the machine learning field.<ref>{{Cite press release|url=https://www.anl.gov/article/argonne-national-laboratory-deploys-cerebras-cs1-the-worlds-fastest-artificial-intelligence-computer|title=Argonne National Laboratory Deploys Cerebras CS-1, the World's Fastest Artificial Intelligence Computer {{!}} Argonne National Laboratory|quote=The WSE is the largest chip ever made at 46,225 square millimeters in area, it is 56.7 times larger than the largest graphics processing unit. It contains 78 times more AI optimized compute cores, 3,000 times more high speed, on-chip memory, 10,000 times more memory bandwidth, and 33,000 times more communication bandwidth.| website=www.anl.gov|language=en|access-date=2019-12-02}}</ref> Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices, as processing and [[Computer memory|memory]] are wasted on the zeros. Sparse data is by nature more easily [[data compression|compressed]] and thus requires significantly less [[computer data storage|storage]]. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.

==Special cases==

===Banded===
{{main article|Band matrix}}
An important special type of sparse matrix is the [[band matrix]], defined as follows. The [[lower bandwidth of a matrix]] {{math|'''A'''}} is the smallest number {{math|''p''}} such that the entry {{math|''a''<sub>''i'',''j''</sub>}} vanishes whenever {{math|''i'' > ''j'' + ''p''}}. Similarly, the [[Band matrix#upper bandwidth|upper bandwidth]] is the smallest number {{math|''p''}} such that {{math|1=''a''<sub>''i'',''j''</sub> = 0}} whenever {{math|''i'' < ''j'' − ''p''}} {{harv|Golub|Van Loan|1996|loc=§1.2.1}}. For example, a [[tridiagonal matrix]] has lower bandwidth {{math|1}} and upper bandwidth {{math|1}}. As another example, the following sparse matrix has lower and upper bandwidth both equal to 3. Notice that zeros are represented with dots for clarity.
<math display="block">\begin{bmatrix} X & X & X & \cdot & \cdot & \cdot & \cdot & \\ X & X & \cdot & X & X & \cdot & \cdot & \\ X & \cdot & X & \cdot & X & \cdot & \cdot & \\ \cdot & X & \cdot & X & \cdot & X & \cdot & \\ \cdot & X & X & \cdot & X & X & X & \\ \cdot & \cdot & \cdot & X & X & X & \cdot & \\ \cdot & \cdot & \cdot & \cdot & X & \cdot & X & \\ \end{bmatrix}</math>
Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping over a reduced number of indices.

By rearranging the rows and columns of a matrix {{math|'''A'''}} it may be possible to obtain a matrix {{math|'''A'''′}} with a lower bandwidth. A number of algorithms are designed for [[Graph bandwidth|bandwidth minimization]].
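The definitions above translate directly into code: the lower (upper) bandwidth is the largest value of {{math|''i'' − ''j''}} (respectively {{math|''j'' − ''i''}}) over the non-zero entries. A minimal sketch, assuming NumPy and a dense input for simplicity:

<syntaxhighlight lang="python">
import numpy as np

def bandwidths(A):
    """Return (lower, upper) bandwidth of A.

    The lower bandwidth is the smallest p such that A[i, j] == 0
    whenever i > j + p, i.e. the maximum of i - j over non-zeros;
    the upper bandwidth is the maximum of j - i.
    """
    rows, cols = np.nonzero(A)
    if rows.size == 0:
        return 0, 0
    lower = max(0, int((rows - cols).max()))
    upper = max(0, int((cols - rows).max()))
    return lower, upper

# A tridiagonal matrix has lower and upper bandwidth 1:
T = np.diag([1, 2, 3]) + np.diag([4, 5], 1) + np.diag([6, 7], -1)
print(bandwidths(T))  # (1, 1)
</syntaxhighlight>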
====Diagonal====
A very efficient structure for an extreme case of band matrices, the ''[[diagonal matrix]]'', is to store just the entries in the [[main diagonal]] as a [[one-dimensional array]], so a diagonal {{math|''n'' × ''n''}} matrix requires only {{math|''n''}} entries.

===Symmetric===
A symmetric sparse matrix arises as the [[adjacency matrix]] of an [[undirected graph]]; it can be stored efficiently as an [[adjacency list]].

===Block diagonal===
A [[block-diagonal matrix]] consists of sub-matrices along its diagonal blocks. A block-diagonal matrix {{math|'''A'''}} has the form
<math display="block">\mathbf{A} = \begin{bmatrix} \mathbf{A}_1 & 0 & \cdots & 0 \\ 0 & \mathbf{A}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{A}_n \end{bmatrix},</math>
where {{math|'''A'''<sub>''k''</sub>}} is a square matrix for all {{math|1=''k'' = 1, ..., ''n''}}.

==Use==

===Reducing fill-in===
The ''fill-in'' of a matrix consists of those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The [[symbolic Cholesky decomposition]] can be used to calculate the worst possible fill-in before doing the actual [[Cholesky decomposition]].

Methods other than the [[Cholesky decomposition]] are also in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can differ between methods, and symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst-case fill-in.

===Solving sparse matrix equations===
Both [[Iterative method|iterative]] and direct methods exist for solving sparse matrix equations. Iterative methods, such as the [[conjugate gradient]] method and [[GMRES]], utilize fast computations of matrix-vector products <math>A x_i</math>, where the matrix <math>A</math> is sparse. The use of [[preconditioner]]s can significantly accelerate convergence of such iterative methods.
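As a sketch of the iterative approach, the following solves a sparse symmetric positive-definite system with the conjugate gradient method. It uses [[SciPy]] (one of the libraries listed under Software below); the particular tridiagonal system is an arbitrary example of the kind produced by one-dimensional finite-difference discretizations:

<syntaxhighlight lang="python">
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# A sparse symmetric positive-definite tridiagonal matrix, stored in CSR.
n = 1000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Conjugate gradient needs only matrix-vector products A @ x_i,
# which are fast because A is stored sparsely.
x, info = cg(A, b)
print(info == 0)                  # True: the iteration converged
print(np.linalg.norm(A @ x - b))  # small residual
</syntaxhighlight>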
=={{anchor|storage}} Storage ==
A matrix is typically stored as a two-dimensional array. Each entry in the array represents an element {{math|''a''<sub>''i'',''j''</sub>}} of the matrix and is accessed by the two [[Array data structure|indices]] {{math|''i''}} and {{math|''j''}}. Conventionally, {{math|''i''}} is the row index, numbered from top to bottom, and {{math|''j''}} is the column index, numbered from left to right. For an {{math|''m'' × ''n''}} matrix, the amount of memory required to store the matrix in this format is proportional to {{math|''m'' × ''n''}} (disregarding the fact that the dimensions of the matrix also need to be stored).

In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to recover the original matrix unambiguously.

Formats can be divided into two groups:
* Those that support efficient modification, such as DOK (Dictionary of keys), LIL (List of lists), or COO (Coordinate list). These are typically used to construct the matrices.
* Those that support efficient access and matrix operations, such as CSR (Compressed Sparse Row) or CSC (Compressed Sparse Column).

===Dictionary of keys (DOK)===
DOK consists of a [[associative array|dictionary]] that maps {{math|(row, column)}}-[[ordered pair|pairs]] to the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing.<ref>See [http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.dok_matrix.html <code>scipy.sparse.dok_matrix</code>]</ref>

===List of lists (LIL)===
LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are kept sorted by column index for faster lookup. This is another format good for incremental matrix construction.<ref>See [http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.lil_matrix.html <code>scipy.sparse.lil_matrix</code>]</ref>

===Coordinate list (COO)===
COO stores a list of {{math|(row, column, value)}} tuples. Ideally, the entries are sorted first by row index and then by column index, to improve random access times. This is another format that is good for incremental matrix construction, as shown in the sketch below.<ref>See [http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.coo_matrix.html <code>scipy.sparse.coo_matrix</code>]</ref>
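A sketch of the construct-then-convert workflow described above, using the SciPy classes cited in this section (COO for assembly, then CSR for processing); the matrix is the same {{math|4 × 4}} example worked through in the CSR section below:

<syntaxhighlight lang="python">
from scipy.sparse import coo_matrix

# Assemble incrementally as (row, column, value) triples (COO)...
rows = [0, 1, 2, 3]
cols = [0, 1, 2, 1]
vals = [5, 8, 3, 6]
A = coo_matrix((vals, (rows, cols)), shape=(4, 4))

# ...then convert once to CSR for fast row access and products.
A_csr = A.tocsr()
print(A_csr.data)     # [5 8 3 6]    -> the V array
print(A_csr.indices)  # [0 1 2 1]    -> COL_INDEX
print(A_csr.indptr)   # [0 1 2 3 4]  -> ROW_INDEX
</syntaxhighlight>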
===Compressed sparse row (CSR, CRS or Yale format)===
The ''compressed sparse row'' (CSR) or ''compressed row storage'' (CRS) or Yale format represents a matrix {{math|'''M'''}} by three (one-dimensional) arrays, that respectively contain nonzero values, the extents of rows, and column indices. It is similar to COO, but compresses the row indices, hence the name. This format allows fast row access and matrix-vector multiplications ({{math|'''M'''''x''}}). The CSR format has been in use since at least the mid-1960s, with the first complete description appearing in 1967.<ref>{{cite conference |first1=Aydın |last1=Buluç |first2=Jeremy T. |last2=Fineman |first3=Matteo |last3=Frigo |first4=John R. |last4=Gilbert |first5=Charles E. |last5=Leiserson |authorlink5=Charles E. Leiserson |title=Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks |year=2009 |conference=ACM Symp. on Parallelism in Algorithms and Architectures |url=https://people.eecs.berkeley.edu/~aydin/csb2009.pdf |citeseerx=10.1.1.211.5256}}</ref>

The CSR format stores a sparse {{math|''m'' × ''n''}} matrix {{math|'''M'''}} in row form using three (one-dimensional) arrays {{math|(V, COL_INDEX, ROW_INDEX)}}. Let {{math|NNZ}} denote the number of nonzero entries in {{math|'''M'''}}. (Note that [[Zero-based numbering|zero-based indices]] are used here.)
* The arrays {{math|V}} and {{math|COL_INDEX}} are of length {{math|NNZ}}, and contain the non-zero values and the column indices of those values respectively; {{math|COL_INDEX}} contains the column in which the corresponding entry of {{math|V}} is located.
* The array {{math|ROW_INDEX}} is of length {{math|''m'' + 1}} and encodes the index in {{math|V}} and {{math|COL_INDEX}} where the given row starts. This is equivalent to {{math|ROW_INDEX[j]}} encoding the total number of nonzeros above row {{math|j}}. The last element is {{math|NNZ}}, i.e., the fictitious index in {{math|V}} immediately after the last valid index {{math|NNZ − 1}}.<ref name=Saad03>{{harvnb|Saad|2003}}</ref>

For example, the matrix
<math display="block">\begin{pmatrix} 5 & 0 & 0 & 0 \\ 0 & 8 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 6 & 0 & 0 \\ \end{pmatrix}</math>
is a {{math|4 × 4}} matrix with 4 nonzero elements, hence
 V         = [ 5 8 3 6 ]
 COL_INDEX = [ 0 1 2 1 ]
 ROW_INDEX = [ 0 1 2 3 4 ]
assuming a zero-indexed language. To extract a row, we first define:
 row_start = ROW_INDEX[row]
 row_end   = ROW_INDEX[row + 1]
Then we take slices from V and COL_INDEX starting at row_start and ending at row_end.

To extract row 1 (the second row) of this matrix we set <code>row_start=1</code> and <code>row_end=2</code>. Then we make the slices <code>V[1:2] = [8]</code> and <code>COL_INDEX[1:2] = [1]</code>. We now know that in row 1 we have one element at column 1 with value 8. In this case the CSR representation contains 13 entries, compared to 16 in the original matrix. The CSR format saves on memory only when {{math|NNZ < (''m'' (''n'' − 1) − 1) / 2}}.

As another example, the matrix
<math display="block">\begin{pmatrix} 10 & 20 & 0 & 0 & 0 & 0 \\ 0 & 30 & 0 & 40 & 0 & 0 \\ 0 & 0 & 50 & 60 & 70 & 0 \\ 0 & 0 & 0 & 0 & 0 & 80 \\ \end{pmatrix}</math>
is a {{math|4 × 6}} matrix (24 entries) with 8 nonzero elements, so
 V         = [ 10 20 30 40 50 60 70 80 ]
 COL_INDEX = [ 0 1 1 3 2 3 4 5 ]
 ROW_INDEX = [ 0 2 4 7 8 ]
The whole is stored as 21 entries: 8 in {{math|V}}, 8 in {{math|COL_INDEX}}, and 5 in {{math|ROW_INDEX}}.
* {{math|ROW_INDEX}} splits the array {{math|V}} into rows: <code>(10, 20) (30, 40) (50, 60, 70) (80)</code>, indicating the index of {{math|V}} (and {{math|COL_INDEX}}) where each row starts and ends;
* {{math|COL_INDEX}} aligns values in columns: <code>(10, 20, ...) (0, 30, 0, 40, ...) (0, 0, 50, 60, 70, 0) (0, 0, 0, 0, 0, 80)</code>.

Note that in this format, the first value of {{math|ROW_INDEX}} is always zero and the last is always {{math|NNZ}}, so they are in some sense redundant (although in programming languages where the array length needs to be explicitly stored, {{math|NNZ}} would not be redundant). Nonetheless, this does avoid the need to handle an exceptional case when computing the length of each row, as it guarantees the formula {{math|ROW_INDEX[''i'' + 1] − ROW_INDEX[''i'']}} works for any row {{math|''i''}}. Moreover, the memory cost of this redundant storage is likely insignificant for a sufficiently large matrix.

The (old and new) Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly as described above, with three arrays; the new format combines {{math|ROW_INDEX}} and {{math|COL_INDEX}} into a single array and handles the diagonal of the matrix separately.<ref>{{citation |title=Sparse Matrix Multiplication Package (SMMP) |first1=Randolph E. |last1=Bank |first2=Craig C. |last2=Douglas |journal=Advances in Computational Mathematics |volume=1 |year=1993 |pages=127–137 |doi=10.1007/BF02070824 |s2cid=6412241 |url=http://www.mgnet.org/~douglas/Preprints/pub0034.pdf}}</ref>

For [[Logical matrix|logical]] [[Adjacency matrix|adjacency matrices]], the data array can be omitted, as the existence of an entry in the row array is sufficient to model a binary adjacency relation.
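To make the row-slicing recipe above concrete, here is a minimal, self-contained sketch of a matrix-vector product driven by the three CSR arrays (plain Python, zero-based indexing as in the examples):

<syntaxhighlight lang="python">
def csr_matvec(V, COL_INDEX, ROW_INDEX, x):
    """Compute y = M @ x from the three CSR arrays of M."""
    m = len(ROW_INDEX) - 1          # number of rows
    y = [0.0] * m
    for i in range(m):
        # Non-zeros of row i occupy V[ROW_INDEX[i]:ROW_INDEX[i + 1]].
        for k in range(ROW_INDEX[i], ROW_INDEX[i + 1]):
            y[i] += V[k] * x[COL_INDEX[k]]
    return y

# The 4 x 6 example above, multiplied by the all-ones vector:
V = [10, 20, 30, 40, 50, 60, 70, 80]
COL_INDEX = [0, 1, 1, 3, 2, 3, 4, 5]
ROW_INDEX = [0, 2, 4, 7, 8]
print(csr_matvec(V, COL_INDEX, ROW_INDEX, [1, 1, 1, 1, 1, 1]))
# [30.0, 70.0, 180.0, 80.0], i.e. the row sums
</syntaxhighlight>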
The CSR format is likely known as the Yale format because it was proposed in the 1977 Yale Sparse Matrix Package report from the Department of Computer Science at Yale University.<ref>{{cite web |url=https://apps.dtic.mil/dtic/tr/fulltext/u2/a047724.pdf |archive-url=https://web.archive.org/web/20190406045412/https://apps.dtic.mil/dtic/tr/fulltext/u2/a047724.pdf |url-status=live |archive-date=April 6, 2019 |title=Yale Sparse Matrix Package |last1=Eisenstat |first1=S. C. |last2=Gursky |first2=M. C. |last3=Schultz |first3=M. H. |last4=Sherman |first4=A. H. |date=April 1977 |language=English |access-date=6 April 2019 }}</ref>

===Compressed sparse column (CSC or CCS)===
CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. For example, CSC is {{math|(val, row_ind, col_ptr)}}, where {{math|val}} is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; {{math|row_ind}} is the row indices corresponding to the values; and {{math|col_ptr}} is the list of {{math|val}} indexes where each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. This format is efficient for arithmetic operations, column slicing, and matrix-vector products. This is the traditional format for specifying a sparse matrix in MATLAB (via the <code>sparse</code> function).
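For comparison with the CSR arrays shown earlier, the following sketch inspects the {{math|(val, row_ind, col_ptr)}} arrays of the first example matrix in CSC form, again using SciPy:

<syntaxhighlight lang="python">
from scipy.sparse import csc_matrix

# The 4 x 4 matrix from the first CSR example, now stored by columns.
A = csc_matrix([[5, 0, 0, 0],
                [0, 8, 0, 0],
                [0, 0, 3, 0],
                [0, 6, 0, 0]])

print(A.data)     # [5 8 6 3]    -> val (top-to-bottom, then left-to-right)
print(A.indices)  # [0 1 3 2]    -> row_ind
print(A.indptr)   # [0 1 3 4 4]  -> col_ptr (column 3 is empty)
</syntaxhighlight>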
==Software==
Many software libraries support sparse matrices and provide solvers for sparse matrix equations. The following are open-source:
* [[Portable, Extensible Toolkit for Scientific Computation|PETSc]], a large C library, containing many different matrix solvers for a variety of matrix storage formats.
* [[Trilinos]], a large C++ library, with sub-libraries dedicated to the storage of dense and sparse matrices and solution of corresponding linear systems.
* [[Eigen (C++ library)|Eigen3]] is a C++ library that contains several sparse matrix solvers. However, none of them are [[Parallel computing|parallelized]].
* [[MUMPS (software)|MUMPS]] ('''MU'''ltifrontal '''M'''assively '''P'''arallel sparse direct '''S'''olver), written in Fortran 90, is a [[frontal solver]].
* [[deal.II]], a finite element library that also has a sub-library for sparse linear systems and their solution.
* [[Dune (mathematics software)|DUNE]], another finite element library that also has a sub-library for sparse linear systems and their solution.
* [[Armadillo (C++ library)|Armadillo]] provides a user-friendly C++ wrapper for BLAS and LAPACK.
* [[SciPy]] provides support for several sparse matrix formats, linear algebra, and solvers.
* [[ALGLIB]] is a C++ and C# library with sparse linear algebra support.
* [[ARPACK]], a Fortran 77 library for sparse matrix diagonalization and manipulation, using the Arnoldi algorithm.
* [[SLEPc]], a library for the solution of large-scale linear systems and sparse matrices.
* [[scikit-learn]], a Python library for [[machine learning]], provides support for sparse matrices and solvers.
* [https://docs.julialang.org/en/v1/stdlib/SparseArrays/ SparseArrays] is a [[Julia (programming language)|Julia]] standard library.
* [[PSBLAS]], a software toolkit to solve sparse linear systems supporting multiple formats, also on GPU.

==History==
The term ''sparse matrix'' was possibly coined by [[Harry Markowitz]], who initiated some pioneering work but then left the field.<ref>[http://purl.umn.edu/107467 Oral history interview with Harry M. Markowitz], pp. 9, 10.</ref>

==See also==
{{columns-list|colwidth=22em|
* [[Matrix representation]]
* [[Pareto principle]]
* [[Ragged matrix]]
* [[Single-entry matrix]]
* [[Skyline matrix]]
* [[Sparse graph code]]
* [[Sparse file]]
* [[Harwell-Boeing file format]]
* [[Matrix Market exchange formats]]
}}

==Notes==
{{Reflist}}

==References==
{{refbegin}}
* {{Cite book | first1=Gene H. | last1=Golub | author1-link=Gene H. Golub | first2=Charles F. | last2=Van Loan | author2-link=Charles F. Van Loan | year=1996 | title=Matrix Computations | edition=3rd | publisher=Johns Hopkins | place=Baltimore | isbn=978-0-8018-5414-9 }}
* {{Cite book | last1=Stoer | first1=Josef | last2=Bulirsch | first2=Roland | title=Introduction to Numerical Analysis | publisher=Springer | edition=3rd | isbn=978-0-387-95452-3 | year=2002 |doi=10.1007/978-0-387-21738-3 }}
* {{Cite book | last=Tewarson| first=Reginald P.|title=Sparse Matrices |publisher=Academic Press |date=1973 |url=https://www.sciencedirect.com/science/book/9780126856507 |isbn=0-12-685650-8 |oclc=316552948 |series=Mathematics in science and engineering |volume=99}} (This book, by a professor at the State University of New York at Stony Brook, was the first book exclusively dedicated to sparse matrices. Graduate courses using it as a textbook were offered at that university in the early 1980s.)
* {{Cite web |title=Sparse Matrix Multiplication Package|first1= Randolph E.|last1= Bank|first2= Craig C.|last2= Douglas |url=http://www.mgnet.org/~douglas/Preprints/pub0034.pdf}}
* {{Cite book |last=Pissanetzky|first= Sergio|year= 1984|title=Sparse Matrix Technology|url=https://archive.org/details/sparsematrixtech0000piss|url-access=registration|publisher= Academic Press |isbn=978-0-12-557580-5 |oclc=680489638 }}
* {{cite journal|doi=10.1007/BF02521587|title=Reducing the profile of sparse symmetric matrices|year=1976|last1=Snay|first1=Richard A.|journal=[[Bulletin Géodésique]]|volume=50|issue=4|pages=341–352|bibcode=1976BGeod..50..341S|hdl=2027/uc1.31210024848523|s2cid=123079384|hdl-access=free}} Also NOAA Technical Memorandum NOS NGS-4, National Geodetic Survey, Rockville, MD.
* {{cite book |first1=Jennifer |last1=Scott |first2=Miroslav |last2=Tuma |title=Algorithms for Sparse Linear Systems |series=Nečas Center Series |publisher=Birkhauser |date=2023 |doi=10.1007/978-3-031-25820-6 |isbn=978-3-031-25819-0 }} (Open Access)
{{refend}}

==Further reading==
{{refbegin}}
* {{cite journal | title = A comparison of several bandwidth and profile reduction algorithms | journal = ACM Transactions on Mathematical Software | year = 1976 | volume = 2 | issue = 4 | pages = 322–330 | url = http://portal.acm.org/citation.cfm?id=355707 | doi = 10.1145/355705.355707 | last1 = Gibbs | first1 = Norman E. | last2 = Poole | first2 = William G. | last3 = Stockmeyer | first3 = Paul K. | s2cid = 14494429 | url-access = subscription }}
* {{cite journal | title = Sparse matrices in MATLAB: Design and Implementation | journal = SIAM Journal on Matrix Analysis and Applications | year = 1992 | volume = 13 | issue = 1 | pages = 333–356 | url = http://citeseer.ist.psu.edu/gilbert91sparse.html | doi = 10.1137/0613024 | last1 = Gilbert | first1 = John R. | last2 = Moler | first2 = Cleve | last3 = Schreiber | first3 = Robert | citeseerx = 10.1.1.470.1054 }}
* [http://faculty.cse.tamu.edu/davis/research.html Sparse Matrix Algorithms Research] at Texas A&M University.
* [https://sparse.tamu.edu/ SuiteSparse Matrix Collection]
* [http://www.small-project.eu SMALL project] An EU-funded project on sparse models, algorithms and dictionary learning for large-scale data.
* {{cite book |first=Wolfgang |last=Hackbusch |title=Iterative Solution of Large Sparse Systems of Equations |series=Applied Mathematical Sciences |publisher=Springer |date=2016 |volume=95 |doi=10.1007/978-3-319-28483-5 |edition=2nd|isbn=978-3-319-28481-1 }}
* {{cite book |first=Yousef |last=Saad |title=Iterative Methods for Sparse Linear Systems |publisher=SIAM |date=2003 |isbn=978-0-89871-534-7 |doi=10.1137/1.9780898718003 |oclc=693784152 }}
* {{cite book |first=Timothy A. |last=Davis |title=Direct Methods for Sparse Linear Systems |publisher=SIAM |date=2006 |isbn=978-0-89871-613-9 |doi=10.1137/1.9780898718881 |oclc=694087302 }}
{{refend}}

{{Data structures}}
{{Matrix classes}}
{{Numerical linear algebra}}

[[Category:Sparse matrices| ]]
[[Category:Arrays]]