{{Short description|Matrix in which most of the elements are zero}}
{{redirect|Sparsity|other uses|Sparse (disambiguation)}}
{| class=wikitable align=right width=240px style="margin: 3px 0 5px 14px;"
| {{center|'''Example of sparse matrix'''}}
{{center|<math>\left(\begin{smallmatrix}
11 & 22 & 0 & 0 & 0 & 0 & 0 \\
0 & 33 & 44 & 0 & 0 & 0 & 0 \\
0 & 0 & 55 & 66 & 77 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 88 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 99 \\
\end{smallmatrix}\right)</math>}}
|-
| {{center|{{resize|The above sparse matrix contains only 9 non-zero elements, with 26 zero elements. Its sparsity is 74%, and its density is 26%.}}}}
|}
[[Image:Finite element sparse matrix.png|right|thumb|A sparse matrix obtained when solving a [[finite element method|finite element problem]] in two dimensions. The non-zero elements are shown in black.]]

In [[numerical analysis]] and [[scientific computing]], a '''sparse matrix''' or '''sparse array''' is a [[matrix (mathematics)|matrix]] in which most of the elements are zero.<ref name="Yan Wu Liu Gao 2017 p. ">{{cite conference | last1=Yan | first1=Di | last2=Wu | first2=Tao | last3=Liu | first3=Ying | last4=Gao | first4=Yang | title=2017 IEEE 17th International Conference on Communication Technology (ICCT) | chapter=An efficient sparse-dense matrix multiplication on a multicore system | publisher=IEEE | year=2017 | pages=1880–3 | isbn=978-1-5090-3944-9 | doi=10.1109/icct.2017.8359956 | quote=The computation kernel of DNN is large sparse-dense matrix multiplication. In the field of numerical analysis, a sparse matrix is a matrix populated primarily with zeros as elements of the table. By contrast, if the number of non-zero elements in a matrix is relatively large, then it is commonly considered a dense matrix. The fraction of zero elements (non-zero elements) in a matrix is called the sparsity (density). Operations using standard dense-matrix structures and algorithms are relatively slow and consume large amounts of memory when applied to large sparse matrices. }}</ref> There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as '''sparse''', but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered '''dense'''.<ref name="Yan Wu Liu Gao 2017 p. "/> The number of zero-valued elements divided by the total number of elements (e.g., ''m'' × ''n'' for an ''m'' × ''n'' matrix) is sometimes referred to as the '''sparsity''' of the matrix.

Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system, as only adjacent balls are coupled. By contrast, if the same line of balls had springs connecting each ball to all the other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in [[combinatorics]] and application areas such as [[network theory]] and [[numerical analysis]], which typically have a low density of significant data or connections. Large sparse matrices often appear in [[scientific]] or [[engineering]] applications when solving [[partial differential equation]]s.

When storing and manipulating sparse matrices on a [[computer]], it is beneficial and often necessary to use specialized [[algorithm]]s and [[data structure]]s that take advantage of the sparse structure of the matrix.
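As an illustration of the definition above, the sparsity and density of the example matrix can be computed directly from the counts of zero and non-zero elements. The following is a minimal sketch in plain Python; the matrix literal simply transcribes the example in the table, and the variable names are illustrative only.

<syntaxhighlight lang="python">
# The example matrix from the table above: 5 rows, 7 columns.
A = [
    [11, 22,  0,  0,  0,  0,  0],
    [ 0, 33, 44,  0,  0,  0,  0],
    [ 0,  0, 55, 66, 77,  0,  0],
    [ 0,  0,  0,  0,  0, 88,  0],
    [ 0,  0,  0,  0,  0,  0, 99],
]

total = len(A) * len(A[0])                       # m * n = 35 elements
nonzero = sum(1 for row in A for x in row if x)  # 9 non-zero elements

sparsity = (total - nonzero) / total             # 26/35
density = nonzero / total                        # 9/35

print(f"sparsity = {sparsity:.0%}, density = {density:.0%}")
# sparsity = 74%, density = 26%
</syntaxhighlight>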
Specialized computers have been made for sparse matrices,<ref>{{Cite web|url=https://www.businesswire.com/news/home/20190819005148/en/Cerebras-Systems-Unveils-Industry%E2%80%99s-Trillion-Transistor-Chip|title=Cerebras Systems Unveils the Industry's First Trillion Transistor Chip| quote=The WSE contains 400,000 AI-optimized compute cores. Called SLAC™ for Sparse Linear Algebra Cores, the compute cores are flexible, programmable, and optimized for the sparse linear algebra that underpins all neural network computation|date=2019-08-19 |website=www.businesswire.com|language=en|access-date=2019-12-02}}</ref> as they are common in the machine learning field.<ref>{{Cite press release|url=https://www.anl.gov/article/argonne-national-laboratory-deploys-cerebras-cs1-the-worlds-fastest-artificial-intelligence-computer|title=Argonne National Laboratory Deploys Cerebras CS-1, the World's Fastest Artificial Intelligence Computer {{!}} Argonne National Laboratory|quote=The WSE is the largest chip ever made at 46,225 square millimeters in area, it is 56.7 times larger than the largest graphics processing unit. It contains 78 times more AI optimized compute cores, 3,000 times more high speed, on-chip memory, 10,000 times more memory bandwidth, and 33,000 times more communication bandwidth.| website=www.anl.gov|language=en|access-date=2019-12-02}}</ref> Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices, as processing and [[Computer memory|memory]] are wasted on the zeros. Sparse data is by nature more easily [[data compression|compressed]] and thus requires significantly less [[computer data storage|storage]]. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.
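The storage savings can be made concrete with a library such as [[SciPy]], whose <code>scipy.sparse</code> module provides several sparse formats, including compressed sparse row (CSR), which stores only the non-zero values together with their index arrays. The following is a minimal sketch, not drawn from the sources cited above; the 1000 × 1000 size and the pattern of non-zeros are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.sparse import csr_matrix

# Illustrative matrix: 10^6 entries, of which only 100 are non-zero.
dense = np.zeros((1000, 1000))
dense[::100, ::100] = 1.0

sparse = csr_matrix(dense)  # convert to compressed sparse row format

print(dense.nbytes)  # 8,000,000 bytes for the dense float64 array
print(sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes)
# roughly 5,200 bytes: non-zero values, column indices, and row pointers
</syntaxhighlight>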