{{Short description|Square matrix used to represent a graph or network}}
In [[graph theory]] and [[computer science]], an '''adjacency matrix''' is a [[square matrix]] used to represent a finite [[graph (discrete mathematics)|graph]]. The elements of the [[matrix (mathematics)|matrix]] indicate whether pairs of [[Vertex (graph theory)|vertices]] are [[Neighbourhood (graph theory)|adjacent]] or not in the graph.

In the special case of a finite [[simple graph]], the adjacency matrix is a [[(0,1)-matrix]] with zeros on its diagonal. If the graph is [[Glossary of graph theory terms#undirected|undirected]] (i.e. all of its [[Glossary of graph theory terms#edge|edges]] are bidirectional), the adjacency matrix is [[symmetric matrix|symmetric]]. The relationship between a graph and the [[eigenvalue]]s and [[eigenvector]]s of its adjacency matrix is studied in [[spectral graph theory]].

The adjacency matrix of a graph should be distinguished from its [[incidence matrix]], a different matrix representation whose elements indicate whether vertex–edge pairs are [[Incidence (graph)|incident]] or not, and its [[degree matrix]], which contains information about the [[degree (graph theory)|degree]] of each vertex.

==Definition==
For a simple graph with vertex set {{math|''U'' {{=}} {''u''<sub>1</sub>, ..., ''u''<sub>''n''</sub>}<nowiki/>}}, the adjacency matrix is a square {{math|''n'' × ''n''}} matrix {{mvar|A}} such that its element {{mvar|A<sub>ij</sub>}} is 1 when there is an edge from vertex {{math|''u''<sub>i</sub>}} to vertex {{math|''u''<sub>j</sub>}}, and 0 when there is no edge.<ref>{{citation| title=Algebraic Graph Theory| edition=2nd| first=Norman|last=Biggs|series=Cambridge Mathematical Library|publisher=Cambridge University Press|year=1993|at=Definition 2.1, p. 7}}.</ref> The diagonal elements of the matrix are all 0, since edges from a vertex to itself ([[Loop (graph theory)|loops]]) are not allowed in simple graphs. It is also sometimes useful in [[algebraic graph theory]] to replace the nonzero elements with algebraic variables.<ref>{{citation|last=Harary|first=Frank|author-link=Frank Harary|journal=SIAM Review|mr=0144330|pages=202–210|title=The determinant of the adjacency matrix of a graph|volume=4|issue=3|year=1962|doi=10.1137/1004057|bibcode=1962SIAMR...4..202H}}.</ref>

The same concept can be extended to [[multigraph]]s and graphs with loops by storing the number of edges between each pair of vertices in the corresponding matrix element, and by allowing nonzero diagonal elements. Loops may be counted either once (as a single edge) or twice (as two vertex-edge incidences), as long as a consistent convention is followed. Undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention.
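As an illustration of this definition, the following short Python/NumPy sketch (an illustrative example, not taken from the cited sources; the vertex count and edge list are assumed example data) builds the adjacency matrix of a small simple undirected graph from a list of edges:

<syntaxhighlight lang="python">
# Illustrative sketch: adjacency matrix of a simple undirected graph
# built from an (assumed) example edge list over vertices 0..3.
import numpy as np

n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1   # undirected graph: the matrix is symmetric

print(A)
# [[0 1 1 0]
#  [1 0 1 0]
#  [1 1 0 1]
#  [0 0 1 0]]
</syntaxhighlight>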
===Of a bipartite graph===
<!-- [[Adjacency matrix of a bipartite graph]] & [[Biadjacency matrix]] redirect here -->
The adjacency matrix {{mvar|A}} of a [[bipartite graph]] whose two parts have {{mvar|r}} and {{mvar|s}} vertices can be written in the form
: <math>A = \begin{pmatrix} 0_{r,r} & B \\ B^\mathsf{T} & 0_{s,s} \end{pmatrix},</math>
where {{mvar|B}} is an {{math|''r'' × ''s''}} matrix, and {{math|0<sub>''r'',''r''</sub>}} and {{math|0<sub>''s'',''s''</sub>}} represent the {{math|''r'' × ''r''}} and {{math|''s'' × ''s''}} [[zero matrix|zero matrices]]. In this case, the smaller matrix {{mvar|B}} uniquely represents the graph, and the remaining parts of {{mvar|A}} can be discarded as redundant. {{mvar|B}} is sometimes called the ''biadjacency matrix''.

Formally, let {{math|''G'' {{=}} (''U'', ''V'', ''E'')}} be a [[bipartite graph]] with parts {{math|''U'' {{=}} {''u''<sub>1</sub>, ..., ''u''<sub>''r''</sub>}<nowiki/>}}, {{math|''V'' {{=}} {''v''<sub>1</sub>, ..., ''v''<sub>''s''</sub>}<nowiki/>}} and edges {{mvar|E}}. The biadjacency matrix is the {{math|''r'' × ''s''}} 0–1 matrix {{mvar|B}} in which {{math|''b''<sub>''i'',''j''</sub> {{=}} 1}} [[if and only if]] {{math|(''u''<sub>''i''</sub>, ''v''<sub>''j''</sub>) ∈ ''E''}}. If {{mvar|G}} is a bipartite [[multigraph]] or [[weighted graph]], then the elements {{math|''b''<sub>''i'',''j''</sub>}} are taken to be the number of edges between the vertices or the weight of the edge {{math|(''u''<sub>''i''</sub>, ''v''<sub>''j''</sub>)}}, respectively.
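For illustration, the block form above can be assembled directly from a biadjacency matrix. The following Python/NumPy sketch (example values assumed, not taken from the cited sources) does so for a small bipartite graph with {{math|''r'' {{=}} 2}} and {{math|''s'' {{=}} 3}}:

<syntaxhighlight lang="python">
# Illustrative sketch: full adjacency matrix of a bipartite graph
# assembled from its biadjacency matrix B (example values assumed).
import numpy as np

B = np.array([[1, 0, 1],
              [0, 1, 1]])   # B[i, j] = 1 iff (u_i, v_j) is an edge
r, s = B.shape

A = np.block([[np.zeros((r, r), dtype=int), B],
              [B.T, np.zeros((s, s), dtype=int)]])
print(A)   # a symmetric (r + s) x (r + s) matrix with zero diagonal blocks
</syntaxhighlight>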
===Variations===
An {{nowrap|{{math|(''a'', ''b'', ''c'')}}}}-adjacency matrix {{mvar|A}} of a simple graph has {{math|''A''<sub>''i'',''j''</sub> {{=}} ''a''}} if {{math|(''i'', ''j'')}} is an edge, {{mvar|b}} if it is not, and {{mvar|c}} on the diagonal. The [[Seidel adjacency matrix]] is a {{math|{{nowrap|(−1, 1, 0)}}}}-adjacency matrix. This matrix is used in studying [[strongly regular graph]]s and [[two-graph]]s.<ref>{{cite journal |last=Seidel |first=J. J. |title=Strongly Regular Graphs with (−1, 1, 0) Adjacency Matrix Having Eigenvalue 3 |journal=[[Linear Algebra and Its Applications|Lin. Alg. Appl.]] |volume=1 |issue=2 |pages=281–298 |year=1968 |doi=10.1016/0024-3795(68)90008-6 |doi-access= }}</ref>

The '''[[distance matrix]]''' has in position {{math|(''i'', ''j'')}} the distance between vertices {{mvar|v<sub>i</sub>}} and {{mvar|v<sub>j</sub>}}. The distance is the length of a shortest path connecting the vertices. Unless lengths of edges are explicitly provided, the length of a path is the number of edges in it. The distance matrix resembles a high power of the adjacency matrix, but instead of telling only whether or not two vertices are connected (i.e., the connection matrix, which contains [[Boolean algebra|Boolean values]]), it gives the exact distance between them.

==Examples==

===Undirected graphs===
The convention followed here (for undirected graphs) is that each edge adds 1 to the appropriate cell in the matrix, and each loop (an edge from a vertex to itself) adds 2 to the appropriate cell on the diagonal in the matrix.<ref>{{cite conference |url=https://books.google.com/books?id=wp7XsCAm_9EC&pg=PA63 |title=Expander graphs and codes |last1=Shum |first1=Kenneth |last2=Blake |first2=Ian |date=2003-12-18 |publisher=American Mathematical Society |book-title=Volume 68 of DIMACS series in discrete mathematics and theoretical computer science |pages=63 |isbn=9780821871102 |conference=Algebraic Coding Theory and Information Theory: DIMACS Workshop, Algebraic Coding Theory and Information Theory }}</ref> This allows the degree of a vertex to be easily found by taking the sum of the values in either its respective row or column in the adjacency matrix.

{| class="wikitable" style="text-align: center; width: 700px;"
![[Labeled graph]]
!Adjacency matrix
|-
|[[Image:6n-graph2.svg|200px]]
|<!-- The first cell is a 2, not a 1, because of the self-loop. Otherwise, its degree would not match its row or column sum. --><math>\begin{pmatrix}
2 & 1 & 0 & 0 & 1 & 0\\
1 & 0 & 1 & 0 & 1 & 0\\
0 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 1 & 0 & 1 & 1\\
1 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0
\end{pmatrix}</math>
<br>Coordinates are 1–6.
|-
|[[File:Symmetric group 4; Cayley graph 1,5,21 (Nauru Petersen); numbers.svg|250px]] <br>[[Nauru graph]]
|[[File:Symmetric group 4; Cayley graph 1,5,21 (adjacency matrix).svg|250px]] <br>Coordinates are 0–23. <br>White fields are zeros, colored fields are ones.
|}

===Directed graphs===
The adjacency matrix of a directed graph can be asymmetric. One can define the adjacency matrix of a directed graph either such that
# a non-zero element {{mvar|A<sub>ij</sub>}} indicates an edge from {{mvar|i}} to {{mvar|j}} or
# it indicates an edge from {{mvar|j}} to {{mvar|i}}.
The former definition is commonly used in graph theory and social network analysis (e.g., sociology, political science, economics, psychology).<ref>{{citation | title=Analyzing Social Networks | edition=2nd | first1=Steve | last1=Borgatti | first2=Martin | last2=Everett | first3=Jeffrey | last3=Johnson | publisher=SAGE | year=2018 | at=p. 20 }}</ref> The latter is more common in other applied sciences (e.g., dynamical systems, physics, network science), where {{mvar|A}} is sometimes used to describe linear dynamics on graphs.<ref>{{citation | title=Networks | edition=2nd | first=Mark | last=Newman | publisher=Oxford University Press | year=2018 | at=p. 110 }}</ref>

Using the first definition, the [[Directed graph#Indegree and outdegree|in-degree]] of a vertex can be computed by summing the entries of the corresponding column, and the out-degree of a vertex by summing the entries of the corresponding row. When using the second definition, the in-degree of a vertex is given by the corresponding row sum and the out-degree is given by the corresponding column sum.
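As a small illustration (the example graph below is assumed, not taken from the cited sources), under the first convention the out-degrees are the row sums of {{mvar|A}} and the in-degrees are the column sums:

<syntaxhighlight lang="python">
# Illustrative sketch: in- and out-degrees from a directed adjacency matrix,
# using the first convention (A[i, j] = 1 for an edge from i to j).
import numpy as np

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])

out_degree = A.sum(axis=1)   # row sums
in_degree = A.sum(axis=0)    # column sums
print(out_degree)   # [2 1 1]
print(in_degree)    # [1 1 2]
</syntaxhighlight>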
{| class="wikitable" style="text-align: center; width: 700px;"
! Labeled graph
! Adjacency matrix
|-
| [[File:Symmetric group 4; Cayley graph 4,9; numbers.svg|250px]] <br>[[Directed graph|Directed]] [[Cayley graph]] of [[Symmetric group|S]]<sub>4</sub>
| [[File:Symmetric group 4; Cayley graph 4,9 (adjacency matrix).svg|250px]] <br>Coordinates are 0–23. <br>As the graph is directed, the matrix is not necessarily [[Symmetric matrix|symmetric]].
|}

===Trivial graphs===
The adjacency matrix of a [[complete graph]] contains all ones except along the diagonal where there are only zeros. The adjacency matrix of an [[empty graph]] is a [[zero matrix]].

==Properties==

===Spectrum===
The adjacency matrix of an undirected simple graph is [[symmetric matrix|symmetric]], and therefore has a complete set of [[real number|real]] [[eigenvalue]]s and an orthogonal [[eigenvector]] basis. The set of eigenvalues of a graph is the '''spectrum''' of the graph.<ref>{{harvtxt|Biggs|1993}}, Chapter 2 ("The spectrum of a graph"), pp. 7–13.</ref> It is common to denote the eigenvalues by <math>\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_n.</math>

The greatest eigenvalue <math>\lambda_1</math> is bounded above by the maximum degree. This can be seen as a consequence of the [[Perron–Frobenius theorem]], but it can also be proved directly. Let {{mvar|v}} be an eigenvector associated to <math>\lambda_1</math> and {{mvar|x}} the index at which {{mvar|v}} has maximum absolute value. Without loss of generality assume {{mvar|v<sub>x</sub>}} is positive, since otherwise one may take the eigenvector −{{mvar|v}}, which is also associated to <math>\lambda_1</math>. Then
: <math>\lambda_1 v_x = (Av)_x = \sum_{y=1}^n A_{x,y}v_y \leq \sum_{y=1}^n A_{x,y} v_x = v_x \deg(x).</math>

For {{mvar|d}}-regular graphs, {{mvar|d}} is the first eigenvalue of {{mvar|A}} for the vector {{math|{{nowrap|''v'' {{=}} (1, ..., 1)}}}} (it is easy to check that it is an eigenvalue, and it is the maximum because of the above bound). The multiplicity of this eigenvalue is the number of connected components of {{mvar|G}}; in particular, <math>\lambda_1>\lambda_2</math> for connected graphs. If {{mvar|G}} is a [[bipartite graph]], then for each eigenvalue <math>\lambda_i</math> its opposite <math>-\lambda_i = \lambda_{n+1-i}</math> is also an eigenvalue of {{mvar|A}}.<ref>{{citation|last1=Brouwer|first1=Andries E.|last2=Haemers|first2=Willem H.|contribution=1.3.6 Bipartite graphs|contribution-url=https://books.google.com/books?id=F98THwYgrXYC&pg=PA6|doi=10.1007/978-1-4614-1939-6|isbn=978-1-4614-1938-9|location=New York|mr=2882891|pages=6–7|publisher=Springer|series=Universitext|title=Spectra of Graphs|year=2012}}</ref> In particular, −{{mvar|d}} is an eigenvalue of any {{mvar|d}}-regular bipartite graph.

The difference <math>\lambda_1 - \lambda_2</math> is called the [[spectral gap]] and it is related to the [[Expander graph|expansion]] of {{mvar|G}}. It is also useful to introduce the [[spectral radius]] of <math>A</math>, denoted by <math>\lambda(G) = \max_{\left|\lambda_i\right| < d} |\lambda_i|</math>. This number is bounded by <math>\lambda(G) \geq 2\sqrt{d-1} - o(1)</math>. This bound is tight for [[Ramanujan graphs]].
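The spectrum of a small graph can also be computed numerically. The following Python/NumPy sketch (an illustrative example with an assumed graph, not taken from the cited sources) finds the eigenvalues, checks the maximum-degree bound on <math>\lambda_1</math>, and reports the spectral gap:

<syntaxhighlight lang="python">
# Illustrative sketch: spectrum of an (assumed) small undirected graph.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

eigenvalues = np.linalg.eigvalsh(A)[::-1]   # real eigenvalues, sorted so that λ1 ≥ ... ≥ λn
max_degree = A.sum(axis=1).max()

print(eigenvalues)
print(eigenvalues[0] <= max_degree)     # λ1 is bounded above by the maximum degree
print(eigenvalues[0] - eigenvalues[1])  # spectral gap λ1 − λ2
</syntaxhighlight>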
===Isomorphism and invariants===
Suppose two directed or undirected graphs {{math|''G''<sub>1</sub>}} and {{math|''G''<sub>2</sub>}} with adjacency matrices {{math|''A''<sub>1</sub>}} and {{math|''A''<sub>2</sub>}} are given. {{math|''G''<sub>1</sub>}} and {{math|''G''<sub>2</sub>}} are [[graph isomorphism|isomorphic]] if and only if there exists a [[permutation matrix]] {{mvar|P}} such that
: <math>P A_1 P^{-1} = A_2.</math>
In particular, {{math|''A''<sub>1</sub>}} and {{math|''A''<sub>2</sub>}} are [[Similar (linear algebra)|similar]] and therefore have the same [[Minimal polynomial (linear algebra)|minimal polynomial]], [[characteristic polynomial]], [[eigenvalues]], [[determinant]] and [[Trace (matrix)|trace]]. These can therefore serve as isomorphism invariants of graphs. However, two graphs may possess the same set of eigenvalues but not be isomorphic.<ref>[[Chris Godsil|Godsil, Chris]]; [[Gordon Royle|Royle, Gordon]] ''Algebraic Graph Theory'', Springer (2001), {{ISBN|0-387-95241-1}}, p. 164</ref> Such [[linear operator]]s are said to be [[isospectral]].

===Matrix powers===
If {{mvar|A}} is the adjacency matrix of the directed or undirected graph {{mvar|G}}, then the matrix {{math|''A''<sup>''n''</sup>}} (i.e., the [[matrix multiplication|matrix product]] of {{mvar|n}} copies of {{mvar|A}}) has an interesting interpretation: the element {{math|{{nowrap|(''i'', ''j'')}}}} gives the number of (directed or undirected) [[Path (graph theory)|walks]] of length {{mvar|n}} from vertex {{mvar|i}} to vertex {{mvar|j}}. If {{mvar|n}} is the smallest nonnegative integer such that, for some {{mvar|i}}, {{mvar|j}}, the element {{math|{{nowrap|(''i'', ''j'')}}}} of {{math|''A''<sup>''n''</sup>}} is positive, then {{mvar|n}} is the distance between vertex {{mvar|i}} and vertex {{mvar|j}}.

This is useful, for example, for counting the number of triangles in an undirected graph {{mvar|G}}, which is exactly the [[Trace (linear algebra)|trace]] of {{math|''A''<sup>3</sup>}} divided by 6. The division compensates for the overcounting of each triangle: in an undirected graph each triangle is counted once for each of its three vertices and once for each of the two directions in which it can be traversed ({{math|''ijk''}} or {{math|''ikj''}}), giving six closed walks in total. (In a directed graph, where each triangle is traversed in only one direction, the corresponding divisor is 3.)

The adjacency matrix can be used to determine whether or not the graph is [[Connectivity (graph theory)|connected]]. If a directed graph has a [[nilpotent matrix|nilpotent]] adjacency matrix (i.e., if there exists {{mvar|n}} such that {{math|''A''<sup>''n''</sup>}} is the zero matrix), then it is a [[directed acyclic graph]].<ref>{{Cite journal|title=Matrices with Permanent Equal to One |last1=Nicholson |first1=Victor A |year=1975 |journal=Linear Algebra and Its Applications |issue=12 |page=187 |url=https://core.ac.uk/download/pdf/82099476.pdf}}</ref>
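As an illustration of these facts (the graph below is an assumed example, not taken from the cited sources), the entries of <math>A^n</math> count walks and the trace of <math>A^3</math> counts triangles:

<syntaxhighlight lang="python">
# Illustrative sketch: walk counting and triangle counting via matrix powers.
import numpy as np

# Assumed example: a triangle on vertices 0, 1, 2 with a pendant vertex 3 attached to 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

A2 = np.linalg.matrix_power(A, 2)
A3 = np.linalg.matrix_power(A, 3)

print(A2[0, 3])            # walks of length 2 from vertex 0 to vertex 3: one (0-2-3)
print(np.trace(A3) // 6)   # number of triangles in the undirected graph: one (0-1-2)
</syntaxhighlight>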
==Data structures==
The adjacency matrix may be used as a [[data structure]] for the [[Graph (abstract data type)|representation of graphs]] in computer programs for manipulating graphs. The main alternative data structure, also in use for this application, is the [[adjacency list]].<ref>{{harvtxt|Goodrich|Tamassia|2015}}, p. 361: "There are two data structures that people often use to represent graphs, the adjacency list and the adjacency matrix."</ref><ref name="clrs">{{citation |author-link=Thomas H. Cormen |first1=Thomas H. |last1=Cormen |author-link2=Charles E. Leiserson |first2=Charles E. |last2=Leiserson |author-link3=Ronald L. Rivest |first3=Ronald L. |last3=Rivest |author-link4=Clifford Stein |first4=Clifford |last4=Stein |year=2001 |title=[[Introduction to Algorithms]] |edition=Second |publisher=MIT Press and McGraw-Hill |isbn=0-262-03293-7 |chapter=Section 22.1: Representations of graphs |pages=527–531 }}.</ref>

The space needed to represent an adjacency matrix and the time needed to perform operations on it depend on the [[matrix representation]] chosen for the underlying matrix. [[Sparse matrix representation]]s only store non-zero matrix entries and implicitly represent the zero entries. They can, for example, be used to represent [[sparse graph]]s without incurring the space overhead from storing the many zero entries in the adjacency matrix of the sparse graph. In the remainder of this section the adjacency matrix is assumed to be represented by an [[array data structure]] so that zero and non-zero entries are all directly represented in storage.

Because each entry in the adjacency matrix requires only one bit, it can be represented in a very compact way, occupying only {{math|{{pipe}}''V''{{thinsp}}{{pipe}}<sup>2</sup>{{hairsp}}/{{hairsp}}8}} bytes to represent a directed graph, or (by using a packed triangular format and only storing the lower triangular part of the matrix) approximately {{math|{{pipe}}''V''{{thinsp}}{{pipe}}<sup>2</sup>{{hairsp}}/{{hairsp}}16}} bytes to represent an undirected graph. Although slightly more succinct representations are possible, this method gets close to the information-theoretic lower bound for the minimum number of bits needed to represent all {{mvar|n}}-vertex graphs.<ref>{{citation | last = Turán | first = György | doi = 10.1016/0166-218X(84)90126-4 | issue = 3 | journal = [[Discrete Applied Mathematics]] | mr = 749658 | pages = 289–294 | title = On the succinct representation of graphs | volume = 8 | year = 1984 | doi-access = }}.</ref> For storing graphs in [[text file]]s, fewer bits per byte can be used to ensure that all bytes are text characters, for instance by using a [[Base64]] representation.<ref>{{citation|first1=Brendan|last1=McKay|author-link=Brendan McKay (mathematician)|title=Description of graph6 and sparse6 encodings|url=http://cs.anu.edu.au/~bdm/data/formats.txt|access-date=2012-02-10|archive-date=2001-04-30|archive-url=https://web.archive.org/web/20010430081526/http://cs.anu.edu.au/~bdm/data/formats.txt|url-status=live}}.</ref> Besides avoiding wasted space, this compactness encourages [[locality of reference]]. However, for a large [[sparse graph]], adjacency lists require less storage space, because they do not waste any space representing edges that are ''not'' present.<ref name="clrs"/><ref name="gt"/>

An alternative form of adjacency matrix (which, however, requires a larger amount of space) replaces the numbers in each element of the matrix with pointers to edge objects (when edges are present) or null pointers (when there is no edge).<ref name="gt"/> It is also possible to store [[weighted graph|edge weights]] directly in the elements of an adjacency matrix.<ref name="clrs"/>

Besides the space tradeoff, the different data structures also facilitate different operations. Finding all vertices adjacent to a given vertex in an adjacency list is as simple as reading the list, and takes time proportional to the number of neighbors. With an adjacency matrix, an entire row must instead be scanned, which takes a larger amount of time, proportional to the number of vertices in the whole graph. On the other hand, whether there is an edge between two given vertices can be determined at once with an adjacency matrix, while the adjacency list requires time proportional to the minimum degree of the two vertices.<ref name="clrs"/><ref name="gt">{{citation|title=Algorithm Design and Applications|first1=Michael T.|last1=Goodrich|author1-link=Michael T. Goodrich|first2=Roberto|last2=Tamassia|author2-link=Roberto Tamassia|publisher=Wiley|year=2015|page=363}}.</ref>

==See also==
* [[Laplacian matrix]]
* [[Self-similarity matrix]]

==References==
{{Reflist}}

==External links==
{{Commons category}}
* {{MathWorld|AdjacencyMatrix|Adjacency matrix}}
* [http://www.x2d.org/java/projects/fluffschack.jnlp Fluffschack] — an educational Java web start game demonstrating the relationship between adjacency matrices and graphs.
* [http://opendatastructures.org/versions/edition-0.1e/ods-java/12_1_AdjacencyMatrix_Repres.html Open Data Structures - Section 12.1 - AdjacencyMatrix: Representing a Graph by a Matrix], [[Pat Morin]]
* [https://web.archive.org/web/20190325054550/http://www.cafemath.fr/mathblog/article.php?page=GoodWillHunting.php Café math: Adjacency Matrices of Graphs]: application of adjacency matrices to the computation of generating series of walks.

{{Graph representations}}
{{Matrix classes}}
{{Authority control}}

[[Category:Algebraic graph theory]]
[[Category:Matrices (mathematics)]]
[[Category:Graph data structures]]