== Definitions for graphs with weighted edges ==
Graphs with weighted edges, common in applications, are conveniently defined by their adjacency matrices, where the values of the entries are numeric and no longer limited to zeros and ones. In [[spectral clustering]] and graph-based [[signal processing]], where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the [[Distance_matrix|distances]] between pairs of data points, leading to all weights being non-negative, with larger values informally corresponding to more similar pairs of data points. Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights. Most definitions for simple graphs are trivially extended to the standard case of non-negative weights, while negative weights require more attention, especially in normalization.

=== Laplacian matrix ===
The Laplacian matrix is defined by

: <math>L = D - A,</math>

where ''D'' is the [[degree matrix]] and ''A'' is the [[adjacency matrix]] of the graph. For [[directed graph]]s, either the [[degree (graph theory)|indegree or outdegree]] might be used, depending on the application, as in the following example:

{| class="wikitable"
! [[Adjacency matrix]]
! In-Degree matrix
! In-Degree Laplacian
! Out-Degree matrix
! Out-Degree Laplacian
|-
| <math display="inline">\left(\begin{array}{rrr} 0 & 1 & 2\\ 3 & 0 & 5\\ 6 & 7 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 9 & 0 & 0\\ 0 & 8 & 0\\ 0 & 0 & 7\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 9 & -1 & -2\\ -3 & 8 & -5\\ -6 & -7 & 7\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 3 & 0 & 0\\ 0 & 8 & 0\\ 0 & 0 & 13\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 3 & -1 & -2\\ -3 & 8 & -5\\ -6 & -7 & 13\\ \end{array}\right)</math>
|}

Graph self-loops, which appear as non-zero entries on the main diagonal of the adjacency matrix, are allowed but do not affect the graph Laplacian values.
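For illustration, the directed weighted example above can be reproduced with the following short [[NumPy]]-based sketch (the variable names are illustrative only, not from any established library):

<syntaxhighlight lang="python">
import numpy as np

# Adjacency matrix of the directed weighted example above
A = np.array([[0, 1, 2],
              [3, 0, 5],
              [6, 7, 0]])

D_in = np.diag(A.sum(axis=0))    # in-degree matrix: column sums
D_out = np.diag(A.sum(axis=1))   # out-degree matrix: row sums

L_in = D_in - A                  # in-degree Laplacian
L_out = D_out - A                # out-degree Laplacian
</syntaxhighlight>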
=== Symmetric Laplacian via the incidence matrix ===
[[Image:elastic network model.png|thumb|A 2-dimensional spring system.]]
For graphs with weighted edges one can define a weighted incidence matrix ''B'' and use it to construct the corresponding symmetric Laplacian as <math>L = B B^\textsf{T}</math>. An alternative, cleaner approach, described here, is to separate the weights from the connectivity: continue using the [[incidence matrix]] as for regular graphs and introduce a matrix holding just the values of the weights. A [[spring system]] is an example of this model used in [[mechanics]] to describe a system of springs of given stiffnesses and unit length, where the values of the stiffnesses play the role of the weights of the graph edges.

We thus reuse the definition of the weightless <math display="inline">|v| \times |e|</math> [[incidence matrix]] ''B'' with element ''B''<sub>''ve''</sub> for the vertex ''v'' and the edge ''e'' (connecting vertices <math display="inline">v_i</math> and <math display="inline">v_j</math>, with ''i'' > ''j'') defined by

:<math>B_{ve} = \left\{\begin{array}{rl} 1, & \text{if } v = v_i\\ -1, & \text{if } v = v_j\\ 0, & \text{otherwise}. \end{array}\right.</math>

We now also define a diagonal <math display="inline">|e| \times |e|</math> matrix ''W'' containing the edge weights. Even though the edges in the definition of ''B'' are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian <math display="inline">|v| \times |v|</math> matrix ''L'' defined as

:<math>L = B W B^\textsf{T},</math>

where <math display="inline">B^\textsf{T}</math> is the [[transpose|matrix transpose]] of ''B''.

The construction is illustrated in the following example, where every edge <math display="inline">e_i</math> is assigned the weight value ''i'', with <math display="inline">i=1, 2, 3, 4.</math>

{| class="wikitable"
! [[Undirected graph]]
! [[Incidence matrix]]
! Edge weights
! Laplacian matrix
|-
| [[image:Labeled_undirected_graph.svg|100px]]
| <math display="inline">\left(\begin{array}{rrrr} 1 & 1 & 1 & 0\\ -1 & 0 & 0 & 0\\ 0 & -1 & 0 & 1\\ 0 & 0 & -1 & -1\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrrr} 1 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 3 & 0\\ 0 & 0 & 0 & 4\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrrr} 6 & -1 & -2 & -3\\ -1 & 1 & 0 & 0\\ -2 & 0 & 6 & -4\\ -3 & 0 & -4 & 7\\ \end{array}\right)</math>
|}

=== Symmetric Laplacian for a directed graph ===
Just as for simple graphs, the Laplacian matrix of a directed weighted graph is by definition generally non-symmetric. The symmetry can be enforced by turning the original directed graph into an undirected graph before constructing the Laplacian. The adjacency matrix of the undirected graph could, e.g., be defined as the sum of the adjacency matrix <math>A</math> of the original directed graph and its [[matrix transpose]] <math>A^\textsf{T}</math>, as in the following example:

{| class="wikitable"
! [[Adjacency matrix]]
! Symmetrized adjacency matrix
! Symmetric Laplacian matrix
|-
| <math display="inline">\left(\begin{array}{rrr} 0 & 1 & 1\\ 0 & 0 & 1\\ 1 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 0 & 1 & 2\\ 1 & 0 & 1\\ 2 & 1 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 3 & -1 & -2\\ -1 & 2 & -1\\ -2 & -1 & 3\\ \end{array}\right)</math>
|}

Here the zero and one entries of <math>A</math> are treated as numerical rather than logical values (as they would be for simple graphs), which explains the difference in the results: for simple graphs, the symmetrized graph still needs to be simple, with its symmetrized adjacency matrix having only logical, not numerical, values; e.g., the logical sum is 1 ∨ 1 = 1, while the numeric sum is 1 + 1 = 2.

Alternatively, the symmetric Laplacian matrix can be calculated from the two Laplacians using the [[degree (graph theory)|indegree and outdegree]], as in the following example:

{| class="wikitable"
! [[Adjacency matrix]]
! Out-Degree matrix
! Out-Degree Laplacian
! In-Degree matrix
! In-Degree Laplacian
|-
| <math display="inline">\left(\begin{array}{rrr} 0 & 1 & 1\\ 0 & 0 & 1\\ 1 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 2 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 2 & -1 & -1\\ 0 & 1 & -1\\ -1 & 0 & 1\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 2\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & -1 & -1\\ 0 & 1 & -1\\ -1 & 0 & 2\\ \end{array}\right)</math>
|}

The sum of the out-degree Laplacian transposed and the in-degree Laplacian equals the symmetric Laplacian matrix.
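Both constructions are easy to check numerically. The following minimal NumPy sketch (illustrative only) reproduces the incidence-matrix example and the symmetrized directed example above:

<syntaxhighlight lang="python">
import numpy as np

# Incidence matrix of the four-vertex example above, one column per edge;
# the sign pattern encodes an (arbitrary) orientation of each edge.
B = np.array([[ 1,  1,  1,  0],
              [-1,  0,  0,  0],
              [ 0, -1,  0,  1],
              [ 0,  0, -1, -1]])
W = np.diag([1, 2, 3, 4])          # diagonal matrix of edge weights

L = B @ W @ B.T                    # symmetric Laplacian L = B W B^T

# Symmetric Laplacian of a directed graph via symmetrization A + A^T
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])
A_sym = A + A.T                    # numerical, not logical, sum
L_sym = np.diag(A_sym.sum(axis=1)) - A_sym
</syntaxhighlight>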
=== Laplacian matrix normalization ===
The goal of normalization is, as for simple graphs, to make all the diagonal entries of the Laplacian matrix unit, scaling off-diagonal entries correspondingly. In a [[Glossary_of_graph_theory#weighted_graph|weighted graph]], a vertex may have a large degree because of a small number of connected edges with large weights, just as well as because of a large number of connected edges with unit weights. Graph self-loops, i.e., non-zero entries on the main diagonal of the adjacency matrix, do not affect the graph Laplacian values, but may need to be counted when calculating the normalization factors.

==== Symmetrically normalized Laplacian ====
The '''symmetrically normalized Laplacian''' is defined as

: <math>L^\text{sym} := (D^+)^{1/2} L (D^+)^{1/2} = I - (D^+)^{1/2} A (D^+)^{1/2},</math>

where ''L'' is the unnormalized Laplacian, ''A'' is the adjacency matrix, ''D'' is the degree matrix, and <math>D^+</math> is the [[Moore–Penrose inverse]]. Since the degree matrix ''D'' is diagonal, its reciprocal square root <math display="inline">(D^+)^{1/2}</math> is just the diagonal matrix whose diagonal entries are the reciprocals of the square roots of the diagonal entries of ''D''. If all the edge weights are non-negative, then all the degree values are automatically also non-negative, and so every degree value has a unique positive square root. To avoid division by zero, vertices with zero degree are excluded from the normalization, as in the following example:

{| class="wikitable"
! [[Adjacency matrix]]
! Out-Degree matrix
! Out-Degree normalized Laplacian
! In-Degree matrix
! In-Degree normalized Laplacian
|-
| <math display="inline">\left(\begin{array}{rrr} 0 & 1 & 0\\ 4 & 0 & 0\\ 0 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & -1/2 & 0\\ -2 & 1 & 0\\ 0 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 4 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & -1/2 & 0\\ -2 & 1 & 0\\ 0 & 0 & 0\\ \end{array}\right)</math>
|}

The symmetrically normalized Laplacian is a symmetric matrix if and only if the adjacency matrix ''A'' is symmetric and the diagonal entries of ''D'' are non-negative, in which case we can use the term the '''''symmetric normalized Laplacian'''''.

The symmetric normalized Laplacian matrix can also be written as

: <math>L^\text{sym} := (D^+)^{1/2} L (D^+)^{1/2} = (D^+)^{1/2} B W B^\textsf{T} (D^+)^{1/2} = S S^\textsf{T}</math>

using the weightless <math display="inline">|v| \times |e|</math> [[incidence matrix]] ''B'' and the diagonal <math display="inline">|e| \times |e|</math> matrix ''W'' containing the edge weights, and defining the new <math display="inline">|v| \times |e|</math> weighted incidence matrix <math display="inline">S = (D^+)^{1/2} B W^{1/2}</math>, whose rows are indexed by the vertices and whose columns are indexed by the edges of ''G'', such that each column corresponding to an edge ''e'' = {''u'', ''v''} with weight <math display="inline">w_e</math> has an entry <math display="inline">\frac{\sqrt{w_e}}{\sqrt{d_u}}</math> in the row corresponding to ''u'', an entry <math display="inline">-\frac{\sqrt{w_e}}{\sqrt{d_v}}</math> in the row corresponding to ''v'', and 0 entries elsewhere.
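The example above can be reproduced with the following minimal NumPy sketch (illustrative only; the handling of zero-degree vertices shown here is one common convention):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[0, 1, 0],
              [4, 0, 0],
              [0, 0, 0]])

d = A.sum(axis=1)                      # out-degrees (row sums)
inv_sqrt = np.zeros(len(d))
inv_sqrt[d > 0] = 1.0 / np.sqrt(d[d > 0])
D_half_pinv = np.diag(inv_sqrt)        # (D^+)^{1/2}; zero-degree vertices stay zero

L = np.diag(d) - A                     # unnormalized Laplacian
L_sym = D_half_pinv @ L @ D_half_pinv  # [[1, -0.5, 0], [-2, 1, 0], [0, 0, 0]]
</syntaxhighlight>

Note that the result is not a symmetric matrix here, consistent with the adjacency matrix ''A'' being non-symmetric.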
==== Random walk normalized Laplacian ====
The '''random walk normalized Laplacian''' is defined as

: <math>L^\text{rw} := D^+ L = I - D^+ A,</math>

where ''D'' is the degree matrix. Since the degree matrix ''D'' is diagonal, its Moore–Penrose inverse <math display="inline">D^+</math> is simply the diagonal matrix whose diagonal entries are the reciprocals of the corresponding non-zero diagonal entries of ''D'', and zero otherwise. For the isolated vertices (those with degree 0), a common choice is to set the corresponding element <math display="inline">L^\text{rw}_{i,i}</math> to 0. The matrix elements of <math display="inline">L^\text{rw}</math> are given by

: <math>L^{\text{rw}}_{i,j} := \begin{cases} 1 & \mbox{if}\ i = j\ \mbox{and}\ \deg(v_i) \neq 0\\ -\frac{a_{ij}}{\deg(v_i)} & \mbox{if}\ i \neq j\ \mbox{and}\ v_i \mbox{ is adjacent to } v_j \\ 0 & \mbox{otherwise}, \end{cases}</math>

where <math display="inline">a_{ij}</math> is the weight of the edge from <math display="inline">v_i</math> to <math display="inline">v_j</math> in the adjacency matrix ''A''.

The name of the random-walk normalized Laplacian comes from the fact that this matrix is <math display="inline">L^\text{rw} = I - P</math>, where <math display="inline">P = D^+ A</math> is simply the transition matrix of a random walker on the graph, assuming non-negative weights. For example, let <math display="inline">e_i</math> denote the ''i''-th [[standard basis]] vector. Then <math display="inline">x = e_i P</math> is a [[probability vector]] representing the distribution of a random walker's locations after taking a single step from vertex <math display="inline">i</math>; i.e., <math display="inline">x_j = \mathbb{P}\left(v_i \to v_j\right)</math>. More generally, if the vector <math display="inline">x</math> is a probability distribution of the location of a random walker on the vertices of the graph, then <math display="inline">x' = x P^t</math> is the probability distribution of the walker after <math display="inline">t</math> steps.

The random walk normalized Laplacian can also be called the left normalized Laplacian <math>L^\text{rw} := D^+ L</math>, since the normalization is performed by multiplying the Laplacian by the normalization matrix <math>D^+</math> on the left. Each of its rows sums to zero, since <math>P = D^+ A</math> is [[Stochastic matrix|right stochastic]], assuming all the weights are non-negative. In the less commonly used right normalized Laplacian <math>L D^+ = I - A D^+</math>, each column sums to zero, since <math>A D^+</math> is [[Stochastic matrix|left stochastic]].

For a non-symmetric adjacency matrix of a directed graph, one also needs to choose [[degree (graph theory)|indegree or outdegree]] for normalization:

{| class="wikitable"
! [[Adjacency matrix]]
! Out-Degree matrix
! Out-Degree left normalized Laplacian
! In-Degree matrix
! In-Degree right normalized Laplacian
|-
| <math display="inline">\left(\begin{array}{rrr} 0 & 1 & 0\\ 0 & 0 & 2\\ 1 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & -1 & 0\\ 0 & 1 & -1\\ -1 & 0 & 1\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 2\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & -1 & 0\\ 0 & 1 & -1\\ -1 & 0 & 1\\ \end{array}\right)</math>
|}

The left out-degree normalized Laplacian with all row-sums zero relates to the [[Stochastic matrix|right stochastic]] <math>D_{\text{out}}^+ A</math>, while the right in-degree normalized Laplacian with all column-sums zero contains the [[Stochastic matrix|left stochastic]] <math>A D_{\text{in}}^+</math>; both are reproduced in the code sketch after this section.

==== Negative weights ====
Negative weights present several challenges for normalization:
* The presence of negative weights may naturally result in zero row- and/or column-sums for non-isolated vertices. A vertex with a large row-sum of positive weights and an equally large (in absolute value) row-sum of negative weights, together summing to zero, could be considered a heavy node, with both large values scaled while the diagonal entry remains zero, as for an isolated vertex.
* Negative weights may also give negative row- and/or column-sums, so that the corresponding diagonal entry in the non-normalized Laplacian matrix would be negative and the positive square root needed for the symmetric normalization would not exist.
* Arguments can be made to take the absolute value of the row- and/or column-sums for the purpose of normalization, thus treating a possible value −1 as a legitimate unit entry of the main diagonal of the normalized Laplacian matrix.
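The following minimal NumPy sketch (illustrative only; it assumes non-negative weights and no zero-degree vertices, as in the directed example above) computes the left and right normalized Laplacians and the underlying stochastic matrix:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 2],
              [1, 0, 0]])
I = np.eye(3)

D_out_inv = np.diag(1.0 / A.sum(axis=1))  # inverse out-degrees (row sums)
D_in_inv = np.diag(1.0 / A.sum(axis=0))   # inverse in-degrees (column sums)

P = D_out_inv @ A            # right-stochastic transition matrix: rows sum to 1
L_left = I - P               # left (random walk) normalized Laplacian: row-sums 0
L_right = I - A @ D_in_inv   # right normalized Laplacian: column-sums 0
</syntaxhighlight>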