===Laplacian matrix normalization===
The goal of normalization is, as for simple graphs, to make the diagonal entries of the Laplacian matrix all equal to one, scaling the off-diagonal entries correspondingly. In a [[Glossary_of_graph_theory#weighted_graph| weighted graph]], a vertex may have a large degree either because a small number of incident edges carry large weights or because a large number of incident edges carry unit weights. Graph self-loops, i.e., non-zero entries on the main diagonal of the adjacency matrix, do not affect the graph Laplacian values, but may need to be counted when calculating the normalization factors.

==== Symmetrically normalized Laplacian ====
The '''symmetrically normalized Laplacian''' is defined as
: <math>L^\text{sym} := (D^+)^{1/2} L (D^+)^{1/2} = I - (D^+)^{1/2} A (D^+)^{1/2},</math>
where ''L'' is the unnormalized Laplacian, ''A'' is the adjacency matrix, ''D'' is the degree matrix, and <math>D^+</math> is the [[Moore–Penrose inverse]] of ''D''. Since the degree matrix ''D'' is diagonal, its reciprocal square root <math display="inline">(D^+)^{1/2}</math> is simply the diagonal matrix whose diagonal entries are the reciprocals of the square roots of the positive diagonal entries of ''D''. If all the edge weights are nonnegative, then all the degree values are automatically nonnegative, and every positive degree value has a unique positive square root. To avoid division by zero, vertices with zero degree are excluded from the normalization, as in the following example:

{|class="wikitable"
! [[Adjacency matrix]]
! In-Degree matrix
! In-Degree normalized Laplacian
! Out-Degree matrix
! Out-Degree normalized Laplacian
|-
| <math display="inline">\left(\begin{array}{rrr} 0 & 1 & 0\\ 4 & 0 & 0\\ 0 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & -1/2 & 0\\ -2 & 1 & 0\\ 0 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 4 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & -1/2 & 0\\ -2 & 1 & 0\\ 0 & 0 & 0\\ \end{array}\right)</math>
|}

The symmetrically normalized Laplacian is a symmetric matrix if and only if the adjacency matrix ''A'' is symmetric and the diagonal entries of ''D'' are nonnegative, in which case we can use the term '''''symmetric normalized Laplacian'''''.

The symmetric normalized Laplacian matrix can also be written as
: <math>L^\text{sym} := (D^+)^{1/2} L (D^+)^{1/2} = (D^+)^{1/2} B W B^\textsf{T} (D^+)^{1/2} = S S^\textsf{T}</math>
using the weightless <math display="inline">|v| \times |e|</math> [[incidence matrix]] ''B'' and the diagonal <math display="inline">|e| \times |e|</math> matrix ''W'' containing the edge weights, and defining the new <math display="inline">|v| \times |e|</math> weighted incidence matrix <math display="inline">S=(D^+)^{1/2}B W^{1/2}</math> whose rows are indexed by the vertices and whose columns are indexed by the edges of ''G'', such that each column corresponding to an edge ''e = {u, v}'' has an entry <math display="inline">\frac{1}{\sqrt{d_u}}</math> in the row corresponding to ''u'', an entry <math display="inline">-\frac{1}{\sqrt{d_v}}</math> in the row corresponding to ''v'', and 0 entries elsewhere.
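As an illustration, the normalized Laplacian of the first example above can be computed with the following minimal [[NumPy]] sketch. The function name <code>sym_normalized_laplacian</code> and the use of the row sums of ''A'' as degrees are choices made for this example, not parts of a standard API.

<syntaxhighlight lang="python">
import numpy as np

def sym_normalized_laplacian(A):
    """Symmetrically normalized Laplacian I - (D^+)^{1/2} A (D^+)^{1/2}.

    Degrees are taken as the row sums of A; zero-degree vertices are
    excluded from the normalization, so their rows and columns stay zero.
    """
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)                      # degree of each vertex
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0                             # skip isolated vertices
    d_inv_sqrt[nz] = 1.0 / np.sqrt(deg[nz])  # diagonal of (D^+)^{1/2}
    D_inv_sqrt = np.diag(d_inv_sqrt)
    L = np.diag(deg) - A                     # unnormalized Laplacian
    return D_inv_sqrt @ L @ D_inv_sqrt

A = np.array([[0, 1, 0],
              [4, 0, 0],
              [0, 0, 0]])
print(sym_normalized_laplacian(A))
# [[ 1.  -0.5  0. ]
#  [-2.   1.   0. ]
#  [ 0.   0.   0. ]]
</syntaxhighlight>

Because ''A'' here is not symmetric, the result is not a symmetric matrix, in line with the condition stated above.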
==== Random walk normalized Laplacian ====
The '''random walk normalized Laplacian''' is defined as
: <math>L^\text{rw} := D^+ L = I - D^+ A</math>
where ''D'' is the degree matrix. Since the degree matrix ''D'' is diagonal, its pseudoinverse <math display="inline">D^+</math> is simply the diagonal matrix whose diagonal entries are the reciprocals of the corresponding positive diagonal entries of ''D''. For the isolated vertices (those with degree 0), a common choice is to set the corresponding element <math display="inline">L^\text{rw}_{i,i}</math> to 0. The matrix elements of <math display="inline">L^\text{rw}</math> are given by
: <math>L^{\text{rw}}_{i,j} := \begin{cases} 1 & \mbox{if}\ i = j\ \mbox{and}\ \deg(v_i) \neq 0\\ -\frac{1}{\deg(v_i)} & \mbox{if}\ i \neq j\ \mbox{and}\ v_i \mbox{ is adjacent to } v_j \\ 0 & \mbox{otherwise}. \end{cases}</math>

The name of the random-walk normalized Laplacian comes from the fact that this matrix is <math display="inline">L^\text{rw} = I - P</math>, where <math display="inline">P = D^+A</math> is simply the transition matrix of a random walker on the graph, assuming non-negative weights. For example, let <math display="inline"> e_i </math> denote the ''i''-th [[standard basis]] vector. Then <math display="inline">x = e_i P </math> is a [[probability vector]] representing the distribution of a random walker's locations after taking a single step from vertex <math display="inline">i</math>; i.e., <math display="inline">x_j = \mathbb{P}\left(v_i \to v_j\right)</math>. More generally, if the vector <math display="inline"> x </math> is a probability distribution of the location of a random walker on the vertices of the graph, then <math display="inline">x' = x P^t</math> is the probability distribution of the walker after <math display="inline">t</math> steps.

The random walk normalized Laplacian can also be called the left normalized Laplacian <math>L^\text{rw} := D^+L</math> since the normalization is performed by multiplying the Laplacian by the normalization matrix <math>D^+</math> on the left. Each of its rows sums to zero since <math>P = D^+A</math> is [[Stochastic matrix|right stochastic]], assuming all the weights are non-negative. In the less commonly used right normalized Laplacian <math>L D^+ = I - A D^+</math> each column sums to zero since <math>A D^+</math> is [[Stochastic matrix|left stochastic]].

For a non-symmetric adjacency matrix of a directed graph, one also needs to choose [[degree (graph theory)|indegree or outdegree]] for normalization:

{|class="wikitable"
! [[Adjacency matrix]]
! Out-Degree matrix
! Out-Degree left normalized Laplacian
! In-Degree matrix
! In-Degree right normalized Laplacian
|-
| <math display="inline">\left(\begin{array}{rrr} 0 & 1 & 0\\ 0 & 0 & 2\\ 1 & 0 & 0\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & -1 & 0\\ 0 & 1 & -1\\ -1 & 0 & 1\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 2\\ \end{array}\right)</math>
| <math display="inline">\left(\begin{array}{rrr} 1 & -1 & 0\\ 0 & 1 & -1\\ -1 & 0 & 1\\ \end{array}\right)</math>
|}

The left out-degree normalized Laplacian with all row sums equal to 0 relates to the [[Stochastic matrix|right stochastic]] <math>D_{\text{out}}^+A</math>, while the right in-degree normalized Laplacian with all column sums equal to 0 contains the [[Stochastic matrix|left stochastic]] <math>AD_{\text{in}}^+</math>.

====Negative weights====
Negative weights present several challenges for normalization:
* The presence of negative weights may naturally result in zero row- and/or column-sums for non-isolated vertices. A vertex with a large row-sum of positive weights and an equally large negative row-sum of negative weights, together summing to zero, could be considered a heavy node: both large values are scaled, while the diagonal entry remains zero, as for an isolated vertex.
* Negative weights may also give negative row- and/or column-sums, so that the corresponding diagonal entry in the non-normalized Laplacian matrix would be negative, and the positive square root needed for the symmetric normalization would not exist.
* Arguments can be made to take the absolute value of the row- and/or column-sums for the purpose of normalization, thus treating a possible value −1 as a legitimate unit entry of the main diagonal of the normalized Laplacian matrix; this option is used in the sketch after this list.
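The following is a minimal [[NumPy]] sketch of the left (out-degree) normalized Laplacian applied to the directed example above. The function name <code>left_normalized_laplacian</code> and the <code>use_abs</code> flag, which divides by the absolute value of each row sum in the spirit of the last bullet, are illustrative choices here rather than any standard convention or library API.

<syntaxhighlight lang="python">
import numpy as np

def left_normalized_laplacian(A, use_abs=False):
    """Left (random-walk) normalized Laplacian I - D^+ A.

    Degrees are the row sums of A (out-degrees).  With use_abs=True the
    normalization divides by the absolute value of each row sum instead,
    one way to handle negative weights; vertices whose scaling factor is
    zero get a zero row, as for isolated vertices.
    """
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)                   # out-degree of each vertex
    scale = np.abs(deg) if use_abs else deg
    d_inv = np.zeros_like(scale)
    nz = scale != 0                       # exclude zero (or cancelled) degrees
    d_inv[nz] = 1.0 / scale[nz]           # diagonal of the pseudoinverse D^+
    L = np.diag(deg) - A                  # unnormalized Laplacian
    return np.diag(d_inv) @ L

# Directed example from the table above: each row of I - D_out^+ A sums to 0.
A = np.array([[0, 1, 0],
              [0, 0, 2],
              [1, 0, 0]])
print(left_normalized_laplacian(A))
# [[ 1. -1.  0.]
#  [ 0.  1. -1.]
#  [-1.  0.  1.]]
</syntaxhighlight>

With <code>use_abs=True</code> and a negative row sum, the corresponding diagonal entry of the result becomes −1, matching the treatment described in the last bullet.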