==Applications and useful properties==
The original motivation for expanders was to build economical, robust networks (telephone or computer): a bounded-degree expander is precisely an asymptotically robust graph whose number of edges grows only linearly with its size (number of vertices), with the robustness holding for every subset of vertices.

Expander graphs have found extensive applications in [[computer science]], in designing [[algorithm]]s, [[Expander code|error correcting codes]], [[Extractor (mathematics)|extractors]], [[pseudorandom generator]]s, [[sorting network]]s ({{harvtxt|Ajtai|Komlós|Szemerédi|1983}}) and robust [[computer network]]s. They have also been used in proofs of many important results in [[computational complexity theory]], such as [[SL (complexity)|SL]] = [[L (complexity)|L]] ({{harvtxt|Reingold|2008}}) and the [[PCP theorem]] ({{harvtxt|Dinur|2007}}). In [[cryptography]], expander graphs are used to construct [[hash function]]s.

In a [https://www.ams.org/journals/bull/2006-43-04/S0273-0979-06-01126-8/ 2006 survey of expander graphs], Hoory, Linial, and Wigderson split the study of expander graphs into four categories: [[extremal graph theory|extremal problem]]s, typical behavior, explicit constructions, and algorithms. Extremal problems focus on bounding expansion parameters, while typical behavior problems characterize how the expansion parameters are distributed over [[random graph]]s. Explicit constructions focus on constructing graphs that optimize certain parameters, and algorithmic questions study the evaluation and estimation of parameters.

===Expander mixing lemma===
{{Main|Expander mixing lemma}}
The expander mixing lemma states that for an {{math|(''n'', ''d'', ''λ'')}}-graph and any two subsets of the vertices {{math|''S'', ''T'' ⊆ ''V''}}, the number of edges between {{mvar|S}} and {{mvar|T}} is approximately what one would expect in a random {{mvar|d}}-regular graph; the approximation is better the smaller {{math|''λ''}} is. In a random {{mvar|d}}-regular graph, as well as in an [[Erdős–Rényi model|Erdős–Rényi random graph]] with edge probability {{math|{{frac|''d''|''n''}}}}, one expects {{math|{{frac|''d''|''n''}} ⋅ {{abs|''S''}} ⋅ {{abs|''T''}}}} edges between {{mvar|S}} and {{mvar|T}}.

More formally, let {{math|''E''(''S'', ''T'')}} denote the number of edges between {{mvar|S}} and {{mvar|T}}. If the two sets are not disjoint, edges in their intersection are counted twice, that is,
: <math>E(S,T)=2|E(G[S\cap T])| + E(S\setminus T,T) + E(S,T\setminus S). </math>
Then the expander mixing lemma says that the following inequality holds:
:<math>\left|E(S, T) - \frac{d \cdot |S| \cdot |T|}{n}\right| \leq \lambda \sqrt{|S| \cdot |T|}.</math>
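The inequality can be checked numerically on small graphs. The sketch below is only illustrative and is not part of the lemma itself: it assumes the Python library NetworkX, uses a random {{mvar|d}}-regular graph as a stand-in for an explicit {{math|(''n'', ''d'', ''λ'')}}-graph, estimates {{math|''λ''}} from the adjacency spectrum, and compares {{math|''E''(''S'', ''T'')}} with the bound for a few random pairs of subsets (the parameter values are arbitrary choices).

<syntaxhighlight lang="python">
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 8                                  # illustrative parameters
G = nx.random_regular_graph(d, n, seed=0)      # stand-in for an explicit (n, d, λ)-graph

# λ = second-largest absolute eigenvalue of the adjacency matrix (the largest is d).
eigenvalues = np.sort(np.abs(np.linalg.eigvalsh(nx.to_numpy_array(G))))[::-1]
lam = eigenvalues[1]

def edges_between(G, S, T):
    """E(S, T): edges with one endpoint in S and one in T; edges inside S ∩ T count twice."""
    S, T = set(S), set(T)
    return sum((u in S and v in T) + (u in T and v in S) for u, v in G.edges())

for _ in range(5):
    S = rng.choice(n, size=50, replace=False)
    T = rng.choice(n, size=80, replace=False)
    deviation = abs(edges_between(G, S, T) - d * len(S) * len(T) / n)
    bound = lam * np.sqrt(len(S) * len(T))
    print(f"|E(S,T) - d|S||T|/n| = {deviation:.1f} <= λ√(|S||T|) = {bound:.1f}")
</syntaxhighlight>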
Many properties of {{math|(''n'', ''d'', ''λ'')}}-graphs are corollaries of the expander mixing lemma, including the following.<ref name="Hoory 2006"/>
* An [[Independent set (graph theory)|independent set]] of a graph is a subset of vertices with no two vertices adjacent. In an {{math|(''n'', ''d'', ''λ'')}}-graph, an independent set has size at most {{math|{{frac|''λn''|''d''}}}}.
* The [[Graph coloring|chromatic number]] of a graph {{mvar|G}}, {{math|''χ''(''G'')}}, is the minimum number of colors needed so that adjacent vertices receive different colors. Hoffman showed that {{math|{{frac|''d''|''λ''}} ≤ ''χ''(''G'')}},<ref>{{Cite journal|last1=Hoffman|first1=A. J.|last2=Howes|first2=Leonard|date=1970|title=On Eigenvalues and Colorings of Graphs, II|url=https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1749-6632.1970.tb56474.x|journal=Annals of the New York Academy of Sciences|language=en|volume=175|issue=1|pages=238–242|doi=10.1111/j.1749-6632.1970.tb56474.x|bibcode=1970NYASA.175..238H|s2cid=85243045|issn=1749-6632}}</ref> while Alon, Krivelevich, and Sudakov showed that if {{math|''d'' < {{frac|2''n''|3}}}}, then<ref>{{Cite journal|last1=Alon|first1=Noga|author-link1=Noga Alon|last2=Krivelevich|first2=Michael|author-link2=Michael Krivelevich|last3=Sudakov|first3=Benny|author-link3=Benny Sudakov|date=1999-09-01|title=Coloring Graphs with Sparse Neighborhoods|journal=[[Journal of Combinatorial Theory]] | series=Series B |language=en|volume=77|issue=1|pages=73–82|doi=10.1006/jctb.1999.1910|doi-access=free|issn=0095-8956}}</ref> <math>\chi(G) \leq O \left( \frac{d}{\log(1+d/\lambda)} \right).</math>
* The [[Diameter (graph theory)|diameter]] of a graph is the maximum distance between two vertices, where the distance between two vertices is the length of a shortest path between them. Chung showed that the diameter of an {{math|(''n'', ''d'', ''λ'')}}-graph is at most<ref>{{Cite journal|last=Chung|first=F. R. K.|date=1989|title=Diameters and eigenvalues|url=https://www.ams.org/jams/1989-02-02/S0894-0347-1989-0965008-X/|journal=Journal of the American Mathematical Society|language=en|volume=2|issue=2|pages=187–196|doi=10.1090/S0894-0347-1989-0965008-X|issn=0894-0347|doi-access=free}}</ref> <math>\left\lceil \frac{\log n}{\log(d/\lambda)} \right\rceil.</math>

===Expander walk sampling===
{{Main|Expander walk sampling}}
The [[Chernoff bound]] states that, when taking many independent samples of a random variable in the range {{math|[−1, 1]}}, with high probability the average of the samples is close to the expectation of the random variable. The expander walk sampling lemma, due to {{harvtxt|Ajtai|Komlós|Szemerédi|1987}} and {{harvtxt|Gillman|1998}}, states that this also holds when the samples are taken along a random walk on an expander graph. This is particularly useful in the theory of [[derandomization]], since sampling according to an expander walk uses many fewer random bits than sampling independently.
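The savings in random bits can be illustrated with a short simulation. The sketch below is not the construction of the cited papers: it assumes the Python library NetworkX, uses a random {{mvar|d}}-regular graph in place of an explicit expander, and estimates the density of a marked vertex set once with independent samples and once along a single random walk, counting roughly how many random bits each method consumes (all parameter values are arbitrary).

<syntaxhighlight lang="python">
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 1024, 8, 2000                         # illustrative parameters
G = nx.random_regular_graph(d, n, seed=1)       # stand-in for an explicit expander

marked = set(range(n // 4))                     # f(v) = 1 on a quarter of the vertices, so E[f] = 0.25
f = lambda v: 1.0 if v in marked else 0.0

# Independent sampling: each of the k samples costs about log2(n) random bits.
independent = [f(int(rng.integers(n))) for _ in range(k)]

# Expander walk sampling: log2(n) bits for the start, then log2(d) bits per step.
v = int(rng.integers(n))
walk = []
for _ in range(k):
    walk.append(f(v))
    v = list(G[v])[int(rng.integers(d))]        # step to a uniformly random neighbour

print(f"independent estimate {np.mean(independent):.3f} using ~{k * np.log2(n):.0f} bits")
print(f"expander-walk estimate {np.mean(walk):.3f} using ~{np.log2(n) + k * np.log2(d):.0f} bits")
</syntaxhighlight>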
=== AKS sorting network and approximate halvers ===
{{Main|Sorting network}}
Sorting networks take a set of inputs and perform a series of parallel steps to sort the inputs. A parallel step consists of performing any number of disjoint comparisons and potentially swapping pairs of compared inputs. The depth of a network is given by the number of parallel steps it takes. Expander graphs play an important role in the AKS sorting network, which achieves depth {{math|''O''(log ''n'')}}. While this is asymptotically the best known depth for a sorting network, the reliance on expanders makes the constant bound too large for practical use.

Within the AKS sorting network, expander graphs are used to construct bounded-depth {{mvar|ε}}-halvers. An {{mvar|ε}}-halver takes as input a length-{{mvar|n}} permutation of {{math|(1, …, ''n'')}} and splits the inputs into two disjoint sets {{mvar|A}} and {{mvar|B}} of equal size such that for each integer {{math|''k'' ≤ {{frac|''n''|2}}}} at most {{mvar|εk}} of the {{mvar|k}} smallest inputs are in {{mvar|B}} and at most {{mvar|εk}} of the {{mvar|k}} largest inputs are in {{mvar|A}}. The sets {{mvar|A}} and {{mvar|B}} are called an {{mvar|ε}}-halving.

Following {{harvtxt|Ajtai|Komlós|Szemerédi|1983}}, a depth-{{mvar|d}} {{mvar|ε}}-halver can be constructed as follows. Take an {{mvar|n}}-vertex, degree-{{mvar|d}} bipartite expander with parts {{mvar|X}} and {{mvar|Y}} of equal size such that every subset of vertices of size at most {{mvar|εn}} has at least {{math|{{sfrac|1 − ''ε''|''ε''}}}} times as many neighbors. The vertices of the graph can be thought of as registers that contain inputs, and the edges can be thought of as wires that compare the inputs of two registers. At the start, arbitrarily place half of the inputs in {{mvar|X}} and half of the inputs in {{mvar|Y}}, and decompose the edges into {{mvar|d}} perfect matchings. The goal is to end with {{mvar|X}} roughly containing the smaller half of the inputs and {{mvar|Y}} roughly containing the larger half of the inputs. To achieve this, sequentially process each matching by comparing the registers paired up by the edges of the matching and correcting any inputs that are out of order: for each edge of the matching, if the larger input is in the register in {{mvar|X}} and the smaller input is in the register in {{mvar|Y}}, swap the two inputs so that the smaller one is in {{mvar|X}} and the larger one is in {{mvar|Y}}. Each matching is processed in a single parallel step, so this process consists of {{mvar|d}} parallel steps. After all {{mvar|d}} rounds, take {{mvar|A}} to be the set of inputs in registers in {{mvar|X}} and {{mvar|B}} to be the set of inputs in registers in {{mvar|Y}} to obtain an {{mvar|ε}}-halving.

To see this, notice that if registers {{mvar|u}} in {{mvar|X}} and {{mvar|v}} in {{mvar|Y}} are connected by an edge {{mvar|uv}}, then after the matching containing this edge is processed, the input in {{mvar|u}} is less than that of {{mvar|v}}, and this property remains true throughout the rest of the process. Now, suppose for some {{math|''k'' ≤ {{frac|''n''|2}}}} that more than {{mvar|εk}} of the inputs {{math|(1, …, ''k'')}} are in {{mvar|B}}. Then by the expansion property of the graph, the registers in {{mvar|Y}} holding these inputs are connected to at least {{math|(1 − ''ε'')''k''}} registers in {{mvar|X}}. Altogether, this accounts for more than {{mvar|k}} registers, yet only {{mvar|k}} registers hold inputs from {{math|(1, …, ''k'')}}, so there must be some register {{mvar|u}} in {{mvar|X}} connected to some register {{mvar|v}} in {{mvar|Y}} such that the final input of {{mvar|u}} is not in {{math|(1, …, ''k'')}}, while the final input of {{mvar|v}} is. However, this violates the property above, and thus the output sets {{mvar|A}} and {{mvar|B}} must be an {{mvar|ε}}-halving.
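The compare-and-swap process above can be written out directly. The sketch below is a toy illustration rather than the AKS construction itself: it uses {{mvar|d}} random perfect matchings between the two halves (their union is a good expander only with high probability, whereas the construction requires an explicit expander), the parameters are arbitrary, and it simply reports the empirical halving quality of one run.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n, d = 32, 5                                    # illustrative parameters; n must be even
half = n // 2

# Registers: X and Y each hold half of a random permutation of (1, ..., n).
inputs = rng.permutation(np.arange(1, n + 1))
X, Y = inputs[:half].copy(), inputs[half:].copy()

# d random perfect matchings between X and Y stand in for the decomposition of a
# degree-d bipartite expander; matching pi wires X-register i to Y-register pi[i].
matchings = [rng.permutation(half) for _ in range(d)]

# Process each matching as one parallel round of compare-and-swap:
# the smaller input goes to X, the larger to Y.
for pi in matchings:
    for i in range(half):
        j = pi[i]
        if X[i] > Y[j]:
            X[i], Y[j] = Y[j], X[i]

# Empirical ε: the worst, over k, fraction of the k smallest inputs left in B = Y
# (or of the k largest inputs left in A = X).
worst = max(max(np.sum(Y <= k), np.sum(X > n - k)) / k for k in range(1, half + 1))
print("A =", np.sort(X))
print("B =", np.sort(Y))
print(f"empirical ε of this run: {worst:.2f}")
</syntaxhighlight>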