== Parallel algorithm ==

Kruskal's algorithm is inherently sequential and hard to parallelize. It is, however, possible to perform the initial sorting of the edges in parallel or, alternatively, to use a parallel implementation of a [[binary heap]] to extract the minimum-weight edge in every iteration.<ref>{{Cite journal|last1=Quinn|first1=Michael J.|last2=Deo|first2=Narsingh|date=1984|title=Parallel graph algorithms|journal=ACM Computing Surveys|volume=16|issue=3|pages=319–348|doi=10.1145/2514.2515|s2cid=6833839|doi-access=free}}</ref> As parallel sorting is possible in time <math>O(n)</math> on <math>O(\log n)</math> processors,<ref>{{cite book|title=Introduction to Parallel Computing|last1=Grama|first1=Ananth|last2=Gupta|first2=Anshul|last3=Karypis|first3=George|last4=Kumar|first4=Vipin|year=2003|isbn=978-0201648652|pages=412–413|publisher=Addison-Wesley}}</ref> the runtime of Kruskal's algorithm can be reduced to ''O''(''E'' α(''V'')), where α again is the inverse of the single-valued [[Ackermann function]].

A variant of Kruskal's algorithm, named Filter-Kruskal, has been described by Osipov et al.<ref name="osipov2009">{{Cite journal |last1=Osipov |first1=Vitaly |last2=Sanders |first2=Peter |last3=Singler |first3=Johannes |date=2009 |title=The filter-kruskal minimum spanning tree algorithm |journal=Proceedings of the Eleventh Workshop on Algorithm Engineering and Experiments (ALENEX). Society for Industrial and Applied Mathematics |pages=52–61 |doi=10.1137/1.9781611972894.5 |doi-access=free |isbn=978-0-89871-930-7}}</ref> and is better suited for parallelization. The basic idea behind Filter-Kruskal is to partition the edges in a similar way to [[quicksort]] and filter out edges that connect vertices of the same tree to reduce the cost of sorting. The following [[pseudocode]] demonstrates this.

 '''function''' filter_kruskal(G) '''is'''
     '''if''' |G.E| < kruskal_threshold:
         '''return''' kruskal(G)
     pivot = choose_random(G.E)
     E{{sub|≤}}, E{{sub|>}} = partition(G.E, pivot)
     A = filter_kruskal(E{{sub|≤}})
     E{{sub|>}} = filter(E{{sub|>}})
     A = A ∪ filter_kruskal(E{{sub|>}})
     '''return''' A

 '''function''' partition(E, pivot) '''is'''
     E{{sub|≤}} = ∅, E{{sub|>}} = ∅
     '''foreach''' (u, v) in E '''do'''
         '''if''' weight(u, v) ≤ pivot '''then'''
             E{{sub|≤}} = E{{sub|≤}} ∪ {(u, v)}
         '''else'''
             E{{sub|>}} = E{{sub|>}} ∪ {(u, v)}
     '''return''' E{{sub|≤}}, E{{sub|>}}

 '''function''' filter(E) '''is'''
     E{{sub|f}} = ∅
     '''foreach''' (u, v) in E '''do'''
         '''if''' find_set(u) ≠ find_set(v) '''then'''
             E{{sub|f}} = E{{sub|f}} ∪ {(u, v)}
     '''return''' E{{sub|f}}

Filter-Kruskal lends itself better to parallelization as sorting, filtering, and partitioning can easily be performed in parallel by distributing the edges between the processors.<ref name="osipov2009" />
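A minimal sequential Python sketch of this recursion follows. It is illustrative rather than taken from the cited paper: the <code>DisjointSet</code> helper, the threshold value, and the random pivot choice are assumptions, and the parallel distribution of edges over processors is omitted.

<syntaxhighlight lang="python">
import random

class DisjointSet:
    """Union-find with path compression and union by size (illustrative helper)."""
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}
        self.size = {v: 1 for v in vertices}

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path compression
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False                     # u and v are already in the same tree
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        self.size[ru] += self.size[rv]
        return True

def kruskal(edges, ds):
    """Plain Kruskal on an edge list of (u, v, weight), reusing the shared union-find."""
    mst = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if ds.union(u, v):
            mst.append((u, v, w))
    return mst

def filter_kruskal(edges, ds, threshold=16):
    """Sequential sketch of Filter-Kruskal: quicksort-style partitioning plus filtering."""
    if len(edges) <= threshold:              # small subproblem: fall back to plain Kruskal
        return kruskal(edges, ds)
    pivot = random.choice(edges)[2]          # pivot on the weight of a random edge
    light = [e for e in edges if e[2] <= pivot]
    heavy = [e for e in edges if e[2] > pivot]
    if not heavy:                            # degenerate pivot (maximum weight): avoid endless recursion
        return kruskal(edges, ds)
    mst = filter_kruskal(light, ds, threshold)
    # filter: discard heavy edges whose endpoints are already connected
    heavy = [(u, v, w) for u, v, w in heavy if ds.find(u) != ds.find(v)]
    mst += filter_kruskal(heavy, ds, threshold)
    return mst

# Example usage on a small weighted graph
edges = [('a', 'b', 3), ('b', 'c', 1), ('a', 'c', 2), ('c', 'd', 4), ('b', 'd', 5)]
print(filter_kruskal(edges, DisjointSet({'a', 'b', 'c', 'd'}), threshold=2))
# -> the unique MST {('b', 'c', 1), ('a', 'c', 2), ('c', 'd', 4)} (list order may vary)
</syntaxhighlight>

Sharing one union-find structure across the recursive calls is what makes the filtering step possible: once the light edges have been processed, any heavy edge whose endpoints already lie in the same tree can be discarded without ever being sorted.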
Finally, other variants of a parallel implementation of Kruskal's algorithm have been explored. Examples include a scheme that uses helper threads to remove edges that are definitely not part of the MST in the background,<ref>{{Cite book|last1=Katsigiannis|first1=Anastasios|last2=Anastopoulos|first2=Nikos|last3=Konstantinos|first3=Nikas|last4=Koziris|first4=Nectarios|title=2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum |chapter=An Approach to Parallelize Kruskal's Algorithm Using Helper Threads |date=2012|url=http://tarjomefa.com/wp-content/uploads/2017/10/7793-English-TarjomeFa.pdf|pages=1601–1610|doi=10.1109/IPDPSW.2012.201|isbn=978-1-4673-0974-5|s2cid=14430930}}</ref> and a variant which runs the sequential algorithm on ''p'' subgraphs, then merges those subgraphs until only one, the final MST, remains.<ref>{{Cite book|last1=Lončar|first1=Vladimir|last2=Škrbić|first2=Srdjan|last3=Balaž|first3=Antun|chapter=Parallelization of Minimum Spanning Tree Algorithms Using Distributed Memory Architectures |date=2014|title=Transactions on Engineering Technologies|chapter-url=https://www.researchgate.net/publication/235994104|pages=543–554|doi=10.1007/978-94-017-8832-8_39|isbn=978-94-017-8831-1}}</ref>
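The following is a rough sequential sketch of the latter, merge-based idea, not the distributed-memory implementation of the cited paper: the round-robin split of the edge list into ''p'' chunks, the pairwise merging schedule, and the function names are illustrative assumptions.

<syntaxhighlight lang="python">
from typing import Dict, List, Tuple

Edge = Tuple[str, str, float]

def kruskal(edges: List[Edge]) -> List[Edge]:
    """Sequential Kruskal returning a minimum spanning forest of the given edges."""
    parent: Dict[str, str] = {}

    def find(v: str) -> str:
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]    # path compression
            v = parent[v]
        return v

    forest = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((u, v, w))
    return forest

def merge_based_mst(edges: List[Edge], p: int = 4) -> List[Edge]:
    """Split the edges into p chunks, run sequential Kruskal on each chunk,
    then merge the resulting forests pairwise until only one remains."""
    chunks = [edges[i::p] for i in range(p)]           # round-robin edge partition
    forests = [kruskal(chunk) for chunk in chunks]     # independent work per "processor"
    while len(forests) > 1:                            # pairwise merging rounds
        merged = []
        for i in range(0, len(forests), 2):
            pair = forests[i] + (forests[i + 1] if i + 1 < len(forests) else [])
            merged.append(kruskal(pair))
        forests = merged
    return forests[0]

edges = [('a', 'b', 3), ('b', 'c', 1), ('a', 'c', 2), ('c', 'd', 4), ('b', 'd', 5)]
print(merge_based_mst(edges, p=2))   # same MST as running Kruskal on the full edge list
</syntaxhighlight>

Merging forests rather than full edge lists is safe because, by the cycle property, an edge discarded inside a chunk is the heaviest edge on some cycle and therefore (with distinct weights) cannot appear in the overall MST.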