Dijkstra's algorithm
== Running time ==
Bounds of the running time of Dijkstra's algorithm on a graph with edges ''{{mvar|E}}'' and vertices ''{{mvar|V}}'' can be expressed as a function of the number of edges, denoted <math>|E|</math>, and the number of vertices, denoted <math>|V|</math>, using [[big-O notation]]. The complexity bound depends mainly on the data structure used to represent the set ''{{mvar|Q}}''. In the following, upper bounds can be simplified because <math>|E|</math> is <math>O(|V|^2)</math> for any simple graph, but that simplification disregards the fact that in some problems, other upper bounds on <math>|E|</math> may hold.

For any data structure for the vertex set ''{{mvar|Q}}'', the running time is:{{sfn|Cormen|Leiserson|Rivest|Stein|2001}}

: <math>\Theta(|E| \cdot T_\mathrm{dk} + |V| \cdot T_\mathrm{em}),</math>

where <math>T_\mathrm{dk}</math> and <math>T_\mathrm{em}</math> are the complexities of the ''decrease-key'' and ''extract-minimum'' operations in ''{{mvar|Q}}'', respectively.

The simplest version of Dijkstra's algorithm stores the vertex set ''{{mvar|Q}}'' as a linked list or array, and edges as an [[adjacency list]] or [[Adjacency matrix|matrix]]. In this case, extract-minimum is simply a linear search through all vertices in ''{{mvar|Q}}'', so the running time is <math>\Theta(|E| + |V|^2) = \Theta(|V|^2)</math>.

For [[sparse graph]]s, that is, graphs with far fewer than <math>|V|^2</math> edges, Dijkstra's algorithm can be implemented more efficiently by storing the graph in the form of adjacency lists and using a [[self-balancing binary search tree]], [[binary heap]], [[pairing heap]], [[Fibonacci heap]] or a priority heap as a [[priority queue]] to implement extracting minimum efficiently. To perform decrease-key steps in a binary heap efficiently, it is necessary to use an auxiliary data structure that maps each vertex to its position in the heap, and to update this structure as the priority queue ''{{mvar|Q}}'' changes.
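As an illustration (not part of the article's pseudocode), the simplest array-based version described above can be sketched in Python: the unprocessed set ''Q'' is a plain set, extract-minimum is a linear scan, and decrease-key is an ordinary assignment, giving the <math>\Theta(|V|^2)</math> behavior. The graph representation (a dict of adjacency lists) is an assumption for the sketch.

```python
import math

def dijkstra_array(graph, source):
    """graph: dict mapping vertex -> list of (neighbor, weight) pairs."""
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    q = set(graph)                      # the unprocessed vertex set Q
    while q:
        # Extract-minimum by scanning all of Q: Theta(|V|) per call.
        u = min(q, key=lambda v: dist[v])
        q.remove(u)
        if dist[u] == math.inf:         # remaining vertices are unreachable
            break
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:   # "decrease-key" is a plain assignment
                dist[v] = dist[u] + w
    return dist
```

Because every iteration scans all of ''Q'', the total work is quadratic in the number of vertices regardless of how few edges the graph has, which is why the heap-based variants below matter for sparse graphs.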
With a self-balancing binary search tree or binary heap, the algorithm requires

: <math>\Theta((|E| + |V|) \log |V|)</math>

time in the worst case; for connected graphs this time bound can be simplified to <math>\Theta(|E| \log |V|)</math>. The [[Fibonacci heap]] improves this to

: <math>\Theta(|E| + |V| \log|V|).</math>

When using binary heaps, the [[Best, worst and average case|average case]] time complexity is lower than the worst case: assuming edge costs are drawn independently from a common [[probability distribution]], the expected number of ''decrease-key'' operations is bounded by <math>\Theta(|V| \log (|E|/|V|))</math>, giving a total running time of<ref name="mehlhorn" />{{rp|199–200}}

: <math>O\left(|E| + |V| \log \frac{|E|}{|V|} \log |V|\right).</math>

===Practical optimizations and infinite graphs===
In common presentations of Dijkstra's algorithm, initially all nodes are entered into the priority queue. This is, however, not necessary: the algorithm can start with a priority queue that contains only one item, and insert new items as they are discovered (instead of doing a decrease-key, check whether the key is in the queue; if it is, decrease its key, otherwise insert it).{{r|mehlhorn}}{{rp|198}} This variant has the same worst-case bounds as the common variant, but maintains a smaller priority queue in practice, speeding up queue operations.<ref name="felner">{{cite conference |last=Felner |first=Ariel |year=2011 |title=Position Paper: Dijkstra's Algorithm versus Uniform Cost Search or a Case Against Dijkstra's Algorithm |url=http://www.aaai.org/ocs/index.php/SOCS/SOCS11/paper/view/4017/4357 |conference=Proc. 4th Int'l Symp.
on Combinatorial Search |archive-url=https://web.archive.org/web/20200218150924/https://www.aaai.org/ocs/index.php/SOCS/SOCS11/paper/view/4017/4357 |archive-date=18 February 2020 |access-date=12 February 2015 |url-status=dead}} In a route-finding problem, Felner finds that the queue can be a factor 500–600 smaller, taking some 40% of the running time.</ref> Moreover, not inserting all nodes in a graph makes it possible to extend the algorithm to find the shortest path from a single source to the closest of a set of target nodes on infinite graphs or those too large to represent in memory. The resulting algorithm is called ''uniform-cost search'' (UCS) in the artificial intelligence literature{{r|felner}}<ref name="aima">{{Cite AIMA|3|pages=75, 81}}</ref><ref>Sometimes also ''least-cost-first search'': {{cite journal |last=Nau |first=Dana S. |year=1983 |title=Expert computer systems |url=https://www.cs.umd.edu/~nau/papers/nau1983expert.pdf |journal=Computer |publisher=IEEE |volume=16 |issue=2 |pages=63–85 |doi=10.1109/mc.1983.1654302 |s2cid=7301753}}</ref> and can be expressed in pseudocode as<!--NOTE TO EDITORS: Don't get confused by "uniform" in the name and remove costs from the pseudocode. 
UCS includes costs and is different from breadth-first search.-->

 '''procedure''' uniform_cost_search(start) '''is'''
     node ← start
     frontier ← priority queue containing node only
     expanded ← empty set
     '''do'''
         '''if''' frontier is empty '''then'''
             '''return''' failure
         node ← frontier.pop()
         '''if''' node is a goal state '''then'''
             '''return''' solution(node)
         expanded.add(node)
         '''for each''' of node's neighbors ''n'' '''do'''
             '''if''' ''n'' is not in expanded and not in frontier '''then'''
                 frontier.add(''n'')
             '''else if''' ''n'' is in frontier with higher cost
                 replace existing node with ''n''

Its complexity can be expressed in an alternative way for very large graphs: when {{math|''C''<sup>*</sup>}} is the length of the shortest path from the start node to any node satisfying the "goal" predicate, each edge has cost at least ''{{mvar|ε}}'', and the number of neighbors per node is bounded by ''{{mvar|b}}'', then the algorithm's worst-case time and space complexity are both in {{math|''O''(''b''<sup>1+⌊''C''<sup>*</sup> {{frac}} ''ε''⌋</sup>)}}.{{r|aima}}

Further optimizations for the single-target case include [[Bidirectional search|bidirectional]] variants, goal-directed variants such as the [[A* algorithm]] (see {{slink||Related problems and algorithms}}), graph pruning to determine which nodes are likely to form the middle segment of shortest paths (reach-based routing), and hierarchical decompositions of the input graph that reduce {{math|''s''–''t''}} routing to connecting ''{{mvar|s}}'' and ''{{mvar|t}}'' to their respective "[[Transit Node Routing|transit nodes]]" followed by shortest-path computation between these transit nodes using a "highway".<ref name="speedup2">{{cite conference |last1=Wagner |first1=Dorothea |last2=Willhalm |first2=Thomas |year=2007 |title=Speed-up techniques for shortest-path computations |conference=STACS |pages=23–36}}</ref> Combinations of such techniques may be needed for optimal practical performance on specific problems.<ref>{{cite
journal |last1=Bauer |first1=Reinhard |last2=Delling |first2=Daniel |last3=Sanders |first3=Peter |last4=Schieferdecker |first4=Dennis |last5=Schultes |first5=Dominik |last6=Wagner |first6=Dorothea |year=2010 |title=Combining hierarchical and goal-directed speed-up techniques for Dijkstra's algorithm |url=https://publikationen.bibliothek.kit.edu/1000014952 |journal=J. Experimental Algorithmics |volume=15 |pages=2.1 |doi=10.1145/1671970.1671976 |s2cid=1661292}}</ref>

=== Optimality for comparison-sorting by distance ===
As well as simply computing distances and paths, Dijkstra's algorithm can be used to sort vertices by their distances from a given starting vertex. In 2023, Haeupler, Rozhoň, Tětek, Hladík, and [[Robert Tarjan|Tarjan]] (one of the inventors of the 1984 Fibonacci heap) proved that, for this sorting problem on a positively weighted directed graph, a version of Dijkstra's algorithm with a special heap data structure has a runtime and number of comparisons that are within a constant factor of optimal among [[Comparison sort|comparison-based]] algorithms for the same sorting problem on the same graph and starting vertex but with variable edge weights.
To achieve this, they use a comparison-based heap whose cost of returning/removing the minimum element from the heap is logarithmic in the number of elements inserted after it rather than in the number of elements in the heap.<ref>{{cite arXiv |last1=Haeupler |first1=Bernhard |last2=Hladík |first2=Richard |last3=Rozhoň |first3=Václav |last4=Tarjan |first4=Robert |last5=Tětek |first5=Jakub |title=Universal Optimality of Dijkstra via Beyond-Worst-Case Heaps |date=2024-10-28 |eprint=2311.11793 |class=cs.DS}}</ref><ref>{{Cite web |last=Brubaker |first=Ben |date=2024-10-25 |title=Computer Scientists Establish the Best Way to Traverse a Graph |url=https://www.quantamagazine.org/computer-scientists-establish-the-best-way-to-traverse-a-graph-20241025/ |access-date=2024-12-09 |website=Quanta Magazine |language=en}}</ref>

===Specialized variants===
When arc weights are small integers (bounded by a parameter <math>C</math>), specialized queues can be used for increased speed. The first algorithm of this type was Dial's algorithm{{Sfn|Dial|1969}} for graphs with positive integer edge weights, which uses a [[bucket queue]] to obtain a running time of <math>O(|E|+|V|C)</math>. The use of a [[Van Emde Boas tree]] as the priority queue brings the complexity to <math>O(|E|+|V|\log C/\log\log |V|C)</math>.{{sfn|Ahuja|Mehlhorn|Orlin|Tarjan|1990}} Another interesting variant, based on a combination of a new [[radix heap]] and the well-known Fibonacci heap, runs in time <math>O(|E|+|V|\sqrt{\log C})</math>.{{sfn|Ahuja|Mehlhorn|Orlin|Tarjan|1990}} Finally, the best algorithms in this special case run in <math>O(|E|\log\log|V|)</math> time{{sfn|Thorup|2000}} and <math>O(|E| + |V|\min\{(\log|V|)^{1/3+\varepsilon}, (\log C)^{1/4+\varepsilon}\})</math> time.{{sfn|Raman|1997}}
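To illustrate the bucket-queue idea behind Dial's algorithm (a hedged sketch, not code from any of the cited papers): with positive integer edge weights bounded by <math>C</math>, every tentative distance is an integer of at most <math>(|V|-1)C</math>, so the priority queue can simply be an array of buckets indexed by distance, swept once in increasing order. The graph representation and function name are assumptions of the sketch.

```python
import math

def dial_shortest_paths(graph, source, c):
    """graph: dict vertex -> list of (neighbor, weight); weights in 1..c."""
    n = len(graph)
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    # Distances are at most (n - 1) * c, so that many buckets suffice.
    buckets = [[] for _ in range((n - 1) * c + 1)]
    buckets[0].append(source)
    done = set()
    for d in range(len(buckets)):           # sweep buckets in increasing distance
        for u in buckets[d]:
            if u in done or dist[u] != d:   # skip stale entries left by updates
                continue
            done.add(u)
            for v, w in graph[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    buckets[d + w].append(v)  # re-insert at the new distance
    return dist
```

Each vertex is settled once and each edge relaxed once, while the sweep itself costs <math>O(|V|C)</math> bucket visits, matching the <math>O(|E|+|V|C)</math> bound quoted above; stale bucket entries stand in for an explicit decrease-key.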