{{Short description|Data structure for priority queue operations}} {{Infobox data structure-amortized | name = Fibonacci heap | type = [[Heap (data structure)|heap]] | invented_by = Michael L. Fredman and Robert E. Tarjan | invented_year = 1984 | insert_avg = ''Θ''(1) | delete_min_avg = O(log ''n'') | find_min_avg = ''Θ''(1) | decrease_key_avg = ''Θ''(1) | merge_avg = ''Θ''(1) }} In [[computer science]], a '''Fibonacci heap''' is a [[data structure]] for [[priority queue]] operations, consisting of a collection of [[Heap (data structure)|heap-ordered trees]]. It has a better [[amortized]] running time than many other priority queue data structures, including the [[binary heap]] and [[binomial heap]]. [[Michael Fredman|Michael L. Fredman]] and [[Robert E. Tarjan]] developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. Fibonacci heaps are named after the [[Fibonacci number]]s, which are used in their running time analysis. The amortized times of all operations on Fibonacci heaps are constant, except ''delete-min''.<ref>{{Introduction to Algorithms|2|chapter=Chapter 20: Fibonacci Heaps |pages=476–497}} Third edition p. 518.</ref><ref name="Fredman And Tarjan"/> Deleting an element (most often used in the special case of deleting the minimum element) works in <math>O(\log n)</math> amortized time, where <math>n</math> is the size of the heap.<ref name="Fredman And Tarjan"/> This means that starting from an empty data structure, any sequence of ''a'' insert and ''decrease-key'' operations and ''b'' ''delete-min'' operations would take <math>O(a + b\log n)</math> worst-case time, where <math>n</math> is the maximum heap size. In a binary or binomial heap, such a sequence of operations would take <math>O((a + b)\log n)</math> time. A Fibonacci heap is thus better than a binary or binomial heap when <math>b</math> is smaller than <math>a</math> by a non-constant factor. 
It is also possible to merge two Fibonacci heaps in constant amortized time, improving on the logarithmic merge time of a binomial heap and on binary heaps, which cannot handle merges efficiently. Using Fibonacci heaps improves the asymptotic running time of algorithms which utilize priority queues. For example, [[Dijkstra's algorithm]] and [[Prim's algorithm]] can be made to run in <math>O(|E|+|V|\log|V|)</math> time. == Structure == [[Image:Fibonacci heap.png|thumbnail|upright=1.05|Figure 1. Example of a Fibonacci heap. It has three trees of degrees 0, 1 and 3. Three vertices are marked (shown in blue). Therefore, the potential of the heap is 9 (3 trees + 2 × (3 marked-vertices)).]] A Fibonacci heap is a collection of [[tree data structure|tree]]s satisfying the [[minimum-heap property]], that is, the key of a child is always greater than or equal to the key of the parent. This implies that the minimum key is always at the root of one of the trees. Compared with binomial heaps, the structure of a Fibonacci heap is more flexible. The trees do not have a prescribed shape, and in the extreme case the heap can have every element in a separate tree. This flexibility allows some operations to be executed in a [[lazy evaluation|lazy]] manner, postponing the work for later operations. For example, merging heaps is done simply by concatenating the two lists of trees, and the ''decrease-key'' operation sometimes cuts a node from its parent and forms a new tree. However, at some point order needs to be introduced to the heap to achieve the desired running time. In particular, degrees of nodes (here degree means the number of direct children) are kept quite low: every node has degree at most <math>O(\log n)</math> and the size of a subtree rooted in a node of degree <math>k</math> is at least <math>F_{k+2}</math>, where <math>F_i</math> is the <math>i</math>th [[Fibonacci number]]. This is achieved by the rule that at most one child can be cut from each non-root node. 
When a second child is cut, the node itself needs to be cut from its parent and becomes the root of a new tree (see Proof of degree bounds, below). The number of trees is decreased in the operation ''delete-min'', where trees are linked together. As a result of this relaxed structure, some operations can take a long time while others are done very quickly. For the [[amortized analysis|amortized running time]] analysis, we use the [[potential method]], in which we pretend that very fast operations take a little bit longer than they actually do. This additional time is then later subtracted from the actual running time of slow operations. The amount of time saved for later use is measured at any given moment by a potential function. The potential <math>\phi</math> of a Fibonacci heap is given by :<math>\phi = t + 2m</math>, where <math>t</math> is the number of trees in the Fibonacci heap, and <math>m</math> is the number of marked nodes. A node is marked if at least one of its children has been cut since the node was made a child of another node (all roots are unmarked). The amortized time for an operation is given by the sum of the actual time and <math>c</math> times the difference in potential, where <math>c</math> is a constant (chosen to match the constant factors in the [[big O notation]] for the actual time). Thus, the root of each tree in a heap has one unit of time stored. This unit of time can be used later to link this tree with another tree at amortized time 0. Also, each marked node has two units of time stored. One can be used to cut the node from its parent. If this happens, the node becomes a root and the second unit of time will remain stored in it as in any other root. == Operations == To allow fast deletion and concatenation, the roots of all trees are linked using a circular [[doubly linked list]]. The children of each node are also linked using such a list. For each node, we maintain its number of children and whether the node is marked. 
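The linked representation just described can be sketched in code. The following is a minimal illustrative Python sketch (all names are invented for the example, and ''delete-min'' and ''decrease-key'' are omitted); it shows why ''insert'', ''merge'' and ''find-min'' run in constant actual time: each is only a constant number of pointer updates on the circular root list.

```python
class Node:
    """A Fibonacci heap node: key, child count, mark flag, and four pointers."""
    def __init__(self, key):
        self.key = key
        self.degree = 0          # number of children
        self.mark = False        # set when the node loses a child
        self.parent = None
        self.child = None        # any one child; siblings form a circular list
        self.left = self         # circular doubly linked sibling list:
        self.right = self        # a lone node points to itself


class FibonacciHeap:
    """Sketch of the lazy O(1) operations on the circular root list."""

    def __init__(self):
        self.min = None          # pointer to the root with the minimum key

    def find_min(self):
        return self.min.key      # O(1): just follow the minimum pointer

    def insert(self, key):
        """Append a one-node tree to the root list (constant actual time)."""
        node = Node(key)
        if self.min is None:
            self.min = node
        else:
            # splice the new node in next to the current minimum
            node.left = self.min
            node.right = self.min.right
            self.min.right.left = node
            self.min.right = node
            if node.key < self.min.key:
                self.min = node
        return node

    def merge(self, other):
        """Concatenate the two circular root lists (constant actual time)."""
        if other.min is None:
            return
        if self.min is None:
            self.min = other.min
        else:
            a, b = self.min, other.min
            a_next, b_prev = a.right, b.left
            a.right, b.left = b, a            # link a -> b
            b_prev.right, a_next.left = a_next, b_prev  # close the cycle
            if b.key < a.key:
                self.min = b
        other.min = None         # `other` is emptied by the merge
```

Note that neither operation walks any list: the potential method in the analysis above charges these cheap operations one extra unit per new root, to pay for linking the roots together during a later ''delete-min''.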
=== Find-min === We maintain a pointer to the root containing the minimum key, allowing <math>O(1)</math> access to the minimum. This pointer must be updated during the other operations, which adds only a constant time overhead. === Merge === The merge operation simply concatenates the root lists of the two heaps and sets the minimum pointer to the smaller of the two heaps' minima. This can be done in constant time, and the potential does not change, leading again to constant amortized time. === Insert === The insertion operation can be considered a special case of the merge operation, with a single node. The node is simply appended to the root list, increasing the potential by one. The amortized cost is thus still constant. === Delete-min === [[Image:Fibonacci heap extractmin1.png|thumb|Figure 2. First phase of ''delete-min''.]] [[File:Fibonacci heap extractmin2.png|thumb|163x163px|Figure 3. Third phase of ''delete-min''.]] The delete-min operation does most of the work in restoring the structure of the heap. It has three phases: # The root containing the minimum element is removed, and each of its <math>d</math> children becomes a new root. It takes time <math>O(d)</math> to process these new roots, and the potential increases by <math>d-1</math>. Therefore, the amortized running time of this phase is <math>O(d) = O(\log n)</math>. # There may be up to <math>n</math> roots. We therefore decrease the number of roots by successively linking together roots of the same degree. When two roots have the same degree, we make the one with the larger key a child of the other, so that the minimum heap property is observed. The degree of the smaller root increases by one. This is repeated until every root has a different degree. To find trees of the same degree efficiently, we use an array of length <math>O(\log n)</math> in which we keep a pointer to one root of each degree. When a second root is found of the same degree, the two are linked and the array is updated. 
The actual running time is <math>O(\log n + m)</math>, where <math>m</math> is the number of roots at the beginning of the second phase. In the end, we will have at most <math>O(\log n)</math> roots (because each has a different degree). Therefore, the difference in the potential from before to after this phase is <math>O(\log n) - m</math>. Thus, the amortized running time is <math>O(\log n + m) + c(O(\log n) - m)</math>. By choosing a sufficiently large <math>c</math> such that the terms in <math>m</math> cancel out, this simplifies to <math>O(\log n)</math>. # Search the final list of roots to find the minimum, and update the minimum pointer accordingly. This takes <math>O(\log n)</math> time, because the number of roots has been reduced. Overall, the amortized time of this operation is <math>O(\log n)</math>, provided that <math>d = O(\log n)</math>. The proof of this is given in the following section. === Decrease-key === [[Image:Fibonacci heap-decreasekey.png|246x246px|thumb|Figure 4. Fibonacci heap from Figure 1 after decreasing key of node 9 to 0.]] If decreasing the key of a node <math>x</math> causes it to become smaller than its parent, then it is cut from its parent, becoming a new unmarked root. If it is also less than the minimum key, then the minimum pointer is updated. We then initiate a series of ''cascading cuts'', starting with the parent of <math>x</math>. As long as the current node is marked, it is cut from its parent and made an unmarked root. Its original parent is then considered. This process stops when we reach an unmarked node <math>y</math>. If <math>y</math> is not a root, it is marked. In this process we introduce some number, say <math>k</math>, of new trees. Except possibly <math>x</math>, each of these new trees loses its original mark. The terminating node <math>y</math> may become marked. Therefore, the change in the number of marked nodes is between <math>-k</math> and <math>-k+2</math>. 
The resulting change in potential is at most <math>k+2(-k+2)=-k+4</math>. The actual time required to perform the cutting was <math>O(k)</math>. Hence, the amortized time is <math>O(k) + c(-k+4)</math>, which is constant, provided <math>c</math> is sufficiently large. ==Proof of degree bounds== The amortized performance of a Fibonacci heap depends on the degree (number of children) of any tree root being <math>O(\log n)</math>, where <math>n</math> is the size of the heap. Here we show that the (sub)tree rooted at any node <math>x</math> of degree <math>d</math> in the heap must have size at least <math>F_{d+2}</math>, where <math>F_i</math> is the <math>i</math>th [[Fibonacci number]]. The degree bound follows from this and the fact (easily proved by induction) that <math>F_{d+2} \ge \varphi^d</math> for all integers <math>d\ge 0</math>, where <math>\varphi = (1+\sqrt 5)/2 \approx 1.618</math> is the [[golden ratio]]. We then have <math>n \ge F_{d+2} \ge \varphi^d</math>, and taking the log to base <math>\varphi</math> of both sides gives <math>d\le \log_{\varphi} n</math> as required. Let <math>x</math> be an arbitrary node in a Fibonacci heap, not necessarily a root. Define <math>\mathrm{size}(x)</math> to be the size of the tree rooted at <math>x</math> (the number of descendants of <math>x</math>, including <math>x</math> itself). We prove by induction on the height of <math>x</math> (the length of the longest path from <math>x</math> to a descendant leaf) that <math>\mathrm{size}(x) \ge F_{d+2}</math>, where <math>d</math> is the degree of <math>x</math>. '''Base case:''' If <math>x</math> has height <math>0</math>, then <math>d=0</math>, and <math>\mathrm{size}(x) = 1 \ge F_2</math>. '''Inductive case:''' Suppose <math>x</math> has positive height and degree <math>d>0</math>. 
Let <math>y_1, y_2 \dots y_d</math> be the children of <math>x</math>, indexed in order of the times they were most recently made children of <math>x</math> (<math>y_1</math> being the earliest and <math>y_d</math> the latest), and let <math>c_1, c_2 \dots c_d</math> be their respective degrees. We claim that <math>c_i \ge i-2</math> for each <math>i</math>. Just before <math>y_i</math> was made a child of <math>x</math>, <math>y_1 \dots y_{i-1}</math> were already children of <math>x</math>, and so <math>x</math> must have had degree at least <math>i-1</math> at that time. Since trees are combined only when the degrees of their roots are equal, it must have been the case that <math>y_i</math> also had degree at least <math>i-1</math> at the time when it became a child of <math>x</math>. From that time to the present, <math>y_i</math> could have lost at most one child (as guaranteed by the marking process), and so its current degree <math>c_i</math> is at least <math>i-2</math>. This proves the claim. Since the heights of all the <math>y_i</math> are strictly less than that of <math>x</math>, we can apply the inductive hypothesis to them to get<math display="block">\mathrm{size}(y_i) \ge F_{c_i+2} \ge F_{(i-2)+2} = F_i.</math>The nodes <math>x</math> and <math>y_1</math> each contribute at least 1 to <math>\mathrm{size}(x)</math>, and so we have<math display="block">\begin{align} \mathrm{size}(x) &\ge 2 + \sum_{i=2}^d \mathrm{size}(y_i) \\ &\ge 2 + \sum_{i=2}^d F_i \\ &= 1 + \sum_{i=0}^d F_i \\ &= F_{d+2} \end{align}</math>where the last step is an identity for Fibonacci numbers. This gives the desired lower bound on <math>\mathrm{size}(x)</math>. ==Performance== Although Fibonacci heaps look very efficient, they have the following two drawbacks:<ref name=FSST>{{cite journal | last1 = Fredman | first1 = Michael L. 
| author1-link = Michael Fredman | last2 = Sedgewick | first2 = Robert | author2-link = Robert Sedgewick (computer scientist) | last3 = Sleator | first3 = Daniel D. | author3-link = Daniel Sleator | last4 = Tarjan | first4 = Robert E. | author4-link = Robert Tarjan | doi = 10.1007/BF01840439 | issue = 1–4 | journal = Algorithmica | pages = 111–129 | title = The pairing heap: a new form of self-adjusting heap | url = https://www.cs.cmu.edu/~sleator/papers/pairing-heaps.pdf | volume = 1 | year = 1986| s2cid = 23664143 }}</ref> # They are complicated to implement. # They are not as efficient in practice when compared with theoretically less efficient forms of heaps.<ref>http://www.cs.princeton.edu/~wayne/kleinberg-tardos/pdf/FibonacciHeaps.pdf, p. 79</ref> In their simplest version, they require manipulation of four pointers per node, whereas only two or three pointers per node are needed in other structures, such as the [[binomial heap]] or [[pairing heap]]. This results in large memory consumption per node and high constant factors on all operations. Although the total running time of a sequence of operations starting with an empty structure is bounded by the bounds given above, some (very few) operations in the sequence can take a very long time to complete (in particular, delete-min has linear running time in the worst case). For this reason, Fibonacci heaps and other amortized data structures may not be appropriate for [[real-time computing|real-time systems]]. It is possible to create a data structure which has the same worst-case performance as the Fibonacci heap has amortized performance. One such structure, the [[Brodal queue]],<ref name="bare_url">{{Citation |citeseerx=10.1.1.43.8133 |title=Worst-Case Efficient Priority Queues |year=1996 |author=Gerth Stølting Brodal |journal=Proc. 
7th ACM-SIAM Symposium on Discrete Algorithms |publisher=[[Society for Industrial and Applied Mathematics]] |pages=52–58 |isbn=0-89871-366-8 |url=https://archive.org/details/proceedingsofsev0000acms/page/52 }}</ref> is, in the words of the creator, "quite complicated" and "[not] applicable in practice." Invented in 2012, the [[strict Fibonacci heap]]<ref>{{Cite conference |doi=10.1145/2213977.2214082 |title=Strict Fibonacci heaps |conference=Proceedings of the 44th symposium on Theory of Computing - STOC '12 |pages=1177 |year=2012 |last1=Brodal |first1=G. S. L. |last2=Lagogiannis |first2=G. |last3=Tarjan |first3=R. E. |isbn=978-1-4503-1245-5 |url=http://www.cs.au.dk/~gerth/papers/stoc12.pdf }}</ref> is a simpler (compared to Brodal's) structure with the same worst-case bounds. Despite being simpler, experiments show that in practice the strict Fibonacci heap performs more slowly than the more complicated [[Brodal queue]] and also more slowly than the basic Fibonacci heap.<ref>{{Cite book|last1=Mrena|first1=Michal|last2=Sedlacek|first2=Peter|last3=Kvassay|first3=Miroslav|title=2019 International Conference on Information and Digital Technologies (IDT) |chapter=Practical Applicability of Advanced Implementations of Priority Queues in Finding Shortest Paths |date=June 2019|location=Zilina, Slovakia|publisher=IEEE|pages=335–344|doi=10.1109/DT.2019.8813457|isbn=9781728114019|s2cid=201812705}}</ref><ref name=":0">{{cite journal |last1=Larkin |first1=Daniel |last2=Sen |first2=Siddhartha |last3=Tarjan |first3=Robert |date=2014 |title=A Back-to-Basics Empirical Study of Priority Queues |journal=Proceedings of the Sixteenth Workshop on Algorithm Engineering and Experiments |pages=61–72 |arxiv=1403.0252 |bibcode=2014arXiv1403.0252L |doi=10.1137/1.9781611973198.7 |isbn=978-1-61197-319-8 |s2cid=15216766}}</ref> The run-relaxed heaps of Driscoll et al. 
give good worst-case performance for all Fibonacci heap operations except merge.<ref name="driscoll">{{Cite journal | last1=Driscoll | first1=James R. | last2=Gabow | first2=Harold N. | last3=Shrairman | first3=Ruth | last4=Tarjan | first4=Robert E. | date=November 1988 | title=Relaxed heaps: An alternative to Fibonacci heaps with applications to parallel computation |journal=Communications of the ACM | volume=31 | issue=11 | pages=1343–1354 | doi=10.1145/50087.50096 | s2cid=16078067 | doi-access=free }}</ref> Recent experimental results suggest that the Fibonacci heap is more efficient in practice than most of its later derivatives, including quake heaps, violation heaps, strict Fibonacci heaps, and rank pairing heaps, but less efficient than pairing heaps or array-based heaps.<ref name=":0" /> ==Summary of running times== {{Heap Running Times}} == References == {{reflist}} == External links == * [https://web.archive.org/web/20060620102957/http://www.cs.yorku.ca/~aaw/Jason/FibonacciHeapAnimation.html Java applet simulation of a Fibonacci heap] * [http://www.mathworks.com/matlabcentral/fileexchange/30072-fibonacci-heap MATLAB implementation of Fibonacci heap] * [http://www.labri.fr/perso/pelegrin/code/#fibonacci De-recursived and memory efficient C implementation of Fibonacci heap] (free/libre software, [http://www.cecill.info/licences/Licence_CeCILL-B_V1-en.html CeCILL-B license]) * [https://github.com/evansenter/f_heap Ruby implementation of the Fibonacci heap (with tests)] * [http://www.cs.princeton.edu/~wayne/cs423/fibonacci/FibonacciHeapAlgorithm.html Pseudocode of the Fibonacci heap algorithm] * {{YouTube|id=6JxvKfSV9Ns|title=Fibonacci Heaps or "How to invent an extremely clever data structure"}} {{Data structures}} {{DEFAULTSORT:Fibonacci Heap}} [[Category:Fibonacci numbers]] [[Category:Heaps (data structures)]] [[Category:Amortized data structures]]