{{Short description|A sorting algorithm which uses the heap data structure}}
{{Use dmy dates|date=March 2022}}
{{Infobox Algorithm|class=[[Sorting algorithm]]
|image=[[File:Sorting heapsort anim.gif]]
|caption=A run of heapsort sorting an array of randomly permuted values. In the first stage of the algorithm the array elements are reordered to satisfy the [[Heap (data structure)|heap property]]. Before the actual sorting takes place, the heap tree structure is shown briefly for illustration.
|data=[[Array data structure|Array]]
|time=<math>O(n\log n)</math>
|average-time=<math>O(n\log n)</math>
|best-time=<math>O(n\log n)</math> (distinct keys)<ref>{{cite journal |journal=Journal of Algorithms |volume=20 |number=11 |pages=205–217 |year=1996 |title=On the Best Case of Heapsort |first1=B. |last1=Bollobás |first2=T. I. |last2=Fenner |first3=A. M. |last3=Frieze |doi=10.1006/jagm.1996.0011 |url=https://www.math.cmu.edu/~af1p/Texfiles/Best.pdf }}</ref><ref>{{cite tech report |title=The Best Case of Heapsort |institution=Princeton University |number=TR-293-90 |last1=Sedgewick |first1=Robert |authorlink1=Robert Sedgewick (computer scientist) |last2=Schaffer |first2=Russel W. |date=October 1990 |url=https://www.cs.princeton.edu/research/techreps/TR-293-90 }}</ref><br />or <math>O(n)</math> (equal keys)
|space=<math>O(n)</math> total, <math>O(1)</math> auxiliary
}}
In [[computer science]], '''heapsort''' is an efficient, [[comparison sort|comparison-based]] [[sorting algorithm]] that reorganizes an input array into a [[heap (data structure)|heap]] (a data structure where each node is greater than its children) and then repeatedly removes the largest node from that heap, placing it at the end of the array, in a manner similar to [[selection sort]].<ref>{{Cite book |last=Cormen |first=Thomas H. |title=Introduction to Algorithms |last2=Leiserson |first2=Charles Eric |last3=Rivest |first3=Ronald L. |last4=Stein |first4=Clifford |date=2022 |publisher=The MIT Press |isbn=978-0-262-04630-5 |edition=4th |location=Cambridge, Massachusetts |pages=170}}</ref>

Although somewhat slower in practice on most machines than a well-implemented [[quicksort]], it has the advantages of a very simple implementation and a more favorable worst-case {{math|[[big O notation|''O''(''n'' log ''n'')]]}} runtime. Most real-world quicksort variants include an implementation of heapsort as a fallback should they detect that quicksort is becoming degenerate. Heapsort is an [[in-place algorithm]], but it is not a [[stable sort]].

Heapsort was invented by [[J. W. J. Williams]] in 1964.<ref>{{harvnb|Williams|1964}}</ref> The same paper also introduced the [[binary heap]] as a useful data structure in its own right.<ref name="brass">{{cite book |first=Peter |last=Brass |title=Advanced Data Structures |publisher=Cambridge University Press |year=2008 |isbn=978-0-521-88037-4 |page=209}}</ref> In the same year, [[Robert W. Floyd]] published an improved version that could sort an array in-place, continuing his earlier research into the [[treesort]] algorithm.<ref name="brass"/>

== Overview ==
The heapsort algorithm can be divided into two phases: heap construction, and heap extraction. The heap is an [[implicit data structure]] which takes no space beyond the array of objects to be sorted; the array is interpreted as a [[Binary tree#Types of binary trees|complete binary tree]] where each array element is a node and each node's parent and child links are defined by simple arithmetic on the array indexes.
For a zero-based array, the root node is stored at index 0, and the nodes linked to node {{code|i}} are
<pre>
iLeftChild(i)  = 2⋅i + 1
iRightChild(i) = 2⋅i + 2
iParent(i)     = floor((i−1) / 2)
</pre>
where the [[floor function]] rounds down to the nearest integer. For a more detailed explanation, see {{section link|Binary heap|Heap implementation}}.
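This index arithmetic translates directly into code. The following is a minimal Python sketch of the three helpers (the names mirror this article's pseudocode; they are not from any standard library):

<syntaxhighlight lang="python">
def i_parent(i: int) -> int:
    """Index of the parent of node i (valid for i >= 1)."""
    return (i - 1) // 2

def i_left_child(i: int) -> int:
    """Index of the left child of node i."""
    return 2 * i + 1

def i_right_child(i: int) -> int:
    """Index of the right child of node i."""
    return 2 * i + 2
</syntaxhighlight>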
This binary tree is a max-heap when each node is greater than or equal to both of its children. Equivalently, each node is less than or equal to its parent. This rule, applied throughout the tree, results in the maximum node being located at the root of the tree.

In the first phase, a heap is built out of the data (see {{section link|Binary heap|Building a heap}}). In the second phase, the heap is converted to a sorted array by repeatedly removing the largest element from the heap (the root of the heap), and placing it at the end of the array. The heap is updated after each removal to maintain the heap property. Once all objects have been removed from the heap, the result is a sorted array.

Heapsort is normally performed in place. During the first phase, the array is divided into an unsorted prefix and a heap-ordered suffix (initially empty). Each step shrinks the prefix and expands the suffix. When the prefix is empty, this phase is complete. During the second phase, the array is divided into a heap-ordered prefix and a sorted suffix (initially empty). Each step shrinks the prefix and expands the suffix. When the prefix is empty, the array is sorted.

== Algorithm ==
The heapsort algorithm begins by rearranging the array into a binary max-heap. The algorithm then repeatedly swaps the root of the heap (the greatest element remaining in the heap) with its last element, which is then declared to be part of the sorted suffix. Then the heap, which was damaged by replacing the root, is repaired so that the greatest element is again at the root. This repeats until only one value remains in the heap.

The steps are:
# Call the {{code|heapify()}} function on the array. This builds a heap from an array in {{math|''O''(''n'')}} operations.
# Swap the first element of the array (the largest element in the heap) with the final element of the heap. Decrease the considered range of the heap by one.
# Call the {{code|siftDown()}} function on the array to move the new first element to its correct place in the heap.
# Go back to step (2) until the remaining array is a single element.

The {{code|heapify()}} operation is run once, and is {{math|''O''(''n'')}} in performance. The {{code|siftDown()}} function is called {{mvar|n}} times and requires {{math|''O''(log ''n'')}} work each time, due to its traversal starting from the root node. Therefore, the performance of this algorithm is {{math|1=''O''(''n'' + ''n'' log ''n'') = ''O''(''n'' log ''n'')}}.

The heart of the algorithm is the {{code|siftDown()}} function. This constructs binary heaps out of smaller heaps, and may be thought of in two equivalent ways:
* given two binary heaps, and a shared parent node which is not part of either heap, merge them into a single larger binary heap; or
* given a "damaged" binary heap, where the max-heap property (no child is greater than its parent) holds everywhere ''except'' possibly between the root node and its children, repair it to produce an undamaged heap.

To establish the max-heap property at the root, up to three nodes must be compared (the root and its two children), and the greatest must be made the root. This is most easily done by finding the greatest child, then comparing that child to the root. There are three cases:
# If there are no children (the two original heaps are empty), the heap property trivially holds, and no further action is required.
# If the root is greater than or equal to the greatest child, the heap property holds and, likewise, no further action is required.
# If the root is less than the greatest child, exchange the two nodes. The heap property now holds at the newly promoted node (it is greater than or equal to both of its children, and in fact greater than any descendant), but may be violated between the newly demoted ex-root and its new children. To correct this, repeat the {{code|siftDown()}} operation on the subtree rooted at the newly demoted ex-root.

The number of iterations in any one {{code|siftDown()}} call is bounded by the height of the tree, which is {{math|1={{floor|log{{sub|2}} ''n''}} = ''O''(log ''n'')}}.

=== Pseudocode ===
The following is a simple way to implement the algorithm in [[pseudocode]]. Arrays are [[Comparison of programming languages (array)|zero-based]] and {{code|swap}} is used to exchange two elements of the array. Movement 'down' means from the root towards the leaves, or from lower indices to higher. Note that during the sort, the largest element is at the root of the heap at {{code|a[0]}}, while at the end of the sort, the largest element is in {{code|a[end]}}.

 '''procedure''' heapsort(a, count) '''is'''
     '''input:''' an unordered array ''a'' of length ''count''
 
     ''(Build the heap in array a so that largest value is at the root)''
     heapify(a, count)
 
     ''(The following loop maintains the [[Loop invariant|invariants]] that a[0:end−1] is a heap, and''
     ''every element a[end:count−1] beyond end is greater than everything before it,''
     ''i.e. a[end:count−1] is in sorted order.)''
     end ← count
     '''while''' end > 1 '''do'''
         ''(the heap size is reduced by one)''
         end ← end − 1
         ''(a[0] is the root and largest value. The swap moves it in front of the sorted elements.)''
         swap(a[end], a[0])
         ''(the swap ruined the heap property, so restore it)''
         siftDown(a, 0, end)

The sorting routine uses two subroutines, {{code|heapify}} and {{code|siftDown}}. The former is the common in-place heap construction routine, while the latter is a common subroutine for implementing {{code|heapify}}.

 ''(Put elements of 'a' in heap order, in-place)''
 '''procedure''' heapify(a, count) '''is'''
     ''(start is initialized to the first leaf node)''
     ''(the last element in a 0-based array is at index count−1; find the parent of that element)''
     start ← iParent(count−1) + 1
 
     '''while''' start > 0 '''do'''
         ''(go to the last non-heap node)''
         start ← start − 1
         ''(sift down the node at index 'start' to the proper place such that all nodes below''
         '' the start index are in heap order)''
         siftDown(a, start, count)
     ''(after sifting down the root all nodes/elements are in heap order)''

 ''(Repair the heap whose root element is at index 'start', assuming the heaps rooted at its children are valid)''
 '''procedure''' siftDown(a, root, end) '''is'''
     '''while''' iLeftChild(root) < end '''do'''    ''(While the root has at least one child)''
         child ← iLeftChild(root)       ''(Left child of root)''
         ''(If there is a right child and that child is greater)''
         '''if''' child+1 < end '''and''' a[child] < a[child+1] '''then'''
             child ← child + 1
         '''if''' a[root] < a[child] '''then'''
             swap(a[root], a[child])
             root ← child               ''(repeat to continue sifting down the child now)''
         '''else'''
             ''(The root holds the largest element. Since we may assume the heaps rooted''
             '' at the children are valid, this means that we are done.)''
             '''return'''
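To make the pseudocode concrete, the following is a direct Python translation (a sketch for illustration; production Python code would normally use the built-in {{code|list.sort()}} or the {{code|heapq}} module instead):

<syntaxhighlight lang="python">
def sift_down(a, root, end):
    """Repair the heap rooted at index root, assuming the heaps rooted
    at its children (all below index end) are already valid."""
    while 2 * root + 1 < end:              # while the root has at least one child
        child = 2 * root + 1               # left child
        # If there is a right child and it is greater, use it instead.
        if child + 1 < end and a[child] < a[child + 1]:
            child += 1
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child                   # continue sifting down the child
        else:
            return                         # heap property holds; done

def heapify(a, count):
    """Put the elements of a[0:count] in max-heap order, in place,
    by sifting down each internal node, last one first."""
    for start in range((count - 2) // 2, -1, -1):
        sift_down(a, start, count)

def heapsort(a):
    count = len(a)
    heapify(a, count)
    for end in range(count - 1, 0, -1):
        a[end], a[0] = a[0], a[end]        # move the max in front of the sorted suffix
        sift_down(a, 0, end)               # restore the heap property

# Usage example, on the values sorted in the Example section below:
data = [6, 5, 3, 1, 8, 7, 2, 4]
heapsort(data)
assert data == [1, 2, 3, 4, 5, 6, 7, 8]
</syntaxhighlight>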
The {{code|heapify}} procedure operates by building small heaps and repeatedly merging them using {{code|siftDown}}. It starts with the leaves, observing that they are trivial but valid heaps by themselves, and then adds parents. Starting with element {{math|''n''/2}} and working backwards, each internal node is made the root of a valid heap by sifting down. The last step is sifting down the first element, after which the entire array obeys the heap property.

To see that this takes {{math|''O''(''n'')}} time, count the worst-case number of {{code|siftDown}} iterations. The last half of the array requires zero iterations, the preceding quarter requires at most one iteration, the eighth before that requires at most two iterations, the sixteenth before that requires at most three, and so on. Looked at a different way, if we assume every {{code|siftDown}} call requires the maximum number of iterations, the first half of the array requires one iteration, the first quarter requires one more (total 2), the first eighth requires yet another (total 3), and so on. This totals {{math|1=''n''/2 + ''n''/4 + ''n''/8 + ⋯ = ''n''⋅(1/2 + 1/4 + 1/8 + ⋯)}}, where the infinite sum is a well-known [[geometric series]] whose sum is {{math|1}}, thus the product is simply {{mvar|n}}.

The above is an approximation. The exact worst-case number of comparisons during the heap-construction phase of heapsort is known to be equal to {{math|2''n'' − 2''s''<sub>2</sub>(''n'') − ''e''<sub>2</sub>(''n'')}}, where {{math|''s''<sub>2</sub>(''n'')}} is the [[Hamming weight|number of 1 bits]] in the binary representation of {{mvar|n}} and {{math|''e''<sub>2</sub>(''n'')}} is the [[Find first set|number of trailing 0 bits]].<ref>{{cite journal | last1 = Suchenek | first1 = Marek A. | title = Elementary Yet Precise Worst-Case Analysis of Floyd's Heap-Construction Program | doi = 10.3233/FI-2012-751 | pages = 75–92 | journal = [[Fundamenta Informaticae]] | volume = 120 | issue = 1 | year = 2012}}</ref><ref>{{cite arXiv | last = Suchenek | first = Marek A. | title = A Complete Worst-Case Analysis of Heapsort with Experimental Verification of Its Results, A manuscript | date = 2015-04-07 | eprint = 1504.01459 | class = cs.DS }}</ref>
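As a quick worked illustration of this formula: for {{math|1=''n'' = 10}} (binary 1010), {{math|1=''s''<sub>2</sub>(10) = 2}} and {{math|1=''e''<sub>2</sub>(10) = 1}}, giving {{math|1=2·10 − 2·2 − 1 = 15}} comparisons. A small Python check (the helper below is ours, for illustration only, not from the cited papers):

<syntaxhighlight lang="python">
def heapify_worst_case_comparisons(n: int) -> int:
    """Suchenek's exact worst case for Floyd's heap construction:
    2n - 2*s2(n) - e2(n), where s2 counts 1 bits and e2 trailing 0 bits."""
    s2 = bin(n).count("1")
    e2 = (n & -n).bit_length() - 1
    return 2 * n - 2 * s2 - e2

assert heapify_worst_case_comparisons(10) == 15   # n = 0b1010
</syntaxhighlight>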
=== Standard implementation ===
Although it is convenient to think of the two phases separately, most implementations combine the two, allowing a single instance of {{code|siftDown}} to be [[Inline expansion|expanded inline]].<ref>{{harvnb|Knuth|1997}}</ref>{{rp|Algorithm H}} Two variables (here, {{code|start}} and {{code|end}}) keep track of the bounds of the heap area. The portion of the array before {{code|start}} is unsorted, while the portion beginning at {{code|end}} is sorted. Heap construction decreases {{code|start}} until it is zero, after which heap extraction decreases {{code|end}} until it is 1 and the array is entirely sorted.

 '''procedure''' heapsort(a, count) '''is'''
     '''input:''' an unordered array ''a'' of length ''count''
 
     start ← floor(count/2)
     end ← count
     '''while''' end > 1 '''do'''
         '''if''' start > 0 '''then'''    ''(Heap construction)''
             start ← start − 1
         '''else'''                      ''(Heap extraction)''
             end ← end − 1
             swap(a[end], a[0])
         ''(The following is siftDown(a, start, end))''
         root ← start
         '''while''' iLeftChild(root) < end '''do'''
             child ← iLeftChild(root)
             ''(If there is a right child and that child is greater)''
             '''if''' child+1 < end '''and''' a[child] < a[child+1] '''then'''
                 child ← child + 1
             '''if''' a[root] < a[child] '''then'''
                 swap(a[root], a[child])
                 root ← child           ''(repeat to continue sifting down the child now)''
             '''else'''
                 '''break'''            ''(return to outer loop)''

== Variations ==
=== Williams' heap construction ===
{{see also|Binary heap#Building a heap}}
The description above uses Floyd's improved heap-construction algorithm, which operates in {{math|''O''(''n'')}} time and uses the same {{code|siftDown}} primitive as the heap-extraction phase. Although this algorithm, being both faster and simpler to program, is used by all practical heapsort implementations, Williams' original algorithm may be easier to understand, and is needed to implement a more general binary heap [[priority queue]].

Rather than merging many small heaps, Williams' algorithm maintains one single heap at the front of the array and repeatedly appends an additional element using a {{code|siftUp}} primitive. Being at the end of the array, the new element is a leaf and has no children to worry about, but may violate the heap property by being greater than its parent. In this case, exchange it with its parent and repeat the test until the parent is greater or there is no parent (we have reached the root). In pseudocode, this is:

 '''procedure''' siftUp(a, end) '''is'''
     '''input:''' ''a is the array, which is heap-ordered up to end−1.''
            ''end is the node to sift up.''
     '''while''' end > 0 '''do'''
         parent ← iParent(end)
         '''if''' a[parent] < a[end] '''then'''    ''(out of max-heap order)''
             swap(a[parent], a[end])
             end ← parent               ''(continue sifting up)''
         '''else'''
             '''return'''

 '''procedure''' heapify(a, count) '''is'''
     ''(start with a trivial single-element heap)''
     end ← 1
 
     '''while''' end < count '''do'''
         ''(sift up the node at index end to the proper place such that''
         '' all nodes above the end index are in heap order)''
         siftUp(a, end)
         end ← end + 1
     ''(after sifting up the last node all nodes are in heap order)''
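For comparison with the Floyd-style Python code earlier, Williams' incremental construction might be rendered as follows (again a sketch mirroring the pseudocode above, with names of our choosing):

<syntaxhighlight lang="python">
def sift_up(a, end):
    """Sift the node at index end up until its parent is greater;
    a[0:end] is assumed to already be a valid max-heap."""
    while end > 0:
        parent = (end - 1) // 2
        if a[parent] < a[end]:             # out of max-heap order
            a[parent], a[end] = a[end], a[parent]
            end = parent                   # continue sifting up
        else:
            return

def heapify_williams(a, count):
    """Build a max-heap by growing a heap prefix one element at a time."""
    for end in range(1, count):            # a[0:1] is a trivial heap
        sift_up(a, end)
</syntaxhighlight>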
[[File:Binary heap bottomup vs topdown.svg|thumb|right|Difference in time complexity between the "siftDown" version and the "siftUp" version.]]
To understand why this algorithm can take asymptotically more time to build a heap ({{math|''O''(''n'' log ''n'')}} vs. {{math|''O''(''n'')}} worst case), note that in Floyd's algorithm, almost all the calls to {{code|siftDown}} operations apply to very ''small'' heaps. Half the heaps are height-1 trivial heaps and can be skipped entirely, half of the remainder are height-2, and so on. Only two calls are on heaps of size {{math|''n''/2}}, and only one {{code|siftDown}} operation is done on the full {{mvar|n}}-element heap. The overall average {{code|siftDown}} operation takes {{math|''O''(1)}} time.

In contrast, in Williams' algorithm most of the calls to {{code|siftUp}} are made on ''large'' heaps of height {{math|''O''(log ''n'')}}. Half of the calls are made with a heap size of {{math|''n''/2}} or more, three-quarters are made with a heap size of {{math|''n''/4}} or more, and so on. Although the ''average'' number of steps is similar to Floyd's technique,{{r|Bojesen00|p=3}} pre-sorted input will cause the worst case: each added node is sifted up to the root, so the average call to {{code|siftUp}} will require approximately {{math|1=(log{{sub|2}} ''n'' − 1)/2 + (log{{sub|2}} ''n'' − 2)/4 + (log{{sub|2}} ''n'' − 3)/8 + ⋯}} {{math|1== log{{sub|2}} ''n'' − (1 + 1/2 + 1/4 + ⋯)}} {{math|1== log{{sub|2}} ''n'' − 2}} iterations.

Because it is dominated by the second heap-extraction phase, the heapsort algorithm itself has {{math|''O''(''n'' log ''n'')}} time complexity using either version of heapify.

=== Bottom-up heapsort ===
Bottom-up heapsort is a variant that reduces the number of comparisons required by a significant factor. While ordinary "top-down" heapsort requires {{math|2''n'' log<sub>2</sub> ''n'' + ''O''(''n'')}} comparisons worst-case and on average,{{r|Wegener}} the bottom-up variant requires {{math|''n'' log<sub>2</sub> ''n'' + ''O''(1)}} comparisons on average,{{r|Wegener}} and {{math|1.5''n'' log<sub>2</sub> ''n'' + ''O''(''n'')}} in the worst case.{{r|fleischer}} If comparisons are cheap (e.g. integer keys) then the difference is unimportant,{{r|Melhorn}} as top-down heapsort compares values that have already been loaded from memory. If, however, comparisons require a [[function call]] or other complex logic, then bottom-up heapsort is advantageous.

This is accomplished by using a more elaborate {{code|siftDown}} procedure. The change improves the linear-time heap-building phase slightly,{{r|McDiarmid}} but is more significant in the second phase. Like top-down heapsort, each iteration of the second phase extracts the top of the heap, {{code|a[0]}}, and fills the gap it leaves with {{code|a[end]}}, then sifts this latter element down the heap. But this element came from the lowest level of the heap, meaning it is one of the smallest elements in the heap, so the sift-down will likely take many steps to move it back down.<ref name=MacKay05>{{cite web |title=Heapsort, Quicksort, and Entropy |first=David J. C. |last=MacKay |author-link=David J. C. MacKay |date=December 2005 |url=https://inference.org.uk/mackay/sorting/sorting.html |access-date=2021-02-12 }}</ref> In top-down heapsort, each step of {{code|siftDown}} requires two comparisons, to find the maximum of three elements: the new node and its two children.

Bottom-up heapsort conceptually replaces the root with a value of {{math|−∞}} and sifts it down using only one comparison per level (since no child can possibly be less than {{math|−∞}}) until the leaves are reached, then replaces the {{math|−∞}} with the correct value and sifts it ''up'' (again, using one comparison per level) until the correct position is found. This places the root in the same location as top-down {{code|siftDown}}, but fewer comparisons are required to find that location. For any single {{code|siftDown}} operation, the bottom-up technique is advantageous if the number of downward movements is at least {{frac|2|3}} of the height of the tree (when the number of comparisons is {{frac|4|3}} times the height for both techniques), and it turns out that this is more than true on average, even for worst-case inputs.<ref name="fleischer">{{cite journal |last=Fleischer |first=Rudolf |title=A tight lower bound for the worst case of Bottom-Up-Heapsort |journal=Algorithmica |volume=11 |issue=2 |date=February 1994 |pages=104–115 |doi=10.1007/bf01182770 |url=http://staff.gutech.edu.om/~rudolf/Paper/buh_algorithmica94.pdf |hdl=11858/00-001M-0000-0014-7B02-C |s2cid=21075180 |hdl-access=free }} Also available as {{cite tech report |last=Fleischer |first=Rudolf |title=A tight lower bound for the worst case of Bottom-Up-Heapsort |date=April 1991 |institution=[[Max Planck Institute for Informatics|MPI-INF]] |number=MPI-I-91-104 |url=http://pubman.mpdl.mpg.de/pubman/item/escidoc:1834997:3/component/escidoc:2463941/MPI-I-94-104.pdf}}</ref>
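The {{frac|2|3}} threshold follows from a back-of-the-envelope count (a sketch of the argument, not taken from the cited paper): if the sifted element comes to rest at depth {{mvar|d}} in a heap of height {{mvar|h}}, top-down {{code|siftDown}} performs about {{math|2''d''}} comparisons (two per level descended), while the bottom-up technique performs about {{mvar|h}} comparisons descending to a leaf plus {{math|''h'' − ''d''}} climbing back up. The two costs are equal when
<math display="block">2d = h + (h - d) \iff d = \tfrac{2}{3}h,</math>
at which point both techniques perform {{math|4''h''/3}} comparisons; for deeper final positions, bottom-up wins.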
A naïve implementation of this conceptual algorithm would cause some redundant data copying, as the sift-up portion undoes part of the sifting down. A practical implementation searches downward for a leaf where {{math|−∞}} ''would'' be placed, then upward for where the root ''should'' be placed. Finally, the upward traversal continues to the root's starting position, performing no more comparisons but exchanging nodes to complete the necessary rearrangement. This optimized form performs the same number of exchanges as top-down {{code|siftDown}}.

Because it goes all the way to the bottom and then comes back up, it is called '''heapsort with bounce''' by some authors.<ref>{{cite book |first1=Bernard |last1=Moret |author-link1=Bernard Moret |first2=Henry D. |last2=Shapiro |title=Algorithms from P to NP Volume 1: Design and Efficiency |chapter=8.6 Heapsort |page=528 |publisher=Benjamin/Cummings |year=1991 |isbn=0-8053-8008-6 |quote=For lack of a better name we call this enhanced program 'heapsort with bounce.{{'-}}}}</ref>
 '''function''' leafSearch(a, i, end) '''is'''
     j ← i
     '''while''' iRightChild(j) < end '''do'''
         ''(Determine which of j's two children is the greater)''
         '''if''' a[iRightChild(j)] > a[iLeftChild(j)] '''then'''
             j ← iRightChild(j)
         '''else'''
             j ← iLeftChild(j)
     ''(At the last level, there might be only one child)''
     '''if''' iLeftChild(j) < end '''then'''
         j ← iLeftChild(j)
     '''return''' j

The return value of <code>leafSearch</code> is used in the modified <code>siftDown</code> routine:<ref name="fleischer"/>

 '''procedure''' siftDown(a, i, end) '''is'''
     j ← leafSearch(a, i, end)
     '''while''' a[i] > a[j] '''do'''
         j ← iParent(j)
     '''while''' j > i '''do'''
         swap(a[i], a[j])
         j ← iParent(j)
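In Python, this pair of routines might be rendered as follows (a sketch under the same zero-based conventions as the earlier examples; it is our translation, not taken from the cited paper):

<syntaxhighlight lang="python">
def leaf_search(a, i, end):
    """Follow the path of greater children from i down to a leaf."""
    j = i
    while 2 * j + 2 < end:                 # while j has two children
        # descend to the greater of the two children
        j = 2 * j + 2 if a[2 * j + 2] > a[2 * j + 1] else 2 * j + 1
    if 2 * j + 1 < end:                    # at most one child on the last level
        j = 2 * j + 1
    return j

def sift_down_bottom_up(a, i, end):
    """Bottom-up sift-down: descend to a leaf using one comparison per
    level, climb back up to where a[i] belongs, then rotate the path."""
    j = leaf_search(a, i, end)
    while a[i] > a[j]:                     # climb until a[j] >= a[i]
        j = (j - 1) // 2
    while j > i:                           # rotate a[i] down into place
        a[i], a[j] = a[j], a[i]
        j = (j - 1) // 2
</syntaxhighlight>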
Bottom-up heapsort was announced as beating quicksort (with median-of-three pivot selection) on arrays of size ≥ 16000.{{refn|name=Wegener|{{cite journal |last=Wegener |first=Ingo |author-link=Ingo Wegener |title={{sc|Bottom-Up Heapsort}}, a new variant of {{sc|Heapsort}} beating, on an average, {{sc|Quicksort}} (if {{mvar|n}} is not very small) |journal=Theoretical Computer Science |volume=118 |issue=1 |date=13 September 1993 |pages=81–98 |doi=10.1016/0304-3975(93)90364-y |doi-access=free |url=https://core.ac.uk/download/pdf/82350265.pdf}} Although this is a reprint of work first published in 1990 (at the Mathematical Foundations of Computer Science conference), the technique was published by Carlsson in 1987.<ref name=Carlsson/>}} A 2008 re-evaluation of this algorithm showed it to be no faster than top-down heapsort for integer keys, presumably because modern [[branch prediction]] nullifies the cost of the predictable comparisons that bottom-up heapsort manages to avoid.<ref name=Melhorn>{{cite book |last1=Mehlhorn |first1=Kurt |author1-link=Kurt Mehlhorn |first2=Peter |last2=Sanders |author2-link=Peter Sanders (computer scientist) |title=Algorithms and Data Structures: The Basic Toolbox |chapter=Priority Queues |publisher=Springer |year=2008 |page=142 |isbn=978-3-540-77977-3 |url=http://people.mpi-inf.mpg.de/~mehlhorn/Toolbox.html |chapter-url=http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/PriorityQueues.pdf#page=16}}</ref>

A further refinement does a [[binary search]] in the upward search, and sorts in a worst case of {{math|(''n''+1)(log<sub>2</sub>(''n''+1) + log<sub>2</sub> log<sub>2</sub>(''n''+1) + 1.82) + ''O''(log<sub>2</sub> ''n'')}} comparisons, approaching [[Comparison sort#Number of comparisons required to sort a list|the information-theoretic lower bound]] of {{math|''n'' log<sub>2</sub> ''n'' − 1.4427''n''}} comparisons.<ref name=Carlsson>{{cite journal |first=Svante |last=Carlsson |title=A variant of heapsort with almost optimal number of comparisons |journal=Information Processing Letters |volume=24 |issue=4 |pages=247–250 |date=March 1987 |doi=10.1016/0020-0190(87)90142-6 |s2cid=28135103 |url=https://pdfs.semanticscholar.org/caec/6682ffd13c6367a8c51b566e2420246faca2.pdf |archive-url=https://web.archive.org/web/20161227055904/https://pdfs.semanticscholar.org/caec/6682ffd13c6367a8c51b566e2420246faca2.pdf |url-status=dead |archive-date=2016-12-27 }}</ref>

A variant that uses two extra bits per internal node ({{math|''n''−1}} bits total for an {{mvar|n}}-element heap) to cache information about which child is greater (two bits are required to store three cases: left, right, and unknown)<ref name=McDiarmid>{{Cite journal |title=Building heaps fast |last1=McDiarmid |first1=C. J. H. |last2=Reed |first2=B. A. |date=September 1989 |journal=Journal of Algorithms |volume=10 |issue=3 |pages=352–365 |doi=10.1016/0196-6774(89)90033-3 |url=http://cgm.cs.mcgill.ca/~breed/2016COMP610/BUILDINGHEAPSFAST.pdf}}</ref> uses less than {{math|''n'' log<sub>2</sub> ''n'' + 1.1''n''}} compares.<ref>{{cite journal |title=The worst-case complexity of McDiarmid and Reed's variant of {{sc|Bottom-Up Heapsort}} is less than ''n'' log ''n'' + 1.1''n'' |first=Ingo |last=Wegener |author-link=Ingo Wegener |journal=Information and Computation |volume=97 |issue=1 |pages=86–96 |date=March 1992 |doi=10.1016/0890-5401(92)90005-Z |doi-access=free}}</ref>

=== Other variations ===
*Ternary heapsort uses a [[ternary heap]] instead of a binary heap; that is, each element in the heap has three children. It is more complicated to program, but performs a constant factor fewer swap and comparison operations. This is because each sift-down step in a ternary heap requires three comparisons and one swap, whereas in a binary heap, two comparisons and one swap are required. Two levels in a ternary heap cover 3<sup>2</sup> = 9 elements, doing more work with the same number of comparisons as three levels in the binary heap, which cover only 2<sup>3</sup> = 8.{{Citation needed|date=September 2014}} This is primarily of academic interest, or as a student exercise,<ref>{{cite book |title=Data Structures Using Pascal |chapter=Chapter 8: Sorting |first1=Aaron M. |last1=Tenenbaum |first2=Moshe J. |last2=Augenstein |year=1981 |page=405 |publisher=Prentice-Hall |isbn=0-13-196501-8 |quote=<!--Exercise 8. Define an almost complete ternary tree as a tree in which every node has at most three sons and such that the nodes can be numbered from 1 to n so that the sons of node[i] are node[3*i−1], node[3*i] and node[3*i+1]. Define a ternary heap as an almost complete ternary tree in which the content of each node is greater than or equal to the contents of all its descendants. -->Write a sorting routine similar to the heapsort except that it uses a ternary heap. }}</ref> as the additional complexity is not worth the minor savings, and bottom-up heapsort beats both.
*Memory-optimized heapsort{{r|LaMarca99|p=87}} improves heapsort's [[locality of reference]] by increasing the number of children even more. This increases the number of comparisons, but because all children are stored consecutively in memory, reduces the number of [[cache line]]s accessed during heap traversal, a net performance improvement. (The index arithmetic for such multi-child layouts is sketched after this list.)
*The standard implementation of Floyd's heap-construction algorithm causes a large number of [[cache miss]]es once the size of the data exceeds that of the [[CPU cache]].{{r|LaMarca99|p=87}} Better performance on large data sets can be obtained by merging in [[depth-first]] order, combining subheaps as soon as possible, rather than combining all subheaps on one level before proceeding to the one above.<ref name=Bojesen00>{{cite journal |title=Performance Engineering Case Study: Heap Construction |first1=Jesper |last1=Bojesen |first2=Jyrki |last2=Katajainen |first3=Maz |last3=Spork |journal=ACM Journal of Experimental Algorithmics |date=2000 |volume=5 |pages=15–es |number=15 |doi=10.1145/351827.384257 |citeseerx=10.1.1.35.3248 |s2cid=30995934 |url=http://hjemmesider.diku.dk/~jyrki/Paper/katajain.ps |format=PostScript}} [https://www.semanticscholar.org/paper/Performance-Engineering-Case-Study-Heap-Bojesen-Katajainen/6f4ada5912c1da64e16453d67ec99c970173fb5b Alternate PDF source].</ref><ref>{{cite conference |chapter=In-place Heap Construction with Optimized Comparisons, Moves, and Cache Misses |first1=Jingsen |last1=Chen |first2=Stefan |last2=Edelkamp |first3=Amr |last3=Elmasry |first4=Jyrki |last4=Katajainen |title=Mathematical Foundations of Computer Science 2012 |series=Lecture Notes in Computer Science |doi=10.1007/978-3-642-32589-2_25 |conference=37th international conference on Mathematical Foundations of Computer Science |pages=259–270 |location=Bratislava, Slovakia |date=27–31 August 2012 |volume=7464 |isbn=978-3-642-32588-5 |s2cid=1462216 |chapter-url=https://pdfs.semanticscholar.org/9cc6/36d7998d58b3937ba0098e971710ff039612.pdf |archive-url=https://web.archive.org/web/20161229031307/https://pdfs.semanticscholar.org/9cc6/36d7998d58b3937ba0098e971710ff039612.pdf |url-status=dead |archive-date=29 December 2016 }} See particularly Fig. 3.</ref>
*Out-of-place heapsort<ref>{{cite conference |title=QuickHeapsort, an efficient mix of classical sorting algorithms |first1=Domenico |last1=Cantone |first2=Gianluca |last2=Cincotti |conference=4th Italian Conference on Algorithms and Complexity |series=Lecture Notes in Computer Science |volume=1767 |pages=150–162 |isbn=3-540-67159-5 |date=1–3 March 2000 |location=Rome |url=https://archive.org/details/springer_10.1007-3-540-46521-9 }}</ref><ref>{{cite journal |title=QuickHeapsort, an efficient mix of classical sorting algorithms |first1=Domenico |last1=Cantone |first2=Gianluca |last2=Cincotti |journal=Theoretical Computer Science |volume=285 |issue=1 |pages=25–42 |date=August 2002 |doi=10.1016/S0304-3975(01)00288-2 |doi-access=free |zbl=1016.68042 |url=https://core.ac.uk/download/pdf/81957449.pdf }}</ref>{{r|MacKay05}} improves on bottom-up heapsort by eliminating the worst case, guaranteeing {{math|''n'' log<sub>2</sub> ''n'' + ''O''(''n'')}} comparisons. When the maximum is taken, rather than fill the vacated space with an unsorted data value, fill it with a {{math|−∞}} sentinel value, which never "bounces" back up. It turns out that this can be used as a primitive in an in-place (and non-recursive) "QuickHeapsort" algorithm.<ref>{{cite journal |title=QuickHeapsort: Modifications and improved analysis |first1=Volker |last1=Diekert |first2=Armin |last2=Weiß |journal=Theory of Computing Systems |volume=59 |issue=2 |pages=209–230 |date=August 2016 |doi=10.1007/s00224-015-9656-y |arxiv=1209.4214 |s2cid=792585 }}</ref> First, a quicksort-like partitioning pass is performed, but with the order of the partitioned data in the array reversed. Suppose ([[without loss of generality]]) that the smaller partition is the one greater than the pivot, which should go at the end of the array, but the reversed partitioning step places it at the beginning. Form a heap out of the smaller partition and do out-of-place heapsort on it, exchanging the extracted maxima with values from the end of the array. These are less than the pivot, meaning less than any value in the heap, so they serve as {{math|−∞}} sentinel values. Once the heapsort is complete (and the pivot moved to just before the now-sorted end of the array), the order of the partitions has been reversed, and the larger partition at the beginning of the array may be sorted in the same way. (Because there is no non-[[tail recursion]], this also eliminates quicksort's {{math|''O''(log ''n'')}} stack usage.)
*The [[smoothsort]] algorithm<ref>{{Cite EWD|796a|Smoothsort – an alternative to sorting in situ}}</ref> is a variation of heapsort developed by [[Edsger W. Dijkstra]] in 1981. Like heapsort, smoothsort's upper bound is {{math|''O''(''n'' log ''n'')}}. The advantage of smoothsort is that it comes closer to {{math|''O''(''n'')}} time if the [[Adaptive sort|input is already sorted to some degree]], whereas heapsort averages {{math|''O''(''n'' log ''n'')}} regardless of the initial sorted state. Due to its complexity, smoothsort is rarely used.{{citation needed|date=November 2016}}
*Levcopoulos and Petersson<ref>{{cite book | last1 = Levcopoulos | first1 = Christos | last2 = Petersson | first2 = Ola | chapter = Heapsort – Adapted for Presorted Files | location = London, UK | pages = 499–509 | publisher = Springer-Verlag | series = Lecture Notes in Computer Science | title = WADS '89: Proceedings of the Workshop on Algorithms and Data Structures | volume = 382 | doi = 10.1007/3-540-51542-9_41 | year = 1989 | isbn = 978-3-540-51542-5 }} {{Q|56049336}}.</ref> describe a variation of heapsort based on a heap of [[Cartesian tree]]s. First, a Cartesian tree is built from the input in {{math|''O''(''n'')}} time, and its root is placed in a 1-element binary heap. Then we repeatedly extract the minimum from the binary heap, output the tree's root element, and add its left and right children (if any), which are themselves Cartesian trees, to the binary heap.<ref>{{cite web |title=CartesianTreeSort.hh |website=Archive of Interesting Code |url=http://www.keithschwarz.com/interesting/code/?dir=cartesian-tree-sort |first=Keith |last=Schwarz |date=27 December 2010 |access-date=2019-03-05 }}</ref> As they show, if the input is already nearly sorted, the Cartesian trees will be very unbalanced, with few nodes having left and right children, resulting in the binary heap remaining small, and allowing the algorithm to sort more quickly than {{math|''O''(''n'' log ''n'')}} for inputs that are already nearly sorted.
* Several variants such as [[Weak heap#Weak-heap sort|weak heapsort]] require {{math|''n'' log<sub>2</sub> ''n'' + ''O''(''n'')}} comparisons in the worst case, close to the theoretical minimum, using one extra bit of state per node. While this extra bit makes the algorithms not truly in-place, if space for it can be found inside the element, these algorithms are simple and efficient,{{r|Bojesen00|p=40}} but they are still slower than binary heaps if key comparisons are cheap enough (e.g. integer keys) that a constant factor does not matter.<ref name=Kat2014-11-14P>{{cite conference |title=Seeking for the best priority queue: Lessons learnt |first=Jyrki |last=Katajainen |date=23 September 2013 |conference=Algorithm Engineering (Seminar 13391) |location=Dagstuhl |pages=19–20, 24 |url=http://hjemmesider.diku.dk/~jyrki/Myris/Kat2013-09-23P.html}}</ref>
* Katajainen's "ultimate heapsort" requires no extra storage, performs {{math|''n'' log<sub>2</sub> ''n'' + ''O''(''n'')}} comparisons, and a similar number of element moves.<ref>{{cite conference |title=The Ultimate Heapsort |date=2–3 February 1998 |first=Jyrki |last=Katajainen |conference=Computing: the 4th Australasian Theory Symposium |url=http://hjemmesider.diku.dk/~jyrki/Myris/Kat1998C.html |journal=Australian Computer Science Communications |volume=20 |issue=3 |pages=87–96 |location=Perth }}</ref> It is, however, even more complex and not justified unless comparisons are very expensive.
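For the multi-child variants described in the list above (ternary and memory-optimized heapsort), only the index arithmetic changes. A minimal Python sketch of a generalized {{mvar|d}}-ary layout (our illustration; the function names are hypothetical):

<syntaxhighlight lang="python">
def d_ary_parent(i: int, d: int) -> int:
    """Index of the parent of node i in a d-ary implicit heap."""
    return (i - 1) // d

def d_ary_child(i: int, k: int, d: int) -> int:
    """Index of the k-th child (0 <= k < d) of node i."""
    return d * i + k + 1
</syntaxhighlight>

With {{math|1=''d'' = 2}} these reduce to the binary formulas given in the Overview; with {{math|1=''d'' = 3}} they give the ternary-heap numbering.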
== Comparison with other sorts ==
Heapsort primarily competes with [[quicksort]], another very efficient general purpose in-place unstable comparison-based sort algorithm.

Heapsort's primary advantages are its simple, non-[[Recursion (computer science)|recursive]] code, minimal auxiliary storage requirement, and reliably good performance: its best and worst cases are within a small constant factor of each other, and of the [[Comparison sort#Number of comparisons required to sort a list|theoretical lower bound on comparison sorts]]. While it cannot do better than {{math|''O''(''n'' log ''n'')}} for pre-sorted inputs, it does not suffer from quicksort's {{math|''O''(''n''<sup>2</sup>)}} worst case, either. Real-world quicksort implementations use a variety of [[heuristic]]s to avoid the worst case, but that makes their implementation far more complex, and implementations such as [[introsort]] and pattern-defeating quicksort<ref>{{cite arXiv |first=Orson R. L. |last=Peters |title=Pattern-defeating Quicksort |date=9 Jun 2021 |eprint=2106.05123 |class=cs.DS }}<br/>{{cite web |first=Orson R. L. |last=Peters |url=https://github.com/orlp/pdqsort |title=pdqsort |website=Github |access-date=2023-10-02 }}</ref> use heapsort as a last-resort fallback if they detect degenerate behaviour. Thus, their worst-case performance is slightly worse than if heapsort had been used from the beginning.

Heapsort's primary disadvantages are its poor [[locality of reference]] and its inherently serial nature; the accesses to the implicit tree are widely scattered and mostly random, and there is no straightforward way to convert it to a [[parallel algorithm]].

The worst-case performance guarantees make heapsort popular in [[real-time computing]], and in systems concerned with maliciously chosen inputs,<ref>{{cite web |work=Data Structures and Algorithms |title=Comparing Quick and Heap Sorts |type=Lecture notes |first=John |last=Morris |publisher=University of Western Australia<!--Author and web pages relocated to Auckland since--> |year=1998 |url=https://www.cs.auckland.ac.nz/software/AlgAnim/qsort3.html |access-date=2021-02-12 }}</ref> such as the Linux kernel.<ref>{{cite web |title=lib/sort.c |website=Linux kernel source |url=https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/lib/sort.c#n205}}</ref> The combination of a small implementation and dependably [[Principle of good enough|"good enough"]] performance makes it popular in [[embedded system]]s, and generally any application where sorting is not a [[Bottleneck (software)|performance bottleneck]]. {{abbr|E.g.|Exempli gratiā; for example}}, heapsort is ideal for sorting a list of [[filename]]s for display, but a [[database management system]] would probably want a more aggressively optimized sorting algorithm.

A well-implemented quicksort is usually 2–3 times faster than heapsort.<ref name=LaMarca99>{{cite journal |title=The Influence of Caches on the Performance of Sorting |first1=Anthony |last1=LaMarca |first2=Richard E. |last2=Ladner |author-link2=Richard E. Ladner |journal=Journal of Algorithms |volume=31 |issue=1 |date=April 1999 |pages=66–104 |doi=10.1006/jagm.1998.0985 |citeseerx=10.1.1.456.3616 <!--10.1.1.155.4400 is the 26-page conference version (Fig. 5c on p. 23) and 10.1.1.84.8929 is a shorter 10-page version (Fig. 2c) --> |s2cid=206567217 |url=https://www.cs.auckland.ac.nz/~mcw/Teaching/refs/sorting/ladner-lamarca-cach-sorting.pdf }} See particularly figure 9c on p. 98.</ref><ref>{{cite web |title=Sorting by generating the sorting permutation, and the effect of caching on sorting |first=Arne |last=Maus |author-link=:no:Arne Maus |date=14 May 2014 |url=https://www.researchgate.net/publication/228620592 }} See Fig. 1 on p. 6.</ref> Although quicksort requires fewer comparisons, this is a minor factor. (Results claiming twice as many comparisons are measuring the top-down version; see {{section link||Bottom-up heapsort}}.) The main advantage of quicksort is its much better locality of reference: partitioning is a linear scan with good spatial locality, and the recursive subdivision has good temporal locality. With additional effort, quicksort can also be implemented in mostly [[branch-free code]],<ref>{{cite journal |title=BlockQuicksort: Avoiding Branch Mispredictions in Quicksort |first1=Stefan |last1=Edelkamp |first2=Armin |last2=Weiß |author-link2=Armin Weiss |journal=Journal of Experimental Algorithmics |volume=24 |article-number=1.4 |doi=10.1145/3274660 |date=30 January 2019 |arxiv=1604.06697 |url=https://drops.dagstuhl.de/storage/00lipics/lipics-vol057-esa2016/LIPIcs.ESA.2016.38/LIPIcs.ESA.2016.38.pdf }}</ref><ref>{{cite conference |title=Simple and Fast BlockQuicksort using Lomuto's Partitioning Scheme |first1=Martin |last1=Aumüller |first2=Nikolaj |last2=Hass |conference=Twenty-First Workshop on Algorithm Engineering and Experiments (ALENEX) |date=7–8 January 2019 |location=San Diego |doi=10.1137/1.9781611975499.2 |doi-access=free |arxiv=1810.12047 |url=https://epubs.siam.org/doi/pdf/10.1137/1.9781611975499.2 }}</ref> and multiple CPUs can be used to sort subpartitions in parallel. Thus, quicksort is preferred when the additional performance justifies the implementation effort.
The other major {{math|''O''(''n'' log ''n'')}} sorting algorithm is [[merge sort]], but that rarely competes directly with heapsort because it is not in-place. Merge sort's requirement for {{math|Ω(''n'')}} extra space (roughly half the size of the input<!--Known techniques for reducing this by a small constant factor omitted for simplicity-->) is usually prohibitive except in the situations where merge sort has a clear advantage:
* When a [[stable sort]] is required
* When taking advantage of (partially) pre-sorted input
* Sorting [[linked list]]s (in which case merge sort requires minimal extra space)
* Parallel sorting; merge sort parallelizes even better than quicksort and can easily achieve close to [[linear speedup]]
* [[External sorting]]; merge sort has excellent locality of reference

== Example ==
The examples sort the values { 6, 5, 3, 1, 8, 7, 2, 4 } in increasing order using both heap-construction algorithms. The elements being compared are shown in a '''bold''' font. There are typically two when sifting up, and three when sifting down, although there may be fewer when the top or bottom of the tree is reached.

=== Heap construction (Williams' algorithm) ===
[[File:Heapsort-example.gif|350px|thumb|right|An example of heapsort using Williams' heap-construction algorithm.]]
{{Table alignment}}
{| class="wikitable col2right"
|-
! Heap !! Unsorted !! Swap elements
|-
| 6 || 5, 3, 1, 8, 7, 2, 4 ||
|-
| '''6''', '''5''' || 3, 1, 8, 7, 2, 4 ||
|-
| '''6''', 5, '''3''' || 1, 8, 7, 2, 4 ||
|-
| 6, '''5''', 3, '''1''' || 8, 7, 2, 4 ||
|-
| 6, '''5''', 3, 1, '''8''' || 7, 2, 4 || 5 ↔ 8
|-
| '''6''', '''8''', 3, 1, 5 || 7, 2, 4 || 6 ↔ 8
|-
| '''8''', 6, 3, 1, 5 || 7, 2, 4 ||
|-
| 8, 6, '''3''', 1, 5, '''7''' || 2, 4 || 3 ↔ 7
|-
| '''8''', 6, '''7''', 1, 5, 3 || 2, 4 ||
|-
| 8, 6, '''7''', 1, 5, 3, '''2''' || 4 ||
|-
| 8, 6, 7, '''1''', 5, 3, 2, '''4''' || || 1 ↔ 4
|-
| 8, '''6''', 7, '''4''', 5, 3, 2, 1 || ||
|}

=== Heap construction (Floyd's algorithm) ===
{| class="wikitable col2right"
|-
! Unsorted !! Heap !! Swap elements
|-
| 6, 5, 3, 1 || 8, 7, 2, 4 ||
|-
| 6, 5, 3 || '''1''', 8, 7, 2, '''4''' || 1 ↔ 4
|-
| 6, 5, 3 || 4, 8, 7, 2, '''1''' ||
|-
| 6, 5 || '''3''', 4, 8, '''7''', '''2''', 1 || 3 ↔ 7
|-
| 6, 5 || 7, 4, 8, '''3''', 2, 1 ||
|-
| 6 || '''5''', 7, '''4''', '''8''', 3, 2, 1 || 5 ↔ 8
|-
| 6 || 8, 7, 4, '''5''', 3, 2, 1 ||
|-
| || '''6''', '''8''', '''7''', 4, 5, 3, 2, 1 || 6 ↔ 8
|-
| || 8, '''6''', 7, '''4''', '''5''', 3, 2, 1 ||
|}

=== Heap extraction ===
{| class="wikitable col2right"
|-
! Heap !! Sorted array !! Swap elements !! Details
|-
| '''8''', 6, 7, 4, 5, 3, 2, '''1''' || || 8 ↔ 1 || Add 8 to the sorted array by swapping it with 1
|-
| '''1''', '''6''', '''7''', 4, 5, 3, 2 || 8 || 1 ↔ 7 || Swap 1 and 7 as they are not in order in the heap
|-
| 7, 6, '''1''', 4, 5, '''3''', '''2''' || 8 || 1 ↔ 3 || Swap 1 and 3 as they are not in order in the heap
|-
| 7, 6, 3, 4, 5, '''1''', 2 || 8 || || 1 has no children; siftDown complete
|-
| '''7''', 6, 3, 4, 5, 1, '''2''' || 8 || 7 ↔ 2 || Add 7 to the sorted array by swapping it with 2
|-
| '''2''', '''6''', '''3''', 4, 5, 1 || 7, 8 || 2 ↔ 6 || Swap 2 and 6 as they are not in order in the heap
|-
| 6, '''2''', 3, '''4''', '''5''', 1 || 7, 8 || 2 ↔ 5 || Swap 2 and 5 as they are not in order in the heap
|-
| 6, 5, 3, 4, '''2''', 1 || 7, 8 || || 2 has no children; siftDown complete
|-
| '''6''', 5, 3, 4, 2, '''1''' || 7, 8 || 6 ↔ 1 || Add 6 to the sorted array by swapping it with 1
|-
| '''1''', '''5''', '''3''', 4, 2 || 6, 7, 8 || 1 ↔ 5 || Swap 1 and 5 as they are not in order in the heap
|-
| 5, '''1''', 3, '''4''', '''2''' || 6, 7, 8 || 1 ↔ 4 || Swap 1 and 4 as they are not in order in the heap
|-
| 5, 4, 3, '''1''', 2 || 6, 7, 8 || || 1 has no children; siftDown complete
|-
| '''5''', 4, 3, 1, '''2''' || 6, 7, 8 || 5 ↔ 2 || Add 5 to the sorted array by swapping it with 2
|-
| '''2''', '''4''', '''3''', 1 || 5, 6, 7, 8 || 2 ↔ 4 || Swap 2 and 4 as they are not in order in the heap
|-
| 4, '''2''', 3, '''1''' || 5, 6, 7, 8 || || 2 is greater than 1; siftDown complete
|-
| '''4''', 2, 3, '''1''' || 5, 6, 7, 8 || 4 ↔ 1 || Add 4 to the sorted array by swapping it with 1
|-
| '''1''', '''2''', '''3''' || 4, 5, 6, 7, 8 || 1 ↔ 3 || Swap 1 and 3 as they are not in order in the heap
|-
| 3, 2, '''1''' || 4, 5, 6, 7, 8 || || 1 has no children; siftDown complete
|-
| '''3''', 2, '''1''' || 4, 5, 6, 7, 8 || 3 ↔ 1 || Add 3 to the sorted array by swapping it with 1
|-
| '''1''', '''2''' || 3, 4, 5, 6, 7, 8 || 1 ↔ 2 || Swap 1 and 2 as they are not in order in the heap
|-
| 2, '''1''' || 3, 4, 5, 6, 7, 8 || || 1 has no children; siftDown complete
|-
| '''2''', '''1''' || 3, 4, 5, 6, 7, 8 || 2 ↔ 1 || Add 2 to the sorted array by swapping it with 1
|-
| 1 || 2, 3, 4, 5, 6, 7, 8 || || Add 1 to the sorted array
|-
| || 1, 2, 3, 4, 5, 6, 7, 8 || || {{success|Completed}}
|}

== Notes ==
{{reflist}}

== References ==
* {{Cite journal |first=J. W. J. |last=Williams |author-link=J. W. J. Williams |title=Algorithm 232 – Heapsort |year=1964 |journal=[[Communications of the ACM]] |volume=7 |issue=6 |pages=347–348 |doi=10.1145/512274.512284}}
* {{Cite journal |first=Robert W. |last=Floyd |author-link=Robert W. Floyd |title=Algorithm 245 – Treesort 3 |year=1964 |journal=[[Communications of the ACM]] |volume=7 |issue=12 |page=701 |doi=10.1145/355588.365103 |s2cid=52864987 |doi-access=free }}
* {{Cite journal |first=Svante |last=Carlsson |author-link=:sv:Svante Carlsson |title=Average-case results on heapsort |year=1987 |journal=[[BIT Numerical Mathematics]] |volume=27 |issue=1 |pages=2–17 |doi=10.1007/bf01937350 |s2cid=31450060}}
* {{Cite book |first=Donald |last=Knuth |author-link=Donald Knuth |title=[[The Art of Computer Programming]] |volume=3: Sorting and Searching |edition=3rd |publisher=Addison-Wesley |year=1997 |isbn=978-0-201-89685-5 |pages=144–155 |chapter=§5.2.3, Sorting by Selection }}
* {{Cite book |last1=Cormen |first1=Thomas H. |author-link1=Thomas H. Cormen |last2=Leiserson |first2=Charles E. |author-link2=Charles E. Leiserson |last3=Rivest |first3=Ronald L. |author-link3=Ron Rivest |last4=Stein |first4=Clifford |author-link4=Clifford Stein |year=2001 |title=[[Introduction to Algorithms]] |edition=2nd |publisher=MIT Press and McGraw-Hill |isbn=0-262-03293-7}} Chapters 6 and 7, respectively: Heapsort and Priority Queues
* [http://www.cs.utexas.edu/users/EWD/ewd07xx/EWD796a.PDF A PDF of Dijkstra's original paper on Smoothsort]
* [http://cis.stvincent.edu/html/tutorials/swd/heaps/heaps.html Heaps and Heapsort Tutorial] by David Carlson, St. Vincent College

== External links ==
{{wikibooks|Algorithm implementation|Sorting/Heapsort|Heapsort}}
* {{webarchive |url=https://web.archive.org/web/20150306071556/http://www.sorting-algorithms.com/heap-sort |date=6 March 2015 |title=Animated Sorting Algorithms: Heap Sort}} – graphical demonstration
* [https://web.archive.org/web/20130326084250/http://olli.informatik.uni-oldenburg.de/heapsort_SALA/english/start.html Courseware on Heapsort from Univ. Oldenburg] – with text, animations and interactive exercises
* [https://xlinux.nist.gov/dads/HTML/heapSort.html NIST's Dictionary of Algorithms and Data Structures: Heapsort]
* [http://www.codecodex.com/wiki/Heapsort Heapsort implemented in 12 languages] {{Webarchive|url=https://web.archive.org/web/20101228072445/http://www.codecodex.com/wiki/Heapsort |date=28 December 2010 }}
* [http://www.azillionmonkeys.com/qed/sort.html Sorting revisited] by Paul Hsieh
* [http://employees.oneonta.edu/zhangs/powerPointPlatform/index.php A PowerPoint presentation demonstrating how heapsort works], intended for educators
* [http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_Sorti.html#SECTION001413000000000000000 Open Data Structures – Section 11.1.3 – Heap-Sort], [[Pat Morin]]

{{sorting}}

[[Category:Articles with example pseudocode]]
[[Category:Comparison sorts]]
[[Category:Heaps (data structures)]]