=== Other variations ===
*Ternary heapsort uses a [[ternary heap]] instead of a binary heap; that is, each element in the heap has three children. It is more complicated to program, but performs a constant factor fewer swap and comparison operations. This is because each sift-down step in a ternary heap requires three comparisons and one swap, whereas in a binary heap two comparisons and one swap are required. Two levels in a ternary heap cover 3<sup>2</sup> = 9 elements, doing more work with the same number of comparisons as three levels in the binary heap, which cover only 2<sup>3</sup> = 8.{{Citation needed|date=September 2014}} This is primarily of academic interest, or as a student exercise,<ref>{{cite book |title=Data Structures Using Pascal |chapter=Chapter 8: Sorting |first1=Aaron M. |last1=Tenenbaum |first2=Moshe J. |last2=Augenstein |year=1981 |page=405 |publisher=Prentice-Hall |isbn=0-13-196501-8 |quote=<!--Exercise 8. Define an almost complete ternary tree as a tree in which every node has at most three sons and such that the nodes can be numbered from 1 to n so that the sons of node[i] are node[3*i−1], node[3*i] and node[3*i+1]. Define a ternary heap as an almost complete ternary tree in which the content of each node is greater than or equal to the contents of all its descendants. -->Write a sorting routine similar to the heapsort except that it uses a ternary heap.}}</ref> as the additional complexity is not worth the minor savings, and bottom-up heapsort beats both.
*Memory-optimized heapsort{{r|LaMarca99|p=87}} improves heapsort's [[locality of reference]] by increasing the number of children even further. This increases the number of comparisons, but because all children are stored consecutively in memory, it reduces the number of [[cache line]]s accessed during heap traversal, a net performance improvement. (A ''d''-ary sift-down covering both this and the ternary variant is sketched after this list.)
*The standard implementation of Floyd's heap-construction algorithm causes a large number of [[cache miss]]es once the size of the data exceeds that of the [[CPU cache]].{{r|LaMarca99|p=87}} Better performance on large data sets can be obtained by merging in [[depth-first]] order, combining subheaps as soon as possible, rather than combining all subheaps on one level before proceeding to the one above.<ref name=Bojesen00>{{cite journal |title=Performance Engineering Case Study: Heap Construction |first1=Jesper |last1=Bojesen |first2=Jyrki |last2=Katajainen |first3=Maz |last3=Spork |journal=ACM Journal of Experimental Algorithmics |date=2000 |volume=5 |pages=15–es |number=15 |doi=10.1145/351827.384257 |citeseerx=10.1.1.35.3248 |s2cid=30995934 |url=http://hjemmesider.diku.dk/~jyrki/Paper/katajain.ps |format=PostScript}} [https://www.semanticscholar.org/paper/Performance-Engineering-Case-Study-Heap-Bojesen-Katajainen/6f4ada5912c1da64e16453d67ec99c970173fb5b Alternate PDF source].</ref><ref>{{cite conference |chapter=In-place Heap Construction with Optimized Comparisons, Moves, and Cache Misses |first1=Jingsen |last1=Chen |first2=Stefan |last2=Edelkamp |first3=Amr |last3=Elmasry |first4=Jyrki |last4=Katajainen |title=Mathematical Foundations of Computer Science 2012 |series=Lecture Notes in Computer Science |doi=10.1007/978-3-642-32589-2_25 |conference=37th international conference on Mathematical Foundations of Computer Science |pages=259–270 |location=Bratislava, Slovakia |date=27–31 August 2012 |volume=7464 |isbn=978-3-642-32588-5 |s2cid=1462216 |chapter-url=https://pdfs.semanticscholar.org/9cc6/36d7998d58b3937ba0098e971710ff039612.pdf |archive-url=https://web.archive.org/web/20161229031307/https://pdfs.semanticscholar.org/9cc6/36d7998d58b3937ba0098e971710ff039612.pdf |url-status=dead |archive-date=29 December 2016 }} See particularly Fig. 3.</ref>
*Out-of-place heapsort<ref>{{cite conference |title=QuickHeapsort, an efficient mix of classical sorting algorithms |first1=Domenico |last1=Cantone |first2=Gianluca |last2=Cincotti |conference=4th Italian Conference on Algorithms and Complexity |series=Lecture Notes in Computer Science |volume=1767 |pages=150–162 |isbn=3-540-67159-5 |date=1–3 March 2000 |location=Rome |url=https://archive.org/details/springer_10.1007-3-540-46521-9 }}</ref><ref>{{cite journal |title=QuickHeapsort, an efficient mix of classical sorting algorithms |first1=Domenico |last1=Cantone |first2=Gianluca |last2=Cincotti |journal=Theoretical Computer Science |volume=285 |issue=1 |pages=25–42 |date=August 2002 |doi=10.1016/S0304-3975(01)00288-2 |doi-access=free |zbl=1016.68042 |url=https://core.ac.uk/download/pdf/81957449.pdf }}</ref>{{r|MacKay05}} improves on bottom-up heapsort by eliminating the worst case, guaranteeing {{math|''n'' log<sub>2</sub> ''n'' + ''O''(''n'')}} comparisons. When the maximum is taken, rather than fill the vacated space with an unsorted data value, fill it with a {{math|−∞}} sentinel value, which never "bounces" back up (a sketch of this sentinel-based extraction appears after this list). It turns out that this can be used as a primitive in an in-place (and non-recursive) "QuickHeapsort" algorithm.<ref>{{cite journal |title=QuickHeapsort: Modifications and improved analysis |first1=Volker |last1=Diekert |first2=Armin |last2=Weiß |journal=Theory of Computing Systems |volume=59 |issue=2 |pages=209–230 |date=August 2016 |doi=10.1007/s00224-015-9656-y <!-- |doi=10.1007/978-3-642-38536-0_3 Conference version, published September 2012--> |arxiv=1209.4214 |s2cid=792585 }}</ref> First, perform a quicksort-like partitioning pass, but reverse the order of the partitioned data in the array. Suppose ([[without loss of generality]]) that the smaller partition is the one greater than the pivot, which should go at the end of the array, but the reversed partitioning step places it at the beginning. Form a heap out of the smaller partition and do out-of-place heapsort on it, exchanging the extracted maxima with values from the end of the array. These are less than the pivot, and therefore less than any value in the heap, so they serve as {{math|−∞}} sentinel values. Once the heapsort is complete (and the pivot moved to just before the now-sorted end of the array), the order of the partitions has been reversed, and the larger partition at the beginning of the array may be sorted in the same way. (Because there is no non-[[tail recursion]], this also eliminates quicksort's {{math|''O''(log ''n'')}} stack usage.)
*The [[smoothsort]] algorithm<ref>{{Cite EWD|796a|Smoothsort – an alternative to sorting in situ}}</ref> is a variation of heapsort developed by [[Edsger W. Dijkstra]] in 1981. Like heapsort, smoothsort's upper bound is {{math|''O''(''n'' log ''n'')}}. The advantage of smoothsort is that it comes closer to {{math|''O''(''n'')}} time if the [[Adaptive sort|input is already sorted to some degree]], whereas heapsort averages {{math|''O''(''n'' log ''n'')}} regardless of the initial sorted state. Due to its complexity, smoothsort is rarely used.{{citation needed|date=November 2016}}
*Levcopoulos and Petersson<ref>{{cite book |last1=Levcopoulos |first1=Christos |last2=Petersson |first2=Ola |chapter=Heapsort – Adapted for Presorted Files |location=London, UK |pages=499–509 |publisher=Springer-Verlag |series=Lecture Notes in Computer Science |title=WADS '89: Proceedings of the Workshop on Algorithms and Data Structures |volume=382 |doi=10.1007/3-540-51542-9_41 |year=1989 |isbn=978-3-540-51542-5}} {{Q|56049336}}.</ref> describe a variation of heapsort based on a heap of [[Cartesian tree]]s. First, a Cartesian tree is built from the input in {{math|''O''(''n'')}} time, and its root is placed in a 1-element binary heap. Then, repeatedly, the minimum is extracted from the binary heap, that tree node's element is output, and its left and right children (if any), which are themselves Cartesian trees, are added to the binary heap.<ref>{{cite web |title=CartesianTreeSort.hh |website=Archive of Interesting Code |url=http://www.keithschwarz.com/interesting/code/?dir=cartesian-tree-sort |first=Keith |last=Schwarz |date=27 December 2010 |access-date=2019-03-05 }}</ref> As they show, if the input is already nearly sorted, the Cartesian trees will be very unbalanced, with few nodes having both left and right children; the binary heap therefore remains small, allowing the algorithm to sort faster than {{math|''O''(''n'' log ''n'')}} for inputs that are already nearly sorted. (A sketch appears after this list.)
* Several variants such as [[Weak heap#Weak-heap sort|weak heapsort]] require {{math|''n'' log<sub>2</sub> ''n'' + ''O''(''n'')}} comparisons in the worst case, close to the theoretical minimum, using one extra bit of state per node. While this extra bit makes the algorithms not truly in-place, if space for it can be found inside the element, these algorithms are simple and efficient,{{r|Bojesen00|p=40}} but still slower than binary heaps if key comparisons are cheap enough (e.g. integer keys) that a constant factor does not matter.<ref name=Kat2014-11-14P>{{cite conference |title=Seeking for the best priority queue: Lessons learnt |first=Jyrki |last=Katajainen |date=23 September 2013 |conference=Algorithm Engineering (Seminar 13391) |location=Dagstuhl |pages=19–20, 24 |url=http://hjemmesider.diku.dk/~jyrki/Myris/Kat2013-09-23P.html}}</ref>
* Katajainen's "ultimate heapsort" requires no extra storage, performs {{math|''n'' log<sub>2</sub> ''n'' + ''O''(''n'')}} comparisons and a similar number of element moves.<ref>{{cite conference |title=The Ultimate Heapsort |date=2–3 February 1998 |first=Jyrki |last=Katajainen |conference=Computing: the 4th Australasian Theory Symposium |url=http://hjemmesider.diku.dk/~jyrki/Myris/Kat1998C.html |journal=Australian Computer Science Communications |volume=20 |issue=3 |pages=87–96 |location=Perth }}</ref> It is, however, even more complex, and not justified unless comparisons are very expensive.
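The following is a minimal Python sketch of the ''d''-ary sift-down underlying both the ternary (''d'' = 3) and memory-optimized (larger ''d'') variants above. It is illustrative only and not taken from the cited sources; the function names and the 0-indexed layout, with the children of index ''i'' stored at positions ''di'' + 1 through ''di'' + ''d'', are assumptions made here for exposition.

<syntaxhighlight lang="python">
def sift_down(a, start, end, d=3):
    """Sift a[start] down within the d-ary max-heap a[0:end].

    Children of index i are stored at d*i + 1 ... d*i + d, so all d
    children sit next to each other in memory.  d = 3 gives a ternary
    heap; a larger d gives the cache-friendly layout of
    memory-optimized heapsort.
    """
    root = start
    while d * root + 1 < end:                 # while root has at least one child
        first = d * root + 1
        last = min(first + d, end)            # one past the last existing child
        largest = root
        for child in range(first, last):      # at most d comparisons per level
            if a[child] > a[largest]:
                largest = child
        if largest == root:
            return                            # heap property restored
        a[root], a[largest] = a[largest], a[root]
        root = largest


def d_ary_heapsort(a, d=3):
    """In-place heapsort on a d-ary heap: Floyd construction, then extraction."""
    n = len(a)
    for i in range((n - 2) // d, -1, -1):     # sift down every internal node
        sift_down(a, i, n, d)
    for end in range(n - 1, 0, -1):           # move each maximum to its final place
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end, d)
</syntaxhighlight>

For example, <code>d_ary_heapsort(data)</code> sorts <code>data</code> in place as a ternary heapsort, while <code>d_ary_heapsort(data, d=8)</code> approximates the wider, cache-line-aligned layout of the memory-optimized variant.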
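A minimal Python sketch of the sentinel-based extraction described for out-of-place heapsort, assuming a 0-indexed binary max-heap and {{math|−∞}} as the sentinel; it is not taken from the cited papers, and the helper names are invented here. Because the sentinel loses every comparison, refilling the root needs only the single child-versus-child comparison at each level.

<syntaxhighlight lang="python">
import math

SENTINEL = -math.inf                          # never "bounces" back up the heap

def promote_path(a, size):
    """Refill the root of the binary max-heap a[0:size] by moving the larger
    child up at every level, leaving the sentinel in the vacated leaf.
    Only one comparison (between the two children) is needed per level,
    because the sentinel is smaller than every real key."""
    i = 0
    while 2 * i + 1 < size:
        child = 2 * i + 1
        if child + 1 < size and a[child + 1] > a[child]:
            child += 1                        # the single comparison at this level
        a[i] = a[child]
        i = child
    a[i] = SENTINEL

def sift_down(a, start, size):
    """Ordinary binary-heap sift-down, used only for Floyd's construction."""
    root = start
    while 2 * root + 1 < size:
        child = 2 * root + 1
        if child + 1 < size and a[child + 1] > a[child]:
            child += 1
        if a[root] >= a[child]:
            return
        a[root], a[child] = a[child], a[root]
        root = child

def out_of_place_heapsort(a):
    """Return a sorted copy of a; a itself is overwritten with sentinels."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):       # Floyd heap construction
        sift_down(a, i, n)
    result = [None] * n
    for k in range(n - 1, -1, -1):
        result[k] = a[0]                      # extract the current maximum
        promote_path(a, n)                    # refill the root along one path
    return result
</syntaxhighlight>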
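A minimal Python sketch of the Levcopoulos–Petersson scheme, assuming a min-Cartesian tree built with the standard stack method and using Python's <code>heapq</code> in place of the binary heap; it is illustrative and not the authors' code, and the names are chosen here for exposition.

<syntaxhighlight lang="python">
import heapq

class Node:
    """Node of a min-Cartesian tree: value is no larger than anything below it."""
    __slots__ = ("value", "left", "right")
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def build_cartesian_tree(seq):
    """Build a min-Cartesian tree in O(n) using the usual right-spine stack."""
    spine = []                                # right spine, values increasing upward
    for x in seq:
        node = Node(x)
        last = None
        while spine and spine[-1].value > x:  # pop larger values off the spine
            last = spine.pop()
        node.left = last                      # they become the new node's left subtree
        if spine:
            spine[-1].right = node
        spine.append(node)
    return spine[0] if spine else None

def cartesian_tree_sort(seq):
    """Heapsort variant of Levcopoulos and Petersson: the binary heap holds
    Cartesian-tree nodes, and stays small when the input is nearly sorted."""
    root = build_cartesian_tree(seq)
    heap, out, tie = [], [], 0                # tie counter keeps heapq from comparing Nodes
    if root is not None:
        heapq.heappush(heap, (root.value, tie, root))
    while heap:
        value, _, node = heapq.heappop(heap)  # extract the minimum
        out.append(value)
        for child in (node.left, node.right):
            if child is not None:
                tie += 1
                heapq.heappush(heap, (child.value, tie, child))
    return out
</syntaxhighlight>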