Radix sort
{{short description|Non-comparative integer sorting algorithm}}
{{Use dmy dates|date=October 2020}}
{{Infobox Algorithm
|name={{PAGENAMEBASE}}
|class=[[Sorting algorithm]]
|image=
|data=[[Array data type|Array]]
|time=<math>O(w\cdot n)</math>, where <math>n</math> is the number of keys, and <math>w</math> is the key length.
|space=<math>O(w+n)</math>
|optimal=exactly correct
}}
In [[computer science]], '''radix sort''' is a non-[[Comparison sort|comparative]] [[sorting algorithm]]. It avoids comparison by creating and [[Distribution sort|distributing]] elements into buckets according to their [[radix]]. For elements with more than one [[significant digit]], this bucketing process is repeated for each digit, while preserving the ordering of the prior step, until all digits have been considered. For this reason, '''radix sort''' has also been called '''[[bucket sort]]''' and '''digital sort'''.

Radix sort can be applied to data that can be sorted [[Lexicographical order|lexicographically]], be they integers, words, punch cards, playing cards, or the [[Bucket sort#Postman's sort|mail]].

== History ==
Radix sort dates back as far as 1887 to the work of [[Herman Hollerith]] on [[tabulating machines]].<ref>{{Cite patent|country= US |number= 395781}} and {{Cite patent|country= UK |number= 327}}</ref> Radix sorting algorithms came into common use as a way to sort [[punched card]]s as early as 1923.<ref name="auto">[[Donald Knuth]]. ''The Art of Computer Programming'', Volume 3: ''Sorting and Searching'', Third Edition. Addison-Wesley, 1997. {{ISBN|0-201-89685-0}}. Section 5.2.5: Sorting by Distribution, pp. 168–179.</ref>

The first memory-efficient computer algorithm for this sorting method was developed in 1954 at [[Massachusetts Institute of Technology|MIT]] by [[Harold H. Seward]]. Computerized radix sorts had previously been dismissed as impractical because of the perceived need for variable allocation of buckets of unknown size. Seward's innovation was to use a linear scan to determine the required bucket sizes and offsets beforehand, allowing for a single static allocation of auxiliary memory. The linear scan is closely related to Seward's other algorithm, [[counting sort]].

In the modern era, radix sorts are most commonly applied to collections of binary [[String (computer science)|strings]] and [[Integer (computer science)|integers]]. Some benchmarks have shown radix sort to be faster than other, more general-purpose sorting algorithms, sometimes 50% to three times faster.<ref>{{Cite web|url=https://probablydance.com/2016/12/27/i-wrote-a-faster-sorting-algorithm/|title=I Wrote a Faster Sorting Algorithm|date=28 December 2016}}</ref><ref>{{Cite web|url=https://erik.gorset.no/2011/04/radix-sort-is-faster-than-quicksort.html|title=Is radix sort faster than quicksort for integer arrays?|website=erik.gorset.no}}</ref><ref>{{Cite web|url=https://www.boost.org/doc/libs/1_62_0/libs/sort/doc/html/boost/sort/spreadsort/integer__idm45516074556032.html|title=Function template integer_sort - 1.62.0|website=www.boost.org}}</ref>
[[File:SEACComputer 038.jpg|thumb|An [[IBM card sorter]] performing a radix sort on a large set of [[punched cards]]. Cards are fed into a hopper below the operator's chin and are sorted into one of the machine's 13 output baskets, based on the data punched into one column on the cards. The crank near the input hopper is used to move the read head to the next column as the sort progresses. The rack in back holds cards from the previous sorting pass.]]

== Digit order ==
Radix sorts can be implemented to start at either the [[most significant digit]] (MSD) or [[least significant digit]] (LSD). For example, with '''1234''', one could start with 1 (MSD) or 4 (LSD).

LSD radix sorts typically use the following sorting order: short keys come before longer keys, and then keys of the same length are sorted [[lexicographically]]. This coincides with the normal order of integer representations, like the sequence '''[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]'''. LSD sorts are generally [[stable sort]]s.

MSD radix sorts are most suitable for sorting strings or fixed-length integer representations. A sequence like '''[b, c, e, d, f, g, ba]''' would be sorted as '''[b, ba, c, d, e, f, g]'''. If lexicographic ordering is used to sort variable-length integers in base 10, then numbers from 1 to 10 would be output as '''[1, 10, 2, 3, 4, 5, 6, 7, 8, 9]''', as if the shorter keys were left-justified and padded on the right with blank characters to make them as long as the longest key. MSD sorts are not necessarily stable, that is, they do not necessarily preserve the original ordering of duplicate keys.

Other than the traversal order, MSD and LSD sorts differ in their handling of variable-length input. LSD sorts can group keys by length, radix sort each group, then concatenate the groups in size order. MSD sorts must effectively 'extend' all shorter keys to the size of the largest key and sort them accordingly, which can be more complicated than the grouping required by LSD.

However, MSD sorts are more amenable to subdivision and recursion. Each bucket created by an MSD step can itself be radix sorted using the next most significant digit, without reference to any other buckets created in the previous step. Once the last digit is reached, concatenating the buckets is all that is required to complete the sort.

== Examples ==

=== Least significant digit ===
Input list:
:'''[170, 45, 75, 90, 2, 802, 2, 66]'''

Starting from the rightmost (last) digit, sort the numbers based on that digit:
:'''[{17<u>0</u>, 9<u>0</u>}, {<u>2</u>, 80<u>2</u>, <u>2</u>}, {4<u>5</u>, 7<u>5</u>}, {6<u>6</u>}]'''

Sorting by the next left digit:
:'''[{''<u>0</u>''2, 8<u>0</u>2, ''<u>0</u>''2}, {<u>4</u>5}, {<u>6</u>6}, {1<u>7</u>0, <u>7</u>5}, {<u>9</u>0}]'''
:<small>Notice that an implicit digit ''0'' is prepended for the two 2s so that 802 maintains its position between them.</small>

And finally by the leftmost digit:
:'''[{''<u>0</u>0''2, ''<u>0</u>0''2, ''<u>0</u>''45, ''<u>0</u>''66, ''<u>0</u>''75, ''<u>0</u>''90}, {<u>1</u>70}, {<u>8</u>02}]'''
:<small>Notice that a ''0'' is prepended to all of the 1- or 2-digit numbers.</small>

Each step requires just a single pass over the data, since each item can be placed in its bucket without comparison with any other element.

Some radix sort implementations allocate space for buckets by first counting the number of keys that belong in each bucket before moving keys into those buckets. The number of times that each digit occurs is stored in an [[Array data type|array]]. Although it is always possible to pre-determine the bucket boundaries using counts, some implementations opt to use dynamic memory allocation instead.
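The counting approach just described can be made concrete. The following is a minimal sketch in C, assuming unsigned integer keys and base 10 to match the worked example above; the function name <code>lsd_radix_sort</code> and the caller-supplied auxiliary buffer are illustrative choices, not a canonical implementation. Practical implementations usually choose a power-of-two radix so that digit extraction is a shift and a mask.

<syntaxhighlight lang="c">
#include <stddef.h>
#include <string.h>

/* Sketch of a counting-based LSD radix sort for unsigned keys.
   Base 10 mirrors the worked example above; aux must hold n elements. */
void lsd_radix_sort(unsigned *a, unsigned *aux, size_t n)
{
    unsigned max = 0;
    for (size_t i = 0; i < n; i++)        /* the largest key bounds the pass count */
        if (a[i] > max) max = a[i];

    for (unsigned long long exp = 1; max / exp > 0; exp *= 10) {
        size_t count[10] = {0};
        for (size_t i = 0; i < n; i++)    /* count keys per bucket (counting sort) */
            count[(a[i] / exp) % 10]++;
        for (int d = 1; d < 10; d++)      /* prefix sums give bucket end offsets */
            count[d] += count[d - 1];
        for (size_t i = n; i-- > 0; )     /* right-to-left placement keeps the pass stable */
            aux[--count[(a[i] / exp) % 10]] = a[i];
        memcpy(a, aux, n * sizeof a[0]);  /* concatenated buckets feed the next pass */
    }
}
</syntaxhighlight>

Each iteration of the outer loop is one stable counting-sort pass on a single decimal digit, so after the final pass the array is fully sorted.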
=== Most significant digit, forward recursive ===
Input list, fixed-width numeric strings with leading zeros:
:'''[170, 045, 075, 025, 002, 024, 802, 066]'''

First digit, with brackets indicating buckets:
:'''[{<u>0</u>45, <u>0</u>75, <u>0</u>25, <u>0</u>02, <u>0</u>24, <u>0</u>66}, {<u>1</u>70}, {<u>8</u>02}]'''
:<small>Notice that 170 and 802 are already complete because they are all that remain in their buckets, so no further recursion is needed.</small>

Next digit:
:'''[{ {0<u>0</u>2}, {0<u>2</u>5, 0<u>2</u>4}, {0<u>4</u>5}, {0<u>6</u>6}, {0<u>7</u>5} }, 170, 802]'''

Final digit:
:'''[ 002, { {02<u>4</u>}, {02<u>5</u>} }, 045, 066, 075, 170, 802]'''

All that remains is concatenation:
:'''[002, 024, 025, 045, 066, 075, 170, 802]'''

== Complexity and performance ==
Radix sort operates in <math>O(n \cdot w)</math> time, where <math>n</math> is the number of keys, and <math>w</math> is the key length. LSD variants can achieve a lower bound for <math>w</math> of 'average key length' when splitting variable-length keys into groups as discussed above.

Optimized radix sorts can be very fast when working in a domain that suits them.<ref>{{Cite web |first1=Ranjan |last1=Sinha |first2=Justin |last2=Zobel |title=Efficient Trie-Based Sorting of Large Sets of Strings| citeseerx=10.1.1.12.2367 |url=https://citeseerx.ist.psu.edu/pdf/15b4e7759573298a36ace2f898e437c9218cdfca |access-date=August 24, 2023}}</ref> They are constrained to lexicographic data, but for many practical applications this is not a limitation. Large key sizes can hinder LSD implementations when the induced number of passes becomes the bottleneck.<ref name="auto"/>

== Specialized variants ==

=== In-place MSD radix sort implementations ===
Binary MSD radix sort, also called binary quicksort, can be implemented in-place by splitting the input array into two bins: the 0s bin and the 1s bin. The 0s bin is grown from the beginning of the array, whereas the 1s bin is grown from the end of the array. The 0s bin boundary is placed before the first array element. The 1s bin boundary is placed after the last array element. The most significant bit of the first array element is examined. If this bit is a 1, then the first element is swapped with the element in front of the 1s bin boundary (the last element of the array), and the 1s bin is grown by one element by decrementing the 1s boundary array index. If this bit is a 0, then the first element remains at its current location, and the 0s bin is grown by one element. The next array element examined is the one in front of the 0s bin boundary (i.e. the first element that is not in the 0s bin or the 1s bin). This process continues until the 0s bin and the 1s bin reach each other. The 0s bin and the 1s bin are then sorted recursively based on the next bit of each array element. Recursive processing continues until the least significant bit has been used for sorting.<ref>R. Sedgewick, "Algorithms in C++", third edition, 1998, pp. 424–427.</ref><ref>{{Cite web|url=http://www.drdobbs.com/architecture-and-design/algorithm-improvement-through-performanc/220300654|title=Algorithm Improvement through Performance Measurement: Part 2|first=Victor J.|last=Duvanenko|website=Dr. Dobb's}}</ref> Handling signed [[two's complement]] integers requires treating the most significant bit with the opposite sense, followed by unsigned treatment of the rest of the bits.
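This bin-growing scheme can be sketched in C, assuming 32-bit unsigned keys; the function and parameter names are illustrative. <code>hi</code> is exclusive, and <code>bit</code> starts at the most significant bit position:

<syntaxhighlight lang="c">
#include <stddef.h>

/* Sketch of in-place binary MSD radix sort ("binary quicksort") for
   unsigned keys. Initial call: binary_msd_sort(a, 0, n, 31). */
void binary_msd_sort(unsigned *a, size_t lo, size_t hi, int bit)
{
    if (bit < 0 || hi - lo < 2)
        return;
    size_t i = lo, j = hi;              /* i: 0s-bin boundary, j: 1s-bin boundary */
    while (i < j) {
        if ((a[i] >> bit) & 1u) {       /* 1 bit: swap behind the 1s boundary; */
            unsigned t = a[i];          /* the swapped-in element is examined next */
            a[i] = a[--j];
            a[j] = t;
        } else {
            i++;                        /* 0 bit: element stays, grow the 0s bin */
        }
    }
    binary_msd_sort(a, lo, i, bit - 1); /* recurse into the 0s bin */
    binary_msd_sort(a, i, hi, bit - 1); /* recurse into the 1s bin */
}
</syntaxhighlight>

The sketch handles unsigned keys only; as described above, signed two's-complement keys need the most significant bit treated with the opposite sense.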
In-place MSD binary-radix sort can be extended to a larger radix while retaining the in-place property. [[Counting sort]] is used to determine the size of each bin and its starting index. Swapping is used to place the current element into its bin, after which the bin boundary is advanced. As the array elements are scanned, the bins are skipped over and only elements between bins are processed, until the entire array has been processed and all elements end up in their respective bins. The number of bins is the same as the radix used, e.g. 16 bins for radix 16. Each pass is based on a single digit (e.g. 4 bits per digit in the case of radix 16), starting from the [[most significant digit]]. Each bin is then processed recursively using the next digit, until all digits have been used for sorting.<ref>{{Cite web|url=http://www.drdobbs.com/architecture-and-design/algorithm-improvement-through-performanc/221600153|title=Algorithm Improvement through Performance Measurement: Part 3|first=Victor J.|last=Duvanenko|website=Dr. Dobb's}}</ref><ref>{{Cite web|url=http://www.drdobbs.com/parallel/parallel-in-place-radix-sort-simplified/229000734|title=Parallel In-Place Radix Sort Simplified|first=Victor J.|last=Duvanenko|website=Dr. Dobb's}}</ref>

Neither the in-place binary-radix sort nor the n-bit-radix sort discussed in the paragraphs above is a [[Sorting algorithm#Stability|stable algorithm]].

=== Stable MSD radix sort implementations ===
MSD radix sort can be implemented as a stable algorithm, but this requires a memory buffer of the same size as the input array. The extra memory allows the input buffer to be scanned from the first array element to the last, moving the array elements to the destination bins in that same order. Thus, equal elements will be placed in the memory buffer in the same order they were in the input array. The MSD-based algorithm uses the extra memory buffer as the output on the first level of recursion, but swaps the input and output on the next level of recursion, to avoid the overhead of copying the output result back to the input buffer. Each of the bins is recursively processed, as is done for the in-place MSD radix sort. After the sort by the last digit has been completed, the output buffer is checked to see if it is the original input array, and if it is not, a single copy is performed. If the digit size is chosen such that the key size divided by the digit size is an even number, the copy at the end is avoided.<ref>{{Cite web|url=http://www.drdobbs.com/tools/algorithm-improvement-through-performanc/222200161|title=Algorithm Improvement through Performance Measurement: Part 4|first=Victor J.|last=Duvanenko|website=Dr. Dobb's}}</ref>

=== Hybrid approaches ===
Radix sort, such as the two-pass method where [[counting sort]] is used during the first pass of each level of recursion, has a large constant overhead. Thus, when the bins get small, other sorting algorithms should be used, such as [[insertion sort]]. A good implementation of insertion sort is fast for small arrays, stable, in-place, and can significantly speed up radix sort.
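A sketch of this hybrid structure in C combines a radix-256 counting-sort pass with an insertion-sort fallback for small bins. The cutoff value, the names, and the copy back from the auxiliary buffer on every level (rather than the buffer swapping described above) are illustrative simplifications:

<syntaxhighlight lang="c">
#include <stddef.h>
#include <string.h>

enum { RADIX = 256, CUTOFF = 32 };  /* illustrative cutoff; tune empirically */

static void insertion_sort(unsigned *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        unsigned key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; j--; }
        a[j] = key;
    }
}

/* Hybrid MSD radix sort, one byte per pass. aux must hold n elements.
   Initial call for 32-bit keys: msd_hybrid_sort(a, aux, n, 24). */
void msd_hybrid_sort(unsigned *a, unsigned *aux, size_t n, int shift)
{
    if (n < CUTOFF) {                            /* small bin: fall back */
        insertion_sort(a, n);
        return;
    }
    size_t count[RADIX + 1] = {0};
    for (size_t i = 0; i < n; i++)               /* counting pass sizes each bin */
        count[((a[i] >> shift) & 0xFFu) + 1]++;
    for (int d = 0; d < RADIX; d++)              /* prefix sums give bin start offsets */
        count[d + 1] += count[d];
    for (size_t i = 0; i < n; i++)               /* stable scatter into the buffer */
        aux[count[(a[i] >> shift) & 0xFFu]++] = a[i];
    memcpy(a, aux, n * sizeof a[0]);

    if (shift == 0)
        return;
    for (int d = 0; d < RADIX; d++) {            /* recurse into each bin */
        size_t lo = d ? count[d - 1] : 0;
        msd_hybrid_sort(a + lo, aux + lo, count[d] - lo, shift - 8);
    }
}
</syntaxhighlight>

Because both the forward scatter and insertion sort are stable, this copy-back variant is stable overall, at the cost of one extra copy per recursion level compared with the buffer-swapping scheme described above.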
=== Application to parallel computing ===
This recursive sorting algorithm has particular application to [[parallel computing]], as each of the bins can be sorted independently. In this case, each bin is passed to the next available processor. A single processor would be used at the start (the most significant digit). By the second or third digit, all available processors would likely be engaged. Ideally, as each subdivision is fully sorted, fewer and fewer processors would be utilized. In the worst case, all of the keys will be identical or nearly identical to each other, with the result that there will be little to no advantage to using parallel computing to sort the keys.

In the top level of recursion, the opportunity for parallelism is in the [[counting sort]] portion of the algorithm. Counting is highly parallel, amenable to the parallel_reduce pattern, and splits the work well across multiple cores until reaching the memory bandwidth limit. This portion of the algorithm has data-independent parallelism. Processing each bin in subsequent recursion levels is data-dependent, however. For example, if all keys were of the same value, then there would be only a single bin with any elements in it, and no parallelism would be available. For random inputs, all bins would be nearly equally populated and a large amount of parallelism would be available.<ref>{{Cite web|url=http://www.drdobbs.com/parallel/parallel-in-place-n-bit-radix-sort/226600004|title=Parallel In-Place N-bit-Radix Sort|first=Victor J.|last=Duvanenko|website=Dr. Dobb's}}</ref>

Faster parallel sorting algorithms are available; for example, those of the Three Hungarians and of Richard Cole achieve optimal complexity O(log(''n'')),<ref>A. Gibbons and [[Wojciech Rytter|W. Rytter]], ''Efficient Parallel Algorithms''. Cambridge University Press, 1988.</ref><ref>H. Casanova et al., ''Parallel Algorithms''. Chapman & Hall, 2008.</ref> and [[Ken Batcher|Batcher]]'s [[Bitonic sorter|bitonic merge sort]] has an algorithmic complexity of O(log<sup>2</sup>(''n'')), all of which have a lower algorithmic time complexity than radix sort on a CREW-[[Parallel random-access machine|PRAM]]. The fastest known PRAM sorts were described in 1991 by [[David M W Powers]] with a parallelized quicksort that can operate in O(log(''n'')) time on a CRCW-PRAM with ''n'' processors by performing partitioning implicitly, as well as a radix sort that operates using the same trick in O(''k''), where ''k'' is the maximum key length.<ref>David M. W. Powers, [https://sites.cs.ucsb.edu/~gilbert/cs140/old/cs140Win2009/sortproject/ParallelQuicksort.pdf Parallelized Quicksort and Radixsort with Optimal Speedup], ''Proceedings of International Conference on Parallel Computing Technologies''. [[Novosibirsk]]. 1991.</ref> However, neither the PRAM architecture nor a single sequential processor can actually be built in a way that will scale without the number of constant [[fan-out]] gate delays per cycle increasing as O(log(''n'')), so that in effect a pipelined version of Batcher's bitonic mergesort and the O(log(''n'')) PRAM sorts are all O(log<sup>2</sup>(''n'')) in terms of clock cycles, with Powers acknowledging that Batcher's would have a lower constant in terms of gate delays than his parallel [[quicksort]] and radix sort, or Cole's [[merge sort]], for a key-length-independent [[sorting network]] of O(''n''log<sup>2</sup>(''n'')).<ref>David M. W. Powers, [http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf Parallel Unification: Practical Complexity], Australasian Computer Architecture Workshop, Flinders University, January 1995</ref>

=== Tree-based radix sort ===
Radix sorting can also be accomplished by building a [[Tree (data structure)|tree]] (or [[radix tree]]) from the input set and doing a [[Tree traversal#Pre-order, NLR|pre-order]] traversal. This is similar to the relationship between [[heapsort]] and the [[Heap (data structure)|heap]] data structure. This can be useful for certain data types; see [[burstsort]].
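A toy illustration of the tree-based idea in C, assuming 32-bit unsigned keys: each key is inserted into a binary trie bit by bit, most significant bit first, and a pre-order traversal that visits the 0-child before the 1-child emits the keys in ascending order. All names are hypothetical, and error handling and deallocation are omitted for brevity; practical variants such as burstsort replace deep trie paths with buckets.

<syntaxhighlight lang="c">
#include <stdlib.h>

typedef struct TrieNode {
    struct TrieNode *child[2];  /* child[0]: next bit is 0, child[1]: next bit is 1 */
    size_t count;               /* number of equal keys ending at this leaf */
} TrieNode;

static void trie_insert(TrieNode *root, unsigned key)
{
    for (int bit = 31; bit >= 0; bit--) {
        int b = (key >> bit) & 1;
        if (!root->child[b])
            root->child[b] = calloc(1, sizeof(TrieNode));
        root = root->child[b];
    }
    root->count++;              /* duplicate keys share one leaf */
}

/* Pre-order traversal, 0-child before 1-child, writes keys in sorted order.
   Initial call: trie_emit(root, 0, 31, out, &k) with k = 0. */
static void trie_emit(const TrieNode *t, unsigned prefix, int bit,
                      unsigned *out, size_t *k)
{
    if (!t)
        return;
    if (bit < 0) {              /* all 32 bits consumed: a complete key */
        for (size_t c = 0; c < t->count; c++)
            out[(*k)++] = prefix;
        return;
    }
    trie_emit(t->child[0], prefix, bit - 1, out, k);
    trie_emit(t->child[1], prefix | (1u << bit), bit - 1, out, k);
}
</syntaxhighlight>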
== See also ==
* [[IBM 80 series Card Sorters]]
* Other [[distribution sort]]s
* [[Kirkpatrick-Reisch sorting]]
* [[Prefix sum]]

== References ==
{{reflist|30em}}

== External links ==<!-- This section is linked from Radix sort -->
{{wikibooks|Algorithm implementation|Sorting/Radix_sort|Radix sort}}
* [https://www.codingeek.com/algorithms/radix-sort-explanation-pseudocode-and-implementation/ Explanation, Pseudocode and implementation] in C and Java
* [https://duvanenko.tech.blog/2017/06/15/faster-sorting-in-javascript/ High Performance Implementation] of LSD Radix sort in [[JavaScript]]
* [https://duvanenko.tech.blog/2018/05/23/faster-sorting-in-c/ High Performance Implementation] of LSD & MSD Radix sort in [[C Sharp (programming language)|C#]] with source in [https://github.com/DragonSpit/HPCsharp/ GitHub]
* [https://www.youtube.com/watch?v=6YyflHO9GdE Video tutorial of MSD Radix Sort]
* [http://www.csse.monash.edu.au/~lloyd/tildeAlgDS/Sort/Radix/ Demonstration and comparison] of Radix sort with [[Bubble sort]], [[Merge sort]] and [[Quicksort]] implemented in [[JavaScript]]
* [http://www.codercorner.com/RadixSortRevisited.htm Article] about Radix sorting [[IEEE floating-point standard|IEEE floating-point]] numbers with implementation.
*: [http://www.stereopsis.com/radix.html Faster Floating Point Sorting and Multiple Histogramming] with implementation in C++
* Pointers to [https://web.archive.org/web/20060829193644/http://web-cat.cs.vt.edu/AlgovizWiki/RadixSort radix sort visualizations]
* [http://bitbucket.org/ais/usort/wiki/Home USort library] {{Webarchive|url=https://web.archive.org/web/20110807200359/https://bitbucket.org/ais/usort/wiki/Home |date=7 August 2011 }} contains tuned implementations of radix sort for most numerical C types (C99)
* [[Donald Knuth]]. ''The Art of Computer Programming'', Volume 3: ''Sorting and Searching'', Third Edition. Addison-Wesley, 1997. {{ISBN|0-201-89685-0}}. Section 5.2.5: Sorting by Distribution, pp. 168–179.
* [[Thomas H. Cormen]], [[Charles E. Leiserson]], [[Ronald L. Rivest]], and [[Clifford Stein]]. ''[[Introduction to Algorithms]]'', Second Edition. MIT Press and McGraw-Hill, 2001. {{ISBN|0-262-03293-7}}. Section 8.3: Radix sort, pp. 170–173.
* [https://web.archive.org/web/20010714213118/http://www.chinet.com/~edlee/bradsort.c BRADSORT v1.50 source code], by Edward Lee. BRADSORT v1.50 is a radix sorting algorithm that combines a binary trie structure with a circular doubly linked list.
* [https://web.archive.org/web/20191112193719/https://pdfs.semanticscholar.org/15b4/e7759573298a36ace2f898e437c9218cdfca.pdf Efficient Trie-Based Sorting of Large Sets of Strings], by Ranjan Sinha and Justin Zobel. This paper describes a method of creating tries of buckets which figuratively burst into sub-tries when the buckets hold more than a predetermined capacity of strings, hence the name "Burstsort".
* [http://opendatastructures.org/versions/edition-0.1e/ods-java/11_2_Counting_Sort_Radix_So.html Open Data Structures – Java Edition – Section 11.2 – Counting Sort and Radix Sort], [[Pat Morin]]
* [http://opendatastructures.org/ods-cpp/11_2_Counting_Sort_Radix_So.html Open Data Structures – C++ Edition – Section 11.2 – Counting Sort and Radix Sort], [[Pat Morin]]

{{sorting}}

{{DEFAULTSORT:Radix Sort}}
[[Category:Articles with example C code]]
[[Category:Sorting algorithms]]
[[Category:Stable sorts]]
[[Category:String sorting algorithms]]