{{short description|Search algorithm finding the position of a target value within an array}} {{about|searching a finite array|searching continuous function values|bisection method}} {{featured article}} {{use dmy dates|date=April 2016}} {{Infobox algorithm |class=[[Search algorithm]] |image=Binary Search Depiction.svg |caption=Visualization of the binary search algorithm where 7 is the target value |data=[[array data structure|Array]] |time=[[big O notation#Orders of common functions|''O''(log ''n'')]] |space = [[big O notation#Orders of common functions|''O''(1)]] |best-time=[[big O notation#Orders of common functions|''O''(1)]] |average-time=[[big O notation#Orders of common functions|''O''(log ''n'')]] |optimal=Yes }} In [[computer science]], '''binary search''', also known as '''half-interval search''',<ref name="Williams1976">{{cite conference|last1=Williams, Jr.|first1=Louis F.|title=A modification to the half-interval search (binary search) method|conference=Proceedings of the 14th ACM Southeast Conference|date=22 April 1976|pages=95–101|doi=10.1145/503561.503582|url=https://dl.acm.org/citation.cfm?doid=503561.503582|publisher=ACM|access-date=29 June 2018|archive-url=https://web.archive.org/web/20170312215255/http://dl.acm.org/citation.cfm?doid=503561.503582|archive-date=12 March 2017|url-status=live|df=dmy-all|doi-access=free}}</ref> '''logarithmic search''',{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Binary search"}} or '''binary chop''',{{Sfn|Butterfield|Ngondi|2016|p=46}} is a [[search algorithm]] that finds the position of a target value within a [[sorted array]].{{Sfn|Cormen|Leiserson|Rivest|Stein|2009|p=39}}<ref>{{MathWorld |title=Binary search |id=BinarySearch}}</ref> Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array. Binary search runs in [[Time complexity#Logarithmic time|logarithmic time]] in the [[Best, worst and average case|worst case]], making <math>O(\log n)</math> comparisons, where <math>n</math> is the number of elements in the array.{{Efn|The <math>O</math> is [[Big O notation]], and <math>\log</math> is the [[logarithm]]. In Big O notation, the base of the logarithm does not matter since every logarithm of a given base is a constant factor of another logarithm of another base. That is, <math>\log_b(n) = \log_k(n) \div \log_k(b)</math>, where <math>\log_k(b)</math> is a constant.}}<ref name="FloresMadpis1971">{{cite journal|last1=Flores|first1=Ivan|last2=Madpis|first2=George|s2cid=43325465|title=Average binary search length for dense ordered lists|journal=[[Communications of the ACM]]|date=1 September 1971|volume=14|issue=9|pages=602–603|issn=0001-0782|doi=10.1145/362663.362752|doi-access=free}}</ref> Binary search is faster than [[linear search]] except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized [[data structures]] designed for fast searching, such as [[hash tables]], that can be searched more efficiently than binary search. 
However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array. There are numerous variations of binary search. In particular, [[fractional cascading]] speeds up binary searches for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in [[computational geometry]] and in numerous other fields. [[Exponential search]] extends binary search to unbounded lists. The [[binary search tree]] and [[B-tree]] data structures are based on binary search.

== Algorithm ==
Binary search works on sorted arrays. It begins by comparing an element in the middle of the array with the target value. If the target value matches the element, its position in the array is returned. If the target value is less than the element, the search continues in the lower half of the array. If the target value is greater than the element, the search continues in the upper half of the array. By doing this, the algorithm eliminates the half in which the target value cannot lie in each iteration.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Algorithm B"}}

=== Procedure ===
Given an array <math>A</math> of <math>n</math> elements with values or [[Record (computer science)|records]] <math>A_0,A_1,A_2,\ldots,A_{n-1}</math> sorted such that <math>A_0 \leq A_1 \leq A_2 \leq \cdots \leq A_{n-1}</math>, and target value <math>T</math>, the following [[subroutine]] uses binary search to find the index of <math>T</math> in <math>A</math>.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Algorithm B"}}
# Set <math>L</math> to <math>0</math> and <math>R</math> to <math>n-1</math>.
# If <math>L>R</math>, the search terminates as unsuccessful.
# Set <math>m</math> (the position of the middle element) to <math>L</math> plus the [[Floor and ceiling functions|floor]] of <math>\frac{R-L}{2}</math>, which is the greatest integer less than or equal to <math>\frac{R-L}{2}</math>.
# If <math>A_m < T</math>, set <math>L</math> to <math>m+1</math> and go to step 2.
# If <math>A_m > T</math>, set <math>R</math> to <math>m-1</math> and go to step 2.
# Now <math>A_m = T</math>, the search is done; return <math>m</math>.
This iterative procedure keeps track of the search boundaries with the two variables <math>L</math> and <math>R</math>. The procedure may be expressed in [[pseudocode]] as follows, where the variable names and types remain the same as above, <code>floor</code> is the [[floor function]], and <code>unsuccessful</code> refers to a specific value that conveys the failure of the search.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Algorithm B"}}
[[File:Binary-search-work.gif|thumb|right|binary-search]]
 '''function''' binary_search(A, n, T) '''is'''
     L := 0
     R := n − 1
     '''while''' L ≤ R '''do'''
         m := L + floor((R − L) / 2)
         '''if''' A[m] < T '''then'''
             L := m + 1
         '''else if''' A[m] > T '''then'''
             R := m − 1
         '''else''':
             '''return''' m
     '''return''' unsuccessful
Alternatively, the algorithm may take the [[Floor and ceiling functions|ceiling]] of <math>\frac{R-L}{2}</math>. This may change the result if the target value appears more than once in the array.
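The procedure translates directly into Python. In the sketch below, the function name follows the pseudocode, while returning <code>None</code> for <code>unsuccessful</code> and deriving <code>n</code> from the list length are illustrative choices rather than part of the cited procedure.

<syntaxhighlight lang="python">
def binary_search(A, T):
    """Return an index of T in the sorted list A, or None if T is absent."""
    L, R = 0, len(A) - 1
    while L <= R:
        # L + floor((R - L) / 2) avoids the overflow that computing L + R
        # directly can cause in languages with fixed-width integers.
        m = L + (R - L) // 2
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            return m
    return None  # unsuccessful
</syntaxhighlight>

For example, <code>binary_search([1, 3, 4, 6, 8, 9, 11], 8)</code> returns <code>4</code>, the index of the target value.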
==== Alternative procedure ====
In the above procedure, the algorithm checks whether the middle element (<math>m</math>) is equal to the target (<math>T</math>) in every iteration. Some implementations leave out this check during each iteration, performing it only when one element is left (when <math>L=R</math>). This results in a faster comparison loop, as one comparison is eliminated per iteration, at the cost of only one more iteration on average.<ref name="Bottenbruch1962">{{cite journal|last1=Bottenbruch|first1=Hermann|s2cid=13406983|title=Structure and use of ALGOL 60|journal=[[Journal of the ACM]]|date=1 April 1962|volume=9|issue=2|pages=161–221|issn=0004-5411|doi=10.1145/321119.321120|doi-access=free}} Procedure is described at p. 214 (§43), titled "Program for Binary Search".</ref> [[Hermann Bottenbruch]] published the first implementation to leave out this check in 1962.<ref name="Bottenbruch1962" />{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "History and bibliography"}}
# Set <math>L</math> to <math>0</math> and <math>R</math> to <math>n-1</math>.
# While <math>L \neq R</math>,
## Set <math>m</math> (the position of the middle element) to <math>L</math> plus the [[Floor and ceiling functions|ceiling]] of <math>\frac{R-L}{2}</math>, which is the least integer greater than or equal to <math>\frac{R-L}{2}</math>.
## If <math>A_m > T</math>, set <math>R</math> to <math>m-1</math>.
## Else, <math>A_m \leq T</math>; set <math>L</math> to <math>m</math>.
# Now <math>L=R</math>, the search is done. If <math>A_L=T</math>, return <math>L</math>. Otherwise, the search terminates as unsuccessful.
Where <code>ceil</code> is the ceiling function, the pseudocode for this version is:
 '''function''' binary_search_alternative(A, n, T) '''is'''
     L := 0
     R := n − 1
     '''while''' L != R '''do'''
         m := L + ceil((R − L) / 2)
         '''if''' A[m] > T '''then'''
             R := m − 1
         '''else''':
             L := m
     '''if''' A[L] = T '''then'''
         '''return''' L
     '''return''' unsuccessful
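A Python sketch of this variant, with the same illustrative conventions as above; the guard for an empty array is an assumption, since the pseudocode presumes at least one element:

<syntaxhighlight lang="python">
def binary_search_alternative(A, T):
    """Defer the equality check until a single element remains."""
    if not A:                      # the pseudocode assumes n >= 1
        return None
    L, R = 0, len(A) - 1
    while L != R:
        m = L + (R - L + 1) // 2   # ceiling of the midpoint
        if A[m] > T:
            R = m - 1
        else:
            L = m
    return L if A[L] == T else None
</syntaxhighlight>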
=== Duplicate elements ===
The procedure may return any index whose element is equal to the target value, even if there are duplicate elements in the array. For example, if the array to be searched was <math>[1,2,3,4,4,5,6,7]</math> and the target was <math>4</math>, then it would be correct for the algorithm to either return the 4th (index 3) or 5th (index 4) element. The regular procedure would return the 4th element (index 3) in this case. It does not always return the first duplicate (consider <math>[1,2,4,4,4,5,6,7]</math>, which still returns the 4th element). However, it is sometimes necessary to find the leftmost element or the rightmost element for a target value that is duplicated in the array. In the above example, the 4th element is the leftmost element of the value 4, while the 5th element is the rightmost element of the value 4. The alternative procedure above will always return the index of the rightmost element if such an element exists.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "History and bibliography"}}

==== Procedure for finding the leftmost element ====
To find the leftmost element, the following procedure can be used:{{Sfn|Kasahara|Morishita|2006|pp=8–9}}
# Set <math>L</math> to <math>0</math> and <math>R</math> to <math>n</math>.
# While <math>L < R</math>,
## Set <math>m</math> (the position of the middle element) to <math>L</math> plus the [[Floor and ceiling functions|floor]] of <math>\frac{R-L}{2}</math>, which is the greatest integer less than or equal to <math>\frac{R-L}{2}</math>.
## If <math>A_m < T</math>, set <math>L</math> to <math>m+1</math>.
## Else, <math>A_m \geq T</math>; set <math>R</math> to <math>m</math>.
# Return <math>L</math>.
If <math>L < n</math> and <math>A_L = T</math>, then <math>A_L</math> is the leftmost element that equals <math>T</math>. Even if <math>T</math> is not in the array, <math>L</math> is the [[#Approximate matches|rank]] of <math>T</math> in the array, or the number of elements in the array that are less than <math>T</math>. Where <code>floor</code> is the floor function, the pseudocode for this version is:
 '''function''' binary_search_leftmost(A, n, T):
     L := 0
     R := n
     '''while''' L < R:
         m := L + floor((R − L) / 2)
         '''if''' A[m] < T:
             L := m + 1
         '''else''':
             R := m
     '''return''' L

==== Procedure for finding the rightmost element ====
To find the rightmost element, the following procedure can be used:{{Sfn|Kasahara|Morishita|2006|pp=8–9}}
# Set <math>L</math> to <math>0</math> and <math>R</math> to <math>n</math>.
# While <math>L < R</math>,
## Set <math>m</math> (the position of the middle element) to <math>L</math> plus the [[Floor and ceiling functions|floor]] of <math>\frac{R-L}{2}</math>, which is the greatest integer less than or equal to <math>\frac{R-L}{2}</math>.
## If <math>A_m > T</math>, set <math>R</math> to <math>m</math>.
## Else, <math>A_m \leq T</math>; set <math>L</math> to <math>m+1</math>.
# Return <math>R - 1</math>.
If <math>R > 0</math> and <math>A_{R-1}=T</math>, then <math>A_{R-1}</math> is the rightmost element that equals <math>T</math>. Even if <math>T</math> is not in the array, <math>n-R</math> is the number of elements in the array that are greater than <math>T</math>. Where <code>floor</code> is the floor function, the pseudocode for this version is:
 '''function''' binary_search_rightmost(A, n, T):
     L := 0
     R := n
     '''while''' L < R:
         m := L + floor((R − L) / 2)
         '''if''' A[m] > T:
             R := m
         '''else''':
             L := m + 1
     '''return''' R - 1
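These procedures correspond to the <code>bisect_left</code> and <code>bisect_right</code> functions of Python's standard <code>bisect</code> module: the leftmost procedure returns the same index as <code>bisect_left</code>, and the rightmost procedure returns <code>bisect_right</code> minus one. A direct sketch, with the same illustrative conventions as the earlier examples:

<syntaxhighlight lang="python">
def binary_search_leftmost(A, T):
    """Return the rank of T: the number of elements of A less than T."""
    L, R = 0, len(A)
    while L < R:
        m = L + (R - L) // 2
        if A[m] < T:
            L = m + 1
        else:
            R = m
    return L

def binary_search_rightmost(A, T):
    """Return the index of the rightmost element equal to T, if one exists."""
    L, R = 0, len(A)
    while L < R:
        m = L + (R - L) // 2
        if A[m] > T:
            R = m
        else:
            L = m + 1
    return R - 1
</syntaxhighlight>

On <code>[1, 2, 4, 4, 4, 5, 6, 7]</code> with target <code>4</code>, the leftmost procedure returns index 2 and the rightmost procedure returns index 4.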
=== Approximate matches ===
[[File:Approximate-binary-search.svg|thumb|upright=1.5|Binary search can be adapted to compute approximate matches. In the example above, the rank, predecessor, successor, and nearest neighbor are shown for the target value <math>5</math>, which is not in the array.]]
The above procedure only performs ''exact'' matches, finding the position of a target value. However, it is trivial to extend binary search to perform approximate matches because binary search operates on sorted arrays. For example, binary search can be used to compute, for a given value, its rank (the number of smaller elements), predecessor (next-smallest element), successor (next-largest element), and [[Nearest neighbor search|nearest neighbor]]. [[Range query (data structures)|Range queries]] seeking the number of elements between two values can be performed with two rank queries.{{sfn|Sedgewick|Wayne|2011|loc=§3.1, subsection "Rank and selection"}}
* Rank queries can be performed with the [[#Procedure for finding the leftmost element|procedure for finding the leftmost element]]. The number of elements ''less than'' the target value is returned by the procedure.{{sfn|Sedgewick|Wayne|2011|loc=§3.1, subsection "Rank and selection"}}
* Predecessor queries can be performed with rank queries. If the rank of the target value is <math>r</math>, its predecessor is <math>A_{r-1}</math>.{{Sfn|Goldman|Goldman|2008|pp=461–463}}
* For successor queries, the [[#Procedure for finding the rightmost element|procedure for finding the rightmost element]] can be used. If the result of running the procedure for the target value is <math>r</math>, then the successor of the target value is <math>A_{r+1}</math>.{{Sfn|Goldman|Goldman|2008|pp=461–463}}
* The nearest neighbor of the target value is either its predecessor or successor, whichever is closer.
* Range queries are also straightforward.{{Sfn|Goldman|Goldman|2008|pp=461–463}} Once the ranks of the two values are known, the number of elements greater than or equal to the first value and less than the second is the difference of the two ranks. This count can be adjusted up or down by one according to whether the endpoints of the range should be considered to be part of the range and whether the array contains entries matching those endpoints.{{sfn|Sedgewick|Wayne|2011|loc=§3.1, subsection "Range queries"}}
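These queries can be written directly in terms of the leftmost and rightmost procedures, or their <code>bisect</code> equivalents. The array and target below are illustrative, and the boundary checks needed when the target falls before the first or after the last element are omitted for brevity:

<syntaxhighlight lang="python">
from bisect import bisect_left, bisect_right

A = [1, 2, 4, 4, 6, 8]  # example sorted array
T = 5                   # target value, absent from A

rank = bisect_left(A, T)             # number of elements less than T -> 4
predecessor = A[rank - 1]            # next-smallest element -> 4
successor = A[bisect_right(A, T)]    # next-largest element -> 6
# Nearest neighbor: whichever of predecessor and successor is closer to T.
nearest = predecessor if T - predecessor <= successor - T else successor
# Range query: the number of elements x with 2 <= x <= 6.
count = bisect_right(A, 6) - bisect_left(A, 2)  # -> 4
</syntaxhighlight>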
== Performance ==
[[File:Binary search example tree.svg|thumb|upright|A [[Tree (data structure)|tree]] representing binary search. The array being searched here is <math>[20, 30, 40, 50, 80, 90, 100]</math>, and the target value is <math>40</math>.]]
[[File:Binary search complexity.svg|thumb|upright=1.6|The worst case is reached when the search reaches the deepest level of the tree, while the best case is reached when the target value is the middle element.]]
In terms of the number of comparisons, the performance of binary search can be analyzed by viewing the run of the procedure on a binary tree. The root node of the tree is the middle element of the array. The middle element of the lower half is the left child node of the root, and the middle element of the upper half is the right child node of the root. The rest of the tree is built in a similar fashion. Starting from the root node, the left or right subtrees are traversed depending on whether the target value is less than or greater than the node under consideration.<ref name="FloresMadpis1971" />{{sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}}

In the worst case, binary search makes <math display="inline">\lfloor \log_2 (n) + 1 \rfloor</math> iterations of the comparison loop, where the <math display="inline">\lfloor\cdot \rfloor</math> notation denotes the [[floor function]] that yields the greatest integer less than or equal to the argument, and <math display="inline">\log_2</math> is the [[binary logarithm]]. This is because the worst case is reached when the search reaches the deepest level of the tree, and there are always <math display="inline">\lfloor \log_2 (n) + 1 \rfloor</math> levels in the tree for any binary search. The worst case may also be reached when the target element is not in the array. If <math display="inline">n</math> is one less than a power of two, then this is always the case. Otherwise, the search may perform <math display="inline">\lfloor \log_2 (n) + 1 \rfloor</math> iterations if the search reaches the deepest level of the tree. However, it may make <math display="inline">\lfloor \log_2 (n) \rfloor</math> iterations, which is one less than the worst case, if the search ends at the second-deepest level of the tree.{{sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), "Theorem B"}}

On average, assuming that each element is equally likely to be searched, binary search makes <math>\lfloor \log_2 (n) \rfloor + 1 - (2^{\lfloor \log_2 (n) \rfloor + 1} - \lfloor \log_2 (n) \rfloor - 2)/n</math> iterations when the target element is in the array. This is approximately equal to <math>\log_2(n) - 1</math> iterations. When the target element is not in the array, binary search makes <math>\lfloor \log_2 (n) \rfloor + 2 - 2^{\lfloor \log_2 (n) \rfloor + 1}/(n + 1)</math> iterations on average, assuming that the range between and outside elements is equally likely to be searched.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}} In the best case, where the target value is the middle element of the array, its position is returned after one iteration.{{Sfn|Chang|2003|p=169}}

In terms of iterations, no search algorithm that works only by comparing elements can exhibit better average and worst-case performance than binary search. The comparison tree representing binary search has the fewest levels possible as every level above the lowest level of the tree is filled completely.{{Efn|Any search algorithm based solely on comparisons can be represented using a binary comparison tree. An ''internal path'' is any path from the root to an existing node. Let <math>I</math> be the ''internal path length'', the sum of the lengths of all internal paths. If each element is equally likely to be searched, the average case is <math>1 + \frac{I}{n}</math> or simply one plus the average of all the internal path lengths of the tree. This is because internal paths represent the elements that the search algorithm compares to the target. The lengths of these internal paths represent the number of iterations ''after'' the root node. Adding the average of these lengths to the one iteration at the root yields the average case. Therefore, to minimize the average number of comparisons, the internal path length <math>I</math> must be minimized. It turns out that the tree for binary search minimizes the internal path length. {{Harvnb|Knuth|1998}} proved that the ''external path'' length (the path length over all nodes where both children are present for each already-existing node) is minimized when the external nodes (the nodes with no children) lie within two consecutive levels of the tree. This also applies to internal paths as internal path length <math>I</math> is linearly related to external path length <math>E</math>. For any tree of <math>n</math> nodes, <math>I = E - 2n</math>. When each subtree has a similar number of nodes, or equivalently the array is divided into halves in each iteration, the external nodes as well as their interior parent nodes lie within two levels. It follows that binary search minimizes the number of average comparisons as its comparison tree has the lowest possible internal path length.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}}}} Otherwise, the search algorithm can eliminate few elements in an iteration, increasing the number of iterations required in the average and worst case. This is the case for other search algorithms based on comparisons, as while they may work faster on some target values, the average performance over ''all'' elements is worse than binary search. By dividing the array in half, binary search ensures that the sizes of both subarrays are as similar as possible.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}}

=== Space complexity ===
Binary search requires three pointers to elements, which may be array indices or pointers to memory locations, regardless of the size of the array. Therefore, the space complexity of binary search is <math>O(1)</math> in the [[word RAM]] model of computation.
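The iteration counts above are easy to check empirically. The following sketch (illustrative, not from the cited sources) instruments the standard procedure and compares the observed worst and average cases for successful searches against the formulas; the choice of <code>n</code> is arbitrary:

<syntaxhighlight lang="python">
import math

def iterations(A, T):
    """Count comparison-loop iterations of the standard procedure."""
    L, R, count = 0, len(A) - 1, 0
    while L <= R:
        count += 1
        m = L + (R - L) // 2
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            break
    return count

n = 1000
A = list(range(n))
counts = [iterations(A, t) for t in A]
k = math.floor(math.log2(n))
assert max(counts) == k + 1                                  # worst case: 10
assert abs(sum(counts) / n
           - (k + 1 - (2**(k + 1) - k - 2) / n)) < 1e-9      # average case
</syntaxhighlight>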
=== Derivation of average case ===
The average number of iterations performed by binary search depends on the probability of each element being searched. The average case is different for successful searches and unsuccessful searches. For successful searches, it will be assumed that each element is equally likely to be searched. For unsuccessful searches, it will be assumed that the [[Interval (mathematics)|intervals]] between and outside elements are equally likely to be searched. The average case for successful searches is the number of iterations required to search every element exactly once, divided by <math>n</math>, the number of elements. The average case for unsuccessful searches is the number of iterations required to search an element within every interval exactly once, divided by the <math>n + 1</math> intervals.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}}

==== Successful searches ====
<!-- E stands for "expected" -->
In the binary tree representation, a successful search can be represented by a path from the root to the target node, called an ''internal path''. The length of a path is the number of edges (connections between nodes) that the path passes through. The number of iterations performed by a search, given that the corresponding path has length {{mvar|l}}, is <math>l + 1</math> counting the initial iteration. The ''internal path length'' is the sum of the lengths of all unique internal paths. Since there is only one path from the root to any single node, each internal path represents a search for a specific element. If there are {{mvar|n}} elements, where {{mvar|n}} is a positive integer, and the internal path length is <math>I(n)</math>, then the average number of iterations for a successful search is <math>T(n) = 1 + \frac{I(n)}{n}</math>, with the one iteration added to count the initial iteration.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}}

Since binary search is the optimal algorithm for searching with comparisons, this problem is reduced to calculating the minimum internal path length of all binary trees with {{mvar|n}} nodes, which is equal to:{{Sfn|Knuth|1997|loc=§2.3.4.5 ("Path length")}} <math display="block"> I(n) = \sum_{k=1}^n \left \lfloor \log_2(k) \right \rfloor </math> For example, in a 7-element array, the root requires one iteration, the two elements below the root require two iterations, and the four elements below require three iterations. In this case, the internal path length is:{{Sfn|Knuth|1997|loc=§2.3.4.5 ("Path length")}} <math display="block"> \sum_{k=1}^7 \left \lfloor \log_2(k) \right \rfloor = 0 + 2(1) + 4(2) = 2 + 8 = 10 </math> The average number of iterations would be <math>1 + \frac{10}{7} = 2 \frac{3}{7}</math> based on the equation for the average case.
The sum for <math>I(n)</math> can be simplified to:{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}} <math display="block"> I(n) = \sum_{k=1}^n \left \lfloor \log_2(k) \right \rfloor = (n + 1)\left \lfloor \log_2(n + 1) \right \rfloor - 2^{\left \lfloor \log_2(n+1) \right \rfloor + 1} + 2 </math> Substituting the equation for <math>I(n)</math> into the equation for <math>T(n)</math>:{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}} <math display="block"> T(n) = 1 + \frac{(n + 1)\left \lfloor \log_2(n + 1) \right \rfloor - 2^{\left \lfloor \log_2(n+1) \right \rfloor + 1} + 2}{n} = \lfloor \log_2 (n) \rfloor + 1 - (2^{\lfloor \log_2 (n) \rfloor + 1} - \lfloor \log_2 (n) \rfloor - 2)/n </math> For integer {{mvar|n}}, this is equivalent to the equation for the average case on a successful search specified above.

==== Unsuccessful searches ====
Unsuccessful searches can be represented by augmenting the tree with ''external nodes'', which forms an ''extended binary tree''. If an internal node, or a node present in the tree, has fewer than two child nodes, then additional child nodes, called external nodes, are added so that each internal node has two children. By doing so, an unsuccessful search can be represented as a path to an external node, whose parent is the single element that remains during the last iteration. An ''external path'' is a path from the root to an external node. The ''external path length'' is the sum of the lengths of all unique external paths. If there are <math>n</math> elements, where <math>n</math> is a positive integer, and the external path length is <math>E(n)</math>, then the average number of iterations for an unsuccessful search is <math>T'(n)=\frac{E(n)}{n+1}</math>; no extra iteration needs to be added here, because each iteration of the comparison loop corresponds to one edge of the external path. The external path length is divided by <math>n+1</math> instead of <math>n</math> because there are <math>n+1</math> external paths, representing the intervals between and outside the elements of the array.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}}

This problem can similarly be reduced to determining the minimum external path length of all binary trees with <math>n</math> nodes. For all binary trees, the external path length is equal to the internal path length plus <math>2n</math>.{{Sfn|Knuth|1997|loc=§2.3.4.5 ("Path length")}} Substituting the equation for <math>I(n)</math>:{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}} <math> E(n) = I(n) + 2n = \left[(n + 1)\left \lfloor \log_2(n + 1) \right \rfloor - 2^{\left \lfloor \log_2(n+1) \right \rfloor + 1} + 2\right] + 2n = (n + 1) (\lfloor \log_2 (n) \rfloor + 2) - 2^{\lfloor \log_2 (n) \rfloor + 1} </math> Substituting the equation for <math>E(n)</math> into the equation for <math>T'(n)</math>, the average case for unsuccessful searches can be determined:{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}} <math> T'(n) = \frac{(n + 1) (\lfloor \log_2 (n) \rfloor + 2) - 2^{\lfloor \log_2 (n) \rfloor + 1}}{(n+1)} = \lfloor \log_2 (n) \rfloor + 2 - 2^{\lfloor \log_2 (n) \rfloor + 1}/(n + 1) </math>

==== Performance of alternative procedure ====
Each iteration of the binary search procedure defined above makes one or two comparisons, checking if the middle element is equal to the target in each iteration.
Assuming that each element is equally likely to be searched, each iteration makes 1.5 comparisons on average. A variation of the algorithm checks whether the middle element is equal to the target at the end of the search. On average, this eliminates half a comparison from each iteration. This slightly cuts the time taken per iteration on most computers. However, it guarantees that the search takes the maximum number of iterations, on average adding one iteration to the search. Because the comparison loop is performed only <math display="inline">\lfloor \log_2 (n) + 1 \rfloor</math> times in the worst case, the slight increase in efficiency per iteration does not compensate for the extra iteration for all but very large <math display="inline">n</math>.{{Efn|{{Harvnb|Knuth|1998}} showed on his [[MIX (abstract machine)|MIX]] computer model, which Knuth designed as a representation of an ordinary computer, that the average running time of this variation for a successful search is <math display="inline">17.5 \log_2 n + 17</math> units of time compared to <math display="inline">18 \log_2 n - 16</math> units for regular binary search. The time complexity for this variation grows slightly more slowly, but at the cost of higher initial complexity.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Exercise 23"}}}}{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Exercise 23"}}<ref>{{cite journal|last1=Rolfe|first1=Timothy J.|s2cid=23752485|title=Analytic derivation of comparisons in binary search|journal=ACM SIGNUM Newsletter|date=1997|volume=32|issue=4|pages=15–19|doi=10.1145/289251.289255|doi-access=free}}</ref>

=== Running time and cache use ===
In analyzing the performance of binary search, another consideration is the time required to compare two elements. For integers and strings, the time required increases linearly as the encoding length (usually the number of [[bit]]s) of the elements increases. For example, comparing a pair of 64-bit unsigned integers would require comparing up to twice as many bits as comparing a pair of 32-bit unsigned integers. The worst case is achieved when the integers are equal. This can be significant when the encoding lengths of the elements are large, such as with large integer types or long strings, which makes comparing elements expensive. Furthermore, comparing [[Floating-point arithmetic|floating-point]] values (the most common digital representation of [[real number]]s) is often more expensive than comparing integers or short strings.

On most computer architectures, the [[Central processing unit|processor]] has a hardware [[Cache (computing)|cache]] separate from [[Random-access memory|RAM]]. Since they are located within the processor itself, caches are much faster to access but usually store much less data than RAM. Therefore, most processors store memory locations that have been accessed recently, along with memory locations close to them. For example, when an array element is accessed, the element itself may be stored along with the elements that are stored close to it in RAM, making it faster to sequentially access array elements that are close in index to each other ([[locality of reference]]). On a sorted array, binary search can jump to distant memory locations if the array is large, unlike algorithms (such as [[linear search]] and [[linear probing]] in [[hash tables]]) which access elements in sequence.
This adds slightly to the running time of binary search for large arrays on most systems.<ref>{{cite journal|last1=Khuong|first1=Paul-Virak|last2=Morin|first2=Pat|s2cid=23752485|author2-link= Pat Morin |title=Array Layouts for Comparison-Based Searching|journal=Journal of Experimental Algorithmics|year=2017|volume=22|at=Article 1.3|doi=10.1145/3053370|arxiv=1509.05053}}</ref> == Binary search versus other schemes == Sorted arrays with binary search are a very inefficient solution when insertion and deletion operations are interleaved with retrieval, taking <math display="inline">O(n)</math> time for each such operation. In addition, sorted arrays can complicate memory use especially when elements are often inserted into the array.{{Sfn|Knuth|1997|loc=§2.2.2 ("Sequential Allocation")}} There are other data structures that support much more efficient insertion and deletion. Binary search can be used to perform exact matching and [[Set (abstract data type)|set membership]] (determining whether a target value is in a collection of values). There are data structures that support faster exact matching and set membership. However, unlike many other searching schemes, binary search can be used for efficient approximate matching, usually performing such matches in <math display="inline">O(\log n)</math> time regardless of the type or structure of the values themselves.<ref name="pred">{{cite journal|last1=Beame|first1=Paul|last2=Fich|first2=Faith E.|author-link2=Faith Ellen|title=Optimal bounds for the predecessor problem and related problems|journal=[[Journal of Computer and System Sciences]]|date=2001|volume=65|issue=1|pages=38–72|doi=10.1006/jcss.2002.1822|doi-access=free}}</ref> In addition, there are some operations, like finding the smallest and largest element, that can be performed efficiently on a sorted array.{{sfn|Sedgewick|Wayne|2011|loc=§3.1, subsection "Rank and selection"}} === Linear search === [[Linear search]] is a simple search algorithm that checks every record until it finds the target value. Linear search can be done on a linked list, which allows for faster insertion and deletion than an array. Binary search is faster than linear search for sorted arrays except if the array is short, although the array needs to be sorted beforehand.{{Efn|{{Harvnb|Knuth|1998}} performed a formal time performance analysis of both of these search algorithms. On Knuth's [[MIX (abstract machine)|MIX]] computer, which Knuth designed as a representation of an ordinary computer, binary search takes on average <math display="inline">18 \log n - 16</math> units of time for a successful search, while linear search with a [[sentinel node]] at the end of the list takes <math display="inline">1.75n + 8.5 - \frac{n \text{ mod } 2}{4n}</math> units. Linear search has lower initial complexity because it requires minimal computation, but it quickly outgrows binary search in complexity. 
On the MIX computer, binary search only outperforms linear search with a sentinel if <math display="inline">n > 44</math>.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search"}}{{Sfn|Knuth|1998|loc=Answers to Exercises (§6.2.1) for "Exercise 5"}}}}{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table")}} All [[sorting algorithm]]s based on comparing elements, such as [[quicksort]] and [[merge sort]], require at least <math display="inline">O(n \log n)</math> comparisons in the worst case.{{Sfn|Knuth|1998|loc=§5.3.1 ("Minimum-Comparison sorting")}} Unlike linear search, binary search can be used for efficient approximate matching. There are operations such as finding the smallest and largest element that can be done efficiently on a sorted array but not on an unsorted array.{{Sfn|Sedgewick|Wayne|2011|loc=§3.2 ("Ordered symbol tables")}} === Trees === [[File:Binary search tree search 4.svg|thumb|right|[[Binary search tree]]s are searched using an algorithm similar to binary search.]] A [[binary search tree]] is a [[binary tree]] data structure that works based on the principle of binary search. The records of the tree are arranged in sorted order, and each record in the tree can be searched using an algorithm similar to binary search, taking on average logarithmic time. Insertion and deletion also require on average logarithmic time in binary search trees. This can be faster than the linear time insertion and deletion of sorted arrays, and binary trees retain the ability to perform all the operations possible on a sorted array, including range and approximate queries.<ref name="pred" />{{Sfn|Sedgewick|Wayne|2011|loc=§3.2 ("Binary Search Trees"), subsection "Order-based methods and deletion"}} However, binary search is usually more efficient for searching as binary search trees will most likely be imperfectly balanced, resulting in slightly worse performance than binary search. This even applies to [[self-balancing binary search tree|balanced binary search tree]]s, binary search trees that balance their own nodes, because they rarely produce the tree with the fewest possible levels. Except for balanced binary search trees, the tree may be severely imbalanced with few internal nodes with two children, resulting in the average and worst-case search time approaching <math display="inline">n</math> comparisons.{{Efn|Inserting the values in sorted order or in an alternating lowest-highest key pattern will result in a binary search tree that maximizes the average and worst-case search time.{{Sfn|Knuth|1998|loc=§6.2.2 ("Binary tree searching"), subsection "But what about the worst case?"}}}} Binary search trees take more space than sorted arrays.{{Sfn|Sedgewick|Wayne|2011|loc=§3.5 ("Applications"), "Which symbol-table implementation should I use?"}} Binary search trees lend themselves to fast searching in external memory stored in hard disks, as binary search trees can be efficiently structured in filesystems. The [[B-tree]] generalizes this method of tree organization. 
B-trees are frequently used to organize long-term storage such as [[database]]s and [[filesystem]]s.{{Sfn|Knuth|1998|loc=§5.4.9 ("Disks and Drums")}}{{Sfn|Knuth|1998|loc=§6.2.4 ("Multiway trees")}}

=== Hashing ===
For implementing [[associative arrays]], [[hash table]]s, a data structure that maps keys to [[record (computer science)|records]] using a [[hash function]], are generally faster than binary search on a sorted array of records.{{Sfn|Knuth|1998|loc=§6.4 ("Hashing")}} Most hash table implementations require only [[Amortized analysis|amortized]] constant time on average.{{efn|It is possible to search some hash table implementations in guaranteed constant time.{{Sfn|Knuth|1998|loc=§6.4 ("Hashing"), subsection "History"}}}}<ref>{{cite journal|last1=Dietzfelbinger|first1=Martin|last2=Karlin|first2=Anna|last3=Mehlhorn|first3=Kurt|last4=Meyer auf der Heide|first4=Friedhelm|last5=Rohnert|first5=Hans|last6=Tarjan|first6=Robert E.|author-link2=Anna Karlin|author-link3=Kurt Mehlhorn|author-link6=Robert Tarjan|title=Dynamic perfect hashing: upper and lower bounds|journal=[[SIAM Journal on Computing]]|date=August 1994|volume=23|issue=4|pages=738–761|doi=10.1137/S0097539791194094}}</ref> However, hashing is not useful for approximate matches, such as computing the next-smallest, next-largest, and nearest key, as the only information given on a failed search is that the target is not present in any record.<ref>{{cite web|last1=Morin|first1=Pat|title=Hash tables|url=http://cglab.ca/~morin/teaching/5408/notes/hashing.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://cglab.ca/~morin/teaching/5408/notes/hashing.pdf |archive-date=2022-10-09 |url-status=live|access-date=28 March 2016|page=1}}</ref> Binary search is ideal for such matches, performing them in logarithmic time. Some operations, like finding the smallest and largest element, can be done efficiently on sorted arrays but not on hash tables.<ref name="pred" />

=== Set membership algorithms ===
A related problem to search is [[Set (abstract data type)|set membership]]. Any algorithm that does lookup, like binary search, can also be used for set membership. There are other algorithms that are more specifically suited for set membership. A [[bit array]] is the simplest, useful when the range of keys is limited. It compactly stores a collection of [[bit]]s, with each bit representing a single key within the range of keys. Bit arrays are very fast, requiring only <math display="inline">O(1)</math> time.{{Sfn|Knuth|2011|loc=§7.1.3 ("Bitwise Tricks and Techniques")}} The Judy1 type of [[Judy array]] handles 64-bit keys efficiently.<ref name="judyarray">{{Citation|last1=Silverstein|first1=Alan|title=Judy IV shop manual|publisher=[[Hewlett-Packard]]|url=http://judy.sourceforge.net/doc/shop_interm.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://judy.sourceforge.net/doc/shop_interm.pdf |archive-date=2022-10-09 |url-status=live|pages=80–81}}</ref>

For approximate results, [[Bloom filter]]s, a probabilistic data structure based on hashing, store a [[set (mathematics)|set]] of keys by encoding the keys using a [[bit array]] and multiple hash functions. Bloom filters are much more space-efficient than bit arrays in most cases and not much slower: with <math display="inline">k</math> hash functions, membership queries require only <math display="inline">O(k)</math> time.
However, Bloom filters suffer from [[False positives and false negatives|false positives]].{{Efn|This is because simply setting all of the bits which the hash functions point to for a specific key can affect queries for other keys which have a common hash location for one or more of the functions.<ref name="cuckoofilter">{{cite conference|last1=Fan|first1=Bin|last2=Andersen|first2=Dave G.|last3=Kaminsky|first3=Michael|last4=Mitzenmacher|first4=Michael D.|title=Cuckoo filter: practically better than Bloom|conference=Proceedings of the 10th ACM International on Conference on Emerging Networking Experiments and Technologies|date=2014|pages=75–88|doi=10.1145/2674005.2674994|doi-access=free}}</ref>}}{{Efn|There exist improvements of the Bloom filter which improve on its complexity or support deletion; for example, the cuckoo filter exploits [[cuckoo hashing]] to gain these advantages.<ref name="cuckoofilter" />}}<ref>{{cite journal|last1=Bloom|first1=Burton H.|s2cid=7931252|title=Space/time trade-offs in hash coding with allowable errors|journal=[[Communications of the ACM]]|date=1970|volume=13|issue=7|pages=422–426|doi=10.1145/362686.362692|df=dmy-all|citeseerx=10.1.1.641.9096}}</ref>

=== Other data structures ===
There exist data structures that may improve on binary search in some cases for both searching and other operations available for sorted arrays. For example, searches, approximate matches, and the operations available to sorted arrays can be performed more efficiently than binary search on specialized data structures such as [[van Emde Boas tree]]s, [[fusion tree]]s, [[trie]]s, and [[bit array]]s. These specialized data structures are usually only faster because they take advantage of the properties of keys with a certain attribute (usually keys that are small integers), and thus will be time- or space-consuming for keys that lack that attribute.<ref name="pred" /> As long as the keys can be ordered, these operations can always be done efficiently on a sorted array, regardless of the keys. Some structures, such as Judy arrays, use a combination of approaches to mitigate this while retaining efficiency and the ability to perform approximate matching.<ref name="judyarray" />

== Variations ==
=== Uniform binary search ===
{{Main|Uniform binary search}}
[[File:Uniform binary search.svg|thumb|upright=1.0|[[Uniform binary search]] stores the difference between the current and the two next possible middle elements instead of specific bounds.]]
Uniform binary search stores, instead of the lower and upper bounds, the difference in the index of the middle element from the current iteration to the next iteration. A [[lookup table]] containing the differences is computed beforehand. For example, if the array to be searched is {{math|[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]}}, the middle element (<math>m</math>) would be {{math|6}}. In this case, the middle element of the left subarray ({{math|[1, 2, 3, 4, 5]}}) is {{math|3}} and the middle element of the right subarray ({{math|[7, 8, 9, 10, 11]}}) is {{math|9}}. Uniform binary search would store the value of {{math|3}} as both indices differ from {{math|6}} by this same amount.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "An important variation"}} To reduce the search space, the algorithm either adds or subtracts this change from the index of the middle element.
Uniform binary search may be faster on systems where it is inefficient to calculate the midpoint, such as on [[decimal computer]]s.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Algorithm U"}}

=== Exponential search{{anchor|One-sided search}} ===
{{Main|Exponential search}}
[[File:Exponential search.svg|thumb|upright=1.5|Visualization of [[exponential search]] finding the upper bound for the subsequent binary search]]
Exponential search extends binary search to unbounded lists. It starts by finding the first element whose index is a power of two and whose value is greater than the target value. Afterwards, it sets that index as the upper bound, and switches to binary search. A search takes <math display="inline">\lfloor \log_2 x + 1\rfloor</math> iterations before binary search is started and at most <math display="inline">\lfloor \log_2 x \rfloor</math> iterations of the binary search, where <math display="inline">x</math> is the position of the target value. Exponential search works on bounded lists, but becomes an improvement over binary search only if the target value lies near the beginning of the array.{{sfn|Moffat|Turpin|2002|p=33}}

=== Interpolation search ===
{{Main|Interpolation search}}
[[File:Interpolation search.svg|thumb|upright=1.5|Visualization of [[interpolation search]] using linear interpolation. In this case, no searching is needed because the estimate of the target's location within the array is correct. Other implementations may specify another function for estimating the target's location.]]
Instead of calculating the midpoint, interpolation search estimates the position of the target value, taking into account the lowest and highest elements in the array as well as the length of the array. It works on the basis that the midpoint is not the best guess in many cases. For example, if the target value is close to the highest element in the array, it is likely to be located near the end of the array.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Interpolation search"}} A common interpolation function is [[linear interpolation]]. If <math>A</math> is the array, <math>L, R</math> are the lower and upper bounds respectively, and <math>T</math> is the target, then the target is estimated to be about <math>(T - A_L) / (A_R - A_L)</math> of the way between <math>L</math> and <math>R</math>. When linear interpolation is used, and the distribution of the array elements is uniform or near uniform, interpolation search makes <math display="inline">O(\log \log n)</math> comparisons.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Interpolation search"}}{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Exercise 22"}}<ref>{{cite journal|last1=Perl|first1=Yehoshua|last2=Itai|first2=Alon|last3=Avni|first3=Haim|s2cid=11089655|title=Interpolation search—a log log ''n'' search|journal=[[Communications of the ACM]]|date=1978|volume=21|issue=7|pages=550–553|doi=10.1145/359545.359557|doi-access=free}}</ref> In practice, interpolation search is slower than binary search for small arrays, as interpolation search requires extra computation. Its time complexity grows more slowly than binary search, but this only compensates for the extra computation for large arrays.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Interpolation search"}}
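A minimal Python sketch of interpolation search with linear interpolation, assuming numeric keys; the loop condition keeps the interpolated estimate inside the bounds and handles targets outside the range of stored values:

<syntaxhighlight lang="python">
def interpolation_search(A, T):
    """Sketch: estimate the target's position by linear interpolation."""
    L, R = 0, len(A) - 1
    while L <= R and A[L] <= T <= A[R]:
        if A[L] == A[R]:           # remaining keys are equal; avoid dividing by zero
            return L if A[L] == T else None
        # Estimate how far T lies between the keys at the two bounds.
        m = L + (T - A[L]) * (R - L) // (A[R] - A[L])
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            return m
    return None
</syntaxhighlight>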
=== Fractional cascading ===
{{Main|Fractional cascading}}
[[File:Fractional cascading.svg|thumb|upright=1.8|In [[fractional cascading]], each array has pointers to every second element of another array, so only one binary search has to be performed to search all the arrays.]]
Fractional cascading is a technique that speeds up binary searches for the same element in multiple sorted arrays. Searching each array separately requires <math display="inline">O(k \log n)</math> time, where <math display="inline">k</math> is the number of arrays. Fractional cascading reduces this to <math display="inline">O(k + \log n)</math> by storing specific information in each array about each element and its position in the other arrays.<ref name="ChazelleLiu2001">{{cite conference|last1=Chazelle|first1=Bernard|last2=Liu|first2=Ding|author-link1=Bernard Chazelle|title=Lower bounds for intersection searching and fractional cascading in higher dimension|conference=33rd [[Symposium on Theory of Computing|ACM Symposium on Theory of Computing]]|pages=322–329|date=6 July 2001|doi=10.1145/380752.380818|url=https://dl.acm.org/citation.cfm?doid=380752.380818 |access-date=30 June 2018 |publisher=ACM|isbn=978-1-58113-349-3}}</ref><ref name="ChazelleLiu2004">{{cite journal|last1=Chazelle|first1=Bernard|last2=Liu|first2=Ding|author-link1=Bernard Chazelle|title=Lower bounds for intersection searching and fractional cascading in higher dimension|journal=Journal of Computer and System Sciences|date=1 March 2004 |volume=68|issue=2|pages=269–284 |language=en |issn=0022-0000|doi=10.1016/j.jcss.2003.07.003|citeseerx=10.1.1.298.7772|url=http://www.cs.princeton.edu/~chazelle/pubs/FClowerbounds.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://www.cs.princeton.edu/~chazelle/pubs/FClowerbounds.pdf |archive-date=2022-10-09 |url-status=live|access-date=30 June 2018}}</ref> Fractional cascading was originally developed to efficiently solve various [[computational geometry]] problems. Fractional cascading has been applied elsewhere, such as in [[data mining]] and [[Internet Protocol]] routing.<ref name="ChazelleLiu2001" />

=== Generalization to graphs ===
Binary search has been generalized to work on certain types of graphs, where the target value is stored in a vertex instead of an array element. Binary search trees are one such generalization—when a vertex (node) in the tree is queried, the algorithm either learns that the vertex is the target, or otherwise which subtree the target would be located in. However, this can be further generalized as follows: given an undirected, positively weighted graph and a target vertex, the algorithm learns upon querying a vertex that it is equal to the target, or it is given an incident edge that is on the shortest path from the queried vertex to the target. The standard binary search algorithm is simply the case where the graph is a path. Similarly, binary search trees are the case where the edges to the left or right subtrees are given when the queried vertex is unequal to the target.
For all undirected, positively weighted graphs, there is an algorithm that finds the target vertex in <math>O(\log n)</math> queries in the worst case.<ref>{{cite conference|last1=Emamjomeh-Zadeh|first1=Ehsan|last2=Kempe|first2=David|last3=Singhal|first3=Vikrant|title=Deterministic and probabilistic binary search in graphs|date=2016|pages=519–532|conference=48th [[Symposium on Theory of Computing|ACM Symposium on Theory of Computing]]|arxiv=1503.00805|doi=10.1145/2897518.2897656}}</ref> === Noisy binary search === [[File:Noisy binary search.svg|thumb|right|In noisy binary search, there is a certain probability that a comparison is incorrect.]] Noisy binary search algorithms solve the case where the algorithm cannot reliably compare elements of the array. For each pair of elements, there is a certain probability that the algorithm makes the wrong comparison. Noisy binary search can find the correct position of the target with a given probability that controls the reliability of the yielded position. Every noisy binary search procedure must make at least <math>(1 - \tau)\frac{\log_2 (n)}{H(p)} - \frac{10}{H(p)}</math> comparisons on average, where <math>H(p) = -p \log_2 (p) - (1 - p) \log_2 (1 - p)</math><!-- Attribution of LaTeX code: see history of https://en.wikipedia.org/wiki/Binary_entropy_function --> is the [[binary entropy function]] and <math>\tau</math> is the probability that the procedure yields the wrong position.<ref>{{cite conference |last1=Ben-Or |first1=Michael |last2=Hassidim |first2=Avinatan |title=The Bayesian learner is optimal for noisy binary search (and pretty good for quantum as well) |date=2008 |book-title=49th [[Annual IEEE Symposium on Foundations of Computer Science|Symposium on Foundations of Computer Science]] |pages=221–230 |doi=10.1109/FOCS.2008.58 |url=http://www2.lns.mit.edu/~avinatan/research/search-full.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://www2.lns.mit.edu/~avinatan/research/search-full.pdf |archive-date=2022-10-09 |url-status=live |isbn=978-0-7695-3436-7}}</ref><ref name="pelc1989">{{cite journal|last1=Pelc|first1=Andrzej|title=Searching with known error probability|journal=Theoretical Computer Science|date=1989|volume=63|issue=2|pages=185–202|doi=10.1016/0304-3975(89)90077-7|doi-access=free}}</ref><ref>{{cite conference|last1=Rivest|first1=Ronald L.|last2=Meyer|first2=Albert R.|last3=Kleitman|first3=Daniel J.|last4=Winklmann|first4=K.|author-link1=Ronald Rivest|author-link2=Albert R. 
Meyer|author-link3=Daniel Kleitman|title=Coping with errors in binary search procedures|conference=10th [[Symposium on Theory of Computing|ACM Symposium on Theory of Computing]]|doi=10.1145/800133.804351|doi-access=free}}</ref> The noisy binary search problem can be considered as a case of the [[Ulam's game|Rényi-Ulam game]],<ref>{{cite journal|last1=Pelc|first1=Andrzej|title=Searching games with errors—fifty years of coping with liars|journal=Theoretical Computer Science|date=2002|volume=270|issue=1–2|pages=71–109|doi=10.1016/S0304-3975(01)00303-6|doi-access=free}}</ref> a variant of [[Twenty Questions]] where the answers may be wrong.<ref>{{Cite journal | last1=Rényi | first1=Alfréd | title=On a problem in information theory | language=hu | mr=0143666 | year=1961 | journal=Magyar Tudományos Akadémia Matematikai Kutató Intézetének Közleményei| volume=6 | pages=505–516}}</ref> === Quantum binary search === Classical computers are bounded to the worst case of exactly <math display="inline">\lfloor \log_2 n + 1 \rfloor</math> iterations when performing binary search. [[Quantum algorithm]]s for binary search are still bounded to a proportion of <math display="inline">\log_2 n</math> queries (representing iterations of the classical procedure), but the constant factor is less than one, providing for a lower time complexity on [[quantum computing|quantum computers]]. Any ''exact'' quantum binary search procedure—that is, a procedure that always yields the correct result—requires at least <math display="inline">\frac{1}{\pi}(\ln n - 1) \approx 0.22 \log_2 n</math> queries in the worst case, where <math display="inline">\ln</math> is the [[natural logarithm]].<ref>{{cite journal|last1=Høyer|first1=Peter|last2=Neerbek|first2=Jan|last3=Shi|first3=Yaoyun|s2cid=13717616|title=Quantum complexities of ordered searching, sorting, and element distinctness|journal=[[Algorithmica]]|date=2002|volume=34|issue=4|pages=429–448|doi=10.1007/s00453-002-0976-3|arxiv=quant-ph/0102078}}</ref> There is an exact quantum binary search procedure that runs in <math display="inline">4 \log_{605} n \approx 0.433 \log_2 n</math> queries in the worst case.<ref name="quantumalgo">{{cite journal|last1=Childs|first1=Andrew M.|last2=Landahl|first2=Andrew J.|last3=Parrilo|first3=Pablo A.|s2cid=41539957|title=Quantum algorithms for the ordered search problem via semidefinite programming|journal=Physical Review A|date=2007|volume=75|issue=3|at=032335|doi=10.1103/PhysRevA.75.032335|arxiv=quant-ph/0608161|bibcode=2007PhRvA..75c2335C}}</ref> In comparison, [[Grover's algorithm]] is the optimal quantum algorithm for searching an unordered list of elements, and it requires <math>O(\sqrt{n})</math> queries.<ref>{{cite conference |last1=Grover |first1=Lov K. | author-link=Lov Grover | title=A fast quantum mechanical algorithm for database search | conference=28th [[Symposium on Theory of Computing|ACM Symposium on Theory of Computing]] |pages=212–219|date=1996| location=Philadelphia, PA | doi=10.1145/237814.237866| arxiv=quant-ph/9605043}}</ref> == History == The idea of sorting a list of items to allow for faster searching dates back to antiquity. The earliest known example was the Inakibit-Anu tablet from Babylon dating back to {{circa|200 BCE}}. The tablet contained about 500 [[sexagesimal]] numbers and their [[Multiplicative inverse|reciprocals]] sorted in [[lexicographical order]], which made searching for a specific entry easier. 
In addition, several lists of names that were sorted by their first letter were discovered on the [[Aegean Islands]]. ''[[Catholicon (1286)|Catholicon]]'', a Latin dictionary finished in 1286 CE, was the first work to describe rules for sorting words into alphabetical order, as opposed to just the first few letters.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "History and bibliography"}} In 1946, [[John Mauchly]] made the first mention of binary search as part of the [[Moore School Lectures]], a seminal and foundational college course in computing.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "History and bibliography"}} In 1957, [[W. Wesley Peterson|William Wesley Peterson]] published the first method for interpolation search.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "History and bibliography"}}<ref>{{cite journal |last1=Peterson |first1=William Wesley |author-link=W. Wesley Peterson|title=Addressing for random-access storage |journal=IBM Journal of Research and Development |date=1957 |volume=1 |issue=2 |pages=130–146 |doi=10.1147/rd.12.0130}}</ref> Every published binary search algorithm worked only for arrays whose length is one less than a power of two{{efn|That is, arrays of length 1, 3, 7, 15, 31 ...<ref>"2<sup>''n''</sup>−1". [[On-Line Encyclopedia of Integer Sequences|OEIS]] [http://oeis.org/A000225 A000225] {{Webarchive|url=https://web.archive.org/web/20160608084228/http://oeis.org/A000225 |date=8 June 2016 }}. Retrieved 7 May 2016.</ref>}} until 1960, when [[Derrick Henry Lehmer]] published a binary search algorithm that worked on all arrays.<ref>{{cite book | author=Lehmer, Derrick | title=Combinatorial Analysis | chapter=Teaching combinatorial tricks to a computer | series=Proceedings of Symposia in Applied Mathematics | year=1960 | volume=10 | pages=180–181 | doi=10.1090/psapm/010/0113289| isbn=9780821813102 | doi-access=free |mr=0113289}}</ref> In 1962, Hermann Bottenbruch presented an [[ALGOL 60]] implementation of binary search that placed the [[#Alternative procedure|comparison for equality at the end]], increasing the average number of iterations by one, but reducing to one the number of comparisons per iteration.<ref name="Bottenbruch1962" /> The [[#Uniform binary search|uniform binary search]] was developed by A. K. Chandra of [[Stanford University]] in 1971.{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "History and bibliography"}} In 1986, [[Bernard Chazelle]] and [[Leonidas J. Guibas]] introduced [[fractional cascading]] as a method to solve numerous search problems in [[computational geometry]].<ref name="ChazelleLiu2001" /><ref>{{cite journal | last1 = Chazelle | first1 = Bernard | author-link1 = Bernard Chazelle| last2 = Guibas | first2 = Leonidas J. | s2cid = 12745042 | author-link2 = Leonidas J. Guibas| title = Fractional cascading: I. A data structuring technique| journal = [[Algorithmica]]| volume = 1 | issue = 1–4 | year = 1986 | pages = 133–162 | doi = 10.1007/BF01840440| url = http://www.cs.princeton.edu/~chazelle/pubs/FractionalCascading1.pdf| citeseerx = 10.1.1.117.8349 }}</ref><ref>{{citation| last1 = Chazelle | first1 = Bernard | author-link1 = Bernard Chazelle| last2 = Guibas | first2 = Leonidas J. | s2cid = 11232235 | author-link2 = Leonidas J. Guibas| title = Fractional cascading: II. 
Applications| journal = [[Algorithmica]]| volume = 1 | issue = 1–4 | year = 1986 | pages = 163–191 | doi = 10.1007/BF01840441| url = http://www.cs.princeton.edu/~chazelle/pubs/FractionalCascading2.pdf}}</ref> == Implementation issues == {{Blockquote|Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky|[[Donald Knuth]]{{Sfn|Knuth|1998|loc=§6.2.1 ("Searching an ordered table"), subsection "Binary search"}} }} When [[Jon Bentley (computer scientist)|Jon Bentley]] assigned binary search as a problem in a course for professional programmers, he found that ninety percent failed to provide a correct solution after several hours of working on it, mainly because the incorrect implementations failed to run or returned a wrong answer in rare [[edge case]]s.{{Sfn|Bentley|2000|loc=§4.1 ("The Challenge of Binary Search")}} A study published in 1988 found that accurate code for it appeared in only five out of twenty textbooks.<ref name="textbook">{{cite journal | first = Richard E. | last = Pattis | author-link1=Richard E. Pattis| doi = 10.1145/52965.53012 | title = Textbook errors in binary searching | journal = SIGCSE Bulletin | volume = 20 | year = 1988 | pages = 190–194 }}</ref> Furthermore, Bentley's own implementation of binary search, published in his 1986 book ''Programming Pearls'', contained an [[Integer overflow|overflow error]] that remained undetected for over twenty years. The [[Java (programming language)|Java programming language]] library implementation of binary search had the same overflow bug for more than nine years.<ref>{{cite web | url = http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html | title = Extra, extra – read all about it: nearly all binary searches and mergesorts are broken | work = Google Research Blog | first = Joshua | last = Bloch | author-link1 = Joshua Bloch | date = 2 June 2006 | access-date = 21 April 2016 | archive-url = https://web.archive.org/web/20160401140544/http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html | archive-date = 1 April 2016 | url-status = live | df = dmy-all }}</ref> In a practical implementation, the variables used to represent the indices are often fixed-size integers, and this can result in an [[integer overflow|arithmetic overflow]] for very large arrays. If the midpoint of the span is calculated as <math>\frac{L+R}{2}</math>, then the value of <math>L+R</math> may exceed the range of integers of the data type used to store the midpoint, even if <math>L</math> and <math>R</math> are within the range. If <math>L</math> and <math>R</math> are nonnegative, this can be avoided by calculating the midpoint as <math>L+\frac{R-L}{2}</math>.<ref name="semisum">{{cite journal|last1=Ruggieri|first1=Salvatore|title=On computing the semi-sum of two integers|journal=[[Information Processing Letters]]|date=2003|volume=87|issue=2|pages=67–71|doi=10.1016/S0020-0190(03)00263-1|url=http://www.di.unipi.it/~ruggieri/Papers/semisum.pdf|citeseerx=10.1.1.13.5631|access-date=19 March 2016|archive-url=https://web.archive.org/web/20060703173514/http://www.di.unipi.it/~ruggieri/Papers/semisum.pdf|archive-date=3 July 2006|url-status=live|df=dmy-all}}</ref>
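The following minimal sketch of an iterative binary search in Java (the method and variable names are illustrative) combines the overflow-safe midpoint with correctly defined exit conditions, which are discussed below:

<syntaxhighlight lang="java">
class BinarySearchExample {
    /** Returns the index of target in the sorted int array a, or -1 if it is absent. */
    static int binarySearch(int[] a, int target) {
        int left = 0;
        int right = a.length - 1;
        while (left <= right) {                  // once left exceeds right, the search has failed
            int mid = left + (right - left) / 2; // overflow-safe; (left + right) / 2 may overflow
            if (a[mid] < target) {
                left = mid + 1;                  // the target, if present, lies in the right half
            } else if (a[mid] > target) {
                right = mid - 1;                 // the target, if present, lies in the left half
            } else {
                return mid;                      // target found: exit the loop immediately
            }
        }
        return -1;                               // report that the search has failed
    }
}
</syntaxhighlight>

In Java specifically, the midpoint can also be computed as <code>(left + right) >>> 1</code>: the unsigned right shift yields the correct average of two nonnegative <code>int</code> indices even when their sum overflows.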
An infinite loop may occur if the exit conditions for the loop are not defined correctly. Once <math>L</math> exceeds <math>R</math>, the search has failed, and the procedure must report that failure. In addition, the loop must be exited when the target element is found, or, in the case of an implementation where this check is moved to the end, a check for whether the search was successful or failed must be in place at the end. Bentley found that most of the programmers who incorrectly implemented binary search made an error in defining the exit conditions.<ref name="Bottenbruch1962" />{{Sfn|Bentley|2000|loc=§4.4 ("Principles")}} == Library support == Many languages' [[standard library|standard libraries]] include binary search routines: * [[C (programming language)|C]] provides the [[subroutine|function]] <code>bsearch()</code> in its [[C standard library|standard library]], which is typically implemented via binary search, although the official standard does not require it to be.<ref>{{cite web|title=bsearch – binary search a sorted table|url=http://pubs.opengroup.org/onlinepubs/9699919799/functions/bsearch.html|website=The Open Group Base Specifications|edition=7th|publisher=[[The Open Group]]|access-date=28 March 2016|date=2013|archive-url=https://web.archive.org/web/20160321211605/http://pubs.opengroup.org/onlinepubs/9699919799/functions/bsearch.html|archive-date=21 March 2016|url-status=live|df=dmy-all}}</ref> * [[C++]]'s [[C++ Standard Library|standard library]] provides the functions <code>binary_search()</code>, <code>lower_bound()</code>, <code>upper_bound()</code> and <code>equal_range()</code>.{{Sfn|Stroustrup|2013|p=945}} * [[D (programming language)|D]]'s standard library Phobos, in its <code>std.range</code> module, provides a type <code>SortedRange</code> (returned by the <code>sort()</code> and <code>assumeSorted()</code> functions) with methods <code>contains()</code>, <code>equalRange()</code>, <code>lowerBound()</code> and <code>trisect()</code>, which use binary search techniques by default for ranges that offer random access.<ref>{{cite web |title=std.range - D Programming Language |url=https://dlang.org/phobos/std_range.html#SortedRange |website=dlang.org |access-date=2020-04-29}}</ref> * [[COBOL]] provides the <code>SEARCH ALL</code> verb for performing binary searches on COBOL ordered tables.<ref>{{Citation|title=COBOL ANSI-85 programming reference manual|author=Unisys|date=2012|volume=1|pages=598–601|author-link=Unisys}}</ref> * [[Go (programming language)|Go]]'s <code>sort</code> standard library package contains the functions <code>Search</code>, <code>SearchInts</code>, <code>SearchFloat64s</code>, and <code>SearchStrings</code>, which implement general binary search, as well as specific implementations for searching slices of integers, floating-point numbers, and strings, respectively.<ref>{{cite web | work = The Go Programming Language | url = http://golang.org/pkg/sort/ | title = Package sort | access-date = 28 April 2016 | archive-url = https://web.archive.org/web/20160425055919/https://golang.org/pkg/sort/ | archive-date = 25 April 2016 | url-status = live | df = dmy-all }}</ref> * [[Java (programming language)|Java]] offers a set of [[function overloading|overloaded]] <code>binarySearch()</code> static methods in the classes {{Javadoc:SE|java/util|Arrays}} and {{Javadoc:SE|java/util|Collections}} in the standard <code>java.util</code> package for performing binary searches on Java arrays and on <code>List</code>s, respectively (see the usage example after this list).<ref>{{cite web|title=java.util.Arrays|url=https://docs.oracle.com/javase/8/docs/api/java/util/Arrays.html|website=Java Platform Standard Edition 8 Documentation|publisher=[[Oracle Corporation]]|access-date=1 May
2016|archive-url=https://web.archive.org/web/20160429064301/http://docs.oracle.com/javase/8/docs/api/java/util/Arrays.html|archive-date=29 April 2016|url-status=live|df=dmy-all}}</ref><ref>{{cite web|title=java.util.Collections|url=https://docs.oracle.com/javase/8/docs/api/java/util/Collections.html|website=Java Platform Standard Edition 8 Documentation|publisher=[[Oracle Corporation]]|access-date=1 May 2016|archive-url=https://web.archive.org/web/20160423092424/https://docs.oracle.com/javase/8/docs/api/java/util/Collections.html|archive-date=23 April 2016|url-status=live|df=dmy-all}}</ref> * [[Microsoft]]'s [[.NET Framework]] 2.0 offers static [[generic programming|generic]] versions of the binary search algorithm in its collection base classes. An example would be <code>System.Array</code>'s method <code>BinarySearch<T>(T[] array, T value)</code>.<ref>{{cite web|title=List<T>.BinarySearch method (T)|url=https://msdn.microsoft.com/en-us/library/w4e7fxsh%28v=vs.110%29.aspx|website=Microsoft Developer Network|access-date=10 April 2016|archive-url=https://web.archive.org/web/20160507141014/https://msdn.microsoft.com/en-us/library/w4e7fxsh%28v=vs.110%29.aspx|archive-date=7 May 2016|url-status=live|df=dmy-all}}</ref> * For [[Objective-C]], the [[Cocoa (API)|Cocoa]] framework provides the [https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/Classes/NSArray_Class/NSArray.html#//apple_ref/occ/instm/NSArray/indexOfObject:inSortedRange:options:usingComparator: {{code|NSArray -indexOfObject:inSortedRange:options:usingComparator:|objc}}] method in Mac OS X 10.6+.<ref>{{cite web|title=NSArray|url=https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/Classes/NSArray_Class/index.html#//apple_ref/occ/instm/NSArray/indexOfObject:inSortedRange:options:usingComparator:|website=Mac Developer Library|publisher=[[Apple Inc.]]|access-date=1 May 2016|archive-url=https://web.archive.org/web/20160417163718/https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/Classes/NSArray_Class/index.html#//apple_ref/occ/instm/NSArray/indexOfObject:inSortedRange:options:usingComparator:|archive-date=17 April 2016|url-status=live|df=dmy-all}}</ref> Apple's [[Core Foundation]] C framework also contains a <code>[https://developer.apple.com/library/mac/documentation/CoreFoundation/Reference/CFArrayRef/Reference/reference.html#//apple_ref/c/func/CFArrayBSearchValues CFArrayBSearchValues()]</code> function.<ref>{{cite web|title=CFArray|url=https://developer.apple.com/library/mac/documentation/CoreFoundation/Reference/CFArrayRef/index.html#//apple_ref/c/func/CFArrayBSearchValues|website=Mac Developer Library|publisher=[[Apple Inc.]]|access-date=1 May 2016|archive-url=https://web.archive.org/web/20160420193823/https://developer.apple.com/library/mac/documentation/CoreFoundation/Reference/CFArrayRef/index.html#//apple_ref/c/func/CFArrayBSearchValues|archive-date=20 April 2016|url-status=live|df=dmy-all}}</ref> * [[Python (programming language)|Python]] provides the <code>bisect</code> module that keeps a list in sorted order without having to sort the list after each insertion.<ref>{{cite web|title=8.6. 
bisect — Array bisection algorithm|url=https://docs.python.org/3.6/library/bisect.html#module-bisect|website=The Python Standard Library|publisher=Python Software Foundation|access-date=26 March 2018|archive-url=https://web.archive.org/web/20180325105932/https://docs.python.org/3.6/library/bisect.html#module-bisect|archive-date=25 March 2018|url-status=live|df=dmy-all}}</ref> * [[Ruby (programming language)|Ruby]]'s Array class includes a <code>bsearch</code> method with built-in approximate matching.{{Sfn|Fitzgerald|2015|p=152}} * [[Rust (programming language)|Rust]]'s slice primitive provides <code>binary_search()</code>, <code>binary_search_by()</code>, <code>binary_search_by_key()</code>, and <code>partition_point()</code>.<ref>{{cite web|title=Primitive Type <code>slice</code>|url=https://doc.rust-lang.org/std/primitive.slice.html|website=The Rust Standard Library|publisher=[[The Rust Foundation]]|access-date=25 May 2024|date=2024}}</ref>
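For example, Java's <code>Arrays.binarySearch()</code> returns the index of the key if it is found, and <code>(-(insertion point) - 1)</code> otherwise; a minimal usage sketch (the array contents are illustrative):

<syntaxhighlight lang="java">
import java.util.Arrays;

class LibrarySearchExample {
    public static void main(String[] args) {
        int[] sorted = {1, 3, 4, 6, 8, 9, 11};              // the input must already be sorted
        System.out.println(Arrays.binarySearch(sorted, 6)); // prints 3, the index of the key 6
        System.out.println(Arrays.binarySearch(sorted, 5)); // prints -4: the key is absent and
                                                            // would be inserted at index 3
    }
}
</syntaxhighlight>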
== See also == * {{annotated link|Bisection method}} – the same idea used to solve equations in the real numbers * {{annotated link|Multiplicative binary search}} == Notes and references == <!-- Referencing conventions for this article: use CS1 templates, add book and monograph sources to the Sources section and then cite them using Harvard citations, directly cite other sources. Source titles are in sentence case, but the first letter of proper nouns (e.g., "Bayesian") is capitalized. --> {{Academic peer reviewed|Q=Q81434400|doi-access=free}} === Notes === {{Notelist}} === Citations === {{Reflist}} === Sources === {{columns-list|colwidth=30em|style=font-size:90%;| <!-- * {{cite book|last1=Alexandrescu|first1=Andrei|title=The D Programming Language|date=2010|publisher=Addison-Wesley Professional|location=Upper Saddle River, New Jersey|isbn=0-321-63536-1}}--> * {{cite book | last = Bentley | first = Jon | author-link = Jon Bentley (computer scientist) | title = Programming pearls | edition = 2nd | publisher = [[Addison-Wesley]] | year = 2000 | isbn = 978-0-201-65788-3}} * {{cite book|last1=Butterfield|first1=Andrew|last2=Ngondi|first2=Gerard E.|title=A dictionary of computer science|date=2016|publisher=[[Oxford University Press]]|location=Oxford, UK|isbn=978-0-19-968897-5|edition=7th}} * {{cite book | title=Data structures and algorithms | publisher=[[World Scientific]] | last1=Chang|first1=Shi-Kuo| year=2003 | location=Singapore | isbn=978-981-238-348-8 | volume=13 | series=Software Engineering and Knowledge Engineering}} * {{cite book | last1 = Cormen | first1 = Thomas H. | author1-link=Thomas H. Cormen | last2= Leiserson | first2= Charles E. | author2-link=Charles E. Leiserson | last3= Rivest | first3= Ronald L. | author3-link=Ron Rivest | last4= Stein | first4=Clifford | author4-link=Clifford Stein |title=Introduction to algorithms | year = 2009 | edition = 3rd | publisher = MIT Press and McGraw-Hill | isbn = 978-0-262-03384-8 | title-link = Introduction to Algorithms }} * {{cite book|last1=Fitzgerald|first1=Michael|title=Ruby pocket reference|date=2015|publisher=[[O'Reilly Media]]|location=Sebastopol, California|isbn=978-1-4919-2601-7}} * {{cite book|last1=Goldman|first1=Sally A.|author1-link= Sally Goldman |last2=Goldman|first2=Kenneth J.|title=A practical guide to data structures and algorithms using Java|date=2008|publisher=[[CRC Press]]|location=Boca Raton, Florida|isbn=978-1-58488-455-2}} * {{cite book |last1=Kasahara |first1=Masahiro |last2=Morishita |first2=Shinichi |title=Large-scale genome sequence processing |date=2006 |publisher=Imperial College Press |location=London, UK |isbn=978-1-86094-635-6 }} * {{cite book|last=Knuth|first=Donald|year=1997|author-link=Donald Knuth|title=Fundamental algorithms|series=[[The Art of Computer Programming]]|volume=1|edition=3rd|location=Reading, MA|publisher=Addison-Wesley Professional|isbn=978-0-201-89683-1}} * {{cite book|last=Knuth|first=Donald|year=1998|author-link=Donald Knuth|title=Sorting and searching|series=[[The Art of Computer Programming]]|volume=3|edition=2nd|location=Reading, MA|publisher=Addison-Wesley Professional|isbn=978-0-201-89685-5}} * {{cite book|last=Knuth|first=Donald|year=2011|author-link=Donald Knuth|title=Combinatorial algorithms|series=[[The Art of Computer Programming]]|volume=4A|edition=1st|location=Reading, MA|publisher=Addison-Wesley Professional|isbn=978-0-201-03804-0}} <!-- * {{cite book|last1=Leiss|first1=Ernst|title=A Programmer's Companion to Algorithm Analysis|date=2007|publisher=CRC Press|location=Boca Raton, Florida|isbn=1-58488-673-0}} --> * {{cite book|last1=Moffat|first1=Alistair|last2=Turpin|first2=Andrew|title=Compression and coding algorithms|date=2002|publisher=Kluwer Academic Publishers|location=Hamburg, Germany|isbn=978-0-7923-7668-2|doi=10.1007/978-1-4615-0935-6}} * {{cite book|last1=Sedgewick|first1=Robert|last2=Wayne|first2=Kevin|author-link1=Robert Sedgewick (computer scientist)|title=Algorithms|date=2011|publisher=Addison-Wesley Professional|location=Upper Saddle River, New Jersey|isbn=978-0-321-57351-3|edition=4th|url=http://algs4.cs.princeton.edu/home/}} Condensed web version {{open access}}; book version {{closed access}}. * {{cite book|last1=Stroustrup|first1=Bjarne|author-link=Bjarne Stroustrup|title=The C++ programming language|edition=4th|date=2013|publisher=Addison-Wesley Professional|location=Upper Saddle River, New Jersey|isbn=978-0-321-56384-2}} }} ==External links== {{Wikibooks|Algorithm implementation|Search/Binary search|Binary search}} * [https://web.archive.org/web/20161104005739/https://xlinux.nist.gov/dads/HTML/binarySearch.html NIST Dictionary of Algorithms and Data Structures: binary search] * [https://sites.google.com/site/binarysearchcube/binary-search Comparisons and benchmarks of a variety of binary search implementations in C] {{Webarchive|url=https://web.archive.org/web/20190925012527/https://sites.google.com/site/binarysearchcube/binary-search |date=25 September 2019 }} {{Data structures and algorithms}} {{DEFAULTSORT:Binary Search Algorithm}} [[Category:Articles with example pseudocode]] [[Category:Search algorithms]] [[Category:2 (number)]]