Binary logarithm
==Applications== {{See also|Doubling time}} ===Information theory=== The number of digits ([[bit]]s) in the [[binary representation]] of a positive integer {{mvar|n}} is the [[Floor and ceiling functions|integral part]] of {{math|1 + log<sub>2</sub>{{hsp}}''n''}}, i.e.<ref name="sw11"/> :<math> \lfloor \log_2 n\rfloor + 1. </math> In information theory, the definition of the amount of [[self-information]] and [[information entropy]] is often expressed with the binary logarithm, corresponding to making the bit the fundamental [[Units of information|unit of information]]. With these units, the [[Shannon–Hartley theorem]] expresses the information capacity of a channel as the binary logarithm of its signal-to-noise ratio, plus one. However, the [[natural logarithm]] and the [[Nat (unit)|nat]] are also used in alternative notations for these definitions.<ref>{{citation|title=Information Theory|first=Jan C. A.|last=Van der Lubbe|publisher=Cambridge University Press|year=1997|isbn=978-0-521-46760-5|page=3|url=https://books.google.com/books?id=tBuI_6MQTcwC&pg=PA3}}.</ref> ===Combinatorics=== [[File:SixteenPlayerSingleEliminationTournamentBracket.svg|thumb|upright=1.2|A 16-player [[single elimination]] [[Bracket (tournament)|tournament bracket]] with the structure of a [[complete binary tree]]. 
The height of the tree (number of rounds of the tournament) is the binary logarithm of the number of players, rounded up to an integer.]] Although the natural logarithm is more important than the binary logarithm in many areas of pure mathematics such as [[number theory]] and [[mathematical analysis]],<ref>{{citation|title=Taming the Infinite|first=Ian|last=Stewart|author-link=Ian Stewart (mathematician)|publisher=Quercus|year=2015|isbn=9781623654733|quote=in advanced mathematics and science the only logarithm of importance is the natural logarithm|page=120|url=https://books.google.com/books?id=u4HPBAAAQBAJ&pg=PT120}}.</ref> the binary logarithm has several applications in [[combinatorics]]: *Every [[binary tree]] with {{mvar|n}} leaves has height at least {{math|log<sub>2</sub>{{hsp}}''n''}}, with equality when {{mvar|n}} is a [[power of two]] and the tree is a [[complete binary tree]].<ref>{{citation | last = Leiss | first = Ernst L. | isbn = 978-1-4200-1170-8 | page = 28 | publisher = CRC Press | title = A Programmer's Companion to Algorithm Analysis | url = https://books.google.com/books?id=E6BNGFQ6m_IC&pg=RA2-PA28 | year = 2006}}.</ref> Relatedly, the [[Strahler number]] of a river system with {{mvar|n}} tributary streams is at most {{math|log<sub>2</sub>{{hsp}}''n'' + 1}}.<ref>{{citation | last1 = Devroye | first1 = L. | author1-link = Luc Devroye | last2 = Kruszewski | first2 = P. 
| issue = 5 | journal = RAIRO Informatique Théorique et Applications | mr = 1435732 | pages = 443–456 | title = On the Horton–Strahler number for random tries | url = https://eudml.org/doc/92635 | volume = 30 | year = 1996| doi = 10.1051/ita/1996300504431 | doi-access = free }}.</ref> *Every [[family of sets]] with {{mvar|n}} different sets has at least {{math|log<sub>2</sub>{{hsp}}''n''}} elements in its union, with equality when the family is a [[power set]].<ref>Equivalently, a family with {{mvar|k}} distinct elements has at most {{math|2<sup>''k''</sup>}} distinct sets, with equality when it is a power set.</ref> *Every [[partial cube]] with {{mvar|n}} vertices has isometric dimension at least {{math|log<sub>2</sub>{{hsp}}''n''}}, and has at most {{math|{{sfrac|1|2}} ''n'' log<sub>2</sub>{{hsp}}''n''}} edges, with equality when the partial cube is a [[hypercube graph]].<ref>{{citation | last = Eppstein | first = David | author-link = David Eppstein | arxiv = cs.DS/0402028 | doi = 10.1016/j.ejc.2004.05.001 | issue = 5 | journal = European Journal of Combinatorics | mr = 2127682 | pages = 585–592 | title = The lattice dimension of a graph | volume = 26 | year = 2005| s2cid = 7482443 }}.</ref> *According to [[Ramsey's theorem]], every {{mvar|n}}-vertex [[undirected graph]] has either a [[Clique (graph theory)|clique]] or an [[Independent set (graph theory)|independent set]] of size logarithmic in {{mvar|n}}. The precise size that can be guaranteed is not known, but the best bounds known on its size involve binary logarithms. In particular, all graphs have a clique or independent set of size at least {{math|{{sfrac|1|2}} log<sub>2</sub>{{hsp}}''n'' (1 − ''o''(1))}} and almost all graphs do not have a clique or independent set of size larger than {{math|2 log<sub>2</sub>{{hsp}}''n'' (1 + ''o''(1))}}.<ref>{{citation | last1 = Graham | first1 = Ronald L. | author1-link = Ronald Graham | last2 = Rothschild | first2 = Bruce L. 
| author2-link = Bruce Lee Rothschild | last3 = Spencer | first3 = Joel H. | author3-link = Joel Spencer | page = 78 | publisher = Wiley-Interscience | title = Ramsey Theory | year = 1980}}.</ref> *From a mathematical analysis of the [[Gilbert–Shannon–Reeds model]] of random shuffles, one can show that the number of times one needs to shuffle an {{mvar|n}}-card deck of cards, using [[riffle shuffle]]s, to get a distribution on permutations that is close to uniformly random, is approximately {{math|{{sfrac|3|2}} log<sub>2</sub>{{hsp}}''n''}}. This calculation forms the basis for a recommendation that 52-card decks should be shuffled seven times.<ref>{{citation | last1 = Bayer | first1 = Dave | author1-link = Dave Bayer | last2 = Diaconis | first2 = Persi | author2-link = Persi Diaconis | issue = 2 | journal = The Annals of Applied Probability | jstor = 2959752 | mr = 1161056 | pages = 294–313 | title = Trailing the dovetail shuffle to its lair | volume = 2 | year = 1992 | doi=10.1214/aoap/1177005705| doi-access = free }}.</ref> ===Computational complexity=== [[File:Binary search into array - example.svg|thumb|upright=1.2|[[Binary search]] in a sorted array, an algorithm whose time complexity involves binary logarithms]] The binary logarithm also frequently appears in the [[analysis of algorithms]],<ref name="gt02"/> not only because of the frequent use of binary number arithmetic in algorithms, but also because binary logarithms occur in the analysis of algorithms based on two-way branching.<ref name="knuth"/> If a problem initially has {{mvar|n}} choices for its solution, and each iteration of the algorithm reduces the number of choices by a factor of two, then the number of iterations needed to select a single choice is again the integral part of {{math|log<sub>2</sub>{{hsp}}''n''}}. This idea is used in the analysis of several [[algorithm]]s and [[data structure]]s. 
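As a concrete check of that integer count (a minimal Python aside, not part of the sourced text): for every positive integer, Python's built-in <code>int.bit_length()</code> returns exactly {{math|⌊log<sub>2</sub>{{hsp}}''n''⌋ + 1}}, the number of binary digits of {{mvar|n}}.

```python
import math

# For every positive integer n, the number of binary digits of n
# is floor(log2 n) + 1; Python exposes this as int.bit_length().
for n in (1, 2, 3, 7, 8, 1023, 1024, 10**9):
    assert n.bit_length() == math.floor(math.log2(n)) + 1
```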
For example, in [[binary search]], the size of the problem to be solved is halved with each iteration, and therefore roughly {{math|log<sub>2</sub>{{hsp}}''n''}} iterations are needed to obtain a solution for a problem of size {{mvar|n}}.<ref>{{citation |title=Algorithms and Data Structures: The Basic Toolbox|first1=Kurt|last1=Mehlhorn|author1-link=Kurt Mehlhorn|first2=Peter|last2=Sanders|author2-link=Peter Sanders (computer scientist)|publisher=Springer|year=2008|isbn=978-3-540-77977-3|pages=34–36|contribution=2.5 An example – binary search|url=http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/Introduction.pdf}}.</ref> Similarly, a perfectly balanced [[binary search tree]] containing {{mvar|n}} elements has height {{math|log<sub>2</sub>(''n'' + 1) − 1}}.<ref>{{citation|title=Applied Combinatorics|first1=Fred|last1=Roberts|author1-link=Fred S. Roberts|first2=Barry|last2=Tesman|edition=2nd|publisher=CRC Press|year=2009|isbn=978-1-4200-9983-6|page=206|url=https://books.google.com/books?id=szBLJUhmYOQC&pg=PA206}}.</ref> The running time of an algorithm is usually expressed in [[big O notation]], which is used to simplify expressions by omitting their constant factors and lower-order terms. Because logarithms in different bases differ from each other only by a constant factor, algorithms that run in {{math|''O''(log<sub>2</sub>{{hsp}}''n'')}} time can also be said to run in, say, {{math|''O''(log<sub>13</sub> ''n'')}} time.
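To make the halving concrete, here is a minimal binary search instrumented to count its iterations (a Python sketch of the standard algorithm, not code from the cited sources; it assumes a sorted list). The iteration count never exceeds {{math|⌊log<sub>2</sub>{{hsp}}''n''⌋ + 1}}.

```python
import math

def binary_search(items, target):
    """Search a sorted list; return (index or None, iterations used)."""
    lo, hi = 0, len(items) - 1
    iterations = 0
    while lo <= hi:
        iterations += 1
        mid = (lo + hi) // 2  # halve the remaining range each pass
        if items[mid] == target:
            return mid, iterations
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, iterations

data = list(range(1_000_000))
index, steps = binary_search(data, 999_999)
# With n = 1,000,000, floor(log2 n) + 1 = 20, so steps <= 20.
```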
The base of the logarithm in expressions such as {{math|''O''(log ''n'')}} or {{math|''O''(''n'' log ''n'')}} is therefore not important and can be omitted.<ref name="clrs"/><ref>{{citation|title=Introduction to the Theory of Computation|first=Michael|last=Sipser|author-link=Michael Sipser|edition=3rd|publisher=Cengage Learning|year=2012|isbn=9781133187790|contribution=Example 7.4|url=https://books.google.com/books?id=P3f6CAAAQBAJ&pg=PA277|pages=277–278}}.</ref> However, for logarithms that appear in the exponent of a time bound, the base of the logarithm cannot be omitted. For example, {{math|''O''(2<sup>log<sub>2</sub>{{hsp}}''n''</sup>)}} is not the same as {{math|''O''(2<sup>ln ''n''</sup>)}} because the former is equal to {{math|''O''(''n'')}} and the latter to {{math|''O''(''n''<sup>0.6931...</sup>)}}. Algorithms with running time {{math|''O''(''n'' log ''n'')}} are sometimes called [[linearithmic]].<ref>{{harvtxt|Sedgewick|Wayne|2011}}, [https://books.google.com/books?id=MTpsAQAAQBAJ&pg=PA186 p. 
186].</ref> Some examples of algorithms with running time {{math|''O''(log ''n'')}} or {{math|''O''(''n'' log ''n'')}} are: *[[quicksort|Average time quicksort]] and other [[comparison sort]] algorithms{{sfnmp|1a1 = Cormen|1a2 = Leiserson|1a3=Rivest|1a4=Stein|1y=2001|1p=156|2a1=Goodrich|2a2=Tamassia|2y=2002|2p=238}} *Searching in balanced [[binary search tree]]s{{sfnmp|1a1 = Cormen|1a2 = Leiserson|1a3=Rivest|1a4=Stein|1y=2001|1p=276|2a1=Goodrich|2a2=Tamassia|2y=2002|2p=159}} *[[Exponentiation by squaring]]{{sfnmp|1a1 = Cormen|1a2 = Leiserson|1a3=Rivest|1a4=Stein|1y=2001|1p=879–880|2a1=Goodrich|2a2=Tamassia|2y=2002|2p=464}} *[[Longest increasing subsequence]]<ref>{{citation|title=How to Think About Algorithms|first=Jeff|last=Edmonds|publisher=Cambridge University Press|year=2008|isbn=978-1-139-47175-6|page=302|url=https://books.google.com/books?id=hGuixQMQS_0C&pg=PT280}}.</ref> Binary logarithms also occur in the exponents of the time bounds for some [[divide and conquer algorithm]]s, such as the [[Karatsuba algorithm]] for multiplying {{mvar|n}}-bit numbers in time {{math|''O''(''n''<sup>log<sub>2</sub>{{hsp}}3</sup>)}},{{sfnmp|1a1 = Cormen|1a2 = Leiserson|1a3=Rivest|1a4=Stein|1y=2001|1p=844|2a1=Goodrich|2a2=Tamassia|2y=2002|2p=279}} and the [[Strassen algorithm]] for multiplying {{math|''n'' × ''n''}} matrices in time {{math|''O''(''n''<sup>log<sub>2</sub>{{hsp}}7</sup>)}}.{{sfnp|Cormen|Leiserson|Rivest|Stein|2001|loc=section 28.2.}} The occurrence of binary logarithms in these running times can be explained by reference to the [[master theorem (analysis of algorithms)|master theorem for divide-and-conquer recurrences]]. ===Bioinformatics=== [[File:Mouse cdna microarray.jpg|thumb|upright=1|A [[microarray]] for approximately 8700 genes. The expression rates of these genes are compared using binary logarithms.]] In [[bioinformatics]], [[microarray]]s are used to measure how strongly different genes are expressed in a sample of biological material. 
Different rates of expression of a gene are often compared by using the binary logarithm of the ratio of expression rates: the log ratio of two expression rates is defined as the binary logarithm of the ratio of the two rates. Binary logarithms allow for a convenient comparison of expression rates: a doubled expression rate can be described by a log ratio of {{math|1}}, a halved expression rate can be described by a log ratio of {{math|−1}}, and an unchanged expression rate can be described by a log ratio of zero, for instance.<ref>{{citation|title=Microarray Gene Expression Data Analysis: A Beginner's Guide|first1=Helen|last1=Causton|first2=John|last2=Quackenbush|first3=Alvis|last3=Brazma|publisher=John Wiley & Sons|year=2009|isbn=978-1-4443-1156-3|pages=49–50|url=https://books.google.com/books?id=bg6D_7mdG70C&pg=PA49}}.</ref> Data points obtained in this way are often visualized as a [[scatterplot]] in which one or both of the coordinate axes are binary logarithms of intensity ratios, or in visualizations such as the [[MA plot]] and [[RA plot]] that rotate and scale these log ratio scatterplots.<ref>{{citation|title=Computational and Statistical Methods for Protein Quantification by Mass Spectrometry|first1=Ingvar|last1=Eidhammer|first2=Harald|last2=Barsnes|first3=Geir Egil|last3=Eide|first4=Lennart|last4=Martens|publisher=John Wiley & Sons|year=2012|isbn=978-1-118-49378-6|page=105|url=https://books.google.com/books?id=3Z3VbhLz6pMC&pg=PA105}}.</ref> ===Music theory=== In [[music theory]], the [[Interval (music)|interval]] or perceptual difference between two tones is determined by the ratio of their frequencies. Intervals coming from [[rational number]] ratios with small numerators and denominators are perceived as particularly euphonious. The simplest and most important of these intervals is the [[octave]], a frequency ratio of {{math|2:1}}. 
The number of octaves by which two tones differ is the binary logarithm of their frequency ratio.<ref name="mga">{{citation|title=The Musician's Guide to Acoustics|first1=Murray|last1=Campbell|first2=Clive|last2=Greated|publisher=Oxford University Press|year=1994|isbn=978-0-19-159167-9|page=78|url=https://books.google.com/books?id=iiCZwwFG0x0C&pg=PA78}}.</ref> To study [[tuning system]]s and other aspects of music theory that require finer distinctions between tones, it is helpful to have a measure of the size of an interval that is finer than an octave and is additive (as logarithms are) rather than multiplicative (as frequency ratios are). That is, if tones {{mvar|x}}, {{mvar|y}}, and {{mvar|z}} form a rising sequence of tones, then the measure of the interval from {{mvar|x}} to {{mvar|y}} plus the measure of the interval from {{mvar|y}} to {{mvar|z}} should equal the measure of the interval from {{mvar|x}} to {{mvar|z}}. Such a measure is given by the [[Cent (music)|cent]], which divides the octave into {{math|1200}} equal intervals ({{math|12}} [[semitone]]s of {{math|100}} cents each). 
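In code, the cent measure of an interval is just a scaled binary logarithm; a minimal Python helper (an illustration under the definitions above, not part of the sourced text):

```python
import math

def cents(f1: float, f2: float) -> float:
    """Size in cents of the interval between frequencies f1 and f2."""
    return abs(1200 * math.log2(f1 / f2))

# One octave (a 2:1 ratio) is 1200 cents; an equal-tempered
# semitone, ratio 2**(1/12), is 100 cents.
```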
Mathematically, given tones with frequencies {{math|''f''<sub>1</sub>}} and {{math|''f''<sub>2</sub>}}, the number of cents in the interval from {{math|''f''<sub>1</sub>}} to {{math|''f''<sub>2</sub>}} is<ref name="mga"/> :<math>\left|1200\log_2\frac{f_1}{f_2}\right|.</math> The [[millioctave]] is defined in the same way, but with a multiplier of {{math|1000}} instead of {{math|1200}}.<ref>{{citation|title=The Harvard Dictionary of Music|edition=4th|editor-first=Don Michael|editor-last=Randel|editor-link=Don Michael Randel|publisher=The Belknap Press of Harvard University Press|year=2003|isbn=978-0-674-01163-2|page=416|url=https://books.google.com/books?id=02rFSecPhEsC&pg=PA416}}.</ref> ===Sports scheduling=== In competitive games and sports involving two players or teams in each game or match, the binary logarithm indicates the number of rounds necessary in a [[single-elimination tournament]] required to determine a winner. For example, a tournament of {{math|4}} players requires {{math|1=log<sub>2</sub>{{hsp}}4 = 2}} rounds to determine the winner, a tournament of {{math|32}} teams requires {{math|1=log<sub>2</sub>{{hsp}}32 = 5}} rounds, etc. In this case, for {{mvar|n}} players/teams where {{mvar|n}} is not a power of 2, {{math|log<sub>2</sub>{{hsp}}''n''}} is rounded up since it is necessary to have at least one round in which not all remaining competitors play. For example, {{math|log<sub>2</sub>{{hsp}}6}} is approximately {{math|2.585}}, which rounds up to {{math|3}}, indicating that a tournament of {{math|6}} teams requires {{math|3}} rounds (either two teams sit out the first round, or one team sits out the second round). 
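The rounding rule above is simply a ceiling applied to the binary logarithm; a minimal Python helper (mine, for illustration):

```python
import math

def rounds_needed(competitors: int) -> int:
    """Rounds in a single-elimination tournament: ceil(log2(competitors))."""
    return math.ceil(math.log2(competitors))

# 4 players -> 2 rounds; 32 teams -> 5 rounds; 6 teams -> 3 rounds.
```

For very large inputs, the exact integer form <code>(n - 1).bit_length()</code> avoids floating-point rounding and gives the same answer for all {{math|''n'' ≥ 1}}.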
The same number of rounds is also necessary to determine a clear winner in a [[Swiss-system tournament]].<ref>{{citation|title=Introduction to Physical Education and Sport Science|first=Robert|last=France|publisher=Cengage Learning|year=2008|isbn=978-1-4180-5529-5|page=282|url=https://books.google.com/books?id=dH2nB1CX2SMC&pg=PA282}}.</ref> ===Photography=== In [[photography]], [[exposure value]]s are measured in terms of the binary logarithm of the amount of light reaching the film or sensor, in accordance with the [[Weber–Fechner law]] describing a logarithmic response of the human visual system to light. A single stop of exposure is one unit on a base-{{math|2}} logarithmic scale.<ref>{{citation|title=The Manual of Photography|first1=Elizabeth|last1=Allen|first2=Sophie|last2=Triantaphillidou|publisher=Taylor & Francis|year=2011|isbn=978-0-240-52037-7|page=228|url=https://books.google.com/books?id=IfWivY3mIgAC&pg=PA228}}.</ref><ref name="btzs">{{citation|title=Beyond the Zone System|first=Phil|last=Davis|publisher=CRC Press|year=1998|isbn=978-1-136-09294-7|page=17|url=https://books.google.com/books?id=YaVEAQAAQBAJ&pg=PA17}}.</ref> More precisely, the exposure value of a photograph is defined as :<math>\log_2 \frac{N^2}{t}</math> where {{mvar|N}} is the [[f-number]] measuring the [[aperture]] of the lens during the exposure, and {{mvar|t}} is the number of seconds of exposure.<ref>{{harvtxt|Allen|Triantaphillidou|2011}}, [https://books.google.com/books?id=IfWivY3mIgAC&pg=PA235 p. 235].</ref> Binary logarithms (expressed as stops) are also used in [[densitometry]], to express the [[dynamic range]] of light-sensitive materials or digital sensors.<ref>{{citation|title=Visual Effects Society Handbook: Workflow and Techniques|first1=Susan|last1=Zwerman|first2=Jeffrey A.|last2=Okun|publisher=CRC Press|year=2012|isbn=978-1-136-13614-6|page=205|url=https://books.google.com/books?id=3rLpAwAAQBAJ&pg=PA205}}.</ref>
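The exposure-value formula translates directly into code; a hypothetical Python helper (function name mine), assuming the f-number {{mvar|N}} and exposure time {{mvar|t}} in seconds:

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """EV = log2(N^2 / t); each +1 EV halves the light reaching the sensor."""
    return math.log2(f_number ** 2 / shutter_seconds)

# f/8 at 1/125 s gives EV = log2(64 * 125) = log2(8000), just under 13.
```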