
File:Binary logarithm plot with ticks.svg
Graph of Template:Math as a function of a positive real number Template:Mvar

In mathematics, the binary logarithm (Template:Math) is the power to which the number Template:Math must be raised to obtain the value Template:Mvar. That is, for any real number Template:Mvar,

<math>x=\log_2 n \quad\Longleftrightarrow\quad 2^x=n.</math>

For example, the binary logarithm of Template:Math is Template:Math, the binary logarithm of Template:Math is Template:Math, the binary logarithm of Template:Math is Template:Math, and the binary logarithm of Template:Math is Template:Math.

The binary logarithm is the logarithm to the base Template:Math and is the inverse function of the power of two function. There are several alternatives to the Template:Math notation for the binary logarithm; see the Notation section below.

Historically, the first application of binary logarithms was in music theory, by Leonhard Euler: the binary logarithm of a frequency ratio of two musical tones gives the number of octaves by which the tones differ. Binary logarithms can be used to calculate the length of the representation of a number in the binary numeral system, or the number of bits needed to encode a message in information theory. In computer science, they count the number of steps needed for binary search and related algorithms. Other areas in which the binary logarithm is frequently used include combinatorics, bioinformatics, the design of sports tournaments, and photography.

Binary logarithms are included in the standard C mathematical functions and other mathematical software packages.

History


The powers of two have been known since antiquity; for instance, they appear in Euclid's Elements, Props. IX.32 (on the factorization of powers of two) and IX.36 (half of the Euclid–Euler theorem, on the structure of even perfect numbers). And the binary logarithm of a power of two is just its position in the ordered sequence of powers of two. On this basis, Michael Stifel has been credited with publishing the first known table of binary logarithms in 1544. His book Arithmetica Integra contains several tables that show the integers with their corresponding powers of two. Reversing the rows of these tables allows them to be interpreted as tables of binary logarithms.<ref>Template:Citation.</ref><ref>Template:Citation. A copy of the same table with two more entries appears on p. 237, and another copy extended to negative powers appears on p. 249b.</ref>

Earlier than Stifel, the 8th-century Jain mathematician Virasena is credited with a precursor to the binary logarithm. Virasena's concept of ardhacheda has been defined as the number of times a given number can be divided evenly by two. This definition gives rise to a function that coincides with the binary logarithm on the powers of two,<ref>Template:Citation.</ref> but it is different for other integers, giving the 2-adic order rather than the logarithm.<ref>See, e.g., Template:Citation.</ref>

The modern form of a binary logarithm, applying to any number (not just powers of two), was considered explicitly by Leonhard Euler in 1739. Euler established the application of binary logarithms to music theory, long before their applications in information theory and computer science became known. As part of his work in this area, Euler published a table of binary logarithms of the integers from 1 to 8, to seven decimal digits of accuracy.<ref>Template:Citation.</ref><ref>Template:Citation.</ref>

Definition and properties

The binary logarithm function may be defined as the inverse function to the power of two function, which is a strictly increasing function over the positive real numbers and therefore has a unique inverse.<ref>Template:Citation.</ref> Alternatively, it may be defined as Template:Math, where Template:Math is the natural logarithm, defined in any of its standard ways. Using the complex logarithm in this definition allows the binary logarithm to be extended to the complex numbers.<ref>For instance, Microsoft Excel provides the IMLOG2 function for complex binary logarithms: see Template:Citation.</ref>

As with other logarithms, the binary logarithm obeys the following equations, which can be used to simplify formulas that combine binary logarithms with multiplication or exponentiation:<ref>Template:Citation.</ref>

<math>\log_2 xy=\log_2 x + \log_2 y</math>
<math>\log_2\frac{x}{y}=\log_2 x - \log_2 y</math>
<math>\log_2 x^y = y\log_2 x.</math>

For more, see list of logarithmic identities.
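As a brief numerical illustration of these identities (a worked example, not taken from the cited source): since <math>8=2^3</math> and <math>32=2^5</math>,

<math display="block">\log_2 (8\cdot 32)=\log_2 8+\log_2 32=3+5=8, \qquad \log_2\left(8^5\right)=5\log_2 8=15.</math>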

Notation

In mathematics, the binary logarithm of a number Template:Mvar is often written as Template:Math.<ref>For instance, this is the notation used in the Encyclopedia of Mathematics and The Princeton Companion to Mathematics.</ref> However, several other notations for this function have been used or proposed, especially in application areas.

Some authors write the binary logarithm as Template:Math,<ref name="clrs">Template:Citation</ref><ref name="sw11">Template:Citation.</ref> the notation listed in The Chicago Manual of Style.<ref>Template:Citation.</ref> Donald Knuth credits this notation to a suggestion of Edward Reingold,<ref name="knuth">Template:Citation, p. 11. The same notation was in the 1973 2nd edition of the same book (p. 23) but without the credit to Reingold.</ref> but its use in both information theory and computer science dates to before Reingold was active.<ref>Template:Citation.</ref><ref name=mitchell>Template:Citation.</ref> The binary logarithm has also been written as Template:Math with a prior statement that the default base for the logarithm is Template:Math.<ref>Template:Citation.</ref><ref>Template:Citation.</ref><ref name="gt02">Template:Citation</ref> Another notation that is often used for the same function (especially in the German scientific literature) is Template:Math,<ref name="Tafel_1971">Template:Citation</ref><ref name="Tietze_Schenk_1999">Template:Citation</ref><ref name="Bauer_2009">Template:Citation.</ref> from Latin logarithmus dualis<ref name="Tafel_1971"/> or logarithmus dyadis.<ref name="Tafel_1971"/> The DIN 1302, ISO 31-11, and ISO 80000-2 standards recommend yet another notation, Template:Math. According to these standards, Template:Math should not be used for the binary logarithm, as it is instead reserved for the common logarithm Template:Math.<ref>For DIN 1302 see Template:Citation.</ref><ref>For ISO 31-11 see Template:Citation.</ref><ref>For ISO 80000-2 see Template:Citation.</ref>

Applications


Information theory

The number of digits (bits) in the binary representation of a positive integer Template:Mvar is the integral part of Template:Math, i.e.<ref name="sw11"/>

<math> \lfloor \log_2 n\rfloor + 1. </math>
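As a minimal sketch (an illustrative example, not taken from the cited source; the helper name bit_length is hypothetical), this count can be computed directly from the formula above:

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

/* Number of binary digits of a positive integer n, computed as floor(log2(n)) + 1.
   For very large n, an integer method (e.g. repeated shifts) avoids any floating-point
   rounding concerns near exact powers of two. */
static unsigned bit_length(unsigned n)
{
    return (unsigned)floor(log2((double)n)) + 1;
}

int main(void)
{
    printf("%u\n", bit_length(1000));   /* 1000 is 1111101000 in binary: prints 10 */
    return 0;
}
</syntaxhighlight>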

In information theory, the definition of the amount of self-information and information entropy is often expressed with the binary logarithm, corresponding to making the bit the fundamental unit of information. With these units, the Shannon–Hartley theorem expresses the information capacity of a channel as the binary logarithm of one plus its signal-to-noise ratio. However, the natural logarithm and the nat are also used in alternative notations for these definitions.<ref>Template:Citation.</ref>
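As a small sketch of the connection to units of information (illustrative only, not from the cited source), the self-information of an event of probability <math>p</math> is <math>-\log_2 p</math> bits:

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

int main(void)
{
    double p = 1.0 / 8.0;            /* one of eight equally likely outcomes */
    printf("%g bits\n", -log2(p));   /* prints 3: three bits identify the outcome */
    return 0;
}
</syntaxhighlight>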

Combinatorics

File:SixteenPlayerSingleEliminationTournamentBracket.svg
A 16-player single elimination tournament bracket with the structure of a complete binary tree. The height of the tree (number of rounds of the tournament) is the binary logarithm of the number of players, rounded up to an integer.

Although the natural logarithm is more important than the binary logarithm in many areas of pure mathematics such as number theory and mathematical analysis,<ref>Template:Citation.</ref> the binary logarithm has several applications in combinatorics. For example, every binary tree with <math>n</math> leaves has height at least <math>\log_2 n</math>, with equality when <math>n</math> is a power of two and the tree is complete, as in the tournament bracket pictured.

Computational complexity

File:Binary search into array - example.svg
Binary search in a sorted array, an algorithm whose time complexity involves binary logarithms

The binary logarithm also frequently appears in the analysis of algorithms,<ref name="gt02"/> not only because of the frequent use of binary number arithmetic in algorithms, but also because binary logarithms occur in the analysis of algorithms based on two-way branching.<ref name="knuth"/> If a problem initially has Template:Mvar choices for its solution, and each iteration of the algorithm reduces the number of choices by a factor of two, then the number of iterations needed to select a single choice is again the integral part of Template:Math. This idea is used in the analysis of several algorithms and data structures. For example, in binary search, the size of the problem to be solved is halved with each iteration, and therefore roughly Template:Math iterations are needed to obtain a solution for a problem of size Template:Math.<ref>Template:Citation.</ref> Similarly, a perfectly balanced binary search tree containing Template:Mvar elements has height Template:Math.<ref>Template:Citation.</ref>
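As a concrete illustration of this two-way branching (an illustrative sketch, not taken from the cited sources; the function name binary_search is hypothetical), the loop below runs at most about <math>\log_2 n + 1</math> times on a sorted array of length <math>n</math>:

<syntaxhighlight lang="c">
#include <stddef.h>

/* Returns the index of key in the sorted array a of length n, or -1 if absent.
   Each iteration halves the remaining half-open range [lo, hi). */
static long binary_search(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return (long)mid;
        else if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return -1;
}
</syntaxhighlight>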

The running time of an algorithm is usually expressed in big O notation, which is used to simplify expressions by omitting their constant factors and lower-order terms. Because logarithms in different bases differ from each other only by a constant factor, algorithms that run in Template:Math time can also be said to run in, say, Template:Math time. The base of the logarithm in expressions such as Template:Math or Template:Math is therefore not important and can be omitted.<ref name="clrs"/><ref>Template:Citation.</ref> However, for logarithms that appear in the exponent of a time bound, the base of the logarithm cannot be omitted. For example, Template:Math is not the same as Template:Math because the former is equal to Template:Math and the latter to Template:Math.

Algorithms with running time Template:Math are sometimes called linearithmic.<ref>Template:Harvtxt, p. 186.</ref> Some examples of algorithms with running time Template:Math or Template:Math include binary search, comparison sorting algorithms such as merge sort and heapsort, and the fast Fourier transform.

Binary logarithms also occur in the exponents of the time bounds for some divide and conquer algorithms, such as the Karatsuba algorithm for multiplying Template:Mvar-bit numbers in time Template:Math,Template:Sfnmp and the Strassen algorithm for multiplying Template:Math matrices in time Template:Math.Template:Sfnp The occurrence of binary logarithms in these running times can be explained by reference to the master theorem for divide-and-conquer recurrences.

Bioinformatics

File:Mouse cdna microarray.jpg
A microarray for approximately 8700 genes. The expression rates of these genes are compared using binary logarithms.

In bioinformatics, microarrays are used to measure how strongly different genes are expressed in a sample of biological material. Different rates of expression of a gene are often compared by using the binary logarithm of the ratio of expression rates: the log ratio of two expression rates is defined as the binary logarithm of the ratio of the two rates. Binary logarithms allow for a convenient comparison of expression rates: for instance, a doubled expression rate can be described by a log ratio of Template:Math, a halved expression rate can be described by a log ratio of Template:Math, and an unchanged expression rate can be described by a log ratio of zero.<ref>Template:Citation.</ref>

Data points obtained in this way are often visualized as a scatterplot in which one or both of the coordinate axes are binary logarithms of intensity ratios, or in visualizations such as the MA plot and RA plot that rotate and scale these log ratio scatterplots.<ref>Template:Citation.</ref>
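A small sketch (illustrative only, not from the cited sources; the data values are hypothetical) of the log-ratio computation described above:

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

int main(void)
{
    double treated = 240.0, control = 120.0;               /* hypothetical expression measurements */
    printf("log ratio = %g\n", log2(treated / control));   /* a doubling gives a log ratio of 1 */
    return 0;
}
</syntaxhighlight>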

Music theory

In music theory, the interval or perceptual difference between two tones is determined by the ratio of their frequencies. Intervals coming from rational number ratios with small numerators and denominators are perceived as particularly euphonious. The simplest and most important of these intervals is the octave, a frequency ratio of Template:Math. The number of octaves by which two tones differ is the binary logarithm of their frequency ratio.<ref name="mga">Template:Citation.</ref>

To study tuning systems and other aspects of music theory that require finer distinctions between tones, it is helpful to have a measure of the size of an interval that is finer than an octave and is additive (as logarithms are) rather than multiplicative (as frequency ratios are). That is, if tones Template:Mvar, Template:Mvar, and Template:Mvar form a rising sequence of tones, then the measure of the interval from Template:Mvar to Template:Mvar plus the measure of the interval from Template:Mvar to Template:Mvar should equal the measure of the interval from Template:Mvar to Template:Mvar. Such a measure is given by the cent, which divides the octave into Template:Math equal intervals (Template:Math semitones of Template:Math cents each). Mathematically, given tones with frequencies Template:Math and Template:Math, the number of cents in the interval from Template:Math to Template:Math is<ref name="mga"/>

<math>\left|1200\log_2\frac{f_1}{f_2}\right|.</math>

The millioctave is defined in the same way, but with a multiplier of Template:Math instead of Template:Math.<ref>Template:Citation.</ref>
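As a brief sketch of the cents formula above (illustrative only; the helper name cents is hypothetical), a just perfect fifth with frequency ratio 3:2 measures about 702 cents:

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

/* Size in cents of the interval between frequencies f1 and f2. */
static double cents(double f1, double f2)
{
    return fabs(1200.0 * log2(f1 / f2));
}

int main(void)
{
    printf("%.2f\n", cents(3.0, 2.0));   /* prints 701.96 */
    return 0;
}
</syntaxhighlight>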

Sports scheduling

In competitive games and sports involving two players or teams in each game or match, the binary logarithm indicates the number of rounds necessary in a single-elimination tournament required to determine a winner. For example, a tournament of Template:Math players requires Template:Math rounds to determine the winner, a tournament of Template:Math teams requires Template:Math rounds, etc. In this case, for Template:Mvar players/teams where Template:Mvar is not a power of 2, Template:Math is rounded up since it is necessary to have at least one round in which not all remaining competitors play. For example, Template:Math is approximately Template:Math, which rounds up to Template:Math, indicating that a tournament of Template:Math teams requires Template:Math rounds (either two teams sit out the first round, or one team sits out the second round). The same number of rounds is also necessary to determine a clear winner in a Swiss-system tournament.<ref>Template:Citation.</ref>
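A short sketch of this round count (illustrative only; the helper name rounds is hypothetical):

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

/* Rounds needed in a single-elimination tournament with n > 1 competitors:
   the binary logarithm of n, rounded up to an integer. */
static unsigned rounds(unsigned n)
{
    return (unsigned)ceil(log2((double)n));
}

int main(void)
{
    printf("%u %u %u\n", rounds(4), rounds(6), rounds(32));   /* prints 2 3 5 */
    return 0;
}
</syntaxhighlight>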

Photography

In photography, exposure values are measured in terms of the binary logarithm of the amount of light reaching the film or sensor, in accordance with the Weber–Fechner law describing a logarithmic response of the human visual system to light. A single stop of exposure is one unit on a base-Template:Math logarithmic scale.<ref>Template:Citation.</ref><ref name="btzs">Template:Citation.</ref> More precisely, the exposure value of a photograph is defined as

<math>\log_2 \frac{N^2}{t}</math>

where Template:Mvar is the f-number measuring the aperture of the lens during the exposure, and Template:Mvar is the number of seconds of exposure.<ref>Template:Harvtxt, p. 235.</ref>
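A minimal sketch of this formula (illustrative only; the helper name exposure_value is hypothetical): for instance, f/8 at 1/60 second gives an exposure value of about 11.9:

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

/* Exposure value for f-number N and exposure time t in seconds. */
static double exposure_value(double N, double t)
{
    return log2(N * N / t);
}

int main(void)
{
    printf("EV = %.2f\n", exposure_value(8.0, 1.0 / 60.0));   /* prints EV = 11.91 */
    return 0;
}
</syntaxhighlight>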

Binary logarithms (expressed as stops) are also used in densitometry, to express the dynamic range of light-sensitive materials or digital sensors.<ref>Template:Citation.</ref>

Calculation

File:HP-35 Red Dot.jpg
HP-35 scientific calculator (1972). The log and ln keys are in the top row; there is no log2 key.

Conversion from other bases

An easy way to calculate Template:Math on calculators that do not have a Template:Math function is to use the natural logarithm (Template:Math) or the common logarithm (Template:Math or Template:Math) functions, which are found on most scientific calculators. To change the logarithm base to 2 from Template:Mvar, Template:Math, or any other base Template:Mvar, one can use the formulae:<ref name="btzs"/><ref>Template:Citation.</ref>

<math>\log_2 n = \frac{\ln n}{\ln 2} = \frac{\log_{10} n}{\log_{10} 2} = \frac{\log_{b} n}{\log_{b} 2}\,.</math>

Approximately,<ref>Template:Cite OEIS</ref><ref>Template:Cite OEIS</ref>

<math>\log_2 n \approx 1.442695\ln n \approx 3.321928\log_{10} n\,.</math>
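On systems without a dedicated base-2 logarithm function, the change-of-base formula above translates directly into code; a minimal sketch in C:

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

/* Binary logarithm via the natural and common logarithms (change of base). */
static double log2_from_ln(double n)    { return log(n) / log(2.0); }
static double log2_from_log10(double n) { return log10(n) / log10(2.0); }

int main(void)
{
    /* Both print approximately 6.643856, the binary logarithm of 100. */
    printf("%f %f\n", log2_from_ln(100.0), log2_from_log10(100.0));
    return 0;
}
</syntaxhighlight>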

Integer rounding

The binary logarithm can be made into a function from integers to integers by rounding it up or down. These two forms of integer binary logarithm are related by this formula:

<math> \lfloor \log_2(n) \rfloor = \lceil \log_2(n + 1) \rceil - 1, \text{ if }n \ge 1.</math><ref name="Warren_2002">Template:Citation</ref>

The definition can be extended by defining <math> \lfloor \log_2(0) \rfloor = -1</math>. Extended in this way, this function is related to the number of leading zeros of the 32-bit unsigned binary representation of Template:Mvar, Template:Math.

<math>\lfloor \log_2(n) \rfloor = 31 - \operatorname{nlz}(n).</math><ref name="Warren_2002" />

The integer binary logarithm can be interpreted as the zero-based index of the most significant Template:Math bit in the input. In this sense it is the complement of the find first set operation, which finds the index of the least significant Template:Math bit. Many hardware platforms include support for finding the number of leading zeros, or equivalent operations, which can be used to quickly find the binary logarithm. The fls and flsl functions in the Linux kernel<ref>fls, Linux kernel API, kernel.org, retrieved 2010-10-17.</ref> and in some versions of the libc software library also compute the binary logarithm (rounded down to an integer, plus one).
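A brief sketch of the relation above between the rounded-down binary logarithm and the count of leading zeros (illustrative only; it assumes a compiler providing the GCC/Clang builtin __builtin_clz for 32-bit unsigned integers):

<syntaxhighlight lang="c">
#include <stdio.h>

/* floor(log2(n)) for nonzero 32-bit unsigned n, computed as 31 - nlz(n). */
static int floor_log2(unsigned n)
{
    return 31 - __builtin_clz(n);   /* __builtin_clz is undefined for n == 0 */
}

int main(void)
{
    printf("%d %d %d\n", floor_log2(1), floor_log2(1000), floor_log2(1u << 31));   /* 0 9 31 */
    return 0;
}
</syntaxhighlight>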

Piecewise-linear approximation

For a number <math>x</math> represented in floating point as <math>x=2^E(1+m)</math>, with integer exponent <math>E</math> and mantissa <math>m</math> in the range <math>0\le m<1</math>, the binary logarithm can be roughly approximated as <math>\log_2 x\approx E+m</math>.<ref name=mitchell/> This approximation is exact at both ends of the range of mantissas but underestimates the logarithm in the middle of the range, reaching a maximum error of approximately 0.086 at a mantissa of approximately 0.44. It can be made more accurate by using a piecewise linear function of <math>m</math>,<ref>Template:Citation</ref> or more crudely by adding a constant correction term <math>\log_2 x\approx E+m+\sigma</math>. For instance, choosing <math>\sigma\approx 0.043</math> would halve the maximum error. The fast inverse square root algorithm uses this idea with a different correction term, which can be inferred to be <math>\sigma\approx 0.0450466</math>: by directly manipulating the binary representation of <math>x</math>, it multiplies this approximate logarithm by <math>-\tfrac12</math>, obtaining a floating-point value that approximates <math>1/\sqrt x</math>.<ref>Template:Citation</ref>
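A minimal sketch of this approximation (illustrative only; it uses the standard C function frexp, which yields <math>x = f\cdot 2^e</math> with <math>f\in[0.5,1)</math>, so that <math>E=e-1</math> and <math>m=2f-1</math>):

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

/* Rough binary logarithm of a positive x via log2(2^E (1+m)) ~ E + m + sigma. */
static double approx_log2(double x, double sigma)
{
    int e;
    double f = frexp(x, &e);                 /* x = f * 2^e, with 0.5 <= f < 1 */
    return (e - 1) + (2.0 * f - 1.0) + sigma;
}

int main(void)
{
    /* sigma = 0 is the basic E + m approximation; sigma ~ 0.043 halves the worst error. */
    printf("%f %f %f\n", approx_log2(10.0, 0.0), approx_log2(10.0, 0.043), log2(10.0));
    return 0;
}
</syntaxhighlight>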

Iterative approximation

For a general positive real number, the binary logarithm may be computed in two parts.<ref name="ml73">Template:Citation.</ref> First, one computes the integer part, <math>\lfloor\log_2 x\rfloor</math> (called the characteristic of the logarithm). This reduces the problem to one where the argument of the logarithm is in a restricted range, the interval Template:Closed-open, simplifying the second step of computing the fractional part (the mantissa of the logarithm). For any Template:Math, there exists a unique integer Template:Mvar such that Template:Math, or equivalently Template:Math. Now the integer part of the logarithm is simply Template:Mvar, and the fractional part is Template:Math.<ref name="ml73"/> In other words:

<math>\log_2 x = n + \log_2 y \quad\text{where } y = 2^{-n}x \text{ and } y \in [1,2)</math>

For normalized floating-point numbers, the integer part is given by the floating-point exponent,<ref>Template:Citation.</ref> and for integers it can be determined by performing a count leading zeros operation.<ref name="Warren_2013">Template:Citation.</ref>

The fractional part of the result is Template:Math and can be computed iteratively, using only elementary multiplication and division.<ref name="ml73"/> The algorithm for computing the fractional part can be described in pseudocode as follows:

  1. Start with a real number Template:Mvar in the half-open interval Template:Closed-open. If Template:Math, then the algorithm is done, and the fractional part is zero.
  2. Otherwise, square Template:Mvar repeatedly until the result Template:Mvar lies in the interval Template:Closed-open. Let Template:Mvar be the number of squarings needed. That is, Template:Math with Template:Mvar chosen such that Template:Mvar is in Template:Closed-open.
  3. Taking the logarithm of both sides and doing some algebra: <math display="block">\begin{align}
\log_2 z &= 2^m \log_2 y \\
\log_2 y &= \frac{ \log_2 z }{ 2^m } \\
         &= \frac{ 1 + \log_2(z/2) }{ 2^m } \\
         &= 2^{-m} + 2^{-m}\log_2(z/2).
\end{align}</math>

  4. Once again Template:Math is a real number in the interval Template:Closed-open. Return to step 1 and compute the binary logarithm of Template:Math using the same method.

The result of this is expressed by the following recursive formulas, in which <math>m_i</math> is the number of squarings required in the <math>i</math>-th iteration of the algorithm: <math display="block">\begin{align}
\log_2 x &= n + 2^{-m_1} \left( 1 + 2^{-m_2} \left( 1 + 2^{-m_3} \left( 1 + \cdots \right)\right)\right) \\
         &= n + 2^{-m_1} + 2^{-m_1-m_2} + 2^{-m_1-m_2-m_3} + \cdots
\end{align}</math>

In the special case where the fractional part in step 1 is found to be zero, this is a finite sequence terminating at some point. Otherwise, it is an infinite series that converges according to the ratio test, since each term is at most half the previous one (because every Template:Math). See Horner's method. For practical use, this infinite series must be truncated to reach an approximate result. If the series is truncated after the Template:Mvar-th term, then the error in the result is less than Template:Math.<ref name="ml73"/>
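The procedure above can be translated directly into code; the following is an illustrative sketch (not taken from the cited source; the function name log2_by_squaring and the tolerance-based stopping rule are choices made here) that accumulates the terms <math>2^{-m_1-\cdots-m_i}</math> until they fall below a tolerance:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Approximate binary logarithm of x > 0 by the squaring method described above. */
static double log2_by_squaring(double x, double tolerance)
{
    int n = 0;
    while (x >= 2.0) { x /= 2.0; n++; }   /* find the characteristic n ... */
    while (x < 1.0)  { x *= 2.0; n--; }   /* ... so that x now lies in [1, 2) */

    double result = n;
    double weight = 1.0;                  /* tracks 2^(-m_1 - ... - m_i) */
    while (x != 1.0 && weight > tolerance) {
        while (x < 2.0) {                 /* square until the value reaches [2, 4) */
            x *= x;
            weight /= 2.0;
        }
        result += weight;                 /* add the term 2^(-m_1 - ... - m_i) */
        x /= 2.0;                         /* back to [1, 2) for the next round */
    }
    return result;
}

int main(void)
{
    printf("%f\n", log2_by_squaring(10.0, 1e-9));   /* prints about 3.321928 */
    return 0;
}
</syntaxhighlight>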

Software library support

The log2 function is included in the standard C mathematical functions. The default version of this function takes double-precision arguments, but variants of it allow the argument to be single-precision or long double.<ref>Template:Citation.</ref> In computing environments supporting complex numbers and implicit type conversion, such as MATLAB, the argument to the log2 function is allowed to be a negative number, returning a complex value.<ref>Template:Citation.</ref>
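A minimal usage example of these standard functions (C99 and later; programs typically need to be linked with the math library, e.g. -lm):

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("%f %f %Lf\n",
           log2(8.0),              /* double-precision version: 3.000000 */
           (double)log2f(8.0f),    /* single-precision variant */
           log2l(8.0L));           /* long double variant */
    return 0;
}
</syntaxhighlight>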

References

Template:Reflist

External links