Iterated logarithm
In computer science, the iterated logarithm of <math>n</math>, written <math>\log^* n</math> (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to <math>1</math>.<ref>Cormen, Leiserson, Rivest, and Stein, ''Introduction to Algorithms''</ref> The simplest formal definition is the result of this recurrence relation:
<math display="block">\log^* n := \begin{cases} 0 & \mbox{if } n \le 1; \\ 1 + \log^*(\log n) & \mbox{if } n > 1 \end{cases}</math>
In computer science, <math>\lg^*</math> is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base <math>2</math>) instead of the natural logarithm (with base <math>e</math>). Mathematically, the iterated logarithm is well defined for any base greater than <math>e^{1/e} \approx 1.444667</math>, not only for base <math>2</math> and base <math>e</math>. The "super-logarithm" function <math>\mathrm{slog}_b(n)</math> is "essentially equivalent" to the base-<math>b</math> iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration.
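The recurrence above translates directly into a short loop. A minimal sketch in Python for the binary case (the function name `lg_star` is my own; `math.log2` is exact for powers of two, which keeps the small examples below free of rounding trouble, though for other bases and borderline inputs floating-point rounding could shift the count):

```python
import math

def lg_star(n):
    """Binary iterated logarithm: how many times log2 must be
    applied to n before the result drops to 1 or below."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# lg*(65536) = 4, since 65536 -> 16 -> 4 -> 2 -> 1
print(lg_star(65536))  # -> 4
# lg*(5) = 3, since 5 -> 2.32... -> 1.21... -> 0.28...
print(lg_star(5))      # -> 3
```

A version for an arbitrary base would replace `math.log2(n)` with `math.log(n, base)`, at the cost of extra floating-point error near interval boundaries.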
Analysis of algorithms
The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as:
- Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree: randomized <math>O(n \log^* n)</math> time.
- Fürer's algorithm for integer multiplication: <math>O(n \log n \cdot 2^{O(\log^* n)})</math>.
- Finding an approximate maximum (element at least as large as the median): <math>\log^* n - 1 \pm 3</math> parallel operations.
- Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle: <math>O(\log^* n)</math> synchronous communication rounds.
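As an illustration of how <math>O(\log^* n)</math> round bounds arise, here is a sketch in Python of the color-reduction step behind Cole–Vishkin-style cycle coloring: each node compares its color bit-by-bit with its successor's and recodes itself by the position of the first differing bit. This is a sequential simulation of the synchronous rounds, not a distributed implementation, and the helper name `cv_round` is my own:

```python
def cv_round(colors):
    """One color-reduction round on a cycle.

    Each node finds the lowest bit position k where its color differs
    from its successor's, and takes 2*k + (its own bit at k) as its new
    color.  Adjacent nodes stay differently colored, and the number of
    bits per color roughly halves each round."""
    n = len(colors)
    new = []
    for i in range(n):
        c, succ = colors[i], colors[(i + 1) % n]
        diff = c ^ succ                      # nonzero: the coloring is proper
        k = (diff & -diff).bit_length() - 1  # index of the lowest differing bit
        new.append(2 * k + ((c >> k) & 1))
    return new

# Start from the trivial n-coloring of a 1000-node cycle and iterate:
colors = list(range(1000))
rounds = 0
while max(colors) > 5:  # six colors {0..5} is the fixed point of this step
    colors = cv_round(colors)
    rounds += 1
```

Starting from <math>n</math> distinct colors, the palette shrinks to six after roughly <math>\log^* n</math> rounds; a final constant number of rounds (not shown here) brings six colors down to three.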
The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself or any fixed number of its iterations. This is because tetration grows much faster than an iterated exponential of fixed height:
<math display="block">{^{y}b} = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_y \gg \underbrace{b^{b^{\cdot^{\cdot^{b^{y}}}}}}_n,</math>
so, correspondingly, the inverse grows much slower: <math>\log_b^* x \ll \log_b^n x</math>.
For all values of <math>n</math> relevant to counting the running times of algorithms implemented in practice (i.e., <math>n \le 2^{65536}</math>, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base <math>2</math> has a value no more than <math>5</math>.
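That bound can be checked exactly with integer arithmetic, since every value in the chain <math>2^{65536} \to 65536 \to 16 \to 4 \to 2 \to 1</math> is a power of two, whose base-2 logarithm is just the position of its single set bit. The power-of-two-only helper below (its name is my own) is an illustration, not a general <math>\lg^*</math>:

```python
def lg_star_pow2(n):
    """Iterated base-2 logarithm, exact when every intermediate
    value is a power of two: replace n by its exponent until n <= 1."""
    count = 0
    while n > 1:
        assert n & (n - 1) == 0, "power of two required at every step"
        n = n.bit_length() - 1  # exact log2 of a power of two
        count += 1
    return count

# 2**65536 -> 65536 -> 16 -> 4 -> 2 -> 1: five applications.
print(lg_star_pow2(2**65536))  # -> 5
```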
x | <math>\lg^* x</math>
---|---
<math>(-\infty, 1]</math> | 0
<math>(1, 2]</math> | 1
<math>(2, 4]</math> | 2
<math>(4, 16]</math> | 3
<math>(16, 65536]</math> | 4
<math>(65536, 2^{65536}]</math> | 5
Higher bases give smaller iterated logarithms.
Other applications
The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root, is <math>O(\log^* n)</math>.
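The additive-persistence claim is easy to experiment with; a minimal sketch (the function name is my own):

```python
def additive_persistence(n):
    """Number of digit-sum steps needed to reduce n to a single digit."""
    steps = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))
        steps += 1
    return steps

# 199 -> 19 -> 10 -> 1 takes three steps.
print(additive_persistence(199))  # -> 3
```

Each digit sum is at most 9 times the digit count, which is roughly logarithmic in <math>n</math>, so iterating it to a single digit behaves like an iterated logarithm.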
In computational complexity theory, Santhanam shows that the computational resources DTIME (computation time for a deterministic Turing machine) and NTIME (computation time for a non-deterministic Turing machine) are distinct up to <math>n\sqrt{\log^* n}</math>.
See also
- Inverse Ackermann function, an even more slowly growing function also used in computational complexity theory