==Calculation==
[[File:Logarithm keys.jpg|thumb|The logarithm keys (LOG for base 10 and LN for base {{mvar|e}}) on a [[TI-83 series|TI-83 Plus]] graphing calculator]]
Logarithms are easy to compute in some cases, such as {{math|1=log<sub>10</sub> (1000) = 3}}. In general, logarithms can be calculated using [[power series]] or the [[arithmetic–geometric mean]], or be retrieved from a precalculated [[logarithm table]] that provides a fixed precision.<ref>{{Citation | last1=Muller | first1=Jean-Michel | title=Elementary functions | publisher=Birkhäuser Boston | location=Boston, MA | edition=2nd | isbn=978-0-8176-4372-0 | year=2006}}, sections 4.2.2 (p. 72) and 5.5.2 (p. 95)</ref><ref>{{Citation |last1=Hart |last2=Cheney |last3=Lawson |year=1968|publisher=John Wiley|location=New York|title=Computer Approximations|journal=Physics Today |series=SIAM Series in Applied Mathematics|volume=21 |issue=2 |page=91 |doi=10.1063/1.3034795 |bibcode=1968PhT....21b..91D |display-authors=etal}}, section 6.3, pp. 105–11</ref> [[Newton's method]], an iterative method to solve equations approximately, can also be used to calculate the logarithm, because its inverse function, the exponential function, can be computed efficiently.<ref>{{Citation|last1=Zhang |first1=M. |last2=Delgado-Frias |first2=J.G. |last3=Vassiliadis |first3=S. |title=Table driven Newton scheme for high precision logarithm generation |doi=10.1049/ip-cdt:19941268 |journal= IEE Proceedings - Computers and Digital Techniques|issn=1350-2387 |volume=141 |year=1994 |issue=5 |pages=281–92 |doi-broken-date=7 December 2024| url=https://digital-library.theiet.org/doi/10.1049/ip-cdt%3A19941268 }}, section 1 for an overview</ref> Using look-up tables, [[CORDIC]]-like methods can be used to compute logarithms using only the operations of addition and [[Arithmetic shift|bit shifts]].<ref>{{Citation |first=J.E.|last=Meggitt|title=Pseudo Division and Pseudo Multiplication Processes|journal= IBM Journal of Research and Development|date=April 1962|doi=10.1147/rd.62.0210|volume=6|issue=2|pages=210–26|s2cid=19387286}}</ref><ref>{{Citation |last=Kahan |first=W. |author-link= William Kahan |title=Pseudo-Division Algorithms for Floating-Point Logarithms and Exponentials |date= 20 May 2001 }}</ref> Moreover, the [[Binary logarithm#Algorithm|binary logarithm algorithm]] calculates {{math|lb(''x'')}} [[recursion|recursively]], based on repeated squarings of {{mvar|x}}, taking advantage of the relation
<math display="block">\log_2\left(x^2\right) = 2 \log_2 |x|.</math>

===Power series===
====Taylor series====
[[File:Taylor approximation of natural logarithm.gif|right|thumb|The Taylor series of {{math|ln(''z'')}} centered at {{math|''z'' {{=}} 1}}. The animation shows the first 10 approximations along with the 99th and 100th. The approximations do not converge beyond a distance of 1 from the center.|alt=An animation showing increasingly good approximations of the logarithm graph.]]
For any real number {{mvar|z}} that satisfies {{math|0 < ''z'' ≤ 2}}, the following formula holds:{{refn|The same series holds for the principal value of the complex logarithm for complex numbers {{mvar|z}} satisfying {{math|{{!}}''z'' − 1{{!}} < 1}}.|group=nb}}<ref name=AbramowitzStegunp.68>{{Harvard citations|editor1-last=Abramowitz|editor2-last=Stegun|year=1972 |nb=yes|loc=p. 68}}</ref>
<math display="block">\begin{align}
\ln (z) &= \frac{(z-1)^1}{1} - \frac{(z-1)^2}{2} + \frac{(z-1)^3}{3} - \frac{(z-1)^4}{4} + \cdots \\
&= \sum_{k=1}^\infty (-1)^{k+1}\frac{(z-1)^k}{k}.
\end{align}</math>
Equating the function {{math|ln(''z'')}} to this infinite sum ([[series (mathematics)|series]]) is shorthand for saying that the function can be approximated to a more and more accurate value by the following expressions (known as [[partial sum]]s):
<math display=block> (z-1),\ \ (z-1) - \frac{(z-1)^2}{2},\ \ (z-1) - \frac{(z-1)^2}{2} + \frac{(z-1)^3}{3},\ \ldots </math>
For example, with {{math|''z'' {{=}} 1.5}} the third approximation yields {{math|0.4167}}, which is about {{math|0.011}} greater than {{math|ln(1.5) {{=}} 0.405465}}, and the ninth approximation yields {{math|0.40553}}, which is only about {{math|0.0001}} greater. The {{mvar|n}}th partial sum can approximate {{math|ln(''z'')}} with arbitrary precision, provided the number of summands {{mvar|n}} is large enough.

In elementary calculus, the series is said to [[convergent series|converge]] to the function {{math|ln(''z'')}}, and the function is the [[limit (mathematics)|limit]] of the series. It is the [[Taylor series]] of the [[natural logarithm]] at {{math|1=''z'' = 1}}. The Taylor series of {{math|ln(''z'')}} provides a particularly useful approximation to {{math|ln(1 + ''z'')}} when {{mvar|z}} is small, {{math|{{!}}''z''{{!}} < 1}}, since then
<math display="block"> \ln (1+z) = z - \frac{z^2}{2} +\frac{z^3}{3} -\cdots \approx z. </math>
For example, with {{math|1=''z'' = 0.1}} the first-order approximation gives {{math|ln(1.1) ≈ 0.1}}, which is less than {{math|5%}} off the correct value {{math|0.0953}}.
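As an informal sketch (not part of the sourced text; the function name <code>ln_taylor</code> and the use of ordinary double-precision floats are illustrative assumptions), the partial sums above can be evaluated directly and reproduce the figures quoted for {{math|''z'' {{=}} 1.5}}:

<syntaxhighlight lang="python">
import math

def ln_taylor(z, terms):
    # Partial sum of the Taylor series of ln(z) at z = 1; valid for 0 < z <= 2.
    return sum((-1) ** (k + 1) * (z - 1) ** k / k for k in range(1, terms + 1))

print(ln_taylor(1.5, 3))   # 0.41666..., about 0.011 above ln(1.5)
print(ln_taylor(1.5, 9))   # 0.40553..., about 0.0001 above ln(1.5)
print(math.log(1.5))       # 0.405465...
</syntaxhighlight>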
====Inverse hyperbolic tangent====
Another series is based on the [[area hyperbolic tangent#Inverse hyperbolic tangent|inverse hyperbolic tangent]] function:
<math display="block"> \ln (z) = 2\cdot\operatorname{artanh}\,\frac{z-1}{z+1} = 2 \left ( \frac{z-1}{z+1} + \frac{1}{3}{\left(\frac{z-1}{z+1}\right)}^3 + \frac{1}{5}{\left(\frac{z-1}{z+1}\right)}^5 + \cdots \right ), </math>
for any real number {{math|''z'' > 0}}.{{refn|The same series holds for the principal value of the complex logarithm for complex numbers {{mvar|z}} with positive real part.|group=nb}}<ref name=AbramowitzStegunp.68 /> Using [[sigma notation]], this is also written as
<math display="block">\ln (z) = 2\sum_{k=0}^\infty\frac{1}{2k+1}\left(\frac{z-1}{z+1}\right)^{2k+1}.</math>
This series can be derived from the above Taylor series. It converges more quickly than the Taylor series, especially if {{mvar|z}} is close to 1. For example, for {{math|1=''z'' = 1.5}}, the first three terms of the second series approximate {{math|ln(1.5)}} with an error of about {{val|3|e=-6}}.

The quick convergence for {{mvar|z}} close to 1 can be taken advantage of in the following way: given a low-accuracy approximation {{math|''y'' ≈ ln(''z'')}} and putting
<math display="block">A = \frac z{\exp(y)},</math>
the logarithm of {{mvar|z}} is:
<math display="block">\ln (z)=y+\ln (A).</math>
The better the initial approximation {{mvar|y}} is, the closer {{mvar|A}} is to 1, so its logarithm can be calculated efficiently. {{mvar|A}} can be calculated using the [[exponential function|exponential series]], which converges quickly provided {{mvar|y}} is not too large. Calculating the logarithm of larger {{mvar|z}} can be reduced to smaller values of {{mvar|z}} by writing {{math|''z'' {{=}} ''a'' · 10<sup>''b''</sup>}}, so that {{math|ln(''z'') {{=}} ln(''a'') + {{mvar|b}} · ln(10)}}.

A closely related method can be used to compute the logarithm of integers. Putting <math>\textstyle z=\frac{n+1}{n}</math> in the above series, it follows that:
<math display="block">\ln (n+1) = \ln(n) + 2\sum_{k=0}^\infty\frac{1}{2k+1}\left(\frac{1}{2 n+1}\right)^{2k+1}.</math>
If the logarithm of a large integer {{mvar|n}} is known, then this series yields a quickly converging series for {{math|ln(''n'' + 1)}}, with a [[rate of convergence]] of <math display="inline">\left(\frac{1}{2 n+1}\right)^{2}</math>.
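A minimal sketch of the artanh series and the integer recurrence above (the helper names <code>ln_artanh</code> and <code>ln_next_integer</code> are illustrative, and ordinary double precision is assumed):

<syntaxhighlight lang="python">
import math

def ln_artanh(z, terms=10):
    # ln(z) for z > 0 via the inverse hyperbolic tangent series.
    u = (z - 1) / (z + 1)
    return 2 * sum(u ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

def ln_next_integer(ln_n, n, terms=10):
    # Given ln(n), approximate ln(n + 1) using the series with z = (n + 1)/n.
    u = 1 / (2 * n + 1)
    return ln_n + 2 * sum(u ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

print(ln_artanh(1.5, 3), math.log(1.5))            # error of roughly 3e-6
print(ln_next_integer(math.log(10), 10), math.log(11))
</syntaxhighlight>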
===Arithmetic–geometric mean approximation===
The [[arithmetic–geometric mean]] yields high-precision approximations of the [[natural logarithm]]. Sasaki and Kanada showed in 1982 that it was particularly fast for precisions between 400 and 1000 decimal places, while Taylor series methods were typically faster when less precision was needed. In their work {{math|ln(''x'')}} is approximated to a precision of {{math|2<sup>−''p''</sup>}} (or {{mvar|p}} precise bits) by the following formula (due to [[Carl Friedrich Gauss]]):<ref>{{Citation |first1=T. |last1=Sasaki |first2=Y. |last2=Kanada |title=Practically fast multiple-precision evaluation of log(x) |journal=Journal of Information Processing |volume=5|issue=4 |pages=247–50 |year=1982 | url=http://ci.nii.ac.jp/naid/110002673332 | access-date=30 March 2011}}</ref><ref>{{Citation |first1=Timm |title=Stacs 99|last1=Ahrendt|publisher=Springer|location=Berlin, New York|series=Lecture notes in computer science|doi=10.1007/3-540-49116-3_28|volume=1564|year=1999|pages=302–12|isbn=978-3-540-65691-3|chapter=Fast Computations of the Exponential Function}}</ref>
<math display="block">\ln (x) \approx \frac{\pi}{2\, \mathrm{M}\!\left(1, 2^{2 - m}/x \right)} - m \ln(2).</math>
Here {{math|M(''x'', ''y'')}} denotes the [[arithmetic–geometric mean]] of {{mvar|x}} and {{mvar|y}}. It is obtained by repeatedly calculating the average {{math|(''x'' + ''y'')/2}} ([[arithmetic mean]]) and <math display="inline">\sqrt{xy}</math> ([[geometric mean]]) of {{mvar|x}} and {{mvar|y}}, and then letting those two numbers become the next {{mvar|x}} and {{mvar|y}}. The two numbers quickly converge to a common limit, which is the value of {{math|M(''x'', ''y'')}}. {{mvar|m}} is chosen such that
<math display="block">x \,2^m > 2^{p/2}</math>
to ensure the required precision. A larger {{mvar|m}} makes the {{math|M(''x'', ''y'')}} calculation take more steps (the initial {{mvar|x}} and {{mvar|y}} are farther apart, so more iterations are needed to converge) but gives more precision. The constants {{math|{{pi}}}} and {{math|ln(2)}} can be calculated with quickly converging series.
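A rough sketch of Gauss's formula (the name <code>ln_agm</code> is illustrative; practical uses of this method rely on arbitrary-precision arithmetic, whereas ordinary double precision limits the result to about 15 significant digits):

<syntaxhighlight lang="python">
import math

def agm(a, b):
    # Arithmetic–geometric mean: iterate the arithmetic and geometric means.
    while abs(a - b) > 1e-16 * max(a, b):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

def ln_agm(x, p=50):
    # Gauss's formula: ln(x) ~= pi / (2 M(1, 2^(2-m)/x)) - m ln(2),
    # with m chosen so that x * 2^m > 2^(p/2).
    m = 0
    while x * 2 ** m <= 2 ** (p / 2):
        m += 1
    return math.pi / (2 * agm(1.0, 2 ** (2 - m) / x)) - m * math.log(2)

print(ln_agm(10.0), math.log(10.0))
</syntaxhighlight>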
===Feynman's algorithm===
While at [[Los Alamos National Laboratory]] working on the [[Manhattan Project]], [[Richard Feynman]] developed a bit-processing algorithm to compute the logarithm that is similar to long division and was later used in the [[Connection Machine]]. The algorithm relies on the fact that every real number {{mvar|x}} with {{math|1 < ''x'' < 2}} can be represented as a product of distinct factors of the form {{math|1 + 2<sup>−''k''</sup>}}. The algorithm sequentially builds that product {{mvar|P}}, starting with {{math|''P'' {{=}} 1}} and {{math|''k'' {{=}} 1}}: if {{math|''P'' · (1 + 2<sup>−''k''</sup>) < ''x''}}, then it changes {{mvar|P}} to {{math|''P'' · (1 + 2<sup>−''k''</sup>)}}. It then increases {{mvar|k}} by one regardless. The algorithm stops when {{mvar|k}} is large enough to give the desired accuracy.

Because {{math|log(''x'')}} is the sum of the terms of the form {{math|log(1 + 2<sup>−''k''</sup>)}} corresponding to those {{mvar|k}} for which the factor {{math|1 + 2<sup>−''k''</sup>}} was included in the product {{mvar|P}}, {{math|log(''x'')}} may be computed by simple addition, using a table of {{math|log(1 + 2<sup>−''k''</sup>)}} for all {{mvar|k}}. Any base may be used for the logarithm table.<ref>{{citation |first=Danny |last=Hillis |author-link=Danny Hillis |title=Richard Feynman and The Connection Machine |journal=Physics Today |volume= 42|issue= 2|page= 78|date=15 January 1989 |doi=10.1063/1.881196|bibcode=1989PhT....42b..78H}}</ref>
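A small floating-point sketch of the idea (the function name <code>feynman_log</code> is illustrative; the original algorithm was intended for fixed-point hardware using only shifts and adds, and the natural logarithm is used for the table here):

<syntaxhighlight lang="python">
import math

def feynman_log(x, bits=40):
    # ln(x) for 1 < x < 2: greedily factor x into distinct terms (1 + 2^-k)
    # and add the corresponding table entries ln(1 + 2^-k).
    table = [math.log(1 + 2.0 ** -k) for k in range(1, bits + 1)]
    product, result = 1.0, 0.0
    for k in range(1, bits + 1):
        candidate = product * (1 + 2.0 ** -k)
        if candidate < x:          # include this factor in the product P
            product = candidate
            result += table[k - 1]
    return result

print(feynman_log(1.7), math.log(1.7))
</syntaxhighlight>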