==Applications==
[[File:NautilusCutawayLogarithmicSpiral.jpg|thumb|A [[nautilus]] shell displaying a logarithmic spiral|alt=A photograph of a nautilus' shell.]]
Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of [[scale invariance]]. For example, each chamber of the shell of a [[nautilus]] is an approximate copy of the next one, scaled by a constant factor. This gives rise to a [[logarithmic spiral]].<ref>{{Harvard citations |last1=Maor |year=2009 |nb=yes |loc=p. 135 }}</ref> [[Benford's law]] on the distribution of leading digits can also be explained by scale invariance.<ref>{{Citation | last1=Frey | first1=Bruce | title=Statistics hacks | publisher=[[O'Reilly Media|O'Reilly]]|location=Sebastopol, CA| series=Hacks Series |url={{google books |plainurl=y |id=HOPyiNb9UqwC|page=275}}| isbn=978-0-596-10164-0 | year=2006}}, chapter 6, section 64</ref> Logarithms are also linked to [[self-similarity]]. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions.<ref>{{Citation | last1=Ricciardi | first1=Luigi M. | title=Lectures in applied mathematics and informatics | url={{google books |plainurl=y |id=Cw4NAQAAIAAJ}} | publisher=Manchester University Press | location=Manchester | isbn=978-0-7190-2671-3 | year=1990}}, p. 21, section 1.3.2</ref> The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture, are also based on logarithms. [[Logarithmic scale]]s are useful for quantifying the relative change of a value as opposed to its absolute difference. Moreover, because the logarithmic function {{math|log(''x'')}} grows very slowly for large {{mvar|x}}, logarithmic scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific formulas, such as the [[Tsiolkovsky rocket equation]], the [[Fenske equation]], or the [[Nernst equation]].

===Logarithmic scale===
{{Main|Logarithmic scale}}
[[File:Germany Hyperinflation.svg|A logarithmic chart depicting the value of one [[German gold mark|Goldmark]] in [[German Papiermark|Papiermarks]] during the [[Inflation in the Weimar Republic|German hyperinflation in the 1920s]]|right|thumb|alt=A graph of the value of one mark over time. The line showing its value is increasing very quickly, even with logarithmic scale.]]
Scientific quantities are often expressed as logarithms of other quantities, using a ''logarithmic scale''. For example, the [[decibel]] is a [[unit of measurement]] associated with [[logarithmic-scale]] [[level quantity|quantities]]. It is based on the common logarithm of [[ratio]]s: 10 times the common logarithm of a [[power (physics)|power]] ratio or 20 times the common logarithm of a [[voltage]] ratio. It is used to quantify the attenuation or amplification of electrical signals,<ref>{{cite book|contribution=7.5.1 Decibel (dB)|title=Power Quality|first=C.|last=Sankaran|publisher=Taylor & Francis|year=2001|isbn=9780849310409|quote=The decibel is used to express the ratio between two quantities. The quantities may be voltage, current, or power.}}</ref> to describe power levels of sounds in [[acoustics]],<ref>{{Citation|last1=Maling|first1=George C.|editor1-last=Rossing|editor1-first=Thomas D.|title=Springer handbook of acoustics|publisher=[[Springer-Verlag]]|location=Berlin, New York|isbn=978-0-387-30446-5|year=2007|chapter=Noise}}, section 23.0.2</ref> and the [[absorbance]] of light in the fields of [[spectrometer|spectrometry]] and [[optics]]. The [[signal-to-noise ratio]], which describes the amount of unwanted [[noise (electronic)|noise]] in relation to a (meaningful) [[signal (information theory)|signal]], is also measured in decibels.<ref>{{Citation | last1=Tashev | first1=Ivan Jelev | title=Sound Capture and Processing: Practical Approaches | publisher=[[John Wiley & Sons]] | location=New York | isbn=978-0-470-31983-3 | year=2009|url={{google books |plainurl=y |id=plll9smnbOIC|page=48}}|page=98}}</ref> In a similar vein, the [[peak signal-to-noise ratio]], likewise defined via the logarithm, is commonly used to assess the quality of sound and [[image compression]] methods.<ref>{{Citation | last1=Chui | first1=C.K. | title=Wavelets: a mathematical tool for signal processing | publisher=[[Society for Industrial and Applied Mathematics]] | location=Philadelphia | series=SIAM monographs on mathematical modeling and computation | isbn=978-0-89871-384-8 | year=1997|url={{google books |plainurl=y |id=N06Gu433PawC|page=180}}}}</ref>
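For example, doubling the power of a signal raises its level by about 3 dB, and doubling a voltage raises it by about 6 dB, since
<math display="block">10 \log_{10}(2) \approx 3.01 \quad\text{and}\quad 20 \log_{10}(2) \approx 6.02 .</math>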

The strength of an earthquake is measured by taking the common logarithm of the energy released by the quake. This is used in the [[moment magnitude scale]] and the [[Richter magnitude scale]]. For example, a 5.0 earthquake releases 32 times {{math|(10<sup>1.5</sup>)}} and a 6.0 releases 1000 times {{math|(10<sup>3</sup>)}} the energy of a 4.0.<ref>{{Citation|last1=Crauder|first1=Bruce|last2=Evans|first2=Benny|last3=Noell|first3=Alan|title=Functions and Change: A Modeling Approach to College Algebra|publisher=Cengage Learning|location=Boston|edition=4th|isbn=978-0-547-15669-9|year=2008}}, section 4.4.</ref> [[Apparent magnitude]] measures the brightness of stars logarithmically.<ref>{{Citation|last1=Bradt|first1=Hale|title=Astronomy methods: a physical approach to astronomical observations|publisher=[[Cambridge University Press]]|series=Cambridge Planetary Science|isbn=978-0-521-53551-9|year=2004}}, section 8.3, p. 231</ref>
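In general, a magnitude difference of {{math|Δ''m''}} corresponds to an energy ratio of
<math display="block">10^{1.5\,\Delta m},</math>
so one whole step of magnitude multiplies the released energy by {{math|10<sup>1.5</sup> ≈ 32}}, and two steps multiply it by {{math|1=10<sup>3</sup> = 1000}}.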

In [[chemistry]] the negative of the decimal logarithm, the decimal '''{{vanchor|cologarithm}}''', is indicated by the letter p.<ref name="Jens">{{cite journal|author=Nørby, Jens|year=2000|title=The origin and the meaning of the little p in pH|journal=Trends in Biochemical Sciences|volume=25|issue=1|pages=36–37|doi=10.1016/S0968-0004(99)01517-0|pmid=10637613}}</ref> For instance, [[pH]] is the decimal cologarithm of the [[Activity (chemistry)|activity]] of [[hydronium]] ions (the form [[hydrogen]] [[ion]]s {{H+}} take in water).<ref>{{Citation|author=IUPAC|title=Compendium of Chemical Terminology ("Gold Book")|edition=2nd|editor=A. D. McNaught, A. Wilkinson|publisher=Blackwell Scientific Publications| location=Oxford| year=1997| url=http://goldbook.iupac.org/P04524.html|isbn=978-0-9678550-9-7|doi=10.1351/goldbook|author-link=IUPAC|doi-access=free}}</ref> The activity of hydronium ions in neutral water is 10<sup>−7</sup> [[molar concentration|mol·L<sup>−1</sup>]], hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 10<sup>4</sup> of the activity, that is, vinegar's hydronium ion activity is about {{math|10<sup>−3</sup> mol·L<sup>−1</sup>}}.
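Written as a formula,
<math display="block">\mathrm{pH} = -\log_{10}\left(a_{\mathrm{H_3O^+}}\right), \qquad \text{so} \qquad -\log_{10}\left(10^{-7}\right) = 7 \quad\text{and}\quad -\log_{10}\left(10^{-3}\right) = 3.</math>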

[[Semi-log plot|Semilog]] (log–linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. For example, the chart at the right compresses the steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In such graphs, [[exponential function]]s of the form {{math|1=''f''(''x'') = ''a'' · ''b''{{i sup|''x''}}}} appear as straight lines with [[slope]] equal to the logarithm of {{mvar|b}}. [[Log-log plot|Log-log]] graphs scale both axes logarithmically, which causes functions of the form {{math|1=''f''(''x'') = ''a'' · ''x''{{i sup|''k''}}}} to be depicted as straight lines with slope equal to the exponent {{mvar|k}}. This is applied in visualizing and analyzing [[power law]]s.<ref>{{Citation|last1=Bird|first1=J.O.|title=Newnes engineering mathematics pocket book |publisher=Newnes|location=Oxford|edition=3rd|isbn=978-0-7506-4992-6|year=2001}}, section 34</ref>

===Psychology===
Logarithms occur in several laws describing [[human perception]]:<ref>{{Citation | last1=Goldstein | first1=E. Bruce | title=Encyclopedia of Perception | url={{google books |plainurl=y |id=Y4TOEN4f5ZMC}} | publisher=Sage | location=Thousand Oaks, CA | isbn=978-1-4129-4081-8 | year=2009}}, pp. 355–56</ref><ref>{{Citation | last1=Matthews | first1=Gerald | title=Human Performance: Cognition, Stress, and Individual Differences | url={{google books |plainurl=y |id=0XrpulSM1HUC}} | publisher=Psychology Press | location=Hove | isbn=978-0-415-04406-6 | year=2000}}, p. 48</ref> [[Hick's law]] proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have.<ref>{{Citation|last1=Welford|first1=A.T.|title=Fundamentals of skill|publisher=Methuen|location=London|isbn=978-0-416-03000-6 |oclc=219156|year=1968}}, p. 61</ref> [[Fitts's law]] predicts that the time required to rapidly move to a target area is a logarithmic function of the ratio between the distance to a target and the size of the target.<ref>{{Citation|author=Paul M. Fitts|date=June 1954|title=The information capacity of the human motor system in controlling the amplitude of movement|journal=Journal of Experimental Psychology|volume=47|issue=6|pages=381–91 | pmid=13174710 | doi =10.1037/h0055392 |s2cid=501599}}, reprinted in {{Citation|journal=Journal of Experimental Psychology: General|volume=121|issue=3|pages=262–69|year=1992 | pmid=1402698 | url=http://sing.stanford.edu/cs303-sp10/papers/1954-Fitts.pdf | access-date=30 March 2011 |title=The information capacity of the human motor system in controlling the amplitude of movement|author=Paul M. Fitts|doi=10.1037/0096-3445.121.3.262}}</ref> In [[psychophysics]], the [[Weber–Fechner law]] proposes a logarithmic relationship between [[stimulus (psychology)|stimulus]] and [[sensation (psychology)|sensation]], such as the actual vs. the perceived weight of an item a person is carrying.<ref>{{Citation | last1=Banerjee | first1=J.C. | title=Encyclopaedic dictionary of psychological terms | publisher=M.D. Publications | location=New Delhi | isbn=978-81-85880-28-0 | oclc=33860167 | year=1994|url={{google books |plainurl=y |id=Pwl5U2q5hfcC|page=306}} |page=304}}</ref> (This "law", however, is less realistic than more recent models, such as [[Stevens's power law]].<ref>{{Citation|last1=Nadel|first1=Lynn|author1-link=Lynn Nadel|title=Encyclopedia of cognitive science|publisher=[[John Wiley & Sons]]|location=New York|isbn=978-0-470-01619-0|year=2005}}, lemmas ''Psychophysics'' and ''Perception: Overview''</ref>)

Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate (positioning 1000 ten times as far away) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly.<ref>{{Citation|doi=10.1111/1467-9280.02438|last1=Siegler|first1=Robert S.|last2=Opfer|first2=John E.|title=The Development of Numerical Estimation. Evidence for Multiple Representations of Numerical Quantity|volume=14|issue=3|pages=237–43|year=2003|journal=Psychological Science|url=http://www.psy.cmu.edu/~siegler/sieglerbooth-cd04.pdf|pmid=12741747|citeseerx=10.1.1.727.3696|s2cid=9583202|access-date=7 January 2011|archive-url=https://web.archive.org/web/20110517002232/http://www.psy.cmu.edu/~siegler/sieglerbooth-cd04.pdf|archive-date=17 May 2011|url-status=dead}}</ref><ref>{{Citation|last1=Dehaene| first1=Stanislas|last2=Izard|first2=Véronique |last3=Spelke| first3=Elizabeth|last4=Pica| first4=Pierre| title=Log or Linear? Distinct Intuitions of the Number Scale in Western and Amazonian Indigene Cultures|volume=320|issue=5880|pages=1217–20|doi=10.1126/science.1156540|pmc=2610411|pmid=18511690| year=2008|journal=Science|bibcode=2008Sci...320.1217D| citeseerx=10.1.1.362.2390}}</ref>

===Probability theory and statistics===
[[File:PDF-log normal distributions.svg|thumb|right|alt=Three asymmetric PDF curves|Three [[probability density function]]s (PDF) of random variables with log-normal distributions. The location parameter {{math|μ}}, which is zero for all three of the PDFs shown, is the mean of the logarithm of the random variable, not the mean of the variable itself.]]
[[File:Benfords law illustrated by world's countries population.svg|Distribution of first digits (in %, red bars) in the [[List of countries by population|population of the 237 countries]] of the world. Black dots indicate the distribution predicted by Benford's law.|thumb|right|alt=A bar chart and a superimposed second chart. The two differ slightly, but both decrease in a similar fashion.]]
Logarithms arise in [[probability theory]]: the [[law of large numbers]] dictates that, for a [[fair coin]], as the number of coin-tosses increases to infinity, the observed proportion of heads [[binomial distribution|approaches one-half]]. The fluctuations of this proportion about one-half are described by the [[law of the iterated logarithm]].<ref>{{Citation | last1=Breiman | first1=Leo | title=Probability | publisher=[[Society for Industrial and Applied Mathematics]] | location=Philadelphia | series=Classics in applied mathematics | isbn=978-0-89871-296-4 | year=1992}}, section 12.9</ref>

Logarithms also occur in [[log-normal distribution]]s. When the logarithm of a [[random variable]] has a [[normal distribution]], the variable is said to have a log-normal distribution.<ref>{{Citation|last1=Aitchison|first1=J.|last2=Brown|first2=J.A.C.|title=The lognormal distribution|publisher=[[Cambridge University Press]]|isbn=978-0-521-04011-2 |oclc=301100935|year=1969}}</ref> Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence.<ref>{{Citation | title = An introduction to turbulent flow | author = Jean Mathieu and Julian Scott | publisher = Cambridge University Press | year = 2000 | isbn = 978-0-521-77538-0 | page = 50 | url = {{google books |plainurl=y |id=nVA53NEAx64C|page=50}} }}</ref>

Logarithms are used for [[maximum-likelihood estimation]] of parametric [[statistical model]]s. For such a model, the [[likelihood function]] depends on at least one [[parametric model|parameter]] that must be estimated. A maximum of the likelihood function occurs at the same parameter value as a maximum of the logarithm of the likelihood (the "''log likelihood''"), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for [[independence (probability)|independent]] random variables.<ref>{{Citation|last1=Rose|first1=Colin|last2=Smith|first2=Murray D.|title=Mathematical statistics with Mathematica|publisher=[[Springer-Verlag]]|location=Berlin, New York|series=Springer texts in statistics|isbn=978-0-387-95234-5|year=2002}}, section 11.3</ref>
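For {{mvar|n}} independent observations {{math|''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub>}} with density {{math|''f''(''x''; ''θ'')}}, the likelihood is a product, which the logarithm turns into a sum:
<math display="block">\ln L(\theta) = \ln \prod_{i=1}^n f(x_i; \theta) = \sum_{i=1}^n \ln f(x_i; \theta).</math>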

[[Benford's law]] describes the occurrence of digits in many [[data set]]s, such as heights of buildings. According to Benford's law, the probability that the first decimal digit of an item in the data sample is {{Mvar|d}} (from 1 to 9) equals {{math|log<sub>10</sub> (''d'' + 1) − log<sub>10</sub> (''d'')}}, ''regardless'' of the unit of measurement.<ref>{{Citation|last1=Tabachnikov|first1=Serge|author-link1=Sergei Tabachnikov|title=Geometry and Billiards|publisher=[[American Mathematical Society]]|location=Providence, RI|isbn=978-0-8218-3919-5|year=2005|pages=36–40}}, section 2.1</ref> Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc. Auditors examine deviations from Benford's law to detect fraudulent accounting.<ref>{{citation |title=The Effective Use of Benford's Law in Detecting Fraud in Accounting Data |first1=Cindy |last1=Durtschi |first2=William |last2=Hillison |first3=Carl |last3=Pacini |url=http://faculty.usfsp.edu/gkearns/Articles_Fraud/Benford%20Analysis%20Article.pdf |volume=V |pages=17–34 |year=2004 |journal=Journal of Forensic Accounting |archive-url=https://web.archive.org/web/20170829062510/http://faculty.usfsp.edu/gkearns/Articles_Fraud/Benford%20Analysis%20Article.pdf |archive-date=29 August 2017 |access-date=28 May 2018}}</ref> The [[logarithm transformation]] is a type of [[data transformation (statistics)|data transformation]] used to bring the empirical distribution closer to the assumed one.
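
The first-digit probabilities predicted by Benford's law follow directly from the formula above; a minimal Python sketch (the function name is illustrative, not a standard library routine) tabulates them:
<syntaxhighlight lang="python">
import math

def benford_probability(d):
    """Benford's law: probability that the leading decimal digit is d."""
    return math.log10(d + 1) - math.log10(d)

for d in range(1, 10):
    print(d, f"{benford_probability(d):.1%}")
# Prints roughly 30.1% for digit 1, 17.6% for 2, ..., 4.6% for 9.
</syntaxhighlight>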

===Computational complexity===
[[Analysis of algorithms]] is a branch of [[computer science]] that studies the [[time complexity|performance]] of [[algorithm]]s (computer programs solving a certain problem).<ref name=Wegener>{{Citation|last1=Wegener|first1=Ingo| title=Complexity theory: exploring the limits of efficient algorithms|publisher=[[Springer-Verlag]]|location=Berlin, New York|isbn=978-3-540-21045-0|year=2005}}, pp. 1–2</ref> Logarithms are valuable for describing algorithms that [[Divide and conquer algorithm|divide a problem]] into smaller ones, and join the solutions of the subproblems.<ref>{{Citation|last1=Harel|first1=David|last2=Feldman|first2=Yishai A.|title=Algorithmics: the spirit of computing|location=New York|publisher=[[Addison-Wesley]]|isbn=978-0-321-11784-7|year=2004}}, p. 143</ref> For example, to find a number in a sorted list, the [[binary search algorithm]] checks the middle entry and, if the number has not yet been found, proceeds with the half before or after the middle entry. This algorithm requires, on average, {{math|log<sub>2</sub> (''N'')}} comparisons, where {{mvar|N}} is the list's length.<ref>{{citation | last = Knuth | first = Donald | author-link = Donald Knuth | title = The Art of Computer Programming | publisher = Addison-Wesley |location=Reading, MA | year= 1998| isbn = 978-0-201-89685-5 | title-link = The Art of Computer Programming }}, section 6.2.1, pp. 409–26</ref> Similarly, the [[merge sort]] algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a time [[big O notation|approximately proportional to]] {{math|''N'' · log(''N'')}}.<ref>{{Harvard citations|last = Knuth | first = Donald|year=1998|loc=section 5.2.4, pp. 158–68|nb=yes}}</ref> The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standard [[uniform cost model]].<ref name=Wegener20>{{Citation|last1=Wegener|first1=Ingo| title=Complexity theory: exploring the limits of efficient algorithms|publisher=[[Springer-Verlag]]|location=Berlin, New York|isbn=978-3-540-21045-0|year=2005|page=20}}</ref>
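
The halving step is what produces the logarithm: each comparison shrinks the remaining search range by a factor of two. A minimal Python sketch of binary search (an illustration, not a library routine):
<syntaxhighlight lang="python">
def binary_search(sorted_list, target):
    """Return an index of target in sorted_list, or None if absent.

    Each iteration halves the range [lo, hi], so at most about
    log2(len(sorted_list)) + 1 iterations are performed.
    """
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None
</syntaxhighlight>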

A function {{math|''f''(''x'')}} is said to [[Logarithmic growth|grow logarithmically]] if {{math|''f''(''x'')}} is (exactly or approximately) proportional to the logarithm of {{mvar|x}}. (Biological descriptions of organism growth, however, use this term for an exponential function.<ref>{{Citation|last1=Mohr|first1=Hans|last2=Schopfer|first2=Peter|title=Plant physiology|publisher=Springer-Verlag|location=Berlin, New York|isbn=978-3-540-58016-4|year=1995|url-access=registration|url=https://archive.org/details/plantphysiology0000mohr}}, chapter 19, p. 298</ref>) For example, any [[natural number]] {{mvar|N}} can be represented in [[Binary numeral system|binary form]] in no more than {{math|log<sub>2</sub> ''N'' + 1}} [[bit]]s. In other words, the amount of [[memory (computing)|memory]] needed to store {{mvar|N}} grows logarithmically with {{mvar|N}}.

===Entropy and chaos===
[[File:Chaotic Bunimovich stadium.svg|right|thumb|[[Dynamical billiards|Billiards]] on an oval [[billiard table]]. Two particles, starting at the center with an angle differing by one degree, take paths that diverge chaotically because of [[reflection (physics)|reflections]] at the boundary.|alt=An oval shape with the trajectories of two particles.]]
[[Entropy]] is broadly a measure of the disorder of some system. In [[statistical thermodynamics]], the entropy {{mvar|S}} of some physical system is defined as
<math display="block"> S = - k \sum_i p_i \ln(p_i).</math>
The sum is over all possible states {{Mvar|i}} of the system in question, such as the positions of gas particles in a container. Moreover, {{math|''p''<sub>''i''</sub>}} is the probability that the state {{Mvar|i}} is attained and {{mvar|k}} is the [[Boltzmann constant]]. Similarly, [[entropy (information theory)|entropy in information theory]] measures the quantity of information. If a message recipient may expect any one of {{mvar|N}} possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as {{math|log<sub>2</sub> ''N''}} bits.<ref>{{Citation|last1=Eco|first1=Umberto|author1-link=Umberto Eco|title=The open work |publisher=[[Harvard University Press]]|isbn=978-0-674-63976-8|year=1989}}, section III.I</ref>
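For example, a sequence of 10 tosses of a fair coin has {{math|1=''N'' = 2<sup>10</sup> = 1024}} equally likely outcomes, so learning the outcome conveys {{math|1=log<sub>2</sub>(1024) = 10}} bits of information, one bit per toss.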

[[Lyapunov exponent]]s use logarithms to gauge the degree of chaoticity of a [[dynamical system]]. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems are [[chaos theory|chaotic]] in a [[Deterministic system|deterministic]] way, because small measurement errors of the initial state predictably lead to largely different final states.<ref>{{Citation | last1=Sprott | first1=Julien Clinton | title=Elegant Chaos: Algebraically Simple Chaotic Flows | url={{google books |plainurl=y |id=buILBDre9S4C}} | publisher=[[World Scientific]] |location=New Jersey|isbn=978-981-283-881-0| year=2010| bibcode=2010ecas.book.....S | doi=10.1142/7183 }}, section 1.9</ref> At least one Lyapunov exponent of a deterministically chaotic system is positive.

===Fractals===
[[File:Sierpinski dimension.svg|The Sierpinski triangle (at the right) is constructed by repeatedly replacing [[equilateral triangle]]s by three smaller ones.|thumb|alt=Parts of a triangle are removed in an iterated way.]]
Logarithms occur in definitions of the [[fractal dimension|dimension]] of [[fractal]]s.<ref>{{Citation|last1=Helmberg|first1=Gilbert|title=Getting acquainted with fractals|publisher=Walter de Gruyter|series=De Gruyter Textbook|location=Berlin, New York|isbn=978-3-11-019092-2|year=2007}}</ref> Fractals are geometric objects that are self-similar in the sense that small parts reproduce, at least roughly, the entire global structure. The [[Sierpinski triangle]] (pictured) can be covered by three copies of itself, each having sides half the original length. This makes the [[Hausdorff dimension]] of this structure {{math|1=ln(3)/ln(2) ≈ 1.58}}. Another logarithm-based notion of dimension is obtained by [[box-counting dimension|counting the number of boxes]] needed to cover the fractal in question.

===Music===
{{multiple image
 | direction = vertical
 | width = 350
 | footer = Four different octaves shown on a linear scale, then shown on a logarithmic scale (as the ear hears them)
 | image1 = 4Octaves.and.Frequencies.svg
 | alt1 = Four different octaves shown on a linear scale.
 | image2 = 4Octaves.and.Frequencies.Ears.svg
 | alt2 = Four different octaves shown on a logarithmic scale
}}
Logarithms are related to musical tones and [[interval (music)|intervals]]. In [[equal temperament]] tunings, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or [[pitch (music)|pitch]], of the individual tones. In the [[12-tone equal temperament]] tuning common in modern Western music, each [[octave]] (doubling of frequency) is broken into twelve equally spaced intervals called [[semitone]]s. For example, if the [[a (musical note)|note ''A'']] has a frequency of 440 [[Hertz|Hz]], then the note [[B♭ (musical note)|''B-flat'']] has a frequency of 466 Hz. The interval between ''A'' and ''B-flat'' is a [[semitone]], as is the one between ''B-flat'' and [[b (musical note)|''B'']] (frequency 493 Hz). Accordingly, the frequency ratios agree:
<math display="block">\frac{466}{440} \approx \frac{493}{466} \approx 1.059 \approx \sqrt[12]2.</math>
Intervals between arbitrary pitches can be measured in octaves by taking the {{Nowrap|base-{{math|2}}}} logarithm of the [[frequency]] ratio, in equally tempered semitones by taking the {{Nowrap|base-{{math|2<sup>1/12</sup>}}}} logarithm ({{math|12}} times the {{Nowrap|base-{{math|2}}}} logarithm), or in [[cent (music)|cents]], hundredths of a semitone, by taking the {{Nowrap|base-{{math|2<sup>1/1200</sup>}}}} logarithm ({{math|1200}} times the {{Nowrap|base-{{math|2}}}} logarithm). Cents are used for finer encoding, as is needed for finer measurements or for non-equal temperaments.<ref>{{Citation|last1=Wright|first1=David|title=Mathematics and music|location=Providence, RI|publisher=AMS Bookstore|isbn=978-0-8218-4873-9|year=2009}}, chapter 5</ref>

{| class="wikitable" style="text-align:center;"
|-
! Interval<br /><span style="font-weight: normal">(the two tones are played at the same time)</span>
| [[72 tone equal temperament|1/12 tone]]<br />{{audio|1_step_in_72-et_on_C.mid|play}}
| [[Semitone]]<br />{{audio|help=no|Minor_second_on_C.mid|play}}
| [[Just major third]]<br />{{audio|help=no|Just_major_third_on_C.mid|play}}
| [[Major third]]<br />{{audio|help=no|Major_third_on_C.mid|play}}
| [[Tritone]]<br />{{audio|help=no|Tritone_on_C.mid|play}}
| [[Octave]]<br />{{audio|help=no|Perfect_octave_on_C.mid|play}}
|-
! '''Frequency ratio''' <math>r</math>
| <math>2^{\frac 1 {72}} \approx 1.0097</math>
| <math>2^{\frac 1 {12}} \approx 1.0595</math>
| <math>\tfrac 5 4 = 1.25</math>
| <math>\begin{align} 2^{\frac 4 {12}} & = \sqrt[3] 2 \\ & \approx 1.2599 \end{align}</math>
| <math>\begin{align} 2^{\frac 6 {12}} & = \sqrt 2 \\ & \approx 1.4142 \end{align}</math>
| <math> 2^{\frac {12} {12}} = 2 </math>
|-
! '''Number of semitones''' <math>12 \log_2 r</math>
| <math>\tfrac 1 6</math>
| <math>1</math>
| <math>\approx 3.8631</math>
| <math>4</math>
| <math>6</math>
| <math>12</math>
|-
! '''Number of cents''' <math>1200 \log_2 r</math>
| <math>16 \tfrac 2 3</math>
| <math>100</math>
| <math>\approx 386.31</math>
| <math>400</math>
| <math>600</math>
| <math>1200</math>
|}
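
With the frequencies quoted above, the interval from ''A'' to ''B-flat'' measures
<math display="block">1200 \log_2 \frac{466}{440} \approx 99.4 \text{ cents},</math>
close to the 100 cents of an exact equal-tempered semitone; the small difference comes from rounding the frequencies to whole hertz.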

===Number theory===
[[Natural logarithm]]s are closely linked to [[prime-counting function|counting prime numbers]] (2, 3, 5, 7, 11, ...), an important topic in [[number theory]]. For any [[integer]] {{mvar|x}}, the quantity of [[prime number]]s less than or equal to {{mvar|x}} is denoted {{math|[[prime-counting function|{{pi}}(''x'')]]}}. The [[prime number theorem]] asserts that {{math|{{pi}}(''x'')}} is approximately given by
<math display="block">\frac{x}{\ln(x)},</math>
in the sense that the ratio of {{math|{{pi}}(''x'')}} and that fraction approaches 1 when {{mvar|x}} tends to infinity.<ref>{{Citation|last1=Bateman|first1=P.T.|last2=Diamond|first2=Harold G.|title=Analytic number theory: an introductory course|publisher=[[World Scientific]]|location=New Jersey|isbn=978-981-256-080-3 |oclc=492669517|year=2004}}, theorem 4.1</ref> As a consequence, the probability that a randomly chosen number between 1 and {{mvar|x}} is prime is inversely [[proportionality (mathematics)|proportional]] to the number of decimal digits of {{mvar|x}}. A far better estimate of {{math|{{pi}}(''x'')}} is given by the [[logarithmic integral function|offset logarithmic integral]] function {{math|Li(''x'')}}, defined by
<math display="block"> \mathrm{Li}(x) = \int_2^x \frac{1}{\ln(t)} \,dt. </math>
The [[Riemann hypothesis]], one of the oldest open mathematical [[conjecture]]s, can be stated in terms of comparing {{math|{{pi}}(''x'')}} and {{math|Li(''x'')}}.<ref>{{Harvard citations|last1=Bateman|first1=P. T.|last2=Diamond|year=2004|nb=yes |loc=Theorem 8.15}}</ref> The [[Erdős–Kac theorem]] describing the number of distinct [[prime factor]]s also involves the [[natural logarithm]].

The logarithm of ''n'' [[factorial]], {{math|1=''n''! = 1 · 2 · ... · ''n''}}, is given by
<math display="block"> \ln (n!) = \ln (1) + \ln (2) + \cdots + \ln (n).</math>
This can be used to obtain [[Stirling's formula]], an approximation of {{math|''n''!}} for large {{mvar|n}}.<ref>{{Citation|last1=Slomson|first1=Alan B.|title=An introduction to combinatorics|publisher=[[CRC Press]]|location=London|isbn=978-0-412-35370-3|year=1991}}, chapter 4</ref>
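
Both approximations are easy to check numerically. The following Python sketch (an illustration; <code>prime_count</code> is a naive [[sieve of Eratosthenes]], not a library function) compares {{math|{{pi}}(''x'')}} with {{math|''x''/ln(''x'')}}, and {{math|ln(''n''!)}} with the leading terms {{math|''n'' ln(''n'') − ''n''}} of Stirling's formula:
<syntaxhighlight lang="python">
import math

def prime_count(x):
    """pi(x): count the primes <= x with a sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(range(i * i, x + 1, i))
    return sum(sieve)

for x in (10**3, 10**5, 10**6):
    print(x, prime_count(x) / (x / math.log(x)))
# The ratio pi(x) / (x / ln x) drifts slowly toward 1:
# about 1.16, 1.10, and 1.08 for the three values of x.

n = 100
print(math.lgamma(n + 1))   # ln(100!) ~ 363.7, via the log-gamma function
print(n * math.log(n) - n)  # Stirling's leading terms: n ln n - n ~ 360.5
</syntaxhighlight>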