== Related asymptotic notations ==

Big ''O'' is widely used in computer science. Together with some other related notations, it forms the family of Bachmann–Landau notations.{{citation needed|date=April 2021}}

===Little-o notation===
<!-- [[Little-o notation]] redirects here -->
{{Redirect|Little o|the baseball player|Omar Vizquel|the Greek letter|Omicron}}

Intuitively, the assertion "{{math|''f''(''x'')}} is {{math|''o''(''g''(''x''))}}" (read "{{math|''f''(''x'')}} is little-o of {{math|''g''(''x'')}}" or "{{math|''f''(''x'')}} is of inferior order to {{math|''g''(''x'')}}") means that {{math|''g''(''x'')}} grows much faster than {{math|''f''(''x'')}}, or equivalently that {{math|''f''(''x'')}} grows much slower than {{math|''g''(''x'')}}.

As before, let ''f'' be a real or complex valued function and ''g'' a real valued function, both defined on some unbounded subset of the positive [[real number]]s, such that <math>g(x)</math> is strictly positive for all large enough values of ''x''. One writes
:<math>f(x) = o(g(x)) \quad \text{ as } x \to \infty</math>
if for every positive constant {{mvar|ε}} there exists a constant <math>x_0</math> such that
:<math>|f(x)| \leq \varepsilon g(x) \quad \text{ for all } x \geq x_0.</math><ref name=Landausmallo>{{cite book |first=Edmund |last=Landau |author-link=Edmund Landau |title=Handbuch der Lehre von der Verteilung der Primzahlen |publisher=B. G. Teubner |date=1909 |location=Leipzig |trans-title=Handbook on the theory of the distribution of the primes |language=de |page=61 |url=https://archive.org/stream/handbuchderlehre01landuoft#page/61/mode/2up}}</ref>

For example, one has
:<math>2x = o(x^2)</math> and <math>1/x = o(1),</math> both as <math> x \to \infty .</math>

The difference between the [[#Formal definition|definition of the big-O notation]] and the definition of little-o is that while the former has to be true for ''at least one'' constant ''M'', the latter must hold for ''every'' positive constant {{math|''ε''}}, however small.<ref name="Introduction to Algorithms">Thomas H. Cormen et al., 2001, [http://highered.mcgraw-hill.com/sites/0070131511/ Introduction to Algorithms, Second Edition, Ch. 3.1] {{Webarchive|url=https://web.archive.org/web/20090116115944/http://highered.mcgraw-hill.com/sites/0070131511/ |date=2009-01-16 }}</ref> In this way, little-o notation makes a ''stronger statement'' than the corresponding big-O notation: every function that is little-o of ''g'' is also big-O of ''g'', but not every function that is big-O of ''g'' is little-o of ''g''. For example, <math>2x^2 = O(x^2)</math> but {{nowrap|<math>2x^2 \neq o(x^2)</math>.}}

If <math>g(x)</math> is nonzero, or at least becomes nonzero beyond a certain point, the relation <math>f(x) = o(g(x))</math> is equivalent to
:<math>\lim_{x \to \infty}\frac{f(x)}{g(x)} = 0</math>
(and this is in fact how Landau<ref name=Landausmallo /> originally defined the little-o notation).

Little-o respects a number of arithmetic operations. For example,
: if {{mvar|c}} is a nonzero constant and <math>f = o(g)</math> then <math>c \cdot f = o(g)</math>;
: if <math>f = o(F)</math> and <math>g = o(G)</math> then <math> f \cdot g = o(F \cdot G)</math>;
: if <math>f = o(F)</math> and <math>g = o(G)</math> then <math>f+g=o(F+G).</math>
It also satisfies a [[Transitive relation|transitivity]] relation:
: if <math>f = o(g)</math> and <math> g = o(h)</math> then <math>f = o(h).</math>
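These properties follow directly from the definition. For the first one, given <math>\varepsilon > 0</math>, applying the definition of <math>f = o(g)</math> with <math>\varepsilon/|c|</math> in place of <math>\varepsilon</math> yields an <math>x_0</math> such that
:<math>|f(x)| \leq \frac{\varepsilon}{|c|}\, g(x) \quad \text{ for all } x \geq x_0,</math>
and multiplying through by <math>|c|</math> gives <math>|c \cdot f(x)| \leq \varepsilon g(x)</math> for all <math>x \geq x_0</math>, that is, <math>c \cdot f = o(g)</math>; the other properties are verified in the same way.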
Little-o can also be generalized to the finite case:<ref>{{Cite journal |last1=Baratchart |first1=L. |last2=Grimm |first2=J. |last3=LeBlond |first3=J. |last4=Partington |first4=J.R. |date=2003 |title=Asymptotic estimates for interpolation and constrained approximation in H2 by diagonalization of Toeplitz operators |url=https://www.researchgate.net/publication/225672883 |journal=Integral Equations and Operator Theory |volume=45 |issue=3 |pages=269–29|doi=10.1007/s000200300005 }}</ref>
:<math>f(x) = o(g(x)) \quad \text{ as } x \to x_0</math> if <math>f(x) = \alpha(x)g(x)</math> for some <math>\alpha(x)</math> with <math>\lim_{x\to x_0} \alpha(x) = 0</math>.
Or, if <math>g(x)</math> is nonzero in a neighbourhood around <math>x_0</math>:
:<math>f(x) = o(g(x)) \quad \text{ as } x \to x_0</math> if <math>\lim_{x \to x_0}\frac{f(x)}{g(x)} = 0</math>.
This definition is especially useful in the computation of [[Limit of a function|limits]] using [[Taylor series]]. For example, <math>\sin x = x - \frac{x^3}{3!} + \ldots = x + o(x^2) \text{ as } x\to 0</math>, so
:<math>\lim_{x\to 0}\frac{\sin x}{x} = \lim_{x\to 0} \frac{x + o(x^2)}{x} = \lim_{x\to 0} \bigl(1 + o(x)\bigr) = 1.</math>
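The same device handles many limits at once. For instance, from <math>\cos x = 1 - \frac{x^2}{2} + o(x^2)</math> as <math>x \to 0</math>, one gets
:<math>\lim_{x\to 0}\frac{\cos x - 1}{x^2} = \lim_{x\to 0}\frac{-\frac{x^2}{2} + o(x^2)}{x^2} = \lim_{x\to 0}\Bigl(-\frac{1}{2} + o(1)\Bigr) = -\frac{1}{2}.</math>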
=== Big Omega notation ===
<!-- This section is linked from redirects. Please update those links when renaming the section. -->

Another asymptotic notation is <math>\Omega</math>, read "big omega".<ref>{{Cite book|url=https://www.worldcat.org/oclc/676697295|title=Introduction to algorithms|date=2009|publisher=MIT Press|vauthors=Cormen TH, Leiserson CE, Rivest RL, Stein C|isbn=978-0-262-27083-0|edition=3rd|location=Cambridge, Mass.|oclc=676697295|page=48}}</ref> There are two widespread and incompatible definitions of the statement
:<math>f(x)=\Omega(g(x))</math> as <math>x \to a,</math>
where ''a'' is some real number, <math>\infty</math>, or <math>-\infty</math>, where ''f'' and ''g'' are real functions defined in a neighbourhood of ''a'', and where ''g'' is positive in this neighbourhood. The Hardy–Littlewood definition is used mainly in [[analytic number theory]], and the Knuth definition mainly in [[computational complexity theory]]; the definitions are not equivalent.

==== The Hardy–Littlewood definition ====
In 1914 [[Godfrey Harold Hardy|G.H. Hardy]] and [[John Edensor Littlewood|J.E. Littlewood]] introduced the new symbol <math>\ \Omega\ ,</math><ref name=HL>{{cite journal |last1=Hardy |first1=G.H. |author1-link=Godfrey Harold Hardy |last2=Littlewood |first2=J.E. |author2-link=John Edensor Littlewood |year=1914 |title=Some problems of diophantine approximation: {{nobr|Part II. The}} trigonometrical series associated with the elliptic {{mvar|θ}} functions |journal=[[Acta Mathematica]] |volume=37 |page=225 |doi=10.1007/BF02401834 |doi-access=free |url=http://projecteuclid.org/download/pdf_1/euclid.acta/1485887376 |url-status=live |access-date=2017-03-14 |archive-url=https://web.archive.org/web/20181212063403/https://projecteuclid.org/download/pdf_1/euclid.acta/1485887376 |archive-date=2018-12-12 }}</ref> which is defined as follows:
:<math> f(x) = \Omega\bigl(\ g(x)\ \bigr) \quad </math> as <math>\quad x \to \infty \quad</math> if <math>\quad \limsup_{x \to \infty}\ \left|\frac{\ f(x)\ }{ g(x) }\right| > 0 ~.</math>
Thus <math>~ f(x) = \Omega\bigl(\ g(x)\ \bigr) ~</math> is the negation of <math>~ f(x) = o\bigl(\ g(x)\ \bigr) ~.</math>

In 1916 the same authors introduced the two new symbols <math>\ \Omega_R\ </math> and <math>\ \Omega_L\ ,</math> defined as:<ref name=HL2>{{cite journal |first1=G.H. |last1=Hardy |author1-link=Godfrey Harold Hardy |first2=J.E. |last2=Littlewood |author2-link=John Edensor Littlewood |year=1916 |title=Contribution to the theory of the Riemann zeta-function and the theory of the distribution of primes |journal=[[Acta Mathematica]] |volume=41 |pages=119–196 |doi=10.1007/BF02422942 }}</ref>
:<math> f(x) = \Omega_R\bigl(\ g(x)\ \bigr) \quad</math> as <math>\quad x \to \infty \quad</math> if <math>\quad \limsup_{x \to \infty}\ \frac{\ f(x)\ }{ g(x) } > 0\ ;</math>
:<math> f(x)=\Omega_L\bigl(\ g(x)\ \bigr) \quad</math> as <math>\quad x \to \infty \quad</math> if <math>\quad \liminf_{x \to \infty}\ \frac{\ f(x)\ }{ g(x) }< 0 ~.</math>
These symbols were used by [[Edmund Landau|E. Landau]], with the same meanings, in 1924.<ref name=landau>{{cite journal |first=E. |last=Landau |author-link=Edmund Landau |year=1924 |title=Über die Anzahl der Gitterpunkte in gewissen Bereichen. IV. |lang=de |trans-title=On the number of grid points in certain regions. IV |journal=Nachr. Gesell. Wiss. Gött. Math-phys. |pages=137–150 }}</ref> Authors who followed Landau, however, use a different notation for the same definitions:{{citation needed|date=December 2018}} the symbol <math>\ \Omega_R\ </math> has been replaced by the current notation <math>\ \Omega_{+}\ </math> with the same definition, and <math>\ \Omega_L\ </math> became <math>\ \Omega_{-} ~.</math>

These three symbols <math>\ \Omega\ , \Omega_{+}\ , \Omega_{-}\ ,</math> as well as <math>\ f(x) = \Omega_{\pm}\bigl(\ g(x)\ \bigr)\ </math> (meaning that <math>\ f(x) = \Omega_{+}\bigl(\ g(x)\ \bigr)\ </math> and <math>\ f(x) = \Omega_{-}\bigl(\ g(x)\ \bigr)\ </math> are both satisfied), are currently used in [[analytic number theory]].<ref name=Ivic>{{cite book |first=A. |last=Ivić |author-link=Aleksandar Ivić |year=1985 |title=The Riemann Zeta-Function |at=chapter 9 |publisher=John Wiley & Sons }}</ref><ref>{{cite book |first=G. |last=Tenenbaum |author-link=Gérald Tenenbaum |year=2015 |title=Introduction to Analytic and Probabilistic Number Theory |at=§ I.5 |publisher=American Mathematical Society |place=Providence, RI }}</ref>

===== Simple examples =====
We have
:<math>\sin x = \Omega(1) \quad</math> as <math>\quad x \to \infty\ ,</math>
and more precisely
:<math> \sin x = \Omega_\pm(1) \quad</math> as <math>\quad x\to\infty ~.</math>
We have
:<math> 1 + \sin x = \Omega(1) \quad</math> as <math>\quad x \to \infty\ ,</math>
and more precisely
:<math> 1 + \sin x = \Omega_{+}(1) \quad</math> as <math>\quad x \to \infty\ ;</math>
however
:<math> 1 + \sin x \ne \Omega_{-}(1) \quad</math> as <math>\quad x \to \infty ~.</math>
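These assertions follow from the limit definitions: <math>\limsup_{x \to \infty} \sin x = 1 > 0</math> and <math>\liminf_{x \to \infty} \sin x = -1 < 0</math>, so <math>\sin x</math> satisfies both the <math>\Omega_{+}</math> and the <math>\Omega_{-}</math> condition. For <math>1 + \sin x</math> the limit superior is <math>2 > 0</math>, giving <math>\Omega_{+}(1)</math>; but since <math>1 + \sin x \geq 0</math> for all ''x'', its ratio to the constant function 1 never becomes negative, so the <math>\Omega_{-}</math> condition fails.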
==== The Knuth definition ====
In 1976 [[Donald Knuth]] published a paper to justify his use of the <math>\Omega</math>-symbol to describe a stronger property.<ref name="knuth"/> Knuth wrote: "For all the applications I have seen so far in computer science, a stronger requirement ... is much more appropriate". He defined
:<math>f(x)=\Omega(g(x))\Longleftrightarrow g(x)=O(f(x))</math>
with the comment: "Although I have changed Hardy and Littlewood's definition of <math>\Omega</math>, I feel justified in doing so because their definition is by no means in wide use, and because there are other ways to say what they want to say in the comparatively rare cases when their definition applies."<ref name="knuth">{{cite journal |first=Donald |last=Knuth |doi-access=free |s2cid-access=free |title=Big Omicron and big Omega and big Theta |journal=SIGACT News |date=April–June 1976 |volume=8 |issue=2 |pages=18–24 |doi=10.1145/1008328.1008329 |s2cid=5230246 }}</ref>
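The two definitions genuinely differ. For example, <math>x \left|\sin x\right| = \Omega(x)</math> as <math>x \to \infty</math> in the Hardy–Littlewood sense, because <math>\limsup_{x \to \infty} \left|\sin x\right| = 1 > 0</math>; but it is not <math>\Omega(x)</math> in the Knuth sense, since no constant <math>k > 0</math> satisfies <math>x \left|\sin x\right| \geq k x</math> for all sufficiently large ''x'' (the left-hand side vanishes at every multiple of <math>\pi</math>).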
=== Family of Bachmann–Landau notations ===
{| class="wikitable"
|-
! Notation
! Name<ref name="knuth" />
! Description
! Formal definition
! Limit definition<ref name=Balcázar>{{cite journal |last1=Balcázar |first1=José L. |last2=Gabarró |first2=Joaquim |title=Nonuniform complexity classes specified by lower and upper bounds |journal=RAIRO – Theoretical Informatics and Applications – Informatique Théorique et Applications |volume=23 |issue=2 |page=180 |url=http://archive.numdam.org/article/ITA_1989__23_2_177_0.pdf |via=Numdam |access-date=14 March 2017 |language=en |issn=0988-3754 |archive-date=14 March 2017 |archive-url=https://web.archive.org/web/20170314153158/http://archive.numdam.org/article/ITA_1989__23_2_177_0.pdf |url-status=live }}</ref><ref name=Cucker>{{cite book |last1=Cucker |first1=Felipe |last2=Bürgisser |first2=Peter |title=Condition: The Geometry of Numerical Algorithms |year=2013 |publisher=Springer |location=Berlin, Heidelberg |isbn=978-3-642-38896-5 |pages=467–468 |chapter=A.1 Big Oh, Little Oh, and Other Comparisons |chapter-url=https://books.google.com/books?id=SNu4BAAAQBAJ&pg=PA467 |doi=10.1007/978-3-642-38896-5}}</ref><ref name=Wild>{{cite journal |first1=Paul |last1=Vitányi |author1-link=Paul Vitanyi |first2=Lambert |last2=Meertens |author2-link=Lambert Meertens |title=Big Omega versus the wild functions |journal=ACM SIGACT News |volume=16 |issue=4 |date=April 1985 |pages=56–59 |doi=10.1145/382242.382835 |url=http://www.kestrel.edu/home/people/meertens/publications/papers/Big_Omega_contra_the_wild_functions.pdf |doi-access=free |s2cid-access=free |citeseerx=10.1.1.694.3072 |s2cid=11700420 |access-date=2017-03-14 |archive-date=2016-03-10 |archive-url=https://web.archive.org/web/20160310012405/http://www.kestrel.edu/home/people/meertens/publications/papers/Big_Omega_contra_the_wild_functions.pdf |url-status=live }}</ref><ref name="knuth"/><ref name="HL"/>
|-
| <math>f(n) = o(g(n))</math>
| Small O; Small Oh; Little O; Little Oh
| {{mvar|f}} is dominated by {{mvar|g}} asymptotically (for any constant factor <math>k</math>)
| <math>\forall k>0 \, \exists n_0 \, \forall n > n_0\colon |f(n)| \leq k\, g(n)</math>
| <math>\lim_{n \to \infty} \frac{f(n)}{g(n)} = 0</math>
|-
| <math>f(n) = O(g(n))</math>
| Big O; Big Oh; Big Omicron
| <math>|f|</math> is asymptotically bounded above by {{mvar|g}} (up to constant factor <math>k</math>)
| <math>\exists k > 0 \, \exists n_0 \, \forall n>n_0\colon |f(n)| \leq k\, g(n)</math>
| <math>\limsup_{n \to \infty} \frac{\left|f(n)\right|}{g(n)} < \infty</math>
|-
| <math>f(n) \asymp g(n)</math> (Hardy's notation) or <math>f(n) = \Theta(g(n))</math> (Knuth notation)
| Of the same order as (Hardy); Big Theta (Knuth)
| {{mvar|f}} is asymptotically bounded by {{mvar|g}} both above (with constant factor <math>k_2</math>) and below (with constant factor <math>k_1</math>)
| <math>\exists k_1 > 0 \, \exists k_2>0 \, \exists n_0 \, \forall n > n_0\colon</math> <math>k_1 \, g(n) \leq f(n) \leq k_2 \, g(n)</math>
| <math>f(n) = O(g(n))</math> and <math>g(n) = O(f(n))</math>
|-
| <math>f(n)\sim g(n)</math>
| Asymptotic equivalence
| {{mvar|f}} is equal to {{mvar|g}} [[Asymptotic analysis|asymptotically]]
| <math>\forall \varepsilon > 0 \, \exists n_0 \, \forall n > n_0\colon \left| \frac{f(n)}{g(n)} - 1 \right| < \varepsilon</math>
| <math>\lim_{n \to \infty} \frac{f(n)}{g(n)} = 1</math>
|-
| <math>f(n) = \Omega(g(n))</math>
| Big Omega in complexity theory (Knuth)
| {{mvar|f}} is bounded below by {{mvar|g}} asymptotically
| <math>\exists k > 0 \, \exists n_0 \, \forall n>n_0\colon f(n) \geq k\, g(n)</math>
| <math>\liminf_{n \to \infty} \frac{f(n)}{g(n)} > 0 </math>
|-
| <math>f(n) = \omega(g(n))</math>
| Small Omega; Little Omega
| {{mvar|f}} dominates {{mvar|g}} asymptotically
| <math>\forall k > 0 \, \exists n_0 \, \forall n > n_0 \colon f(n) > k\, g(n)</math>
| <math>\lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty</math>
|- style="border-top: 2px solid gray;"
| <math>f(n) = \Omega(g(n))</math>
| Big Omega in number theory (Hardy–Littlewood)
| <math>|f|</math> is not dominated by {{mvar|g}} asymptotically
| <math>\exists k>0 \, \forall n_0 \, \exists n > n_0\colon |f(n)| \geq k\, g(n)</math>
| <math>\limsup_{n \to \infty} \frac{\left|f(n)\right|}{g(n)} > 0 </math>
|}
The limit definitions assume <math>g(n) > 0</math> for sufficiently large <math>n</math>. The table is (partly) sorted from smallest to largest, in the sense that <math>o, O, \Theta, \sim,</math> (Knuth's version of) <math>\Omega, \omega</math> on functions correspond to <math><, \leq, \approx, =, \geq, ></math> on the real line<ref name=Wild/> (the Hardy–Littlewood version of <math>\Omega</math>, however, doesn't correspond to any such description).
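To see how the limit definitions in the table are applied, take <math>f(n) = 2n^2</math> and <math>g(n) = n^2</math>: the ratio <math>f(n)/g(n)</math> tends to 2, which is finite and positive, so <math>f(n) = \Theta(g(n))</math>, and hence also <math>f(n) = O(g(n))</math> and (in Knuth's sense) <math>f(n) = \Omega(g(n))</math>; but since the limit is neither 1 nor 0 nor <math>\infty</math>, none of <math>f(n) \sim g(n)</math>, <math>f(n) = o(g(n))</math> or <math>f(n) = \omega(g(n))</math> holds.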
64: "Many people continue to use the ''O''-notation where the Θ-notation is more technically precise."</ref> For example, when considering a function ''T''(''n'') = 73''n''<sup>3</sup> + 22''n''<sup>2</sup> + 58, all of the following are generally acceptable, but tighter bounds (such as numbers 2 and 3 below) are usually strongly preferred over looser bounds (such as number 1 below). #{{nowrap|1=''T''(''n'') = ''O''(''n''<sup>100</sup>)}} #{{nowrap|1=''T''(''n'') = ''O''(''n''<sup>3</sup>)}} #{{nowrap|1=''T''(''n'') = Θ(''n''<sup>3</sup>)}} The equivalent English statements are respectively: #''T''(''n'') grows asymptotically no faster than ''n''<sup>100</sup> #''T''(''n'') grows asymptotically no faster than ''n''<sup>3</sup> #''T''(''n'') grows asymptotically as fast as ''n''<sup>3</sup>. So while all three statements are true, progressively more information is contained in each. In some fields, however, the big O notation (number 2 in the lists above) would be used more commonly than the big Theta notation (items numbered 3 in the lists above). For example, if ''T''(''n'') represents the running time of a newly developed algorithm for input size ''n'', the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound. === Other notation === In their book ''[[Introduction to Algorithms]]'', [[Thomas H. Cormen|Cormen]], [[Charles E. Leiserson|Leiserson]], [[Ronald L. Rivest|Rivest]] and [[Clifford Stein|Stein]] consider the set of functions ''f'' which satisfy :<math> f(n) = O(g(n))\quad(n\to\infty)~.</math> In a correct notation this set can, for instance, be called ''O''(''g''), where <math display=block>O(g) = \{ f : \text{there exist positive constants}~c~\text{and}~n_0~\text{such that}~0 \le f(n) \le c g(n) \text{ for all } n \ge n_0 \}.</math><ref>{{cite book | isbn=978-0-262-53305-8 |author1=Cormen, Thomas H. |author2=Leiserson, Charles E. |author3=Rivest, Ronald L. |title=Introduction to Algorithms |location=Cambridge/MA |publisher=MIT Press |edition=3rd |year=2009 |page=47 |quote=When we have only an asymptotic upper bound, we use O-notation. For a given function ''g''(''n''), we denote by ''O''(''g''(''n'')) (pronounced "big-oh of ''g'' of ''n''" or sometimes just "oh of ''g'' of ''n''") the set of functions ''O''(''g''(''n'')) = { ''f''(''n'') : there exist positive constants ''c'' and ''n''<sub>0</sub> such that 0 ≤ ''f''(''n'') ≤ ''cg''(''n'') for all ''n'' ≥ ''n''<sub>0</sub>} }}</ref> The authors state that the use of equality operator (=) to denote set membership rather than the set membership operator (∈) is an abuse of notation, but that doing so has advantages.<ref name="clrs3">{{cite book |isbn=978-0-262-53305-8 |author1=Cormen, Thomas H. |author2=Leiserson, Charles E. |author3=Rivest, Ronald L. |title=Introduction to Algorithms |url=https://archive.org/details/introductiontoal00corm_805 |url-access=limited |location=Cambridge/MA |publisher=MIT Press |edition=3rd |year=2009 |page=[https://archive.org/details/introductiontoal00corm_805/page/n65 45] |quote=Because ''θ''(''g''(''n'')) is a set, we could write "''f''(''n'') ∈ ''θ''(''g''(''n''))" to indicate that ''f''(''n'') is a member of ''θ''(''g''(''n'')). Instead, we will usually write ''f''(''n'') = ''θ''(''g''(''n'')) to express the same notion. 
=== Other notation ===
In their book ''[[Introduction to Algorithms]]'', [[Thomas H. Cormen|Cormen]], [[Charles E. Leiserson|Leiserson]], [[Ronald L. Rivest|Rivest]] and [[Clifford Stein|Stein]] consider the set of functions ''f'' which satisfy
:<math> f(n) = O(g(n))\quad(n\to\infty)~.</math>
In a correct notation this set can, for instance, be called ''O''(''g''), where
<math display=block>O(g) = \{ f : \text{there exist positive constants}~c~\text{and}~n_0~\text{such that}~0 \le f(n) \le c g(n) \text{ for all } n \ge n_0 \}.</math><ref>{{cite book |isbn=978-0-262-53305-8 |author1=Cormen, Thomas H. |author2=Leiserson, Charles E. |author3=Rivest, Ronald L. |title=Introduction to Algorithms |location=Cambridge/MA |publisher=MIT Press |edition=3rd |year=2009 |page=47 |quote=When we have only an asymptotic upper bound, we use O-notation. For a given function ''g''(''n''), we denote by ''O''(''g''(''n'')) (pronounced "big-oh of ''g'' of ''n''" or sometimes just "oh of ''g'' of ''n''") the set of functions ''O''(''g''(''n'')) = { ''f''(''n'') : there exist positive constants ''c'' and ''n''<sub>0</sub> such that 0 ≤ ''f''(''n'') ≤ ''cg''(''n'') for all ''n'' ≥ ''n''<sub>0</sub>} }}</ref>

The authors state that the use of the equality operator (=) to denote set membership rather than the set membership operator (∈) is an abuse of notation, but that doing so has advantages.<ref name="clrs3">{{cite book |isbn=978-0-262-53305-8 |author1=Cormen, Thomas H. |author2=Leiserson, Charles E. |author3=Rivest, Ronald L. |title=Introduction to Algorithms |url=https://archive.org/details/introductiontoal00corm_805 |url-access=limited |location=Cambridge/MA |publisher=MIT Press |edition=3rd |year=2009 |page=[https://archive.org/details/introductiontoal00corm_805/page/n65 45] |quote=Because ''θ''(''g''(''n'')) is a set, we could write "''f''(''n'') ∈ ''θ''(''g''(''n''))" to indicate that ''f''(''n'') is a member of ''θ''(''g''(''n'')). Instead, we will usually write ''f''(''n'') = ''θ''(''g''(''n'')) to express the same notion. You might be confused because we abuse equality in this way, but we shall see later in this section that doing so has its advantages.}}</ref> Inside an equation or inequality, the use of asymptotic notation stands for an anonymous function in the set ''O''(''g''), which eliminates lower-order terms and helps to reduce inessential clutter in equations, for example:<ref>{{cite book |isbn=978-0-262-53305-8 |author1=Cormen, Thomas H. |author2=Leiserson, Charles E. |author3=Rivest, Ronald L. |title=Introduction to Algorithms |url=https://archive.org/details/introductiontoal00corm_805 |url-access=limited |location=Cambridge/MA |publisher=MIT Press |edition=3rd |year=2009 |page=[https://archive.org/details/introductiontoal00corm_805/page/n69 49] |quote=When the asymptotic notation stands alone (that is, not within a larger formula) on the right-hand side of an equation (or inequality), as in n = O(n<sup>2</sup>), we have already defined the equal sign to mean set membership: n ∈ O(n<sup>2</sup>). In general, however, when asymptotic notation appears in a formula, we interpret it as standing for some anonymous function that we do not care to name. For example, the formula 2''n''<sup>2</sup> + 3''n'' + 1 = 2''n''<sup>2</sup> + ''θ''(''n'') means that 2''n''<sup>2</sup> + 3''n'' + 1 = 2''n''<sup>2</sup> + ''f''(''n''), where ''f''(''n'') is some function in the set ''θ''(''n''). In this case, we let ''f''(''n'') = 3''n'' + 1, which is indeed in ''θ''(''n''). Using asymptotic notation in this manner can help eliminate inessential detail and clutter in an equation.}}</ref>
:<math> 2n^2 + 3n + 1=2n^2 + O(n).</math>

=== Extensions to the Bachmann–Landau notations ===
Another notation sometimes used in computer science is ''[[Õ]]'' (read ''soft-O''), which hides polylogarithmic factors. There are two definitions in use: some authors use ''f''(''n'') = ''Õ''(''g''(''n'')) as [[shorthand]] for {{nowrap|1=''f''(''n'') = ''O''(''g''(''n'') [[Polylogarithmic function|log<sup>''k''</sup> ''n'']])}} for some ''k'', while others use it as shorthand for {{nowrap|1=''f''(''n'') = ''O''(''g''(''n'') log<sup>''k''</sup> ''g''(''n''))}}.<ref>{{Cite book |last1=Cormen |first1=Thomas H. |url=https://mitpress.mit.edu/9780262046305/introduction-to-algorithms/ |title=Introduction to Algorithms |last2=Leiserson |first2=Charles E. |last3=Rivest |first3=Ronald L. |last4=Stein |first4=Clifford |publisher=The MIT Press |year=2022 |isbn=9780262046305 |edition=4th |location=Cambridge, Mass. |pages=74–75 |oclc= |url-access=}}</ref> When {{Nowrap|''g''(''n'')}} is polynomial in ''n'', there is no difference; however, the latter definition allows one to say, e.g., that <math>n2^n = \tilde O(2^n)</math> while the former definition allows for <math>\log^k n = \tilde O(1)</math> for any constant ''k''.
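For example, under either convention <math>n \log^2 n = \tilde{O}(n)</math>: taking <math>g(n) = n</math> and <math>k = 2</math> gives <math>n \log^2 n = O(n \log^2 n)</math>, and since this ''g'' is polynomial in ''n'' the two definitions coincide here.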
Some authors write ''O''<sup>*</sup> for the same purpose as the latter definition.<ref>{{cite journal |url=https://www.cs.helsinki.fi/u/mkhkoivi/publications/sicomp-2009.pdf |author=Andreas Björklund and Thore Husfeldt and Mikko Koivisto |title=Set partitioning via inclusion-exclusion |journal=[[SIAM Journal on Computing]] |volume=39 |number=2 |pages=546–563 |year=2009 |doi=10.1137/070683933 |access-date=2022-02-03 |archive-date=2022-02-03 |archive-url=https://web.archive.org/web/20220203095918/https://www.cs.helsinki.fi/u/mkhkoivi/publications/sicomp-2009.pdf |url-status=live }} See sect.2.3, p.551.</ref> Essentially, soft-O is big ''O'' notation with logarithmic factors ignored: for large inputs, the growth of the dominant super-logarithmic factor matters far more to run-time performance than the finer effects of any logarithmic factor. The notation thus avoids stating growth rates more tightly than the matter at hand requires, which is harmless since log<sup>''k''</sup> ''n'' is always ''o''(''n''<sup>''ε''</sup>) for any constant ''k'' and any {{nowrap|''ε'' > 0}}.

Also, the [[L-notation|''L'' notation]], defined as
:<math>L_n[\alpha,c] = e^{(c + o(1))(\ln n)^\alpha(\ln\ln n)^{1-\alpha}},</math>
is convenient for functions that are between [[Time complexity#Polynomial time|polynomial]] and [[Time complexity#Exponential time|exponential]] in terms of {{nowrap|<math>\ln n</math>.}}
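The parameter <math>\alpha</math> interpolates between the two extremes: setting <math>\alpha = 0</math> gives <math>L_n[0,c] = e^{(c+o(1))\ln\ln n} = (\ln n)^{c+o(1)}</math>, which is polynomial in <math>\ln n</math>, while <math>\alpha = 1</math> gives <math>L_n[1,c] = e^{(c+o(1))\ln n} = n^{c+o(1)}</math>, which is exponential in <math>\ln n</math>.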