===Little-o notation=== <!-- [[Little-o notation]] redirects here -->
{{Redirect|Little o|the baseball player|Omar Vizquel|the Greek letter|Omicron}}

Intuitively, the assertion "{{math|''f''(''x'')}} is {{math|''o''(''g''(''x''))}}" (read "{{math|''f''(''x'')}} is little-o of {{math|''g''(''x'')}}" or "{{math|''f''(''x'')}} is of inferior order to {{math|''g''(''x'')}}") means that {{math|''g''(''x'')}} grows much faster than {{math|''f''(''x'')}}, or equivalently {{math|''f''(''x'')}} grows much slower than {{math|''g''(''x'')}}.

As before, let ''f'' be a real or complex valued function and ''g'' a real valued function, both defined on some unbounded subset of the positive [[real number]]s, such that <math>g(x)</math> is strictly positive for all large enough values of ''x''. One writes
:<math>f(x) = o(g(x)) \quad \text{ as } x \to \infty</math>
if for every positive constant {{mvar|ε}} there exists a constant <math>x_0</math> such that
:<math>|f(x)| \leq \varepsilon g(x) \quad \text{ for all } x \geq x_0.</math><ref name=Landausmallo>{{cite book |first=Edmund |last=Landau |author-link=Edmund Landau |title=Handbuch der Lehre von der Verteilung der Primzahlen |publisher=B. G. Teubner |date=1909 |location=Leipzig |trans-title=Handbook on the theory of the distribution of the primes |language=de |page=61 |url=https://archive.org/stream/handbuchderlehre01landuoft#page/61/mode/2up}}</ref>

For example, one has
:<math>2x = o(x^2)</math> and <math>1/x = o(1),</math> both as <math>x \to \infty.</math>
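The first of these can be checked directly from the definition: for any given {{math|''ε'' > 0}} one may take {{math|1=''x''<sub>0</sub> = 2/''ε''}}, since
:<math>|2x| = 2x \leq \varepsilon x^2 \quad \text{ for all } x \geq \frac{2}{\varepsilon}.</math>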
The difference between the [[#Formal definition|definition of the big-O notation]] and the definition of little-o is that while the former has to be true for ''at least one'' constant ''M'', the latter must hold for ''every'' positive constant {{math|''ε''}}, however small.<ref name="Introduction to Algorithms">Thomas H. Cormen et al., 2001, [http://highered.mcgraw-hill.com/sites/0070131511/ Introduction to Algorithms, Second Edition, Ch. 3.1] {{Webarchive|url=https://web.archive.org/web/20090116115944/http://highered.mcgraw-hill.com/sites/0070131511/ |date=2009-01-16 }}</ref> In this way, little-o notation makes a ''stronger statement'' than the corresponding big-O notation: every function that is little-o of ''g'' is also big-O of ''g'', but not every function that is big-O of ''g'' is little-o of ''g''. For example, <math>2x^2 = O(x^2)</math> but {{nowrap|<math>2x^2 \neq o(x^2)</math>.}}

If <math>g(x)</math> is nonzero, or at least becomes nonzero beyond a certain point, the relation <math>f(x) = o(g(x))</math> is equivalent to
:<math>\lim_{x \to \infty}\frac{f(x)}{g(x)} = 0</math>
(and this is in fact how Landau<ref name=Landausmallo /> originally defined the little-o notation).

Little-o respects a number of arithmetic operations. For example,
: if {{mvar|c}} is a nonzero constant and <math>f = o(g)</math> then <math>c \cdot f = o(g)</math>,
: if <math>f = o(F)</math> and <math>g = o(G)</math> then <math>f \cdot g = o(F \cdot G)</math>, and
: if <math>f = o(F)</math> and <math>g = o(G)</math> then <math>f + g = o(F + G).</math>
It also satisfies a [[Transitive relation|transitivity]] relation:
: if <math>f = o(g)</math> and <math>g = o(h)</math> then <math>f = o(h).</math>

Little-o can also be generalized to the finite case:<ref>{{Cite journal |last1=Baratchart |first1=L. |last2=Grimm |first2=J. |last3=LeBlond |first3=J. |last4=Partington |first4=J.R. |date=2003 |title=Asymptotic estimates for interpolation and constrained approximation in H2 by diagonalization of Toeplitz operators |url=https://www.researchgate.net/publication/225672883 |journal=Integral Equations and Operator Theory |volume=45 |issue=3 |pages=269–299 |doi=10.1007/s000200300005}}</ref>
:<math>f(x) = o(g(x)) \quad \text{ as } x \to x_0</math>
if <math>f(x) = \alpha(x)g(x)</math> for some function <math>\alpha(x)</math> with <math>\lim_{x\to x_0} \alpha(x) = 0</math>. Or, if <math>g(x)</math> is nonzero in a neighbourhood around <math>x_0</math>:
:<math>f(x) = o(g(x)) \quad \text{ as } x \to x_0</math>
if
:<math>\lim_{x \to x_0}\frac{f(x)}{g(x)} = 0.</math>
This definition is especially useful in the computation of [[Limit of a function|limits]] using [[Taylor series]]. For example,
:<math>\sin x = x - \frac{x^3}{3!} + \ldots = x + o(x^2) \quad \text{ as } x \to 0,</math>
so
:<math>\lim_{x\to 0}\frac{\sin x}{x} = \lim_{x\to 0} \frac{x + o(x^2)}{x} = \lim_{x\to 0} \left(1 + o(x)\right) = 1.</math>
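Here the fact that an <math>o(x^2)</math> term divided by {{math|''x''}} is an <math>o(x)</math> term, and therefore tends to {{math|0}} as {{math|''x'' → 0}}, is used: if <math>h(x) = o(x^2)</math> as <math>x \to 0</math>, then for every {{math|''ε'' > 0}} one has <math>|h(x)| \leq \varepsilon x^2</math> for all {{math|''x''}} sufficiently close to {{math|0}}, and hence
:<math>\left|\frac{h(x)}{x}\right| \leq \varepsilon |x| \quad \text{ for all such } x \neq 0.</math>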