In mathematics, the integral test for convergence is a method used to test infinite series of monotonic terms for convergence. It was developed by Colin Maclaurin and Augustin-Louis Cauchy and is sometimes known as the Maclaurin–Cauchy test.
Statement of the test
Consider an integer <math>N</math> and a function <math>f</math> defined on the unbounded interval <math>[N,\infty)</math>, on which it is monotone decreasing. Then the infinite series
- <math>\sum_{n=N}^\infty f(n)</math>
converges to a real number if and only if the improper integral
- <math>\int_N^\infty f(x)\,dx</math>
is finite. In particular, if the integral diverges, then the series diverges as well.
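As a numerical sanity check (an illustrative sketch, not part of the statement; the helper `partial_sum` and the choices of <math>f</math> and cutoffs are ours), one can compare partial sums with the improper integral: for <math>f(x)=1/x^2</math> the integral is finite and the series converges, while for <math>f(x)=1/\sqrt{x}</math> the integral is infinite and the partial sums grow without bound.

```python
import math

def partial_sum(f, N, M):
    """f(N) + f(N+1) + ... + f(M)."""
    return sum(f(n) for n in range(N, M + 1))

# f(x) = 1/x^2: int_1^oo dx/x^2 = 1 is finite, so the series converges.
converging = partial_sum(lambda x: 1.0 / x**2, 1, 10**5)
assert converging < 2.0                         # below f(1) + integral = 2
assert abs(converging - math.pi**2 / 6) < 1e-3  # limit is zeta(2) = pi^2/6

# f(x) = 1/sqrt(x): int_1^oo dx/sqrt(x) = oo, so partial sums are unbounded.
diverging = partial_sum(lambda x: 1.0 / math.sqrt(x), 1, 10**5)
assert diverging > 2 * math.sqrt(10**5 + 1) - 2  # >= int_1^{M+1} dx/sqrt(x)
```

The two asserts in each case mirror the test: a finite integral caps the series from above, an infinite one pushes the partial sums past any bound.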
Remark
If the improper integral is finite, then the proof also gives the lower and upper bounds

- <math>\int_N^\infty f(x)\,dx \le \sum_{n=N}^\infty f(n) \le f(N) + \int_N^\infty f(x)\,dx</math>

for the infinite series.
Note that if the function <math>f(x)</math> is increasing, then the function <math>-f(x)</math> is decreasing and the above theorem applies.
Many textbooks require the function <math>f</math> to be positive,<ref>Template:Cite book</ref><ref>Template:Cite book</ref><ref>Template:Cite book</ref> but this condition is not really necessary: if <math>f</math> is negative and decreasing, then both <math>\sum_{n=N}^\infty f(n)</math> and <math>\int_N^\infty f(x)\,dx</math> diverge.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
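As a quick check of the remark's two-sided bounds (an illustrative sketch; the choice <math>f(x)=1/x^3</math>, <math>N=1</math> is ours), the improper integral equals <math>1/2</math>, so the remark brackets <math>\zeta(3)</math> between <math>1/2</math> and <math>3/2</math>:

```python
# Remark's bounds for f(x) = 1/x^3, N = 1: int_1^oo dx/x^3 = 1/2, hence
#   1/2 <= zeta(3) <= f(1) + 1/2 = 3/2.
integral = 0.5
zeta3 = sum(1.0 / n**3 for n in range(1, 10**5))  # rapidly converging partial sum
assert integral <= zeta3 <= 1.0 + integral
```

The true value <math>\zeta(3)\approx 1.202</math> sits comfortably inside the bracket.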
Proof
The proof uses the comparison test, comparing the term <math>f(n)</math> with the integral of <math>f</math> over the intervals <math>[n-1,n)</math> and <math>[n,n+1)</math> respectively.
The monotonic function <math>f</math> is continuous almost everywhere. To show this, let
- <math>D=\{ x\in [N,\infty)\mid f\text{ is discontinuous at } x\}</math>
For every <math>x\in D</math> there exists, by the density of <math>\mathbb Q</math>, a rational <math>c(x)\in\mathbb Q</math> with <math>c(x)\in\left[\lim_{y\downarrow x} f(y), \lim_{y\uparrow x} f(y)\right]</math>.
Note that this interval contains a non-empty open interval precisely when <math>f</math> is discontinuous at <math>x</math>. We can uniquely identify <math>c(x)</math> as the rational number that has the least index in an enumeration <math>\mathbb N\to\mathbb Q</math> and satisfies the above property. Since <math>f</math> is monotone, this defines an injective mapping <math>c:D\to\mathbb Q,\ x\mapsto c(x)</math>, and thus <math>D</math> is countable. It follows that <math>f</math> is continuous almost everywhere, which is sufficient for Riemann integrability.<ref>Template:Cite journal</ref>
Since <math>f</math> is a monotone decreasing function, we know that
- <math>
f(x)\le f(n)\quad\text{for all }x\in[n,\infty) </math>
and
- <math>
f(n)\le f(x)\quad\text{for all }x\in[N,n]. </math>
Hence, for every integer <math>n\ge N</math>,

- <math>\int_n^{n+1}f(x)\,dx\le\int_n^{n+1}f(n)\,dx=f(n),</math>

and, for every integer <math>n\ge N+1</math>,

- <math>f(n)=\int_{n-1}^n f(n)\,dx\le\int_{n-1}^n f(x)\,dx.</math>

By summation over all <math>n</math> from <math>N</math> to some larger integer <math>M</math>, we get from the first estimate
- <math>
\int_N^{M+1}f(x)\,dx=\sum_{n=N}^M\underbrace{\int_n^{n+1}f(x)\,dx}_{\le\,f(n)}\le\sum_{n=N}^Mf(n) </math>
and from the second estimate
- <math>
\begin{align} \sum_{n=N}^Mf(n)&=f(N)+\sum_{n=N+1}^Mf(n)\\ &\leq f(N)+\sum_{n=N+1}^M\underbrace{\int_{n-1}^n f(x)\,dx}_{\ge\,f(n)}\\ &=f(N)+\int_N^M f(x)\,dx. \end{align} </math>
Combining these two estimates yields
- <math>\int_N^{M+1}f(x)\,dx\le\sum_{n=N}^Mf(n)\le f(N)+\int_N^M f(x)\,dx.</math>
Letting <math>M</math> tend to infinity, the bounds for the infinite series given in the remark follow, and so does the result.
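The finite sandwich estimate can also be verified directly; in this sketch (purely illustrative, with <math>f(x)=1/x</math>, <math>N=1</math> and a few arbitrary values of <math>M</math>) both integrals have the closed form of a logarithm.

```python
import math

# Sandwich estimate for f(x) = 1/x, N = 1:
#   int_1^{M+1} dx/x  <=  H_M  <=  f(1) + int_1^M dx/x
for M in (10, 1000, 100_000):
    H = sum(1.0 / n for n in range(1, M + 1))  # M-th harmonic number
    assert math.log(M + 1) <= H <= 1.0 + math.log(M)
```

The inequality holds for each <math>M</math>, and since <math>\ln M\to\infty</math> it already anticipates the divergence of the harmonic series treated next.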
Applications
The harmonic series
- <math>
\sum_{n=1}^\infty \frac 1 n </math> diverges because, using the natural logarithm as an antiderivative of <math>1/n</math> and the fundamental theorem of calculus, we get
- <math>
\int_1^M \frac 1 n\,dn = \ln n\Bigr|_1^M = \ln M \to\infty \quad\text{for }M\to\infty. </math> On the other hand, the series
- <math>
\zeta(1+\varepsilon)=\sum_{n=1}^\infty \frac1{n^{1+\varepsilon}} </math> (cf. Riemann zeta function) converges for every <math>\varepsilon>0</math>, because by the power rule
- <math>
\int_1^M\frac1{n^{1+\varepsilon}}\,dn =\left.-\frac 1{\varepsilon n^\varepsilon}\right|_1^M =\frac 1 \varepsilon \left(1-\frac 1 {M^\varepsilon}\right) \le \frac 1 \varepsilon < \infty \quad\text{for all }M\ge1. </math> From this bound we get the upper estimate
- <math>
\zeta(1+\varepsilon)=\sum_{n=1}^\infty \frac 1 {n^{1+\varepsilon}} \le \frac{1 + \varepsilon}\varepsilon, </math> which can be compared with particular values of the Riemann zeta function.
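This estimate is easy to probe numerically (an illustrative sketch; <math>\varepsilon=1/2</math> and the cutoff are arbitrary choices): any partial sum of <math>\zeta(1+\varepsilon)</math> must stay below <math>(1+\varepsilon)/\varepsilon</math> and above the corresponding lower integral bound.

```python
eps = 0.5
M = 10**5
# zeta(1 + eps) <= (1 + eps)/eps; a partial sum is smaller still, so it
# must obey the upper bound, while it dominates the lower integral bound
# int_1^{M+1} dx/x^{1+eps} = (1 - (M+1)^(-eps)) / eps.
partial = sum(1.0 / n**(1 + eps) for n in range(1, M + 1))
assert (1 - (M + 1) ** -eps) / eps <= partial <= (1 + eps) / eps  # zeta(1.5) <= 3
```

With <math>\varepsilon=1/2</math> the upper estimate reads <math>\zeta(3/2)\le 3</math>; the actual value is about <math>2.612</math>.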
Borderline between divergence and convergence
The above examples involving the harmonic series raise the question of whether there are monotone sequences such that <math>f(n)</math> decreases to 0 faster than <math>1/n</math> but slower than <math>1/n^{1+\varepsilon}</math> in the sense that
- <math>
\lim_{n\to\infty}\frac{f(n)}{1/n}=0 \quad\text{and}\quad \lim_{n\to\infty}\frac{f(n)}{1/n^{1+\varepsilon}}=\infty </math> for every <math>\varepsilon>0</math>, and whether the corresponding series of the <math>f(n)</math> still diverges. Once such a sequence is found, a similar question can be asked with <math>f(n)</math> taking the role of <math>1/n</math>, and so on. In this way it is possible to investigate the borderline between divergence and convergence of infinite series.
Using the integral test for convergence, one can show (see below) that, for every natural number <math>k</math>, the series

- <math>\sum_{n=N_k}^\infty\frac1{n\ln(n)\cdots\ln_k(n)}</math>

still diverges (cf. proof that the sum of the reciprocals of the primes diverges for <math>k=1</math>) but

- <math>\sum_{n=N_k}^\infty\frac1{n\ln(n)\cdots\ln_{k-1}(n)(\ln_k(n))^{1+\varepsilon}}</math>

converges for every <math>\varepsilon>0</math>. Here <math>\ln_k</math> denotes the <math>k</math>-fold composition of the natural logarithm defined recursively by
- <math>
\ln_k(x)= \begin{cases} \ln(x)&\text{for }k=1,\\ \ln(\ln_{k-1}(x))&\text{for }k\ge2. \end{cases} </math> Furthermore, <math>N_k</math> denotes the smallest natural number such that the <math>k</math>-fold composition is well-defined and <math>\ln_k(N_k)\ge1</math>, i.e.
- <math>
N_k\ge \underbrace{e^{e^{\cdot^{\cdot^{e}}}}}_{k\ e'\text{s}}=e \uparrow\uparrow k </math> using tetration or Knuth's up-arrow notation.
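The recursion for <math>\ln_k</math> and the tower bound for <math>N_k</math> can be sketched in a few lines (illustrative code; the function names are ours). Applying <math>\ln_k</math> to a tower of <math>k</math> <math>e</math>'s unwinds it exactly back to <math>1</math>, which is why <math>N_k</math> must be at least <math>e\uparrow\uparrow k</math>.

```python
import math

def ln_k(k, x):
    """k-fold iterated natural logarithm ln_k(x)."""
    for _ in range(k):
        x = math.log(x)
    return x

def power_tower(k):
    """Tower of k e's: e^^k in Knuth up-arrow notation."""
    t = 1.0
    for _ in range(k):
        t = math.exp(t)
    return t

# ln_k unwinds the tower: ln_k(e^^k) = 1, so ln_k(n) >= 1 first
# becomes possible once n reaches e^^k.
for k in (1, 2, 3):
    assert abs(ln_k(k, power_tower(k)) - 1.0) < 1e-9
```

Floating point limits this check to small <math>k</math>; `power_tower(4)` already overflows a double.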
To see the divergence of the first series using the integral test, note that by repeated application of the chain rule
- <math>
\frac{d}{dx}\ln_{k+1}(x) =\frac{d}{dx}\ln(\ln_k(x)) =\frac1{\ln_k(x)}\frac{d}{dx}\ln_k(x) =\cdots =\frac1{x\ln(x)\cdots\ln_k(x)}, </math> hence
- <math>
\int_{N_k}^\infty\frac{dx}{x\ln(x)\cdots\ln_k(x)} =\ln_{k+1}(x)\bigr|_{N_k}^\infty=\infty. </math> To see the convergence of the second series, note that by the power rule, the chain rule and the above result
- <math>
-\frac{d}{dx}\frac1{\varepsilon(\ln_k(x))^\varepsilon} =\frac1{(\ln_k(x))^{1+\varepsilon}}\frac{d}{dx}\ln_k(x) =\cdots =\frac{1}{x\ln(x)\cdots\ln_{k-1}(x)(\ln_k(x))^{1+\varepsilon}}, </math> hence
- <math>
\int_{N_k}^\infty\frac{dx}{x\ln(x)\cdots\ln_{k-1}(x)(\ln_k(x))^{1+\varepsilon}} =-\frac1{\varepsilon(\ln_k(x))^\varepsilon}\biggr|_{N_k}^\infty<\infty </math> and the bounds in the remark then apply to the second infinite series.
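For a concrete instance (an illustrative sketch with <math>k=1</math>, <math>\varepsilon=1</math>, so <math>N_1=3</math> is the smallest integer with <math>\ln(N_1)\ge1</math>), the finite sandwich estimate from the proof brackets the partial sums of <math>\textstyle\sum 1/(n(\ln n)^2)</math> using the antiderivative <math>-1/\ln x</math>:

```python
import math

# k = 1, eps = 1: sum_{n >= 3} 1/(n (ln n)^2) converges, since
# int dx / (x (ln x)^2) = -1/ln(x) + C stays bounded.
def f(x):
    return 1.0 / (x * math.log(x) ** 2)

N, M = 3, 100_000
s = sum(f(n) for n in range(N, M + 1))
lower = 1 / math.log(N) - 1 / math.log(M + 1)     # int_N^{M+1} f(x) dx
upper = f(N) + 1 / math.log(N) - 1 / math.log(M)  # f(N) + int_N^M f(x) dx
assert lower <= s <= upper
```

Since the upper bound stays below <math>f(3)+1/\ln 3\approx 1.19</math> for every <math>M</math>, the full series is finite, as the test predicts.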
See also
- Convergence tests
- Convergence (mathematics)
- Direct comparison test
- Dominated convergence theorem
- Euler–Maclaurin formula
- Limit comparison test
- Monotone convergence theorem
References
- Knopp, Konrad, Infinite Sequences and Series, Dover Publications, Inc., New York, 1956. (§ 3.3)
- Whittaker, E. T., and Watson, G. N., A Course in Modern Analysis, fourth edition, Cambridge University Press, 1963. (§ 4.43)
- Ferreira, Jaime Campos, Ed. Calouste Gulbenkian, 1987.
<references/>