Alternating series
In mathematics, an alternating series is an infinite series of terms that alternate between positive and negative signs. In capital-sigma notation this is expressed as <math display="block">\sum_{n=0}^\infty (-1)^n a_n</math> or <math display="block">\sum_{n=0}^\infty (-1)^{n+1} a_n</math> with <math>a_n > 0</math> for all <math>n</math>.
Like any series, an alternating series is a convergent series if and only if the sequence of partial sums of the series converges to a limit. The alternating series test guarantees that an alternating series is convergent if the terms <math>a_n</math> converge to 0 monotonically, but this condition is not necessary for convergence.
Examples
The geometric series <math>\tfrac{1}{2} - \tfrac{1}{4} + \tfrac{1}{8} - \tfrac{1}{16} + \cdots</math> sums to <math>\tfrac{1}{3}</math>.
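This value follows from the formula for the sum of a geometric series with first term <math>\tfrac{1}{2}</math> and common ratio <math>-\tfrac{1}{2}</math>: <math display="block">\sum_{n=1}^\infty \frac{(-1)^{n+1}}{2^n} = \frac{1/2}{1-\left(-\tfrac{1}{2}\right)} = \frac{1}{3}.</math>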
The alternating harmonic series has a finite sum but the harmonic series does not. The series <math display="block">1-\frac{1}{3}+\frac{1}{5}-\ldots=\sum_{n=0}^\infty\frac{(-1)^n}{2n+1}</math> converges to <math>\frac{\pi}{4}</math>, but is not absolutely convergent.
The Mercator series provides an analytic power series expression of the natural logarithm, given by <math display="block"> \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} x^n = \ln (1+x),\;\;\;|x|\le1, x\ne-1.</math>
The functions sine and cosine used in trigonometry and introduced in elementary algebra as ratios of the sides of a right triangle can also be defined as alternating series in calculus. <math display="block">\sin x = \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!}</math> and <math display="block">\cos x = \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n)!} .</math> When the alternating factor <math>(-1)^n</math> is removed from these series one obtains the hyperbolic functions sinh and cosh used in calculus and statistics.
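As a numerical illustration (a minimal Python sketch, not part of the standard presentation), partial sums of the sine series can be compared with a library value of <math>\sin x</math>; for <math>x = 1</math> the terms decrease monotonically, so the truncation error stays below the first omitted term, as the alternating series test below guarantees.
<syntaxhighlight lang="python">
import math

def sin_partial(x, terms):
    """Partial sum of the alternating series for sin(x) with the given number of terms."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

x = 1.0
for terms in (2, 4, 6):
    approx = sin_partial(x, terms)
    next_term = x ** (2 * terms + 1) / math.factorial(2 * terms + 1)  # first omitted term
    print(terms, approx, abs(math.sin(x) - approx), next_term)
</syntaxhighlight>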
For integer or positive index α the Bessel function of the first kind may be defined with the alternating series <math display="block"> J_\alpha(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m! \, \Gamma(m+\alpha+1)} {\left(\frac{x}{2}\right)}^{2m+\alpha} </math> where <math>\Gamma(z)</math> is the gamma function.
If <math>s</math> is a complex number, the Dirichlet eta function is formed as an alternating series <math display="block">\eta(s) = \sum_{n=1}^{\infty}{(-1)^{n-1} \over n^s} = \frac{1}{1^s} - \frac{1}{2^s} + \frac{1}{3^s} - \frac{1}{4^s} + \cdots</math> that is used in analytic number theory.
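For a concrete check (a small Python sketch added here for illustration), the partial sums of <math>\eta(2)</math> can be compared with the closed form <math>\eta(2) = \pi^2/12</math>, which follows from the standard relation between the eta and zeta functions and is assumed here rather than taken from the text above.
<syntaxhighlight lang="python">
import math

def eta_partial(s, terms):
    """Partial sum of the Dirichlet eta series for real s > 0."""
    return sum((-1) ** (n - 1) / n ** s for n in range(1, terms + 1))

exact = math.pi ** 2 / 12  # eta(2), via eta(s) = (1 - 2**(1 - s)) * zeta(s)
for terms in (10, 100, 1000):
    approx = eta_partial(2, terms)
    print(terms, approx, abs(exact - approx))
</syntaxhighlight>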
Alternating series test
Main article: Alternating series test
The theorem known as the "Leibniz test" or the alternating series test states that an alternating series will converge if the terms <math>a_n</math> converge to 0 monotonically.
Proof: Suppose the sequence <math>a_n</math> converges to zero and is monotone decreasing. If <math>m</math> is odd and <math>m<n</math>, we obtain the estimate <math>S_n - S_m \le a_{m}</math> via the following calculation: <math display="block">\begin{align} S_n - S_m & = \sum_{k=0}^n(-1)^k\,a_k\,-\,\sum_{k=0}^m\,(-1)^k\,a_k\ = \sum_{k=m+1}^n\,(-1)^k\,a_k \\ & =a_{m+1} - a_{m+2} + a_{m+3} - a_{m+4} + \cdots + a_n\\ & = a_{m+1}-(a_{m+2}-a_{m+3}) - (a_{m+4}-a_{m+5}) - \cdots - a_n \le a_{m+1} \le a_{m}. \end{align}</math>
Since <math>a_n</math> is monotonically decreasing, each parenthesized difference <math>a_{k}-a_{k+1}</math> is non-negative, so every grouped term <math>-(a_{k}-a_{k+1})</math> is non-positive. Thus, we have the final inequality <math>S_n - S_m \le a_m</math>. Similarly, it can be shown that <math>-a_m \le S_n - S_m </math>. Since <math>a_m</math> converges to <math>0</math>, the partial sums <math>S_m</math> form a Cauchy sequence (i.e., the series satisfies the Cauchy criterion) and therefore they converge. The argument for <math>m</math> even is similar.
Approximating sums
The estimate above does not depend on <math>n</math>. So, if <math>a_n</math> approaches 0 monotonically, the estimate provides an error bound for approximating infinite sums by partial sums: <math display="block">\left|\sum_{k=0}^\infty(-1)^k\,a_k\,-\,\sum_{k=0}^m\,(-1)^k\,a_k\right|\le |a_{m+1}|.</math> This bound is not necessarily sharp, so it does not always identify the first partial sum whose error falls below a given tolerance. For example, for the alternating harmonic series <math>1-1/2+1/3-1/4+\cdots = \ln 2</math>, if the error is required to be at most 0.00005, the inequality above shows that the partial sum through <math>a_{20000}</math> is enough, but in fact this is twice as many terms as needed: the error after summing the first 9999 terms is 0.0000500025, so the partial sum through <math>a_{10000}</math> is already sufficient. This series happens to have the property that forming a new series from the differences <math>a_n - a_{n+1}</math> again gives an alternating series to which the Leibniz test applies, which makes this simple error bound non-optimal. The Calabrese bound,<ref>Template:Cite journal</ref> discovered in 1962, shows that this property yields an error bound half the size of the Leibniz bound. This in turn is not optimal for series where the property applies two or more times, a case described by the Johnsonbaugh error bound.<ref>Template:Cite journal</ref> If the property can be applied an infinite number of times, Euler's transform applies.<ref>Template:Cite arXiv</ref>
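The figures above can be verified directly; the following minimal Python sketch (an illustration, not part of the original text) compares the actual truncation error of the alternating harmonic series with the Leibniz bound <math>|a_{m+1}|</math>.
<syntaxhighlight lang="python">
import math

def alt_harmonic_partial(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - ... with the given number of terms."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

target = math.log(2)
for n_terms in (9999, 10000, 20000):
    error = abs(target - alt_harmonic_partial(n_terms))
    leibniz_bound = 1 / (n_terms + 1)  # magnitude of the first omitted term
    print(f"{n_terms:>6} terms: error = {error:.10f}, Leibniz bound = {leibniz_bound:.10f}")
</syntaxhighlight>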
Absolute convergence
A series <math display=inline>\sum a_n</math> converges absolutely if the series <math display=inline>\sum |a_n|</math> converges.
Theorem: Absolutely convergent series are convergent.
Proof: Suppose <math display=inline>\sum a_n</math> is absolutely convergent. Then, <math display=inline>\sum |a_n|</math> is convergent and it follows that <math display=inline>\sum 2|a_n|</math> converges as well. Since <math display=inline> 0 \leq a_n + |a_n| \leq 2|a_n|</math>, the series <math display=inline>\sum (a_n + |a_n|)</math> converges by the comparison test. Therefore, the series <math display=inline>\sum a_n</math> converges as the difference of two convergent series <math display=inline>\sum a_n = \sum (a_n + |a_n|) - \sum |a_n|</math>.
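For example, the alternating series <math display="block">\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^2}</math> converges absolutely, because the corresponding series of absolute values <math display="block">\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}</math> converges.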
Conditional convergence
A series is conditionally convergent if it converges but does not converge absolutely.
For example, the harmonic series
<math display="block">\sum_{n=1}^\infty \frac{1}{n}</math>
diverges, while the alternating version
<math display="block">\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}</math>
converges by the alternating series test.
Rearrangements
For any series, we can create a new series by rearranging the order of summation. A series is unconditionally convergent if any rearrangement creates a series with the same convergence as the original series. Absolutely convergent series are unconditionally convergent. But the Riemann series theorem states that a conditionally convergent series can be rearranged to converge to any value, or even to diverge.<ref>Template:Cite journal</ref> Agnew's theorem describes rearrangements that preserve convergence for all convergent series. The general principle is that addition of infinite sums is only commutative for absolutely convergent series.
For example, one false proof that 1=0 exploits the failure of associativity for infinite sums.
As another example, the Mercator series gives <math display="block">\ln(2) = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots.</math>
But, since the series does not converge absolutely, we can rearrange the terms to obtain a series for <math display="inline">\tfrac 1 2 \ln(2)</math>: <math display="block">\begin{align} & {} \quad \left(1-\frac{1}{2}\right)-\frac{1}{4} +\left(\frac{1}{3}-\frac{1}{6}\right) -\frac{1}{8}+\left(\frac{1}{5} -\frac{1}{10}\right)-\frac{1}{12}+\cdots \\[8pt] & = \frac{1}{2}-\frac{1}{4}+\frac{1}{6} -\frac{1}{8}+\frac{1}{10}-\frac{1}{12} +\cdots \\[8pt] & = \frac{1}{2}\left(1-\frac{1}{2} + \frac{1}{3} -\frac{1}{4}+\frac{1}{5}- \frac{1}{6}+ \cdots\right)= \frac{1}{2} \ln(2). \end{align}</math>
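The rearranged sum can be checked numerically; this minimal Python sketch (added as an illustration) accumulates the terms in the rearranged order shown above and compares the result with <math>\tfrac{1}{2}\ln(2)</math>.
<syntaxhighlight lang="python">
import math

def rearranged_partial(groups):
    """Partial sum of (1 - 1/2) - 1/4 + (1/3 - 1/6) - 1/8 + ... using the given
    number of three-term groups."""
    total = 0.0
    for k in range(1, groups + 1):
        total += 1 / (2 * k - 1)        # positive term 1/(2k-1)
        total -= 1 / (2 * (2 * k - 1))  # paired term -1/(2(2k-1))
        total -= 1 / (4 * k)            # extra negative term -1/(4k)
    return total

print(rearranged_partial(10000))  # approximately 0.34656...
print(0.5 * math.log(2))          # 0.34657...
</syntaxhighlight>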
Series acceleration
In practice, the numerical summation of an alternating series may be sped up using any one of a variety of series acceleration techniques. One of the oldest techniques is that of Euler summation, and there are many modern techniques that can offer even more rapid convergence.
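As a concrete illustration (a minimal Python sketch under the assumption that repeated averaging of consecutive partial sums is used as the acceleration step, which is one elementary form of Euler's transformation), twenty terms of the alternating harmonic series give an error of about 0.025 when summed directly, but an error several orders of magnitude smaller after acceleration.
<syntaxhighlight lang="python">
import math

def euler_accelerate(terms):
    """Accelerate an alternating series by repeatedly averaging consecutive
    partial sums (a simple form of Euler's transformation)."""
    partial_sums = []
    total = 0.0
    for t in terms:
        total += t
        partial_sums.append(total)
    while len(partial_sums) > 1:
        partial_sums = [(x + y) / 2 for x, y in zip(partial_sums, partial_sums[1:])]
    return partial_sums[0]

# Twenty terms of the alternating harmonic series 1 - 1/2 + 1/3 - ...
terms = [(-1) ** (k + 1) / k for k in range(1, 21)]
print(abs(math.log(2) - sum(terms)))               # plain partial sum, error about 0.025
print(abs(math.log(2) - euler_accelerate(terms)))  # accelerated, error several orders smaller
</syntaxhighlight>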
See also
Notes
References
- Rainville, Earl D. (1967). Infinite Series. Macmillan Publishers. pp. 73–76.
- Weisstein, Eric W. "Alternating Series". MathWorld. https://mathworld.wolfram.com/AlternatingSeries.html