Integration by parts
{{Short description|Mathematical method in calculus}} {{Calculus |Integral}} In [[calculus]], and more generally in [[mathematical analysis]], '''integration by parts''' or '''partial integration''' is a process that finds the [[integral (mathematics)|integral]] of a [[product (mathematics)|product]] of [[Function (mathematics)|functions]] in terms of the integral of the product of their [[derivative]] and [[antiderivative]]. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the [[product rule]] of [[derivative|differentiation]]; it is indeed derived using the product rule. The integration by parts formula states: <math display="block">\begin{align} \int_a^b u(x) v'(x) \, dx & = \Big[u(x) v(x)\Big]_a^b - \int_a^b u'(x) v(x) \, dx\\ & = u(b) v(b) - u(a) v(a) - \int_a^b u'(x) v(x) \, dx. \end{align}</math> Or, letting <math>u = u(x)</math> and <math>du = u'(x) \,dx</math> while <math>v = v(x)</math> and <math>dv = v'(x) \, dx,</math> the formula can be written more compactly: <math display="block">\int u \, dv \ =\ uv - \int v \, du.</math> The former expression is written as a definite integral and the latter is written as an indefinite integral. Applying the appropriate limits to the latter expression should yield the former, but the latter is not necessarily equivalent to the former. Mathematician [[Brook Taylor]] discovered integration by parts, first publishing the idea in 1715.<ref name="Brook Taylor biography, St. Andrews">{{cite web |url=http://www-history.mcs.st-andrews.ac.uk/Biographies/Taylor.html |title=Brook Taylor |work=History.MCS.St-Andrews.ac.uk |access-date= May 25, 2018}}</ref><ref name="Brook Taylor biography, Stetson">{{cite web |url=https://www2.stetson.edu/~efriedma/periodictable/html/Tl.html |title=Brook Taylor |work=Stetson.edu |access-date=May 25, 2018 |archive-date=January 3, 2018 |archive-url=https://web.archive.org/web/20180103003304/http://www2.stetson.edu/~efriedma/periodictable/html/Tl.html |url-status=dead }}</ref> More general formulations of integration by parts exist for the [[Riemann–Stieltjes integral#Properties and relation to the Riemann integral|Riemann–Stieltjes]] and [[Lebesgue–Stieltjes integral#Integration by parts|Lebesgue–Stieltjes integrals]]. The [[Discrete mathematics|discrete]] analogue for [[Sequence|sequences]] is called [[summation by parts]]. ==Theorem== ===Product of two functions=== The theorem can be derived as follows. For two [[continuously differentiable]] [[function (mathematics)|functions]] <math>u(x)</math> and <math>v(x)</math>, the [[product rule]] states: <math display="block">\Big(u(x)v(x)\Big)' = u'(x) v(x) + u(x) v'(x).</math> Integrating both sides with respect to <math>x</math>, <math display="block">\int \Big(u(x)v(x)\Big)'\,dx = \int u'(x)v(x)\,dx + \int u(x)v'(x) \,dx, </math> and noting that an [[indefinite integral]] is an antiderivative gives <math display="block">u(x)v(x) = \int u'(x)v(x)\,dx + \int u(x)v'(x)\,dx,</math> where we neglect writing the [[constant of integration]]. 
This yields the formula for '''integration by parts''': <math display="block">\int u(x)v'(x)\,dx = u(x)v(x) - \int u'(x)v(x) \,dx, </math> or in terms of the [[differentials of a function|differentials]] <math> du=u'(x)\,dx</math>, <math>dv=v'(x)\,dx, \quad</math> <math display="block">\int u(x)\,dv = u(x)v(x) - \int v(x)\,du.</math> This is to be understood as an equality of functions with an unspecified constant added to each side. Taking the difference of each side between two values <math>x = a</math> and <math>x = b</math> and applying the [[fundamental theorem of calculus]] gives the definite integral version: <math display="block"> \int_a^b u(x) v'(x) \, dx = u(b) v(b) - u(a) v(a) - \int_a^b u'(x) v(x) \, dx . </math> The original integral <math>\int uv' \, dx</math> contains the [[derivative]] {{mvar|v'}}; to apply the theorem, one must find {{mvar|v}}, the [[antiderivative]] of {{mvar|v'}}, then evaluate the resulting integral <math>\int vu' \, dx.</math> ===Validity for less smooth functions=== It is not necessary for <math>u</math> and <math>v</math> to be continuously differentiable. Integration by parts works if <math>u</math> is [[absolutely continuous]] and the function designated <math>v'</math> is [[Lebesgue integrable]] (but not necessarily continuous).<ref>{{cite web |title=Integration by parts| url=https://www.encyclopediaofmath.org/index.php/Integration_by_parts |website=Encyclopedia of Mathematics}}</ref> (If <math>v'</math> has a point of discontinuity then its antiderivative <math>v</math> may not have a derivative at that point.) If the interval of integration is not [[compact space|compact]], then it is not necessary for <math>u</math> to be absolutely continuous in the whole interval or for <math>v'</math> to be Lebesgue integrable in the interval, as a couple of examples (in which <math>u</math> and <math>v</math> are continuous and continuously differentiable) will show. For instance, if <math display="block">u(x)= e^x/x^2, \, v'(x) =e^{-x}</math> <math>u</math> is not absolutely continuous on the interval {{closed-open|1, ∞}}, but nevertheless: <math display="block">\int_1^\infty u(x)v'(x)\,dx = \Big[u(x)v(x)\Big]_1^\infty - \int_1^\infty u'(x)v(x)\,dx</math> so long as <math>\left[u(x)v(x)\right]_1^\infty</math> is taken to mean the limit of <math>u(L)v(L)-u(1)v(1)</math> as <math>L\to\infty</math> and so long as the two terms on the right-hand side are finite. This is only true if we choose <math>v(x)=-e^{-x}.</math> Similarly, if <math display="block">u(x)= e^{-x},\, v'(x) =x^{-1}\sin(x)</math> <math>v'</math> is not Lebesgue integrable on the interval {{closed-open|1, ∞}}, but nevertheless <math display="block">\int_1^\infty u(x)v'(x)\,dx = \Big[u(x)v(x)\Big]_1^\infty - \int_1^\infty u'(x)v(x)\,dx</math> with the same interpretation. One can also easily come up with similar examples in which <math>u</math> and <math>v</math> are ''not'' continuously differentiable. 
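In the first example both sides can be compared numerically on truncated intervals <math>[1, L]</math>; as <math>L</math> grows, the two sides remain equal and tend to the same limit, illustrating the interpretation of <math>\left[u(x)v(x)\right]_1^\infty</math> as a limit. The following is a minimal numerical sketch (assuming Python with SciPy; the cutoff values and helper names are only illustrative):
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

u  = lambda x: np.exp(x) / x**2            # u(x) = e^x / x^2
du = lambda x: np.exp(x) * (x - 2) / x**3  # u'(x)
v  = lambda x: -np.exp(-x)                 # the antiderivative v(x) = -e^{-x} chosen above
dv = lambda x: np.exp(-x)                  # v'(x) = e^{-x}

for L in (10.0, 20.0, 40.0):
    lhs = quad(lambda x: u(x) * dv(x), 1, L)[0]
    rhs = u(L) * v(L) - u(1) * v(1) - quad(lambda x: du(x) * v(x), 1, L)[0]
    print(L, lhs, rhs)   # the two columns agree and tend to 1 as L grows
</syntaxhighlight>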
Further, if <math>f(x)</math> is a function of bounded variation on the segment <math>[a,b],</math> and <math>\varphi(x)</math> is differentiable on <math>[a,b],</math> then <math display="block">\int_{a}^{b}f(x)\varphi'(x)\,dx=-\int_{-\infty}^{\infty} \widetilde\varphi(x)\,d(\widetilde\chi_{[a,b]}(x)\widetilde f(x)),</math> where <math>d(\widetilde\chi_{[a,b]}(x)\widetilde f(x))</math> denotes the signed measure corresponding to the function of bounded variation <math>\widetilde\chi_{[a,b]}(x)\widetilde f(x)</math>, and functions <math>\widetilde f, \widetilde \varphi</math> are extensions of <math>f, \varphi</math> to <math>\R,</math> which are respectively of bounded variation and differentiable.{{cn|date=August 2019}} ===Product of many functions=== Integrating the product rule for three multiplied functions, <math>u(x)</math>, <math>v(x)</math>, <math>w(x)</math>, gives a similar result: <math display="block">\int_a^b u v \, dw \ =\ \Big[u v w\Big]^b_a - \int_a^b u w \, dv - \int_a^b v w \, du.</math> In general, for <math>n</math> factors <math display="block">\left(\prod_{i=1}^n u_i(x) \right)' \ =\ \sum_{j=1}^n u_j'(x)\prod_{i\neq j}^n u_i(x), </math> which leads to <math display="block"> \left[ \prod_{i=1}^n u_i(x) \right]_a^b \ =\ \sum_{j=1}^n \int_a^b u_j'(x) \prod_{i\neq j}^n u_i(x)\,dx. </math> ==Visualization== [[Image:Integration by parts v2.svg|thumb|280px |Graphical interpretation of the theorem. The pictured curve is parametrized by the variable t.]] Consider a parametric curve <math>(x, y) = (f(t), g(t))</math>. Assuming that the curve is locally [[Injective function|one-to-one]] and [[Locally integrable function|integrable]], we can define <math display="block">\begin{align} x(y) &= f(g^{-1}(y)) \\ y(x) &= g(f^{-1}(x)) \end{align}</math> The area of the blue region is <math display="block">A_1=\int_{y_1}^{y_2}x(y) \, dy</math> Similarly, the area of the red region is <math display="block">A_2=\int_{x_1}^{x_2}y(x)\,dx</math> The total area ''A''<sub>1</sub> + ''A''<sub>2</sub> is equal to the area of the bigger rectangle, ''x''<sub>2</sub>''y''<sub>2</sub>, minus the area of the smaller one, ''x''<sub>1</sub>''y''<sub>1</sub>: <math display="block">\overbrace{\int_{y_1}^{y_2}x(y) \, dy}^{A_1}+\overbrace{\int_{x_1}^{x_2}y(x) \, dx}^{A_2}\ =\ \biggl.x \cdot y(x)\biggl|_{x_1}^{x_2} \ =\ \biggl.y \cdot x(y)\biggl|_{y_1}^{y_2}</math> Or, in terms of ''t'', <math display="block">\int_{t_1}^{t_2}x(t) \, dy(t) + \int_{t_1}^{t_2}y(t) \, dx(t) \ =\ \biggl. x(t)y(t) \biggl|_{t_1}^{t_2}</math> Or, in terms of indefinite integrals, this can be written as <math display="block">\int x\,dy + \int y \,dx \ =\ xy</math> Rearranging: <math display="block">\int x\,dy \ =\ xy - \int y \,dx</math> Thus integration by parts may be thought of as deriving the area of the blue region from the area of rectangles and that of the red region. This visualization also explains why integration by parts may help find the integral of an inverse function ''f''<sup>−1</sup>(''x'') when the integral of the function ''f''(''x'') is known. Indeed, the functions ''x''(''y'') and ''y''(''x'') are inverses, and the integral ∫ ''x'' ''dy'' may be calculated as above from knowing the integral ∫ ''y'' ''dx''. In particular, this explains use of integration by parts to integrate [[logarithm]] and [[inverse trigonometric function]]s.
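For example, taking <math>y(x) = \ln x</math> and <math>x(y) = e^y</math> between the points <math>(1, 0)</math> and <math>(e, 1)</math>, the rearranged identity gives <math>\int_1^e \ln x \,dx = e \cdot 1 - 1 \cdot 0 - \int_0^1 e^y \,dy = 1</math>. A minimal symbolic check of this instance (a sketch assuming Python with SymPy):
<syntaxhighlight lang="python">
import sympy as sp

x, y = sp.symbols('x y')

# Integral of ln over [1, e], recovered from the integral of its inverse exp over [0, 1].
lhs = sp.integrate(sp.log(x), (x, 1, sp.E))               # integral of y(x) = ln x
rhs = sp.E*1 - 1*0 - sp.integrate(sp.exp(y), (y, 0, 1))   # x2*y2 - x1*y1 - integral of x(y) = e^y
print(lhs, sp.simplify(rhs))   # both equal 1
</syntaxhighlight>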
In fact, if <math>f</math> is a differentiable one-to-one function on an interval, then integration by parts can be used to derive a formula for the integral of <math>f^{-1}</math> in terms of the integral of <math>f</math>. This is demonstrated in the article, [[Integral of inverse functions]]. ==Applications== ===Finding antiderivatives=== Integration by parts is a [[heuristic]] rather than a purely mechanical process for solving integrals; given a single function to integrate, the typical strategy is to carefully separate this single function into a product of two functions ''u''(''x'')''v''(''x'') such that the residual integral from the integration by parts formula is easier to evaluate than the single function. The following form is useful in illustrating the best strategy to take: <math display="block">\int uv\,dx = u \int v\,dx - \int\left(u' \int v\,dx \right)\,dx.</math> On the right-hand side, ''u'' is differentiated and ''v'' is integrated; consequently it is useful to choose ''u'' as a function that simplifies when differentiated, or to choose ''v'' as a function that simplifies when integrated. As a simple example, consider: <math display="block">\int\frac{\ln(x)}{x^2}\,dx\,.</math> Since the derivative of ln(''x'') is {{sfrac|1|''x''}}, one makes (ln(''x'')) part ''u''; since the antiderivative of {{sfrac|1|''x''<sup>2</sup>}} is −{{sfrac|1|''x''}}, one makes {{sfrac|1|''x''<sup>2</sup>}} part ''v''. The formula now yields: <math display="block">\int\frac{\ln(x)}{x^2}\,dx = -\frac{\ln(x)}{x} - \int \biggl(\frac1{x}\biggr) \biggl(-\frac1{x}\biggr)\,dx\,.</math> The antiderivative of −{{sfrac|1|''x''<sup>2</sup>}} can be found with the [[power rule]] and is {{sfrac|1|''x''}}. Alternatively, one may choose ''u'' and ''v'' such that the product ''u''′ (∫''v'' ''dx'') simplifies due to cancellation. For example, suppose one wishes to integrate: <math display="block">\int\sec^2(x)\cdot\ln\Big(\bigl|\sin(x)\bigr|\Big)\,dx.</math> If we choose ''u''(''x'') = ln(|sin(''x'')|) and ''v''(''x'') = sec<sup>2</sup>x, then ''u'' differentiates to <math>\frac{1}{\tan x}</math> using the [[chain rule]] and ''v'' integrates to tan ''x''; so the formula gives: <math display="block">\int\sec^2(x)\cdot\ln\Big(\bigl|\sin(x)\bigr|\Big)\,dx = \tan(x)\cdot\ln\Big(\bigl|\sin(x)\bigr|\Big)-\int\tan(x)\cdot\frac1{\tan(x)} \, dx\ .</math> The integrand simplifies to 1, so the antiderivative is ''x''. Finding a simplifying combination frequently involves experimentation. In some applications, it may not be necessary to ensure that the integral produced by integration by parts has a simple form; for example, in [[numerical analysis]], it may suffice that it has small magnitude and so contributes only a small error term. Some other special techniques are demonstrated in the examples below. ====Polynomials and trigonometric functions==== In order to calculate <math display="block">I=\int x\cos(x)\,dx\,,</math> let: <math display="block">\begin{alignat}{3} u &= x\ &\Rightarrow\ &&du &= dx \\ dv &= \cos(x)\,dx\ &\Rightarrow\ && v &= \int\cos(x)\,dx = \sin(x) \end{alignat}</math> then: <math display="block">\begin{align} \int x\cos(x)\,dx & = \int u\ dv \\ & = u\cdot v - \int v \, du \\ & = x\sin(x) - \int \sin(x)\,dx \\ & = x\sin(x) + \cos(x) + C, \end{align}</math> where ''C'' is a [[constant of integration]]. 
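Both antiderivatives obtained above can be verified by differentiating them. A short symbolic check (a sketch assuming Python with SymPy; the candidate antiderivatives are the ones derived in this section):
<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x', positive=True)

F1 = -sp.log(x)/x - 1/x          # candidate antiderivative of ln(x)/x^2 from above
F2 = x*sp.sin(x) + sp.cos(x)     # candidate antiderivative of x*cos(x) from above

print(sp.simplify(sp.diff(F1, x) - sp.log(x)/x**2))  # 0
print(sp.simplify(sp.diff(F2, x) - x*sp.cos(x)))     # 0
</syntaxhighlight>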
For higher powers of <math>x</math> in the form <math display="block">\int x^n e^x\,dx,\ \int x^n\sin(x)\,dx,\ \int x^n\cos(x)\,dx\,,</math> repeatedly using integration by parts can evaluate integrals such as these; each application of the theorem lowers the power of <math>x</math> by one. ====Exponentials and trigonometric functions==== {{hatnote|See also: [[Integration using Euler's formula]]}} An example commonly used to examine the workings of integration by parts is <math display="block">I=\int e^x\cos(x)\,dx.</math> Here, integration by parts is performed twice. First let <math display="block">\begin{alignat}{3} u &= \cos(x)\ &\Rightarrow\ &&du &= -\sin(x)\,dx \\ dv &= e^x\,dx\ &\Rightarrow\ &&v &= \int e^x\,dx = e^x \end{alignat}</math> then: <math display="block">\int e^x\cos(x)\,dx = e^x\cos(x) + \int e^x\sin(x)\,dx.</math> Now, to evaluate the remaining integral, we use integration by parts again, with: <math display="block">\begin{alignat}{3} u &= \sin(x)\ &\Rightarrow\ &&du &= \cos(x)\,dx \\ dv &= e^x\,dx\,&\Rightarrow\ && v &= \int e^x\,dx = e^x. \end{alignat}</math> Then: <math display="block">\int e^x\sin(x)\,dx = e^x\sin(x) - \int e^x\cos(x)\,dx.</math> Putting these together, <math display="block">\int e^x\cos(x)\,dx = e^x\cos(x) + e^x\sin(x) - \int e^x\cos(x)\,dx.</math> The same integral shows up on both sides of this equation. The integral can simply be added to both sides to get <math display="block">2\int e^x\cos(x)\,dx = e^x\bigl[\sin(x)+\cos(x)\bigr] + C,</math> which rearranges to <math display="block">\int e^x\cos(x)\,dx = \frac{1}{2}e^x\bigl[\sin(x)+\cos(x)\bigr] + C'</math> where again <math>C</math> (and <math>C' = \frac{C}{2}</math>) is a [[constant of integration]]. A similar method is used to find the [[integral of secant cubed]]. ====Functions multiplied by unity==== Two other well-known examples are when integration by parts is applied to a function expressed as a product of 1 and itself. This works if the derivative of the function is known, and the integral of this derivative times <math>x</math> is also known. The first example is <math>\int \ln(x) dx</math>. We write this as: <math display="block">I=\int\ln(x)\cdot 1\,dx\,.</math> Let: <math display="block">u = \ln(x)\ \Rightarrow\ du = \frac{dx}{x}</math> <math display="block">dv = dx\ \Rightarrow\ v = x</math> then: <math display="block"> \begin{align} \int \ln(x)\,dx & = x\ln(x) - \int\frac{x}{x}\,dx \\ & = x\ln(x) - \int 1\,dx \\ & = x\ln(x) - x + C \end{align} </math> where <math>C</math> is the [[constant of integration]]. The second example is the [[inverse tangent]] function <math>\arctan(x)</math>: <math display="block">I=\int\arctan(x)\,dx.</math> Rewrite this as <math display="block">\int\arctan(x)\cdot 1\,dx.</math> Now let: <math display="block">u = \arctan(x)\ \Rightarrow\ du = \frac{dx}{1+x^2}</math> <math display="block">dv = dx\ \Rightarrow\ v = x</math> then <math display="block"> \begin{align} \int\arctan(x)\,dx & = x\arctan(x) - \int\frac{x}{1+x^2}\,dx \\[8pt] & = x\arctan(x) - \frac{\ln(1+x^2)}{2} + C \end{align} </math> using a combination of the [[inverse chain rule method]] and the [[natural logarithm integral condition]]. ====LIATE rule==== The LIATE rule is a rule of thumb for integration by parts. It involves choosing as ''u'' the function that comes first in the following list:<ref>{{Cite journal |jstor=2975556 |first=Herbert E. 
|last=Kasube |title=A Technique for Integration by Parts |journal=[[The American Mathematical Monthly]] |volume=90 |issue=3 |year=1983 |pages=210–211 |doi=10.2307/2975556}}</ref> * '''L''' – [[logarithmic function]]s: <math>\ln(x),\ \log_b(x),</math> etc. * '''I''' – [[inverse trigonometric function]]s (including [[Inverse hyperbolic functions|hyperbolic analogues]]): <math>\arctan(x),\ \arcsec(x),\ \operatorname{arsinh}(x),</math> etc. * '''A''' – [[algebraic function]]s (such as [[polynomials]]): <math>x^2,\ 3x^{50},</math> etc. * '''T''' – [[trigonometric functions]] (including [[Hyperbolic functions|hyperbolic analogues]]): <math>\sin(x),\ \tan(x),\ \operatorname{sech}(x),</math> etc. * '''E''' – [[exponential function]]s: <math>e^x,\ 19^x,</math> etc. The function which is to be ''dv'' is whichever comes last in the list. The reason is that functions lower on the list generally have simpler [[antiderivative]]s than the functions above them. The rule is sometimes written as "DETAIL", where ''D'' stands for ''dv'' and the top of the list is the function chosen to be ''dv''. An alternative to this rule is the ILATE rule, where inverse trigonometric functions come before logarithmic functions. To demonstrate the LIATE rule, consider the integral <math display="block">\int x \cdot \cos(x) \,dx.</math> Following the LIATE rule, ''u'' = ''x'', and ''dv'' = cos(''x'') ''dx'', hence ''du'' = ''dx'', and ''v'' = sin(''x''), which makes the integral become <math display="block">x \cdot \sin(x) - \int 1 \sin(x) \,dx,</math> which equals <math display="block">x \cdot \sin(x) + \cos(x) + C.</math> In general, one tries to choose ''u'' and ''dv'' such that ''du'' is simpler than ''u'' and ''dv'' is easy to integrate. If instead cos(''x'') was chosen as ''u'', and ''x dx'' as ''dv'', we would have the integral <math display="block">\frac{x^2}{2} \cos(x) + \int \frac{x^2}{2} \sin(x) \,dx,</math> which, after recursive application of the integration by parts formula, would clearly result in an infinite recursion and lead nowhere. Although a useful rule of thumb, there are exceptions to the LIATE rule. A common alternative is to consider the rules in the "ILATE" order instead. Also, in some cases, polynomial terms need to be split in non-trivial ways. For example, to integrate <math display="block">\int x^3 e^{x^2} \,dx,</math> one would set <math display="block">u = x^2, \quad dv = x \cdot e^{x^2} \,dx,</math> so that <math display="block">du = 2x \,dx, \quad v = \frac{e^{x^2}}{2}.</math> Then <math display="block">\int x^3 e^{x^2} \,dx = \int \left(x^2\right) \left(xe^{x^2}\right) \,dx = \int u \,dv = uv - \int v \,du = \frac{x^2 e^{x^2}}{2} - \int x e^{x^2} \,dx.</math> Finally, this results in <math display="block">\int x^3 e^{x^2} \,dx = \frac{e^{x^2}\left(x^2 - 1\right)}{2} + C.</math> Integration by parts is often used as a tool to prove theorems in [[mathematical analysis]]. === Wallis product === The Wallis infinite product for <math>\pi</math> <math display="block">\begin{align} \frac{\pi}{2} & = \prod_{n=1}^\infty \frac{ 4n^2 }{ 4n^2 - 1 } = \prod_{n=1}^\infty \left(\frac{2n}{2n-1} \cdot \frac{2n}{2n+1}\right) \\[6pt] & = \Big(\frac{2}{1} \cdot \frac{2}{3}\Big) \cdot \Big(\frac{4}{3} \cdot \frac{4}{5}\Big) \cdot \Big(\frac{6}{5} \cdot \frac{6}{7}\Big) \cdot \Big(\frac{8}{7} \cdot \frac{8}{9}\Big) \cdot \; \cdots \end{align}</math> may be [[Wallis product#Proof using integration|derived using integration by parts]]. 
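The slow convergence of this product toward <math>\pi/2</math> can also be observed numerically. A minimal sketch (assuming Python; truncating the product after 10<sup>5</sup> factors is an arbitrary choice):
<syntaxhighlight lang="python">
import math

# Partial products of the Wallis formula; they increase slowly toward pi/2.
partial = 1.0
for n in range(1, 100001):
    partial *= (2*n) / (2*n - 1) * (2*n) / (2*n + 1)

print(partial, math.pi / 2)   # agree to about five decimal places
</syntaxhighlight>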
===Gamma function identity=== The [[gamma function]] is an example of a [[special function]], defined as an [[improper integral]] for <math>z > 0 </math>. Integration by parts shows it to be an extension of the factorial function: <math display="block">\begin{align} \Gamma(z) & = \int_0^\infty e^{-x} x^{z-1} dx \\[6pt] & = - \int_0^\infty x^{z-1} \, d\left(e^{-x}\right) \\[6pt] & = - \Biggl[e^{-x} x^{z-1}\Biggr]_0^\infty + \int_0^\infty e^{-x} d\left(x^{z-1}\right) \\[6pt] & = 0 + \int_0^\infty \left(z-1\right) x^{z-2} e^{-x} dx\\[6pt] & = (z-1)\Gamma(z-1). \end{align} </math> Since <math display="block">\Gamma(1) = \int_0^\infty e^{-x} \, dx = 1,</math> when <math>z</math> is a natural number, that is, <math> z = n \in \mathbb{N} </math>, applying this formula repeatedly gives the [[factorial]]: <math>\Gamma(n+1) = n!</math> ===Use in harmonic analysis=== Integration by parts is often used in [[harmonic analysis]], particularly [[Fourier analysis]], to show [[Riemann–Lebesgue lemma|that quickly oscillating integrals with sufficiently smooth integrands decay quickly]]. The most common example of this is its use in showing that the decay of a function's Fourier transform depends on the smoothness of that function, as described below. ====Fourier transform of derivative==== If <math>f</math> is a <math>k</math>-times continuously differentiable function and all derivatives up to the <math>k</math>th one decay to zero at infinity, then its [[Fourier transform]] satisfies <math display="block">(\mathcal{F}f^{(k)})(\xi) = (2\pi i\xi)^k \mathcal{F}f(\xi),</math> where <math>f^{(k)}</math> is the <math>k</math>th derivative of <math>f</math>. (The exact constant on the right depends on the [[Fourier transform#Other conventions|convention of the Fourier transform used]].) This is proved by noting that <math display="block">\frac{d}{dy} e^{-2\pi iy\xi} = -2\pi i\xi e^{-2\pi iy\xi},</math> so using integration by parts on the Fourier transform of the derivative we get <math display="block">\begin{align} (\mathcal{F}f')(\xi) &= \int_{-\infty}^\infty e^{-2\pi iy\xi} f'(y)\,dy \\ &=\left[e^{-2\pi iy\xi} f(y)\right]_{-\infty}^\infty - \int_{-\infty}^\infty (-2\pi i\xi e^{-2\pi iy\xi}) f(y)\,dy \\[5pt] &=2\pi i\xi \int_{-\infty}^\infty e^{-2\pi iy\xi} f(y)\,dy \\[5pt] &=2\pi i\xi \mathcal{F}f(\xi). \end{align}</math> Applying this [[Mathematical induction|inductively]] gives the result for general <math>k</math>. A similar method can be used to find the [[Laplace transform]] of a derivative of a function. ====Decay of Fourier transform==== The above result tells us about the decay of the Fourier transform, since it follows that if <math>f</math> and <math>f^{(k)}</math> are integrable then <math display="block">\vert\mathcal{F}f(\xi)\vert \leq \frac{I(f)}{1+\vert 2\pi\xi\vert^k}, \text{ where } I(f) = \int_{-\infty}^\infty \Bigl(\vert f(y)\vert + \vert f^{(k)}(y)\vert\Bigr) \, dy.</math> In other words, if <math>f</math> satisfies these conditions then its Fourier transform decays at infinity at least as quickly as {{nowrap|1/{{!}}''ξ''{{!}}<sup>''k''</sup>}}. In particular, if <math>k \geq 2</math> then the Fourier transform is integrable.
The proof uses the fact, which is immediate from the [[Fourier transform#Definition|definition of the Fourier transform]], that <math display="block">\vert\mathcal{F}f(\xi)\vert \leq \int_{-\infty}^\infty \vert f(y) \vert \,dy.</math> Using the same idea on the equality stated at the start of this subsection gives <math display="block">\vert(2\pi i\xi)^k \mathcal{F}f(\xi)\vert \leq \int_{-\infty}^\infty \vert f^{(k)}(y) \vert \,dy.</math> Summing these two inequalities and then dividing by {{nowrap|1 + {{!}}2{{pi}}''ξ''{{!}}<sup>''k''</sup>}} gives the stated inequality. ===Use in operator theory=== One use of integration by parts in [[operator theory]] is that it shows that {{nowrap|−∆}} (where ∆ is the [[Laplace operator]]) is a [[positive operator]] on <math>L^2</math> (see [[Lp space|''L''<sup>''p''</sup> space]]). If <math>f</math> is smooth and compactly supported then, using integration by parts, we have <math display="block">\begin{align} \langle -\Delta f, f \rangle_{L^2} &= -\int_{-\infty}^\infty f''(x)\overline{f(x)}\,dx \\[5pt] &=-\left[f'(x)\overline{f(x)}\right]_{-\infty}^\infty + \int_{-\infty}^\infty f'(x)\overline{f'(x)}\,dx \\[5pt] &=\int_{-\infty}^\infty \vert f'(x)\vert^2\,dx \geq 0. \end{align}</math> ===Other applications=== <!---INCLUDING DERIVATIONS HERE WOULD BE TOO LENGTHY, IDEALLY KEEP THIS AS A LIST---> * Determining [[boundary condition]]s in [[Sturm–Liouville theory]] * Deriving the [[Euler–Lagrange equation]] in the [[calculus of variations]] ==Repeated integration by parts== {{Further|Cauchy formula for repeated integration}} Considering a second derivative of <math>v</math> in the integral on the LHS of the formula for partial integration suggests a repeated application to the integral on the RHS: <math display="block">\int u v''\,dx = uv' - \int u'v'\,dx = uv' - \left( u'v - \int u''v\,dx \right).</math> Extending this concept of repeated partial integration to derivatives of degree {{mvar|n}} leads to <math display="block">\begin{align} \int u^{(0)} v^{(n)}\,dx &= u^{(0)} v^{(n-1)} - u^{(1)}v^{(n-2)} + u^{(2)}v^{(n-3)} - \cdots + (-1)^{n-1}u^{(n-1)} v^{(0)} + (-1)^n \int u^{(n)} v^{(0)} \,dx\\[5pt] &= \sum_{k=0}^{n-1}(-1)^k u^{(k)}v^{(n-1-k)} + (-1)^n \int u^{(n)} v^{(0)} \,dx. \end{align}</math> This concept may be useful when the successive integrals of <math>v^{(n)}</math> are readily available (e.g., plain exponentials or sine and cosine, as in [[Laplace transform|Laplace]] or [[Fourier transform]]s), and when the {{mvar|n}}th derivative of <math>u</math> vanishes (e.g., when <math>u</math> is a polynomial of degree <math>n-1</math>). The latter condition stops the repeating of partial integration, because the RHS-integral vanishes. In the course of the above repetition of partial integrations the integrals <math display="block">\int u^{(0)} v^{(n)}\,dx \quad</math> and <math>\quad \int u^{(\ell)} v^{(n-\ell)}\,dx \quad</math> and <math>\quad \int u^{(m)} v^{(n-m)}\,dx \quad\text{ for } 1 \le m,\ell \le n</math> get related. This may be interpreted as arbitrarily "shifting" derivatives between <math>v</math> and <math>u</math> within the integrand, and proves useful, too (see [[Rodrigues' formula]]). ===Tabular integration by parts=== The essential process of the above formula can be summarized in a table; the resulting method is called "tabular integration"<ref>{{Cite book |first1=G. B. |last1=Thomas |author-link=George B. Thomas |first2=R. L.
|last2=Finney |title=Calculus and Analytic Geometry |publisher=Addison-Wesley |location=Reading, MA |edition=7th |year=1988 |isbn=0-201-17069-8 }}</ref> and was featured in the film ''[[Stand and Deliver]]'' (1988).<ref>{{Cite journal |url=https://www.maa.org/sites/default/files/pdf/mathdl/CMJ/Horowitz307-311.pdf |first=David |last=Horowitz |title=Tabular Integration by Parts |journal=[[The College Mathematics Journal]] |volume=21 |issue=4 |year=1990 |pages=307–311 |doi=10.2307/2686368 |jstor=2686368}}</ref> For example, consider the integral <math display="block">\int x^3 \cos x \,dx \quad</math> and take <math>\quad u^{(0)} = x^3, \quad v^{(n)} = \cos x.</math> Begin to list in column '''A''' the function <math>u^{(0)} = x^3</math> and its subsequent derivatives <math>u^{(i)}</math> until zero is reached. Then list in column '''B''' the function <math>v^{(n)} = \cos x</math> and its subsequent integrals <math>v^{(n-i)}</math> until the size of column '''B''' is the same as that of column '''A'''. The result is as follows: :{| class="wikitable" style="text-align:center" !# ''i'' !! Sign !! A: derivatives <math>u^{(i)}</math> !! B: integrals <math>v^{(n-i)}</math> |- | 0 || + || <math>x^3</math> || <math>\cos x</math> |- | 1 || − || <math>3x^2</math> || <math>\sin x</math> |- | 2 || + || <math>6x</math> || <math>-\cos x</math> |- | 3 || − || <math>6</math> || <math>-\sin x</math> |- | 4 || + || <math>0</math> || <math>\cos x</math> |} The product of the entries in {{nowrap|row {{mvar|i}}}} of columns '''A''' and '''B''' together with the respective sign gives the relevant integrals in {{nowrap|step {{mvar|i}}}} in the course of repeated integration by parts. {{nowrap|Step {{math|''i'' {{=}} 0}}}} yields the original integral. For the complete result in {{nowrap|step {{math|''i'' > 0}}}} the {{nowrap|{{mvar|i}}th integral}} must be added to all the previous products ({{math|0 ≤ ''j'' < ''i''}}) of the {{nowrap|{{mvar|j}}th entry}} of column A and the {{nowrap|{{math|(''j'' + 1)}}st entry}} of column B (i.e., multiply the 1st entry of column A with the 2nd entry of column B, the 2nd entry of column A with the 3rd entry of column B, etc. ...) with the given {{nowrap|{{mvar|j}}th sign.}} This process comes to a natural halt when the product, which yields the integral, is zero ({{math|''i'' {{=}} 4}} in the example). The complete result is the following (with the alternating signs in each term): <math display="block">\underbrace{(+1)(x^3)(\sin x)}_{j=0} + \underbrace{(-1)(3x^2)(-\cos x)}_{j=1} + \underbrace{(+1)(6x)(-\sin x)}_{j=2} +\underbrace{(-1)(6)(\cos x)}_{j=3}+ \underbrace{\int(+1)(0)(\cos x) \,dx}_{i=4: \;\to \;C}.</math> This yields <math display="block">\underbrace{\int x^3 \cos x \,dx}_{\text{step 0}} = x^3\sin x + 3x^2\cos x - 6x\sin x - 6\cos x + C. </math> Repeated partial integration also turns out to be useful when, in the course of respectively differentiating and integrating the functions <math>u^{(i)}</math> and <math>v^{(n-i)}</math>, their product results in a multiple of the original integrand. In this case the repetition may also be terminated with this index {{mvar|i}}. This can happen, as expected, with exponentials and trigonometric functions. As an example consider <math display="block">\int e^x \cos x \,dx. </math> :{| class="wikitable" style="text-align:center" !# ''i'' !! Sign !! A: derivatives <math>u^{(i)}</math> !!
B: integrals <math>v^{(n-i)}</math> |- | 0 || + || <math>e^x</math> || <math>\cos x</math> |- | 1 || − || <math>e^x</math> || <math>\sin x</math> |- | 2 || + || <math>e^x</math> || <math>-\cos x</math> |} In this case the product of the terms in columns '''A''' and '''B''' with the appropriate sign for index {{math|''i'' {{=}} 2}} yields the negative of the original integrand (compare {{nowrap|rows {{math|''i'' {{=}} 0}}}} {{nowrap|and {{math|''i'' {{=}} 2}}).}} <math display="block"> \underbrace{\int e^x \cos x \,dx}_{\text{step 0}} = \underbrace{(+1)(e^x)(\sin x)}_{j=0} + \underbrace{(-1)(e^x)(-\cos x)}_{j=1} + \underbrace{\int(+1)(e^x)(-\cos x) \,dx}_{i= 2}. </math> Observing that the integral on the RHS can have its own constant of integration <math>C'</math>, and bringing the abstract integral to the other side, gives: <math display="block"> 2 \int e^x \cos x \,dx = e^x\sin x + e^x\cos x + C', </math> and finally: <math display="block">\int e^x \cos x \,dx = \frac 12 \left(e^x ( \sin x + \cos x ) \right) + C,</math> where <math>C = \frac{C'}{2}</math>. ==Higher dimensions== Integration by parts can be extended to functions of several variables by applying a version of the fundamental theorem of calculus to an appropriate product rule. There are several such pairings possible in multivariate calculus, involving a scalar-valued function ''u'' and vector-valued function (vector field) '''V'''.<ref>{{Cite web|url=http://www.math.nagoya-u.ac.jp/~richard/teaching/s2016/Ref2.pdf|title=The Calculus of Several Variables| last=Rogers| first=Robert C. |date=September 29, 2011}}</ref> The [[Vector calculus identities#First derivative identities|product rule for divergence]] states: <math display="block">\nabla \cdot ( u \mathbf{V} ) \ =\ u\, \nabla \cdot \mathbf V \ +\ \nabla u\cdot \mathbf V.</math> Suppose <math>\Omega</math> is an [[Open set|open]] [[bounded set|bounded subset]] of <math>\R^n</math> with a [[piecewise smooth]] [[boundary (topology)|boundary]] <math>\Gamma=\partial\Omega</math>. Integrating over <math>\Omega</math> with respect to the standard volume form <math>d\Omega</math>, and applying the [[divergence theorem]], gives: <math display="block">\int_{\Gamma} u \mathbf{V} \cdot \hat{\mathbf n} \,d\Gamma \ =\ \int_\Omega\nabla\cdot ( u \mathbf{V} )\,d\Omega \ =\ \int_\Omega u\, \nabla \cdot \mathbf V\,d\Omega \ +\ \int_\Omega\nabla u\cdot \mathbf V\,d\Omega,</math> where <math>\hat{\mathbf n}</math> is the outward unit normal vector to the boundary, integrated with respect to its standard Riemannian volume form <math>d\Gamma</math>. Rearranging gives: <math display="block"> \int_\Omega u \,\nabla \cdot \mathbf V\,d\Omega \ =\ \int_\Gamma u \mathbf V \cdot \hat{\mathbf n}\,d\Gamma - \int_\Omega \nabla u \cdot \mathbf V \, d\Omega, </math> or in other words <math display="block"> \int_\Omega u\,\operatorname{div}(\mathbf V)\,d\Omega \ =\ \int_\Gamma u \mathbf V \cdot \hat{\mathbf n}\,d\Gamma - \int_\Omega \operatorname{grad}(u)\cdot\mathbf V\,d\Omega . </math> The [[Differentiability class|regularity]] requirements of the theorem can be relaxed. For instance, the boundary <math> \Gamma=\partial\Omega</math> need only be [[Lipschitz continuous]], and the functions ''u'', ''v'' need only lie in the [[Sobolev space]] <math>H^1(\Omega)</math>. 
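This higher-dimensional formula can be checked symbolically on a simple domain. The following sketch (assuming Python with SymPy; the scalar field ''u'', the vector field '''V''' and the unit-square domain are chosen purely for illustration) evaluates both sides of the rearranged identity above, computing the boundary term edge by edge:
<syntaxhighlight lang="python">
import sympy as sp

x, y = sp.symbols('x y')

# Illustrative scalar field u and vector field V on the unit square [0,1] x [0,1].
u = x*y
V = (x**2, y**2)

div_V  = sp.diff(V[0], x) + sp.diff(V[1], y)
grad_u = (sp.diff(u, x), sp.diff(u, y))

area_integral = lambda f: sp.integrate(sp.integrate(f, (x, 0, 1)), (y, 0, 1))

lhs      = area_integral(u * div_V)                        # volume term with div V
interior = area_integral(grad_u[0]*V[0] + grad_u[1]*V[1])  # volume term with grad u . V

# Boundary term: integrate u V . n over the four edges, with outward unit normals n.
boundary = (
      sp.integrate((u*V[0]).subs(x, 1), (y, 0, 1))   # right edge,  n = (+1, 0)
    - sp.integrate((u*V[0]).subs(x, 0), (y, 0, 1))   # left edge,   n = (-1, 0)
    + sp.integrate((u*V[1]).subs(y, 1), (x, 0, 1))   # top edge,    n = (0, +1)
    - sp.integrate((u*V[1]).subs(y, 0), (x, 0, 1))   # bottom edge, n = (0, -1)
)

print(lhs, boundary - interior)   # both equal 2/3
</syntaxhighlight>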
=== Green's first identity === Consider the continuously differentiable vector fields <math>\mathbf U = u_1\mathbf e_1+\cdots+u_n\mathbf e_n</math> and <math>v \mathbf e_1,\ldots, v\mathbf e_n</math>, where <math>\mathbf e_i</math> is the ''i''-th standard basis vector for <math>i=1,\ldots,n</math>. Now apply the above integration by parts to each <math>u_i</math> times the vector field <math>v\mathbf e_i</math>: <math display="block">\int_\Omega u_i\frac{\partial v}{\partial x_i}\,d\Omega \ =\ \int_\Gamma u_i v \,\mathbf e_i\cdot\hat\mathbf{n}\,d\Gamma - \int_\Omega \frac{\partial u_i}{\partial x_i} v\,d\Omega.</math> Summing over ''i'' gives a new integration by parts formula: <math display="block"> \int_\Omega \mathbf U \cdot \nabla v\,d\Omega \ =\ \int_\Gamma v \mathbf{U}\cdot \hat{\mathbf n}\,d\Gamma - \int_\Omega v\, \nabla \cdot \mathbf{U}\,d\Omega.</math> The case <math>\mathbf{U}=\nabla u</math>, where <math>u\in C^2(\bar{\Omega})</math>, is known as the first of [[Green's identities]]: <math display="block"> \int_\Omega \nabla u \cdot \nabla v\,d\Omega\ =\ \int_\Gamma v\, \nabla u\cdot\hat{\mathbf n}\,d\Gamma - \int_\Omega v\, \nabla^2 u \, d\Omega.</math> ==See also== * [[Lebesgue–Stieltjes integral#Integration by parts|Integration by parts for the Lebesgue–Stieltjes integral]] * [[Quadratic variation#Semimartingales|Integration by parts]] for [[semimartingale]]s, involving their quadratic covariation. * [[Integration by substitution]] * [[Legendre transformation]] ==Notes== <references /> ==Further reading== *{{cite book|author=Louis Brand|title=Advanced Calculus: An Introduction to Classical Analysis|url=https://books.google.com/books?id=hdSIAAAAQBAJ&q=%22integration+by+parts%22&pg=PA267|date=10 October 2013|publisher=Courier Corporation|isbn=978-0-486-15799-3|pages=267–}} *{{cite book |last1=Hoffmann |first1=Laurence D. |last2=Bradley |first2=Gerald L. |title=Calculus for Business, Economics, and the Social and Life Sciences |year=2004 |edition=8th |pages=450–464 |publisher=McGraw Hill Higher Education |isbn=0-07-242432-X }} *{{cite book |first=Stephen |last=Willard |title=Calculus and its Applications |location=Boston |publisher=Prindle, Weber & Schmidt |year=1976 |isbn=0-87150-203-8 |pages=193–214 }} *{{cite book |first=Allyn J. |last=Washington |title=Technical Calculus with Analytic Geometry |location=Reading |publisher=Addison-Wesley |year=1966 |isbn=0-8465-8603-7 |pages=218–245 }} ==External links== {{wikibooks|Calculus|Integration techniques/Integration by Parts|Integration by parts}} * {{springer|title=Integration by parts|id=p/i051730}} * [https://mathworld.wolfram.com/IntegrationbyParts.html Integration by parts—from MathWorld] {{Calculus topics}} {{Integrals}} [[Category:Integral calculus]] [[Category:Mathematical identities]] [[Category:Theorems in mathematical analysis]] [[Category:Theorems in calculus]] [[es:Métodos de integración#Método de integración por partes]]