Laplace's method
===Formal statement and proof===
Suppose <math>f(x)</math> is a twice continuously differentiable function on <math>[a,b],</math> and there exists a unique point <math>x_0 \in (a,b)</math> such that:
:<math>f(x_0) = \max_{x \in [a,b]} f(x) \quad \text{and} \quad f''(x_0)<0.</math>
Then:
:<math>\lim_{n\to\infty} \frac{\int_a^b e^{nf(x)} \, dx}{e^{nf(x_0)} \sqrt{\frac{2\pi}{n\left(-f''(x_0)\right)}}}= 1. </math>

{{hidden begin|border=solid 1px #aaa|title={{center|Proof}}}}
'''Lower bound:''' Let <math>\varepsilon > 0</math>. Since <math>f''</math> is continuous, there exists <math>\delta >0</math> such that if <math>|x_0-c|< \delta</math> then <math>f''(c) \ge f''(x_0) - \varepsilon.</math> By [[Taylor's theorem]], for any <math>x \in (x_0 - \delta, x_0 + \delta)</math> (using <math>f'(x_0) = 0</math>, since <math>x_0</math> is an interior maximum),
:<math>f(x) \ge f(x_0) + \frac{1}{2}(f''(x_0) - \varepsilon)(x-x_0)^2.</math>
Then we have the following lower bound:
:<math>\begin{align} \int_a^b e^{nf(x)} \, dx &\ge \int_{x_0 - \delta}^{x_0 + \delta} e^{nf(x)} \, dx \\ &\ge e^{nf(x_0)} \int_{x_0 - \delta}^{x_0 + \delta} e^{\frac{n}{2}(f''(x_0) - \varepsilon)(x-x_0)^2} \, dx \\ &= e^{nf(x_0)} \sqrt{\frac{1}{n(-f''(x_0) + \varepsilon)}} \int_{-\delta \sqrt{n(-f''(x_0) + \varepsilon)} }^{\delta \sqrt{n(-f''(x_0) + \varepsilon)} } e^{-\frac{1}{2}y^2} \, dy \end{align}</math>
where the last equality was obtained by the change of variables
:<math>y= \sqrt{n(-f''(x_0) + \varepsilon)} (x-x_0).</math>
Recall that <math>f''(x_0)<0</math>, so we can take the square root of its negation.
If we divide both sides of the above inequality by
:<math>e^{nf(x_0)}\sqrt{\frac{2 \pi}{n(-f''(x_0))}}</math>
and take the limit, we get:
:<math>\lim_{n \to \infty} \frac{\int_a^b e^{nf(x)} \,dx}{e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}} \ge \lim_{n \to \infty} \frac{1}{\sqrt{2\pi}} \int_{-\delta\sqrt{n(-f''(x_0) + \varepsilon)} }^{\delta \sqrt{n(-f''(x_0) + \varepsilon)}} e^{-\frac{1}{2}y^2} \, dy \, \cdot \sqrt{\frac{-f''(x_0)}{-f''(x_0) + \varepsilon}} = \sqrt{\frac{-f''(x_0)}{-f''(x_0) + \varepsilon}}</math>
Since this is true for arbitrary <math>\varepsilon > 0</math>, we get the lower bound:
:<math>\lim_{n \to \infty} \frac{\int_a^b e^{nf(x)} \, dx}{ e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}} \ge 1</math>
Note that this proof also works when <math>a = -\infty</math> or <math>b= \infty</math> (or both).

'''Upper bound:''' The proof is similar to that of the lower bound, but there are a few inconveniences. Again we start by picking an <math>\varepsilon >0</math>, but in order for the proof to work we need <math>\varepsilon</math> small enough so that <math>f''(x_0) + \varepsilon < 0.</math> Then, as above, by continuity of <math>f''</math> and [[Taylor's theorem]] we can find <math>\delta>0</math> so that if <math>|x-x_0| < \delta</math>, then
:<math>f(x) \le f(x_0) + \frac{1}{2} (f''(x_0) + \varepsilon)(x-x_0)^2.</math>
Lastly, by our assumptions (assuming <math>a,b</math> are finite) there exists an <math>\eta >0</math> such that if <math>|x-x_0|\ge \delta</math>, then <math>f(x) \le f(x_0) - \eta</math>.
Then we can calculate the following upper bound:
:<math>\begin{align} \int_a^b e^{nf(x)} \, dx &\le \int_a^{x_0-\delta} e^{nf(x)} \, dx + \int_{x_0-\delta}^{x_0 + \delta} e^{nf(x)} \, dx + \int_{x_0 + \delta}^b e^{nf(x)} \, dx \\ &\le (b-a)e^{n(f(x_0)-\eta)} + \int_{x_0-\delta}^{x_0 + \delta} e^{n f(x) } \, dx \\ &\le (b-a)e^{n(f(x_0)-\eta)} + e^{nf(x_0)} \int_{x_0-\delta}^{x_0 + \delta} e^{\frac{n}{2}(f''(x_0)+\varepsilon)(x-x_0)^2} \, dx\\ &\le (b-a)e^{n(f(x_0)-\eta)} + e^{nf(x_0)} \int_{-\infty}^{+\infty} e^{\frac{n}{2}(f''(x_0)+\varepsilon)(x-x_0)^2} \, dx \\ &\le (b-a)e^{n(f(x_0)-\eta)} + e^{nf(x_0)} \sqrt{\frac{2 \pi}{n (-f''(x_0) - \varepsilon)}} \end{align}</math>
If we divide both sides of the above inequality by
:<math>e^{nf(x_0)}\sqrt{\frac{2 \pi}{n (-f''(x_0))}}</math>
and take the limit, we get:
:<math>\lim_{n \to \infty} \frac{\int_a^b e^{nf(x)} \, dx}{e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}} \le \lim_{n \to \infty} (b-a) e^{-\eta n} \sqrt{\frac{n(-f''(x_0))}{2\pi}} + \sqrt{\frac{-f''(x_0)}{-f''(x_0) - \varepsilon}} = \sqrt{\frac{-f''(x_0)}{-f''(x_0) - \varepsilon}}</math>
Since <math>\varepsilon</math> is arbitrary, we get the upper bound:
:<math>\lim_{n \to \infty} \frac{\int_a^b e^{nf(x)} \, dx}{e^{nf(x_0)}\sqrt{\frac{2 \pi}{n (-f''(x_0))}}} \le 1</math>
Combining this with the lower bound gives the result.

Note that the above proof fails when <math>a = -\infty</math> or <math>b = \infty</math> (or both). To deal with these cases, we need some extra assumptions. A sufficient (but not necessary) assumption is that for <math>n = 1,</math>
:<math>\int_a^b e^{nf(x)} \, dx < \infty,</math>
and that the number <math>\eta</math> as above exists (note that this must be an assumption in the case when the interval <math>[a,b]</math> is infinite).
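This sufficient condition holds in the classic infinite-interval application behind [[Stirling's approximation]]: with <math>f(t) = \ln t - t</math> on <math>(0,\infty)</math> one has <math>x_0 = 1</math>, <math>f(x_0) = -1</math>, <math>f''(x_0) = -1</math>, and <math>\int_0^\infty e^{f(t)}\,dt = \int_0^\infty t e^{-t}\,dt = 1 < \infty</math>. Substituting <math>x = nt</math> in <math>n! = \int_0^\infty x^n e^{-x}\,dx</math> gives <math>n! = n^{n+1}\int_0^\infty e^{n(\ln t - t)}\,dt \approx \sqrt{2\pi n}\,(n/e)^n</math>. A minimal numerical sketch of this consequence (the helper names <code>stirling_log</code> and <code>relative_error</code> are ad hoc, not from any library):

```python
import math

def stirling_log(n):
    # Laplace approximation of log(n!) coming from f(t) = log(t) - t:
    # n! = n^(n+1) * Integral_0^inf exp(n*(log t - t)) dt
    #    ~ n^(n+1) * exp(-n) * sqrt(2*pi/n) = sqrt(2*pi*n) * (n/e)^n
    return 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1)

def relative_error(n):
    # exact log(n!) via the gamma function: lgamma(n+1) = log(n!)
    return abs(math.exp(stirling_log(n) - math.lgamma(n + 1)) - 1)

# The relative error shrinks as n grows (roughly like 1/(12n)),
# consistent with the theorem being a statement about relative error:
for n in (10, 100, 1000):
    print(n, relative_error(n))
```

As the theorem promises, only the relative error goes to zero; the absolute error in <math>n!</math> itself grows without bound.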
The proof proceeds otherwise as above, but with a slightly different approximation of integrals:
:<math>\int_a^{x_0-\delta} e^{nf(x)} \, dx + \int_{x_0 + \delta}^b e^{nf(x)} \, dx \le \int_a^b e^{f(x)}e^{(n-1)(f(x_0) - \eta)} \, dx = e^{(n-1)(f(x_0) - \eta)} \int_a^b e^{f(x)} \, dx.</math>
When we divide by
:<math>e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}},</math>
we get for this term
:<math>\frac{e^{(n-1)(f(x_0) - \eta)} \int_a^b e^{f(x)} \, dx}{e^{nf(x_0)}\sqrt{\frac{2\pi}{n(-f''(x_0))}}} = e^{-(n-1)\eta} \sqrt{n} e^{-f(x_0)} \int_a^b e^{f(x)} \, dx \sqrt{\frac{-f''(x_0)}{2\pi}}</math>
whose limit as <math>n \to \infty</math> is <math>0</math>. The rest of the proof (the analysis of the interesting term) proceeds as above.

The given condition in the infinite-interval case is, as said above, sufficient but not necessary. However, the condition is fulfilled in many, if not most, applications: it simply says that the integral under study must be well defined (not infinite) and that the maximum of the function at <math>x_0</math> must be a "true" maximum (the number <math>\eta > 0</math> must exist). There is no need to demand that the integral be finite for <math>n=1</math>; it is enough to demand that it be finite for some <math>n=N.</math>
{{hidden end}}

This method relies on four basic concepts:
{{hidden begin|border=solid 1px #aaa|title={{center|Concepts}}}}
:'''1. Relative error'''
The "approximation" in this method is related to the [[relative error]], not the [[absolute error]].
Therefore, if we set
:<math>s = \sqrt{\frac{2\pi}{M\left|f''(x_0)\right|}},</math>
the integral can be written as
:<math>\begin{align} \int_a^b e^{M f(x)} \, dx &= se^{Mf(x_0)} \frac{1}{s}\int_a^b e^{M(f(x)-f(x_0))}\, dx \\ & = se^{Mf(x_0)} \int_{\frac{a-x_0}{s}}^{\frac{b-x_0}{s}} e^{M(f(sy+x_0)-f(x_0))}\,dy \end{align}</math>
where <math>s</math> is small when <math>M</math> is large, and the relative error is
:<math>\left| \int_{\frac{a-x_0}{s}}^{\frac{b-x_0}{s}} e^{M(f(sy+x_0)-f(x_0))} dy-1 \right|.</math>
Now, let us separate this integral into two parts: the region <math>y\in[-D_y,D_y]</math> and the rest.

:'''2. <math>e^{M (f(sy+x_0)-f(x_0))} \to e^{-\pi y^2}</math> around the [[stationary point]] when <math>M</math> is large enough'''
Consider the [[Taylor expansion]] of <math>M(f(x)-f(x_0))</math> around <math>x_0</math>, translated from <math>x</math> to <math>y</math> because the comparison is done in <math>y</math>-space:
:<math>M(f(x)-f(x_0)) = \frac{Mf''(x_0)}{2}s^2y^2 +\frac{Mf'''(x_0)}{6}s^3y^3+ \cdots = -\pi y^2 +O\left(\frac{1}{\sqrt{M}}\right).</math>
Note that <math>f'(x_0)=0</math> because <math>x_0</math> is a stationary point. The terms beyond the second derivative in this Taylor expansion are suppressed as <math>O\left(\tfrac{1}{\sqrt{M}}\right)</math>, so <math>\exp(M(f(x)-f(x_0)))</math> approaches the [[Gaussian function]], as shown in the figure. Besides,
:<math>\int_{-\infty}^{\infty}e^{-\pi y^2} dy =1.</math>
[[File:For laplace method --- with different M.png|thumb|The figure of <math>e^{M[f(sy+x_0)-f(x_0)]}</math> with <math>M</math> equal to 1, 2 and 3; the red line is the curve of the function <math>e^{-\pi y^2}</math>.]]

:'''3. The larger <math>M</math> is, the smaller the relevant range of <math>x</math>'''
Because the comparison is done in <math>y</math>-space, <math>y</math> is fixed in <math>y\in[-D_y,D_y]</math>, which corresponds to <math>x\in[x_0-sD_y, x_0+sD_y]</math>; since <math>s</math> is inversely proportional to <math>\sqrt{M}</math>, the chosen region of <math>x</math> shrinks as <math>M</math> increases.

:'''4. If the integral in Laplace's method converges, the contribution of the region away from the stationary point to the relative error tends to zero as <math>M</math> grows.'''
By the third concept, even if we choose a very large <math>D_y</math>, <math>sD_y</math> will eventually become very small as <math>M</math> increases. How, then, can we guarantee that the integral over the rest tends to 0 when <math>M</math> is large enough? The basic idea is to find a function <math>m(x)</math> such that <math>m(x)\ge f(x)</math> on that region and such that the integral of <math>e^{Mm(x)}</math> tends to zero as <math>M</math> grows. Since <math>e^{Mf(x)} \le e^{Mm(x)}</math> wherever <math>f(x) \le m(x)</math>, the integral of <math>e^{Mf(x)}</math> then tends to zero as well. For simplicity, choose <math>m(x)</math> as the [[tangent]] line through the point <math>x=x_0+sD_y</math> (and, symmetrically, <math>x=x_0-sD_y</math>), as shown in the figure:
[[File:For laplace method --- upper limit function m(x).gif|thumb| <math>m(x)</math> is denoted by the two [[tangent]] lines passing through <math>x=\pm sD_y+x_0</math>. When <math>sD_y</math> gets smaller, the cover region will be larger. ]]
If the interval of integration is finite, then since <math>f(x)</math> is continuous on the remaining region, it will always be smaller than <math>m(x)</math> there when <math>M</math> is large enough.
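As a concrete illustration of this tangent-line bound (the choices <math>f(x) = -x^2</math>, <math>x_0 = 0</math>, the tangent point <math>x_1 = 0.5</math>, and the helper names <code>simpson</code>, <code>tail_integral</code>, <code>tangent_bound</code> are all ad-hoc assumptions for this sketch, not from the article), one can check numerically that the tail of the integral is dominated by the closed-form tangent bound, and that both vanish relative to the peak contribution <math>\sqrt{\pi/M}</math>:

```python
import math

def simpson(g, a, b, n=8000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += g(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

def tail_integral(M, x1=0.5, cutoff=10.0):
    # contribution of the region away from the peak x0 = 0 for f(x) = -x^2
    return simpson(lambda x: math.exp(-M * x * x), x1, cutoff)

def tangent_bound(M):
    # tangent to f at x1 = 0.5 is m(x) = f(x1) + f'(x1)(x - x1) = -x + 1/4,
    # which satisfies m(x) >= f(x) everywhere since f is concave, and
    # Integral_{1/2}^inf exp(M*m(x)) dx = exp(-M/4)/M in closed form.
    return math.exp(-M / 4) / M

# The tail is below the tangent bound, and both are exponentially small
# compared with the peak contribution sqrt(pi/M):
for M in (10, 100):
    print(M, tail_integral(M), tangent_bound(M), math.sqrt(math.pi / M))
```

Because <math>m(x)</math> is linear with negative slope, its exponential integrates in closed form, which is exactly why a tangent line is a convenient choice of majorant.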
It will be shown below that the integral of <math>e^{Mm(x)}</math> does indeed tend to zero when <math>M</math> is large enough.

If the interval of integration is infinite, <math>m(x)</math> and <math>f(x)</math> might cross each other repeatedly, in which case we cannot guarantee that the integral of <math>e^{Mf(x)}</math> tends to zero. For example, in the case of <math>f(x)=\tfrac{\sin(x)}{x},</math> the integral <math>\int^{\infty}_{0}e^{Mf(x)} dx</math> always diverges. Therefore, we need to require that <math>\int^{\infty}_{d}e^{Mf(x)} dx</math> converges in the infinite-interval case. If so, this integral tends to zero when <math>d</math> is large enough, and we can choose <math>d</math> as the crossing point of <math>m(x)</math> and <math>f(x).</math>

One might ask why not demand instead that <math>\int^{\infty}_{d}e^{f(x)} dx</math> converge. An example shows the reason: suppose <math>f(x) = -\ln x</math> on the tail region; then <math>e^{f(x)}=\tfrac{1}{x}</math> and its integral diverges, yet for <math>M=2</math> the integral of <math>e^{Mf(x)}=\tfrac{1}{x^2}</math> converges. So the integral of some functions diverges when <math>M</math> is not large, but converges when <math>M</math> is large enough.
{{hidden end}}

Based on these four concepts, we can derive the relative error of this method.
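The formal statement itself can be checked numerically for a concrete case. Here <math>f(x) = \sin x</math> on <math>[0,\pi]</math> (so <math>x_0 = \pi/2</math>, <math>f(x_0) = 1</math>, <math>f''(x_0) = -1</math>) is purely an illustrative choice, and <code>simpson</code> and <code>laplace_ratio</code> are ad-hoc helpers:

```python
import math

def simpson(g, a, b, n=20000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += g(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

def laplace_ratio(n):
    # f(x) = sin(x) on [0, pi]: x0 = pi/2, f(x0) = 1, f''(x0) = -1.
    # Integrate exp(n*(f(x) - f(x0))) instead of exp(n*f(x)) to avoid
    # overflow; the theorem says this ratio tends to 1 as n -> infinity.
    integral = simpson(lambda x: math.exp(n * (math.sin(x) - 1.0)), 0.0, math.pi)
    return integral / math.sqrt(2 * math.pi / n)

for n in (10, 100, 1000):
    print(n, laplace_ratio(n))
```

The printed ratios approach 1 as <math>n</math> grows; for this particular <math>f</math> the leading correction is of order <math>1/n</math>, consistent with the relative-error discussion above.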