== Other forms == The integration problem can be expressed in a slightly more general way by introducing a positive [[weight function]] {{mvar|ω}} into the integrand, and allowing an interval other than {{math|[−1, 1]}}. That is, the problem is to calculate <math display="block"> \int_a^b \omega(x)\,f(x)\,dx </math> for some choices of {{mvar|a}}, {{mvar|b}}, and {{mvar|ω}}. For {{math|1=''a'' = −1}}, {{math|1=''b'' = 1}}, and {{math|1=''ω''(''x'') = 1}}, the problem is the same as that considered above. Other choices lead to other integration rules. Some of these are tabulated below. Equation numbers are given for [[Abramowitz and Stegun]] (A & S). {| class="wikitable" style="margin:auto; background:white; text-align:center;" ! Interval ! {{math|''ω''(''x'')}} ! Orthogonal polynomials ! A & S ! For more information, see ... |- | {{closed-closed|−1, 1}} || {{math|1}} || [[Legendre polynomials]] || 25.4.29 || {{section link||Gauss–Legendre quadrature}} |- | {{open-open|−1, 1}} || <math>\left(1 - x\right)^\alpha \left(1 + x\right)^\beta,\quad \alpha, \beta > -1</math> || [[Jacobi polynomials]] || 25.4.33 ({{math|1=''β'' = 0}}) || [[Gauss–Jacobi quadrature]] |- | {{open-open|−1, 1}} || <math>\frac{1}{\sqrt{1 - x^2}}</math> || [[Chebyshev polynomials]] (first kind) || 25.4.38 || [[Chebyshev–Gauss quadrature]] |- | {{closed-closed|−1, 1}} || <math>\sqrt{1 - x^2}</math> || Chebyshev polynomials (second kind) || 25.4.40 || [[Chebyshev–Gauss quadrature]] |- | {{closed-open|0, ∞}} || <math> e^{-x}\, </math> || [[Laguerre polynomials]] || 25.4.45 || [[Gauss–Laguerre quadrature]] |- | {{closed-open|0, ∞}} || <math> x^\alpha e^{-x},\quad \alpha>-1 </math> || Generalized [[Laguerre polynomials]] || || [[Gauss–Laguerre quadrature]] |- | {{open-open|−∞, ∞}} || <math> e^{-x^2} </math> || [[Hermite polynomials]] || 25.4.46 || [[Gauss–Hermite quadrature]] |} === Fundamental theorem === Let {{mvar|p<sub>n</sub>}} be a nontrivial polynomial of degree {{mvar|n}} such that <math 
display="block">\int_a^b \omega(x) \, x^k p_n(x) \, dx = 0, \quad \text{for all } k = 0, 1, \ldots, n - 1.</math> Note that this will be true for all the orthogonal polynomials above, because each {{mvar|p<sub>n</sub>}} is constructed to be orthogonal to the other polynomials {{mvar|p<sub>j</sub>}} for {{math|''j''<''n''}}, and {{math|''x''<sup>''k''</sup>}} is in the span of that set. If we pick the {{mvar|n}} nodes {{mvar|x<sub>i</sub>}} to be the zeros of {{mvar|p<sub>n</sub>}}, then there exist {{mvar|n}} weights {{mvar|w<sub>i</sub>}} for which the Gaussian quadrature sum is exact for all polynomials {{math|''h''(''x'')}} of degree {{math|2''n'' − 1}} or less. Furthermore, all these nodes {{mvar|x<sub>i</sub>}} will lie in the open interval {{math|(''a'', ''b'')}}.<ref>{{harvnb|Stoer|Bulirsch|2002|pp=172–175}}</ref> To prove the first part of this claim, let {{math|''h''(''x'')}} be any polynomial of degree {{math|2''n'' − 1}} or less. Divide it by the orthogonal polynomial {{mvar|p<sub>n</sub>}} to get <math display="block"> h(x) = p_n(x) \, q(x) + r(x), </math> where {{math|''q''(''x'')}} is the quotient, of degree {{math|''n'' − 1}} or less (because the sum of its degree and that of the divisor {{mvar|p<sub>n</sub>}} must equal that of the dividend), and {{math|''r''(''x'')}} is the remainder, also of degree {{math|''n'' − 1}} or less (because the degree of the remainder is always less than that of the divisor). Since {{mvar|p<sub>n</sub>}} is by assumption orthogonal to all monomials of degree less than {{mvar|n}}, it must be orthogonal to the quotient {{math|''q''(''x'')}}. Therefore <math display="block"> \int_a^b \omega(x)\,h(x)\,dx = \int_a^b \omega(x)\,\big( \, p_n(x) q(x) + r(x) \, \big)\,dx = \int_a^b \omega(x)\,r(x)\,dx.
</math> Since the remainder {{math|''r''(''x'')}} is of degree {{math|''n'' − 1}} or less, we can interpolate it exactly using {{mvar|n}} interpolation points with [[Lagrange polynomials]] {{math|''l''<sub>''i''</sub>(''x'')}}, where <math display="block"> l_i(x) = \prod _{j \ne i} \frac{x-x_j}{x_i-x_j}. </math> We have <math display="block"> r(x) = \sum_{i=1}^n l_i(x) \, r(x_i). </math> Then its integral will equal <math display="block"> \int_a^b \omega(x)\,r(x)\,dx = \int_a^b \omega(x) \, \sum_{i=1}^n l_i(x) \, r(x_i) \, dx = \sum_{i=1}^n \, r(x_i) \, \int_a^b \omega(x) \, l_i(x) \, dx = \sum_{i=1}^n \, r(x_i) \, w_i, </math> where {{math|''w''<sub>''i''</sub>}}, the weight associated with the node {{math|''x''<sub>''i''</sub>}}, is defined to equal the weighted integral of {{math|''l''<sub>''i''</sub>(''x'')}} (see below for other formulas for the weights). But all the {{mvar|x<sub>i</sub>}} are roots of {{mvar|p<sub>n</sub>}}, so the division formula above tells us that <math display="block"> h(x_i) = p_n(x_i) \, q(x_i) + r(x_i) = r(x_i), </math> for all {{mvar|i}}. Thus we finally have <math display="block"> \int_a^b \omega(x)\,h(x)\,dx = \int_a^b \omega(x) \, r(x) \, dx = \sum_{i=1}^n w_i \, r(x_i) = \sum_{i=1}^n w_i \, h(x_i). </math> This proves that for any polynomial {{math|''h''(''x'')}} of degree {{math|2''n'' − 1}} or less, its integral is given exactly by the Gaussian quadrature sum. To prove the second part of the claim, consider the factored form of the polynomial {{math|''p''<sub>''n''</sub>}}. Any complex conjugate roots will yield a quadratic factor that is either strictly positive or strictly negative over the entire real line. Any factors for roots outside the interval from {{mvar|a}} to {{mvar|b}} will not change sign over that interval. 
Finally, for factors corresponding to roots {{mvar|x<sub>i</sub>}} inside the interval from {{mvar|a}} to {{mvar|b}} that are of odd multiplicity, multiply {{math|''p''<sub>''n''</sub>}} by one more factor to make a new polynomial <math display="block"> p_n(x) \, \prod_i (x - x_i). </math> This polynomial cannot change sign over the interval from {{mvar|a}} to {{mvar|b}} because all its roots there are now of even multiplicity. So the integral <math display="block"> \int_a^b p_n(x) \, \left( \prod_i (x - x_i) \right) \, \omega(x) \, dx \ne 0, </math> since the weight function {{math|''ω''(''x'')}} is always non-negative. But {{math|''p''<sub>''n''</sub>}} is orthogonal to all polynomials of degree {{math|''n'' − 1}} or less, so the degree of the product <math display="block"> \prod_i (x - x_i) </math> must be at least {{mvar|n}}. Therefore {{math|''p''<sub>''n''</sub>}} has {{mvar|n}} distinct roots, all real, in the interval from {{mvar|a}} to {{mvar|b}}. ==== General formula for the weights ==== The weights can be expressed as {{NumBlk|:|<math>w_{i} = \frac{a_{n}}{a_{n-1}} \frac{\int_{a}^{b} \omega(x) p_{n-1}(x)^2 dx}{p'_{n}(x_{i}) p_{n-1}(x_{i})}</math>|{{EquationRef|1}}}} where <math>a_{k}</math> is the coefficient of <math>x^{k}</math> in <math>p_{k}(x)</math>. To prove this, note that using [[Lagrange interpolation]] one can express {{math|''r''(''x'')}} in terms of <math>r(x_{i})</math> as <math display="block">r(x) = \sum_{i=1}^{n} r(x_{i}) \prod_{\begin{smallmatrix} 1 \leq j \leq n \\ j \neq i \end{smallmatrix}}\frac{x-x_{j}}{x_{i}-x_{j}}</math> because {{math|''r''(''x'')}} has degree less than {{mvar|n}} and is thus fixed by the values it attains at {{mvar|n}} different points. 
Multiplying both sides by {{math|''ω''(''x'')}} and integrating from {{mvar|a}} to {{mvar|b}} yields <math display="block">\int_{a}^{b}\omega(x)r(x)dx = \sum_{i=1}^{n} r(x_{i}) \int_{a}^{b}\omega(x)\prod_{\begin{smallmatrix} 1 \leq j \leq n \\ j \neq i \end{smallmatrix}} \frac{x-x_{j}}{x_{i}-x_{j}}dx</math> The weights {{mvar|w<sub>i</sub>}} are thus given by <math display="block">w_{i} = \int_{a}^{b}\omega(x)\prod_{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}\frac{x-x_{j}}{x_{i}-x_{j}}dx</math> This integral expression for <math>w_{i}</math> can be rewritten in terms of the orthogonal polynomials <math>p_{n}(x)</math> and <math>p_{n-1}(x)</math> as follows. We can write <math display="block"> \prod_{\begin{smallmatrix} 1 \leq j \leq n \\ j \neq i \end{smallmatrix}} \left(x-x_{j}\right) = \frac{\prod_{1\leq j\leq n} \left(x - x_{j}\right)}{x-x_{i}} = \frac{p_{n}(x)}{a_{n}\left(x-x_{i}\right)}</math> where <math>a_{n}</math> is the coefficient of <math>x^n</math> in <math>p_{n}(x)</math>. Taking the limit as {{mvar|x}} tends to <math>x_{i}</math> and applying L'Hôpital's rule yields <math display="block"> \prod_{\begin{smallmatrix} 1 \leq j \leq n \\ j \neq i \end{smallmatrix}} \left(x_{i}-x_{j}\right) = \frac{p'_{n}(x_{i})}{a_{n}}</math> We can thus write the integral expression for the weights as {{NumBlk|:|<math>w_{i} = \frac{1}{p'_{n}(x_{i})}\int_{a}^{b}\omega(x)\frac{p_{n}(x)}{x-x_{i}}dx</math>|{{EquationRef|2}}}} In the integrand, writing <math display="block">\frac{1}{x-x_i} = \frac{1 - \left(\frac{x}{x_i}\right)^{k}}{x - x_i} + \left(\frac{x}{x_i}\right)^{k} \frac{1}{x - x_i}</math> yields <math display="block">\int_a^b\omega(x)\frac{x^kp_n(x)}{x-x_i}dx = x_i^k \int_{a}^{b}\omega(x)\frac{p_n(x)}{x-x_i}dx</math> provided <math>k \leq n</math>, because <math display="block">\frac{1-\left(\frac{x}{x_{i}}\right)^{k}}{x-x_{i}}</math> is a polynomial of degree {{math|''k'' − 1}} which is then orthogonal to <math>p_{n}(x)</math>.
So, if {{math|''q''(''x'')}} is a polynomial of degree at most {{mvar|n}}, we have <math display="block">\int_{a}^{b}\omega(x)\frac{p_{n}(x)}{x-x_{i}} dx = \frac{1}{q(x_{i})} \int_{a}^{b} \omega(x)\frac{q(x) p_n(x)}{x-x_{i}}dx </math> We can evaluate the integral on the right-hand side for <math>q(x) = p_{n-1}(x)</math> as follows. Because <math>\frac{p_{n}(x)}{x-x_{i}}</math> is a polynomial of degree {{math|''n'' − 1}}, we have <math display="block">\frac{p_{n}(x)}{x-x_{i}} = a_{n}x^{n-1} + s(x)</math> where {{math|''s''(''x'')}} is a polynomial of degree <math>n - 2</math>. Since {{math|''s''(''x'')}} is orthogonal to <math>p_{n-1}(x)</math> we have <math display="block">\int_{a}^{b}\omega(x)\frac{p_{n}(x)}{x-x_{i}}dx=\frac{a_{n}}{p_{n-1}(x_{i})} \int_{a}^{b}\omega(x)p_{n-1}(x)x^{n-1}dx </math> We can then write <math display="block">x^{n-1} = \left(x^{n-1} - \frac{p_{n-1}(x)}{a_{n-1}}\right) + \frac{p_{n-1}(x)}{a_{n-1}}</math> The term in the brackets is a polynomial of degree <math>n - 2</math>, which is therefore orthogonal to <math>p_{n-1}(x)</math>. The integral can thus be written as <math display="block">\int_{a}^{b}\omega(x)\frac{p_{n}(x)}{x-x_{i}}dx = \frac{a_{n}}{a_{n-1} p_{n-1}(x_{i})} \int_{a}^{b}\omega(x) p_{n-1}(x)^{2} dx </math> According to equation ({{EquationNote|2}}), the weights are obtained by dividing this by <math>p'_{n}(x_{i})</math> and that yields the expression in equation ({{EquationNote|1}}). <math>w_{i}</math> can also be expressed in terms of the orthogonal polynomials <math>p_{n}(x)</math> and now <math>p_{n+1}(x)</math>. In the three-term recurrence relation <math>p_{n+1}(x_{i}) = (a) p_{n}(x_{i}) + (b) p_{n-1}(x_{i})</math> the term with <math>p_{n}(x_{i})</math> vanishes because <math>x_{i}</math> is a root of <math>p_{n}(x)</math>, so <math>p_{n-1}(x_{i})</math> in equation ({{EquationNote|1}}) can be replaced by <math display="inline">\frac{1}{b} p_{n+1} \left(x_i\right)</math>.
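The weight formula in equation (1) can be checked numerically. The following Python sketch (an illustration added here, not drawn from the article's references; the variable names are our own) evaluates it for the Legendre case {{math|1=''ω''(''x'') = 1}} on {{math|[−1, 1]}} and compares the result against the nodes and weights returned by NumPy's built-in Gauss–Legendre rule. Since the formula is invariant under rescaling of the polynomials, the standard (non-monic) Legendre normalization can be used.

```python
import numpy as np
from math import factorial
from numpy.polynomial import legendre as L

n = 4  # size of the rule; any small n works for this check

# Standard (non-monic) Legendre polynomials P_n and P_{n-1}; the weight
# formula is invariant under rescaling, so monic normalization is not needed.
Pn = L.Legendre.basis(n)
Pn1 = L.Legendre.basis(n - 1)

xi = np.sort(np.real(Pn.roots()))  # nodes: the zeros of P_n

# Leading coefficients a_n of P_n and a_{n-1} of P_{n-1}.
a_n = factorial(2 * n) / (2**n * factorial(n) ** 2)
a_n1 = factorial(2 * n - 2) / (2 ** (n - 1) * factorial(n - 1) ** 2)

norm = 2.0 / (2 * n - 1)  # integral of P_{n-1}(x)^2 over [-1, 1]

# Equation (1): w_i = (a_n / a_{n-1}) * norm / (P_n'(x_i) * P_{n-1}(x_i))
w = (a_n / a_n1) * norm / (Pn.deriv()(xi) * Pn1(xi))

x_ref, w_ref = L.leggauss(n)  # reference Gauss-Legendre nodes and weights
assert np.allclose(xi, x_ref) and np.allclose(w, w_ref)
```

The weights also sum to the integral of the weight function, here {{math|2}}, which provides an additional sanity check.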
====Proof that the weights are positive==== Consider the following polynomial of degree <math>2n - 2</math>: <math display="block">f(x) = \prod_{\begin{smallmatrix} 1 \leq j \leq n \\ j \neq i \end{smallmatrix}}\frac{\left(x - x_j\right)^2}{\left(x_i - x_j\right)^2}</math> where, as above, the {{mvar|x<sub>j</sub>}} are the roots of the polynomial <math>p_{n}(x)</math>. Clearly <math>f(x_j) = \delta_{ij}</math>. Since the degree of <math>f(x)</math> is less than <math>2n - 1</math>, the Gaussian quadrature formula involving the weights and nodes obtained from <math>p_{n}(x)</math> applies. Since <math>f(x_{j}) = 0</math> for {{mvar|j}} not equal to {{mvar|i}}, we have <math display="block">\int_{a}^{b}\omega(x)f(x)dx=\sum_{j=1}^{n}w_{j}f(x_{j}) = \sum_{j=1}^{n} \delta_{ij} w_j = w_{i}.</math> Since the integrand <math>\omega(x)f(x)</math> is non-negative and not identically zero, it follows that <math>w_{i} > 0</math>. === Computation of Gaussian quadrature rules === There are many algorithms for computing the nodes {{mvar|x<sub>i</sub>}} and weights {{mvar|w<sub>i</sub>}} of Gaussian quadrature rules. The most popular are the Golub–Welsch algorithm requiring {{math|''O''(''n''<sup>2</sup>)}} operations, Newton's method for solving <math>p_n(x) = 0</math> using the [[Orthogonal polynomials#Recurrence relation|three-term recurrence]] for evaluation requiring {{math|''O''(''n''<sup>2</sup>)}} operations, and asymptotic formulas for large ''n'' requiring {{math|''O''(''n'')}} operations. ==== Recurrence relation ==== Orthogonal polynomials <math>p_r</math> with <math>(p_r, p_s) = 0</math> for <math>r \ne s</math> with respect to a scalar product <math>(\cdot , \cdot)</math>, with <math>\deg(p_r) = r</math> and leading coefficient one (i.e.
[[monic polynomial|monic]] orthogonal polynomials) satisfy the recurrence relation <math display="block">p_{r+1}(x) = (x - a_{r,r}) p_r(x) - a_{r,r-1} p_{r-1}(x) - \cdots - a_{r,0}p_0(x)</math> for <math>r = 0, 1, \ldots, n - 1</math>, where {{mvar|n}} is the maximal degree, which can be taken to be infinity, where the scalar product is defined by <math display="block">(f(x),g(x))=\int_a^b\omega(x)f(x)g(x)dx,</math> and where <math display="inline">a_{r,s} = \frac{\left(xp_r, p_s\right)}{\left(p_s, p_s\right)}</math>. First of all, the polynomials defined by the recurrence relation starting with <math>p_0(x) = 1</math> have leading coefficient one and correct degree. Given the starting point <math>p_0</math>, the orthogonality of the <math>p_r</math> can be shown by induction. For <math>r = s = 0</math> one has <math display="block">(p_1,p_0) = ((x-a_{0,0})\,p_0,p_0) = (xp_0,p_0) - a_{0,0}(p_0,p_0) = (xp_0,p_0) - (xp_0,p_0) = 0.</math> Now if <math>p_0, p_1, \ldots, p_r</math> are orthogonal, then so is <math>p_{r+1}</math>, because in <math display="block">(p_{r+1}, p_s) = (xp_r, p_s) - a_{r,r}(p_r, p_s) - a_{r,r-1}(p_{r-1}, p_s) - \cdots - a_{r,0}(p_0, p_s)</math> all scalar products vanish except for the first one and the one where <math>p_s</math> meets the same orthogonal polynomial. Therefore, <math display="block">(p_{r+1},p_s) = (xp_r,p_s) - a_{r,s}(p_s,p_s) = (xp_r,p_s)-(xp_r,p_s) = 0.</math> However, if the scalar product satisfies <math>(xf, g) = (f,xg)</math> (which is the case for Gaussian quadrature), the recurrence relation reduces to a three-term recurrence relation: for <math>s < r - 1</math>, <math>xp_s</math> is a polynomial of degree less than or equal to {{math|''r'' − 1}}. On the other hand, <math>p_r</math> is orthogonal to every polynomial of degree less than or equal to {{math|''r'' − 1}}. Therefore, one has <math>(xp_r, p_s) = (p_r, xp_s) = 0</math> and <math>a_{r,s} = 0</math> for {{math|''s'' < ''r'' − 1}}.
The recurrence relation then simplifies to <math display="block">p_{r+1}(x) = (x-a_{r,r}) p_r(x) - a_{r,r-1} p_{r-1}(x)</math> or <math display="block">p_{r+1}(x) = (x-a_r) p_r(x) - b_r p_{r-1}(x)</math> (with the convention <math>p_{-1}(x) \equiv 0</math>) where <math display="block">a_r := \frac{(xp_r,p_r)}{(p_r,p_r)}, \qquad b_r := \frac{(xp_r,p_{r-1})}{(p_{r-1},p_{r-1})} = \frac{(p_r,p_r)}{(p_{r-1},p_{r-1})}</math> (the last because of <math>(xp_r, p_{r-1}) = (p_r, xp_{r-1}) = (p_r, p_r)</math>, since <math>xp_{r-1}</math> differs from <math>p_r</math> by a polynomial of degree less than {{mvar|r}}). ==== The Golub–Welsch algorithm ==== The three-term recurrence relation can be written in matrix form <math>J\tilde{P} = x\tilde{P} - p_n(x) \mathbf{e}_n</math> where <math>\tilde{P} = \begin{bmatrix} p_0(x) & p_1(x) & \cdots & p_{n-1}(x) \end{bmatrix}^\mathsf{T}</math>, <math>\mathbf{e}_n</math> is the <math>n</math>th standard basis vector, i.e., <math>\mathbf{e}_n = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}^\mathsf{T}</math>, and {{mvar|J}} is the following [[tridiagonal matrix]], called the Jacobi matrix: <math display="block">\mathbf{J} = \begin{bmatrix} a_0 & 1 & 0 & \cdots & 0 \\ b_1 & a_1 & 1 & \ddots & \vdots \\ 0 & b_2 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & a_{n-2} & 1 \\ 0 & \cdots & 0 & b_{n-1} & a_{n-1} \end{bmatrix}.</math> The zeros <math>x_j</math> of <math>p_n</math>, which are used as nodes for the Gaussian quadrature, can be found by computing the eigenvalues of this matrix. This procedure is known as the ''Golub–Welsch algorithm''. For computing the weights and nodes, it is preferable to consider the [[Symmetric matrix|symmetric]] tridiagonal matrix <math>\mathcal{J}</math> with elements <math display="block">\begin{align} \mathcal{J}_{k,k} = J_{k,k} &= a_{k-1} & k &= 1,2,\ldots,n \\[2.1ex] \mathcal{J}_{k-1,k} = \mathcal{J}_{k,k-1} = \sqrt{J_{k,k-1}J_{k-1,k}} &= \sqrt{b_{k-1}} & k &= \hphantom{1,\,}2,\ldots,n.
\end{align}</math> That is, <math display="block">\mathcal{J} = \begin{bmatrix} a_0 & \sqrt{b_1} & 0 & \cdots & 0 \\ \sqrt{b_1} & a_1 & \sqrt{b_2} & \ddots & \vdots \\ 0 & \sqrt{b_2} & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & a_{n-2} & \sqrt{b_{n-1}} \\ 0 & \cdots & 0 & \sqrt{b_{n-1}} & a_{n-1} \end{bmatrix}.</math> {{math|'''J'''}} and <math>\mathcal{J}</math> are [[similar matrices]] and therefore have the same eigenvalues (the nodes). The weights can be computed from the corresponding eigenvectors: if <math>\phi^{(j)}</math> is a normalized eigenvector (i.e., an eigenvector with Euclidean norm equal to one) associated with the eigenvalue {{mvar|x<sub>j</sub>}}, the corresponding weight can be computed from the first component of this eigenvector, namely: <math display="block">w_j = \mu_0 \left(\phi_1^{(j)}\right)^2</math> where <math>\mu_0</math> is the integral of the weight function <math display="block">\mu_0 = \int_a^b \omega(x) dx.</math> See, for instance, {{harv|Gil|Segura|Temme|2007}} for further details. === Error estimates === The error of a Gaussian quadrature rule can be stated as follows.<ref>{{harvnb|Stoer|Bulirsch|2002|loc=Thm 3.6.24}}</ref> For an integrand which has {{math|2''n''}} continuous derivatives, <math display="block"> \int_a^b \omega(x)\,f(x)\,dx - \sum_{i=1}^n w_i\,f(x_i) = \frac{f^{(2n)}(\xi)}{(2n)!} \, (p_n, p_n) </math> for some {{mvar|ξ}} in {{math|(''a'', ''b'')}}, where {{mvar|p<sub>n</sub>}} is the monic (i.e.
the leading coefficient is {{math|1}}) orthogonal polynomial of degree {{mvar|n}} and where <math display="block"> (f,g) = \int_a^b \omega(x) f(x) g(x) \, dx.</math> In the important special case of {{math|1=''ω''(''x'') = 1}}, we have the error estimate<ref>{{harvnb|Kahaner|Moler|Nash|1989|loc=§5.2}}</ref> <math display="block"> \frac{\left(b - a\right)^{2n+1} \left(n!\right)^4}{(2n + 1)\left[\left(2n\right)!\right]^3} f^{(2n)} (\xi), \qquad a < \xi < b.</math> Stoer and Bulirsch remark that this error estimate is inconvenient in practice, since it may be difficult to estimate the order {{math|2''n''}} derivative, and furthermore the actual error may be much less than a bound established by the derivative. Another approach is to use two Gaussian quadrature rules of different orders, and to estimate the error as the difference between the two results. For this purpose, Gauss–Kronrod quadrature rules can be useful. === Gauss–Kronrod rules === {{main|Gauss–Kronrod quadrature formula}} If the interval {{math|[''a'', ''b'']}} is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points (except at the midpoint when the number of points is odd), and thus the integrand must be evaluated at every point. ''Gauss–Kronrod rules'' are extensions of Gauss quadrature rules generated by adding {{math|''n'' + 1}} points to an {{mvar|n}}-point rule in such a way that the resulting rule is of order {{math|2''n'' + 1}}. This allows for computing higher-order estimates while reusing the function values of a lower-order estimate. The difference between a Gauss quadrature rule and its Kronrod extension is often used as an estimate of the approximation error. === Gauss–Lobatto rules === Also known as '''Lobatto quadrature''',<ref>{{harvnb|Abramowitz|Stegun|1983|p=888}}</ref> the method is named after the Dutch mathematician [[Rehuel Lobatto]].
It is similar to Gaussian quadrature with the following differences: # The integration points include the end points of the integration interval. # It is accurate for polynomials up to degree {{math|2''n'' − 3}}, where {{mvar|n}} is the number of integration points.<ref>{{harvnb|Quarteroni|Sacco|Saleri|2000}}</ref> Lobatto quadrature of a function {{math|''f''(''x'')}} on the interval {{math|[−1, 1]}}: <math display="block">\int_{-1}^1 {f(x) \, dx} = \frac {2} {n(n-1)}[f(1) + f(-1)] + \sum_{i = 2}^{n-1} {w_i f(x_i)} + R_n.</math> Abscissas: {{mvar|x<sub>i</sub>}} is the <math>(i - 1)</math>st zero of <math>P'_{n-1}(x)</math>; here <math>P_m(x)</math> denotes the standard Legendre polynomial of degree {{mvar|m}} and the prime denotes its derivative. Weights: <math display="block">w_i = \frac{2}{n(n - 1)\left[P_{n-1}\left(x_i\right)\right]^2}, \qquad x_i \ne \pm 1.</math> Remainder: <math display="block">R_n = \frac{-n\left(n - 1\right)^3 2^{2n-1} \left[\left(n - 2\right)!\right]^4}{(2n-1) \left[\left(2n - 2\right)!\right]^3} f^{(2n-2)}(\xi), \qquad -1 < \xi < 1.</math> Some of the weights are: {| class="wikitable" style="margin:auto; background:white; text-align:center;" ! Number of points, ''n'' ! Points, {{mvar|x<sub>i</sub>}} ! 
Weights, {{mvar|w<sub>i</sub>}} |- | rowspan="2" | <math>3</math> | <math>0</math> || <math>\frac{4}{3}</math> |- | <math>\pm 1</math> || <math>\frac{1}{3}</math> |- | rowspan="2" | <math>4</math> | <math>\pm \sqrt{\frac{1}{5}}</math> || <math>\frac{5}{6}</math> |- | <math>\pm 1</math> || <math>\frac{1}{6}</math> |- | rowspan="3" | <math>5</math> | <math>0</math> || <math>\frac{32}{45}</math> |- | <math>\pm\sqrt{\frac{3}{7}}</math> || <math>\frac{49}{90}</math> |- | <math>\pm 1</math> || <math>\frac{1}{10}</math> |- | rowspan="3" | <math>6</math> | <math>\pm\sqrt{\frac{1}{3}-\frac{2\sqrt{7}}{21}}</math> || <math>\frac{14+\sqrt{7}}{30}</math> |- | <math>\pm\sqrt{\frac{1}{3} + \frac{2\sqrt{7}}{21}}</math> || <math>\frac{14 - \sqrt{7}}{30}</math> |- | <math>\pm 1</math> || <math>\frac{1}{15}</math> |- | rowspan="4" | <math>7</math> | <math>0</math> || <math>\frac{256}{525}</math> |- | <math>\pm\sqrt{\frac{5}{11}-\frac{2}{11}\sqrt{\frac{5}{3}}}</math> || <math>\frac{124 + 7\sqrt{15}}{350}</math> |- | <math>\pm\sqrt{\frac{5}{11} + \frac{2}{11}\sqrt{\frac{5}{3}}}</math> || <math>\frac{124 - 7\sqrt{15}}{350}</math> |- | <math>\pm 1</math> || <math>\frac{1}{21}</math> |} An adaptive variant of this algorithm with 2 interior nodes<ref>{{harvnb|Gander|Gautschi|2000}}</ref> is found in [[GNU Octave]] and [[MATLAB]] as <code>quadl</code> and <code>integrate</code>.<ref>{{harvnb|MathWorks|2012}}</ref><ref>{{harvnb|Eaton|Bateman|Hauberg|Wehbring|2018}}</ref>
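The Golub–Welsch procedure described in the computation section can be sketched in a few lines of code. The following Python example (an illustration added here, not part of the article's sources; the function name is our own) assembles the symmetric Jacobi matrix for the Legendre weight {{math|1=''ω''(''x'') = 1}} on {{math|[−1, 1]}}, whose monic recurrence coefficients are {{math|1=''a<sub>r</sub>'' = 0}} and {{math|1=''b<sub>r</sub>'' = ''r''<sup>2</sup>/(4''r''<sup>2</sup> − 1)}}, and recovers the Gauss–Legendre nodes and weights from its eigendecomposition, cross-checking against NumPy's built-in rule.

```python
import numpy as np

def golub_welsch_legendre(n):
    """Gauss-Legendre nodes and weights via the Golub-Welsch algorithm.

    For the Legendre weight w(x) = 1 on [-1, 1] the monic three-term
    recurrence has coefficients a_r = 0 and b_r = r^2 / (4 r^2 - 1).
    """
    k = np.arange(1, n)
    off = k / np.sqrt(4.0 * k**2 - 1.0)      # sqrt(b_k): the off-diagonal
    J = np.diag(off, 1) + np.diag(off, -1)   # symmetric Jacobi matrix (diagonal a_r = 0)
    nodes, vecs = np.linalg.eigh(J)          # eigenvalues are the nodes, ascending
    mu0 = 2.0                                # integral of the weight over [-1, 1]
    weights = mu0 * vecs[0, :] ** 2          # squared first components of eigenvectors
    return nodes, weights

nodes, weights = golub_welsch_legendre(5)
x_ref, w_ref = np.polynomial.legendre.leggauss(5)  # NumPy's reference rule
assert np.allclose(nodes, x_ref) and np.allclose(weights, w_ref)
# A 5-point Gauss rule is exact for polynomials up to degree 9:
assert np.isclose(weights @ nodes**8, 2.0 / 9.0)
```

The sign ambiguity of the eigenvectors is irrelevant here, since only the squared first components enter the weight formula.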