=== Computation of Gaussian quadrature rules ===
There are many algorithms for computing the nodes {{mvar|x<sub>i</sub>}} and weights {{mvar|w<sub>i</sub>}} of Gaussian quadrature rules. The most popular are the Golub–Welsch algorithm, requiring {{math|''O''(''n''<sup>2</sup>)}} operations; Newton's method for solving <math>p_n(x) = 0</math> using the [[Orthogonal polynomials#Recurrence relation|three-term recurrence]] for evaluation, requiring {{math|''O''(''n''<sup>2</sup>)}} operations; and asymptotic formulas for large {{mvar|n}}, requiring {{math|''O''(''n'')}} operations.

==== Recurrence relation ====
Orthogonal polynomials <math>p_r</math> with <math>(p_r, p_s) = 0</math> for <math>r \ne s</math> under the scalar product
<math display="block">(f,g) = \int_a^b \omega(x) f(x) g(x) \, dx,</math>
with <math>\deg(p_r) = r</math> and leading coefficient one (i.e. [[monic polynomial|monic]] orthogonal polynomials), satisfy the recurrence relation
<math display="block">p_{r+1}(x) = (x - a_{r,r}) p_r(x) - a_{r,r-1} p_{r-1}(x) - \cdots - a_{r,0} p_0(x)</math>
for <math>r = 0, 1, \ldots, n - 1</math>, where {{mvar|n}} is the maximal degree (which can be taken to be infinity), and where <math display="inline">a_{r,s} = \frac{\left(xp_r, p_s\right)}{\left(p_s, p_s\right)}</math>.

First of all, the polynomials defined by the recurrence relation starting with <math>p_0(x) = 1</math> have leading coefficient one and the correct degree. Given the starting point <math>p_0</math>, the orthogonality of the <math>p_r</math> can be shown by induction. For <math>r = s = 0</math> one has
<math display="block">(p_1,p_0) = ((x-a_{0,0})p_0, p_0) = (xp_0,p_0) - a_{0,0}(p_0,p_0) = (xp_0,p_0) - (xp_0,p_0) = 0.</math>
Now if <math>p_0, p_1, \ldots, p_r</math> are orthogonal, then so is <math>p_{r+1}</math>, because in
<math display="block">(p_{r+1}, p_s) = (xp_r, p_s) - a_{r,r}(p_r, p_s) - a_{r,r-1}(p_{r-1}, p_s) - \cdots - a_{r,0}(p_0, p_s)</math>
all scalar products vanish except the first one and the one where <math>p_s</math> meets the same orthogonal polynomial. Therefore,
<math display="block">(p_{r+1},p_s) = (xp_r,p_s) - a_{r,s}(p_s,p_s) = (xp_r,p_s) - (xp_r,p_s) = 0.</math>

However, if the scalar product satisfies <math>(xf, g) = (f, xg)</math> (which is the case for Gaussian quadrature), the recurrence relation reduces to a three-term recurrence relation: for <math>s < r - 1</math>, <math>xp_s</math> is a polynomial of degree at most {{math|''r'' − 1}}. On the other hand, <math>p_r</math> is orthogonal to every polynomial of degree at most {{math|''r'' − 1}}. Therefore, one has <math>(xp_r, p_s) = (p_r, xp_s) = 0</math> and <math>a_{r,s} = 0</math> for {{math|''s'' < ''r'' − 1}}. The recurrence relation then simplifies to
<math display="block">p_{r+1}(x) = (x - a_{r,r}) p_r(x) - a_{r,r-1} p_{r-1}(x)</math>
or
<math display="block">p_{r+1}(x) = (x - a_r) p_r(x) - b_r p_{r-1}(x)</math>
(with the convention <math>p_{-1}(x) \equiv 0</math>), where
<math display="block">a_r := \frac{(xp_r,p_r)}{(p_r,p_r)}, \qquad b_r := \frac{(xp_r,p_{r-1})}{(p_{r-1},p_{r-1})} = \frac{(p_r,p_r)}{(p_{r-1},p_{r-1})}</math>
(the last equality holds because <math>(xp_r, p_{r-1}) = (p_r, xp_{r-1}) = (p_r, p_r)</math>, since <math>xp_{r-1}</math> differs from <math>p_r</math> by a polynomial of degree less than {{mvar|r}}).
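As an illustration of the three-term recurrence, the following Python sketch computes the coefficients <math>a_r</math> and <math>b_r</math> for the weight <math>\omega(x) = 1</math> on <math>[-1, 1]</math> (monic Legendre polynomials) by evaluating the scalar products numerically. The function names and the use of SciPy for the integrals are illustrative choices, not part of any standard library routine.

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    """Scalar product (f, g) = integral of w(x) f(x) g(x) over [-1, 1], with w = 1."""
    val, _ = quad(lambda t: f(t) * g(t), -1.0, 1.0)
    return val

def recurrence_coefficients(n):
    """Coefficients a_0..a_{n-1} and b_1..b_{n-1} of the three-term recurrence
    p_{r+1}(x) = (x - a_r) p_r(x) - b_r p_{r-1}(x) for monic orthogonal polynomials."""
    x = np.polynomial.Polynomial([0.0, 1.0])   # the monomial x
    p_prev = np.polynomial.Polynomial([0.0])   # p_{-1} = 0 (convention)
    p_curr = np.polynomial.Polynomial([1.0])   # p_0 = 1
    a, b = [], [0.0]                           # b_0 multiplies p_{-1} = 0, so it is unused
    for r in range(n):
        norm = inner(p_curr, p_curr)
        a.append(inner(x * p_curr, p_curr) / norm)   # a_r = (x p_r, p_r) / (p_r, p_r)
        if r > 0:
            b.append(norm / inner(p_prev, p_prev))   # b_r = (p_r, p_r) / (p_{r-1}, p_{r-1})
        p_prev, p_curr = p_curr, (x - a[r]) * p_curr - b[r] * p_prev
    return np.array(a), np.array(b[1:])

a, b = recurrence_coefficients(5)
print(a)  # all (numerically) zero, by symmetry of the weight
print(b)  # matches the known values r**2 / (4*r**2 - 1) for r = 1, 2, 3, 4
</syntaxhighlight>

For the Legendre weight the coefficients are known in closed form (<math>a_r = 0</math>, <math>b_r = r^2/(4r^2-1)</math>), which makes this a convenient correctness check; for a general weight the integrals would be evaluated the same way.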
==== The Golub–Welsch algorithm ====
The three-term recurrence relation can be written in matrix form <math>J\tilde{P} = x\tilde{P} - p_n(x) \mathbf{e}_n</math>, where <math>\tilde{P} = \begin{bmatrix} p_0(x) & p_1(x) & \cdots & p_{n-1}(x) \end{bmatrix}^\mathsf{T}</math>, <math>\mathbf{e}_n</math> is the <math>n</math>th standard basis vector, i.e., <math>\mathbf{e}_n = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}^\mathsf{T}</math>, and {{mvar|J}} is the following [[tridiagonal matrix]], called the Jacobi matrix:
<math display="block">\mathbf{J} = \begin{bmatrix} a_0 & 1 & 0 & \cdots & 0 \\ b_1 & a_1 & 1 & \ddots & \vdots \\ 0 & b_2 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & a_{n-2} & 1 \\ 0 & \cdots & 0 & b_{n-1} & a_{n-1} \end{bmatrix}.</math>
The zeros <math>x_j</math> of <math>p_n</math>, which are used as the nodes of the Gaussian quadrature rule, can be found by computing the eigenvalues of this matrix. This procedure is known as the ''Golub–Welsch algorithm''.

For computing the weights and nodes, it is preferable to consider the [[Symmetric matrix|symmetric]] tridiagonal matrix <math>\mathcal{J}</math> with elements
<math display="block">\begin{align} \mathcal{J}_{k,k} = J_{k,k} &= a_{k-1}, & k &= 1, 2, \ldots, n, \\[2.1ex] \mathcal{J}_{k-1,k} = \mathcal{J}_{k,k-1} = \sqrt{J_{k,k-1} J_{k-1,k}} &= \sqrt{b_{k-1}}, & k &= \hphantom{1,\,}2, \ldots, n. \end{align}</math>
That is,
<math display="block">\mathcal{J} = \begin{bmatrix} a_0 & \sqrt{b_1} & 0 & \cdots & 0 \\ \sqrt{b_1} & a_1 & \sqrt{b_2} & \ddots & \vdots \\ 0 & \sqrt{b_2} & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & a_{n-2} & \sqrt{b_{n-1}} \\ 0 & \cdots & 0 & \sqrt{b_{n-1}} & a_{n-1} \end{bmatrix}.</math>
{{math|'''J'''}} and <math>\mathcal{J}</math> are [[similar matrices]] and therefore have the same eigenvalues (the nodes). The weights can be computed from the corresponding eigenvectors: if <math>\phi^{(j)}</math> is a normalized eigenvector (i.e., an eigenvector with Euclidean norm equal to one) associated with the eigenvalue <math>x_j</math>, the corresponding weight can be computed from the first component of this eigenvector, namely
<math display="block">w_j = \mu_0 \left(\phi_1^{(j)}\right)^2,</math>
where <math>\mu_0</math> is the integral of the weight function,
<math display="block">\mu_0 = \int_a^b \omega(x) \, dx.</math>
See, for instance, {{harv|Gil|Segura|Temme|2007}} for further details.
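As a concrete illustration, the following Python/NumPy sketch builds the symmetric Jacobi matrix <math>\mathcal{J}</math> from given recurrence coefficients and recovers the nodes and weights from its eigendecomposition. The function name <code>golub_welsch</code> is illustrative; the test case uses the known Gauss–Legendre coefficients <math>a_r = 0</math>, <math>b_r = r^2/(4r^2 - 1)</math> and <math>\mu_0 = 2</math>.

<syntaxhighlight lang="python">
import numpy as np

def golub_welsch(a, b, mu0):
    """Nodes and weights of the Gaussian quadrature rule from the
    three-term recurrence coefficients.

    a   : (n,)   diagonal coefficients a_0 .. a_{n-1}
    b   : (n-1,) off-diagonal coefficients b_1 .. b_{n-1} (all positive)
    mu0 : integral of the weight function over the interval
    """
    # Symmetric tridiagonal Jacobi matrix with sqrt(b_k) off the diagonal.
    J = np.diag(a) + np.diag(np.sqrt(b), 1) + np.diag(np.sqrt(b), -1)
    nodes, vecs = np.linalg.eigh(J)        # eigenvalues are the nodes x_j
    weights = mu0 * vecs[0, :] ** 2        # w_j = mu0 * (first eigenvector component)^2
    return nodes, weights

# Example: 5-point Gauss-Legendre rule (w = 1 on [-1, 1], so mu0 = 2).
n = 5
r = np.arange(1, n)
nodes, weights = golub_welsch(np.zeros(n), r**2 / (4.0 * r**2 - 1.0), mu0=2.0)

# The rule is exact for polynomials of degree <= 2n - 1 = 9;
# the integral of x^8 over [-1, 1] is 2/9.
print(np.sum(weights * nodes**8))  # ~0.22222
</syntaxhighlight>

The example uses a dense symmetric eigensolver for brevity; exploiting the tridiagonal structure (e.g., with a specialized QR or QL iteration, as in the original Golub–Welsch paper) is what yields the {{math|''O''(''n''<sup>2</sup>)}} operation count quoted above.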