Clenshaw algorithm


In numerical analysis, the Clenshaw algorithm, also called Clenshaw summation, is a recursive method to evaluate a linear combination of Chebyshev polynomials.<ref name="Clenshaw55">Template:Cite journal Note that this paper is written in terms of the Shifted Chebyshev polynomials of the first kind <math>T^*_n(x) = T_n(2x-1)</math>.</ref><ref name="Tscherning82"/> The method was published by Charles William Clenshaw in 1955. It is a generalization of Horner's method for evaluating a linear combination of monomials.

It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term recurrence relation.<ref>Template:Citation</ref>

Clenshaw algorithm

In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions <math>\phi_k(x)</math>: <math display="block">S(x) = \sum_{k=0}^n a_k \phi_k(x)</math> where <math>\phi_k,\; k=0, 1, \ldots</math> is a sequence of functions that satisfy the linear recurrence relation <math display="block">\phi_{k+1}(x) = \alpha_k(x)\,\phi_k(x) + \beta_k(x)\,\phi_{k-1}(x),</math> where the coefficients <math>\alpha_k(x)</math> and <math>\beta_k(x)</math> are known in advance.

The algorithm is most useful when <math>\phi_k(x)</math> are functions that are complicated to compute directly, but <math>\alpha_k(x)</math> and <math>\beta_k(x)</math> are particularly simple. In the most common applications, <math>\alpha(x)</math> does not depend on <math>k</math>, and <math>\beta</math> is a constant that depends on neither <math>x</math> nor <math>k</math>.

To perform the summation for given series of coefficients <math>a_0, \ldots, a_n</math>, compute the values <math>b_k(x)</math> by the "reverse" recurrence formula: <math display="block"> \begin{align}

 b_{n+1}(x) &= b_{n+2}(x) = 0, \\
 b_k(x) &= a_k + \alpha_k(x)\,b_{k+1}(x) + \beta_{k+1}(x)\,b_{k+2}(x).

\end{align} </math>

Note that this computation makes no direct reference to the functions <math>\phi_k(x)</math>. After computing <math>b_2(x)</math> and <math>b_1(x)</math>, the desired sum can be expressed in terms of them and the simplest functions <math>\phi_0(x)</math> and <math>\phi_1(x)</math>: <math display="block">S(x) = \phi_0(x)\,a_0 + \phi_1(x)\,b_1(x) + \beta_1(x)\,\phi_0(x)\,b_2(x).</math>
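The general recurrence above can be sketched in a few lines of Python. This is an illustrative implementation, not from the source: the function name and the calling convention (coefficient callables `alpha(k, x)` and `beta(k, x)`, plus the starting values <math>\phi_0(x)</math> and <math>\phi_1(x)</math>) are assumptions.

```python
def clenshaw(x, a, alpha, beta, phi0, phi1):
    """Evaluate S(x) = sum_{k=0}^{n} a[k] * phi_k(x) by Clenshaw's recurrence.

    alpha(k, x), beta(k, x) give the coefficients in
        phi_{k+1}(x) = alpha(k, x) * phi_k(x) + beta(k, x) * phi_{k-1}(x);
    phi0, phi1 are the values of phi_0(x) and phi_1(x).
    """
    n = len(a) - 1
    b_k1 = b_k2 = 0.0              # b_{n+1}(x) = b_{n+2}(x) = 0
    for k in range(n, 0, -1):      # k = n, n-1, ..., 1
        b_k = a[k] + alpha(k, x) * b_k1 + beta(k + 1, x) * b_k2
        b_k1, b_k2 = b_k, b_k1     # shift: (b_{k+1}, b_{k+2}) <- (b_k, b_{k+1})
    # S(x) = phi_0(x) a_0 + phi_1(x) b_1(x) + beta_1(x) phi_0(x) b_2(x)
    return phi0 * a[0] + phi1 * b_k1 + beta(1, x) * phi0 * b_k2
```

With `alpha(k, x) = 2x`, `beta(k, x) = -1`, `phi0 = 1`, and `phi1 = x` this reproduces the Chebyshev case discussed below.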

See Fox and Parker<ref name="FoxParker">Template:Citation</ref> for more information and stability analyses.

Examples

Horner as a special case of Clenshaw

A particularly simple case occurs when evaluating a polynomial of the form <math display="block">S(x) = \sum_{k=0}^n a_k x^k.</math> The functions are simply <math display="block"> \begin{align}

 \phi_0(x) &= 1, \\
 \phi_k(x) &= x^k = x\phi_{k-1}(x)

\end{align} </math> and are produced by the recurrence coefficients <math>\alpha(x) = x</math> and <math>\beta = 0</math>.

In this case, the recurrence formula to compute the sum is <math display="block">b_k(x) = a_k + x b_{k+1}(x)</math> and the sum is simply <math display="block">S(x) = a_0 + x b_1(x) = b_0(x),</math> which is exactly the usual Horner's method.
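As a minimal sketch (the function name is illustrative), the degenerate recurrence with <math>\beta = 0</math> collapses to the familiar Horner loop:

```python
def horner(x, a):
    """Evaluate sum_{k=0}^{n} a[k] * x**k by Horner's method.

    This is Clenshaw's recurrence with alpha(x) = x and beta = 0:
    b_{n+1} = 0;  b_k = a_k + x * b_{k+1};  S(x) = b_0.
    """
    b = 0.0
    for a_k in reversed(a):  # k = n, n-1, ..., 0
        b = a_k + x * b
    return b
```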

Special case for Chebyshev series

Consider a truncated Chebyshev series <math display="block">p_n(x) = a_0 + a_1 T_1(x) + a_2 T_2(x) + \cdots + a_n T_n(x).</math>

The coefficients in the recursion relation for the Chebyshev polynomials are <math display="block">\alpha(x) = 2x, \quad \beta = -1,</math> with the initial conditions <math display="block">T_0(x) = 1, \quad T_1(x) = x.</math>

Thus, the recurrence is <math display="block">b_k(x) = a_k + 2xb_{k+1}(x) - b_{k+2}(x)</math> and the final results are <math display="block">b_0(x) = a_0 + 2xb_1(x) - b_2(x),</math> <math display="block">p_n(x) = \tfrac{1}{2} \left[a_0+b_0(x) - b_2(x)\right].</math>

An equivalent expression for the sum is given by <math display="block">p_n(x) = a_0 + xb_1(x) - b_2(x).</math>
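The Chebyshev special case can be sketched as follows (an illustrative implementation; the function name is an assumption). It uses the equivalent expression <math>p_n(x) = a_0 + x b_1(x) - b_2(x)</math> for the final step.

```python
import math

def chebyshev_clenshaw(x, a):
    """Evaluate a[0] + a[1]*T_1(x) + ... + a[n]*T_n(x) by Clenshaw's recurrence."""
    b_k1 = b_k2 = 0.0                           # b_{n+1} = b_{n+2} = 0
    for a_k in a[:0:-1]:                        # k = n, n-1, ..., 1
        b_k1, b_k2 = a_k + 2 * x * b_k1 - b_k2, b_k1
    return a[0] + x * b_k1 - b_k2               # p_n(x) = a_0 + x b_1 - b_2
```

For <math>|x| \le 1</math> the result can be checked against the direct evaluation <math>T_k(x) = \cos(k \arccos x)</math>.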

Meridian arc length on the ellipsoid

Clenshaw summation is extensively used in geodetic applications.<ref name="Tscherning82">Template:Citation</ref> A simple application is summing the trigonometric series to compute the meridian arc distance on the surface of an ellipsoid. These have the form <math display="block">m(\theta) = C_0\,\theta + C_1\sin \theta + C_2\sin 2\theta + \cdots + C_n\sin n\theta.</math>

Leaving off the initial <math>C_0\,\theta</math> term, the remainder is a summation of the appropriate form. There is no leading term because <math>\phi_0(\theta) = \sin 0\theta = \sin 0 = 0</math>.

The recurrence relation for <math>\sin k\theta</math> is <math display="block">\sin (k+1)\theta = 2 \cos\theta \sin k\theta - \sin (k-1)\theta,</math> making the coefficients in the recursion relation <math display="block">\alpha_k(\theta) = 2\cos\theta, \quad \beta_k = -1,</math> and the evaluation of the series is given by <math display="block">\begin{align}

 b_{n+1}(\theta) &= b_{n+2}(\theta) = 0, \\
 b_k(\theta) &= C_k + 2\cos \theta \,b_{k+1}(\theta) - b_{k+2}(\theta),\quad\mathrm{for\ } n\ge k \ge 1.

\end{align}</math> The final step is made particularly simple because <math>\phi_0(\theta) = \sin 0 = 0</math>, so the end of the recurrence is simply <math>b_1(\theta)\sin(\theta)</math>; the <math>C_0\,\theta</math> term is added separately: <math display="block">m(\theta) = C_0\,\theta + b_1(\theta)\sin \theta.</math>

Note that the algorithm requires the evaluation of only two trigonometric quantities, <math>\cos \theta</math> and <math>\sin \theta</math>.
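A sketch of this summation in Python (the function name and the convention that `C[0]` multiplies <math>\theta</math> are assumptions for illustration):

```python
import math

def meridian_sum(theta, C):
    """Evaluate m(theta) = C[0]*theta + sum_{k=1}^{n} C[k]*sin(k*theta)
    by Clenshaw summation; only cos(theta) and sin(theta) are evaluated."""
    two_cos = 2.0 * math.cos(theta)
    b_k1 = b_k2 = 0.0                           # b_{n+1} = b_{n+2} = 0
    for C_k in C[:0:-1]:                        # k = n, n-1, ..., 1
        b_k1, b_k2 = C_k + two_cos * b_k1 - b_k2, b_k1
    # phi_0 = sin 0 = 0, so the sum reduces to b_1 * sin(theta);
    # the C[0]*theta term is added separately.
    return C[0] * theta + b_k1 * math.sin(theta)
```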

Difference in meridian arc lengths

Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write <math display="block">

 m(\theta_1)-m(\theta_2) = C_0(\theta_1-\theta_2) + \sum_{k=1}^n 2 C_k
 \sin\bigl({\textstyle\frac12}k(\theta_1-\theta_2)\bigr)
 \cos\bigl({\textstyle\frac12}k(\theta_1+\theta_2)\bigr).

</math> Clenshaw summation can be applied in this case<ref>Template:Cite journal</ref> provided we simultaneously compute <math>m(\theta_1)+m(\theta_2)</math> and perform a matrix summation, <math display="block">

 \mathsf M(\theta_1,\theta_2) = \begin{bmatrix}
 (m(\theta_1) + m(\theta_2)) / 2\\
 (m(\theta_1) - m(\theta_2)) / (\theta_1 - \theta_2)
 \end{bmatrix} =
 C_0 \begin{bmatrix} \mu \\ 1 \end{bmatrix} +
 \sum_{k=1}^n C_k \mathsf F_k(\theta_1,\theta_2),

</math> where <math display="block"> \begin{align}

 \delta &= \tfrac{1}{2}(\theta_1-\theta_2), \\[1ex]
 \mu &= \tfrac{1}{2}(\theta_1+\theta_2), \\[1ex]
 \mathsf F_k(\theta_1,\theta_2) &=
   \begin{bmatrix}
      \cos k \delta \sin k \mu \\
      \dfrac{\sin k \delta}\delta \cos k \mu
   \end{bmatrix}.

\end{align} </math> The first element of <math>\mathsf M(\theta_1,\theta_2)</math> is the average value of <math>m</math> and the second element is the average slope. <math>\mathsf F_k(\theta_1,\theta_2)</math> satisfies the recurrence relation <math display="block">

 \mathsf F_{k+1}(\theta_1,\theta_2) =
 \mathsf A(\theta_1,\theta_2) \mathsf F_k(\theta_1,\theta_2) -
 \mathsf F_{k-1}(\theta_1,\theta_2),

</math> where <math display="block">

  \mathsf A(\theta_1,\theta_2) = 2\begin{bmatrix}
     \cos \delta \cos \mu & -\delta\sin \delta \sin \mu \\
     - \displaystyle\frac{\sin \delta}\delta \sin \mu &   \cos \delta \cos \mu
 \end{bmatrix}

</math> takes the place of <math>\alpha</math> in the recurrence relation, and <math>\beta=-1</math>. The standard Clenshaw algorithm can now be applied to yield <math display="block"> \begin{align}

 \mathsf B_{n+1} &= \mathsf B_{n+2} = \mathsf 0, \\[1ex]
 \mathsf B_k &= C_k \mathsf I + \mathsf A \mathsf B_{k+1} -
 \mathsf B_{k+2}, \qquad\mathrm{for\ } n\ge k \ge 1, \\[1ex]
 \mathsf M(\theta_1,\theta_2) &=
   C_0 \begin{bmatrix}\mu\\1\end{bmatrix} +
   \mathsf B_1 \mathsf F_1(\theta_1,\theta_2),

\end{align}</math> where <math>\mathsf B_k</math> are 2×2 matrices. Finally we have <math display="block">

 \frac{m(\theta_1) - m(\theta_2)}{\theta_1 - \theta_2} =
 \mathsf M_2(\theta_1, \theta_2).

</math> This technique can be used in the limit <math>\theta_2 = \theta_1 = \mu</math> and <math> \delta = 0 </math> to simultaneously compute <math>m(\mu)</math> and the derivative <math>dm(\mu)/d\mu</math>, provided that, in evaluating <math>\mathsf F_1</math> and <math>\mathsf A</math>, we take <math>\lim_{\delta \to 0} (\sin k \delta)/\delta = k</math>.
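The matrix form can be sketched in plain Python as follows. This is an illustrative implementation under the conventions above (the function name is an assumption); it returns both components of <math>\mathsf M(\theta_1,\theta_2)</math> and handles the limit <math>\delta = 0</math> by taking <math>\sin\delta/\delta = 1</math>.

```python
import math

def meridian_pair(theta1, theta2, C):
    """Return ((m(t1)+m(t2))/2, (m(t1)-m(t2))/(t1-t2)) for
    m(theta) = C[0]*theta + sum_{k=1}^{n} C[k]*sin(k*theta),
    via the 2x2 matrix form of Clenshaw summation."""
    d  = 0.5 * (theta1 - theta2)                # delta
    mu = 0.5 * (theta1 + theta2)
    cd, sd = math.cos(d), math.sin(d)
    cmu, smu = math.cos(mu), math.sin(mu)
    sinc_d = sd / d if d != 0.0 else 1.0        # lim_{d->0} sin(d)/d = 1
    # A takes the place of alpha in the recurrence; beta = -1.
    A = [[2 * cd * cmu,      -2 * d * sd * smu],
         [-2 * sinc_d * smu,  2 * cd * cmu]]
    B1 = [[0.0, 0.0], [0.0, 0.0]]               # B_{k+1}
    B2 = [[0.0, 0.0], [0.0, 0.0]]               # B_{k+2}
    for C_k in C[:0:-1]:                        # k = n, n-1, ..., 1
        Bk = [[(C_k if i == j else 0.0)
               + sum(A[i][l] * B1[l][j] for l in range(2)) - B2[i][j]
               for j in range(2)] for i in range(2)]
        B1, B2 = Bk, B1
    F1 = [cd * smu, sinc_d * cmu]               # F_0 = 0, so M = C0*[mu,1] + B1*F1
    mean  = C[0] * mu + B1[0][0] * F1[0] + B1[0][1] * F1[1]
    slope = C[0]      + B1[1][0] * F1[0] + B1[1][1] * F1[1]
    return mean, slope
```

Calling it with <math>\theta_1 = \theta_2 = \mu</math> yields <math>m(\mu)</math> and the derivative <math>dm(\mu)/d\mu</math> directly.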

References

Template:Reflist