In [[numerical analysis]], the '''Clenshaw algorithm''', also called '''Clenshaw summation''', is a [[Recursion|recursive]] method to evaluate a linear combination of [[Chebyshev polynomials]].<ref name="Clenshaw55">{{Cite journal | last1 = Clenshaw | first1 = C. W.| title = A note on the summation of Chebyshev series| url = https://www.ams.org/journals/mcom/1955-09-051/S0025-5718-1955-0071856-0/| doi = 10.1090/S0025-5718-1955-0071856-0| journal = Mathematical Tables and Other Aids to Computation| issn = 0025-5718| volume = 9| issue = 51| page = 118| date=July 1955 | doi-access = free}} Note that this paper is written in terms of the ''Shifted'' Chebyshev polynomials of the first kind <math>T^*_n(x) = T_n(2x-1)</math>.</ref><ref name="Tscherning82"/> The method was published by [[Charles William Clenshaw]] in 1955. It is a generalization of [[Horner's method]] for evaluating a linear combination of [[monomial]]s. It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term [[recurrence relation]].<ref>{{Citation |last1=Press |first1=WH |last2=Teukolsky |first2=SA |last3=Vetterling |first3=WT |last4=Flannery |first4=BP |year=2007 |title=Numerical Recipes: The Art of Scientific Computing |edition=3rd |publisher=Cambridge University Press |publication-place=New York |isbn=978-0-521-88068-8 |chapter=Section 5.4.2. Clenshaw's Recurrence Formula |chapter-url=http://apps.nrbook.com/empanel/index.html?pg=222}}</ref>

==Clenshaw algorithm==
In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions <math>\phi_k(x)</math>:
<math display="block">S(x) = \sum_{k=0}^n a_k \phi_k(x)</math>
where <math>\phi_k,\; k=0, 1, \ldots</math> is a sequence of functions that satisfy the linear recurrence relation
<math display="block">\phi_{k+1}(x) = \alpha_k(x)\,\phi_k(x) + \beta_k(x)\,\phi_{k-1}(x),</math>
where the coefficients <math>\alpha_k(x)</math> and <math>\beta_k(x)</math> are known in advance.

The algorithm is most useful when <math>\phi_k(x)</math> are functions that are complicated to compute directly, but <math>\alpha_k(x)</math> and <math>\beta_k(x)</math> are particularly simple. In the most common applications, <math>\alpha(x)</math> does not depend on <math>k</math>, and <math>\beta</math> is a constant that depends on neither <math>x</math> nor <math>k</math>.

To perform the summation for a given series of coefficients <math>a_0, \ldots, a_n</math>, compute the values <math>b_k(x)</math> by the "reverse" recurrence formula:
<math display="block">
\begin{align}
b_{n+1}(x) &= b_{n+2}(x) = 0, \\
b_k(x) &= a_k + \alpha_k(x)\,b_{k+1}(x) + \beta_{k+1}(x)\,b_{k+2}(x).
\end{align}
</math>
Note that this computation makes no direct reference to the functions <math>\phi_k(x)</math>. After computing <math>b_2(x)</math> and <math>b_1(x)</math>, the desired sum can be expressed in terms of them and the simplest functions <math>\phi_0(x)</math> and <math>\phi_1(x)</math>:
<math display="block">S(x) = \phi_0(x)\,a_0 + \phi_1(x)\,b_1(x) + \beta_1(x)\,\phi_0(x)\,b_2(x).</math>

See Fox and Parker<ref name="FoxParker">{{Citation |first1=Leslie |last1=Fox |first2=Ian B. |last2=Parker |title=Chebyshev Polynomials in Numerical Analysis |publisher=Oxford University Press |year=1968 |isbn=0-19-859614-6}}</ref> for more information and stability analyses.
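The reverse recurrence translates directly into a short routine. The following Python sketch is illustrative only (the function name <code>clenshaw</code> and the convention of passing <math>\alpha_k</math> and <math>\beta_k</math> as callables are arbitrary choices, not taken from the cited sources); the coefficients are assumed to be supplied as a list <math>a_0, \ldots, a_n</math>:

<syntaxhighlight lang="python">
def clenshaw(x, a, alpha, beta, phi0, phi1):
    """Evaluate S(x) = sum_{k=0}^{n} a[k]*phi_k(x) by the reverse recurrence.

    alpha(k, x) and beta(k, x) are the coefficients in
    phi_{k+1}(x) = alpha(k, x)*phi_k(x) + beta(k, x)*phi_{k-1}(x);
    phi0 and phi1 are the values of phi_0(x) and phi_1(x).
    """
    n = len(a) - 1
    b_k1 = b_k2 = 0.0                # b_{n+1}(x) = b_{n+2}(x) = 0
    for k in range(n, 0, -1):        # compute b_n(x), ..., b_1(x)
        b_k = a[k] + alpha(k, x) * b_k1 + beta(k + 1, x) * b_k2
        b_k2, b_k1 = b_k1, b_k
    # S(x) = phi_0(x)*a_0 + phi_1(x)*b_1(x) + beta_1(x)*phi_0(x)*b_2(x)
    return phi0 * a[0] + phi1 * b_k1 + beta(1, x) * phi0 * b_k2
</syntaxhighlight>

For instance, <code>clenshaw(x, a, lambda k, x: 2*x, lambda k, x: -1.0, 1.0, x)</code> would evaluate a Chebyshev series with coefficients <code>a</code>, since the Chebyshev recurrence has <math>\alpha = 2x</math> and <math>\beta = -1</math>.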
==Examples==

===Horner as a special case of Clenshaw===
A particularly simple case occurs when evaluating a polynomial of the form
<math display="block">S(x) = \sum_{k=0}^n a_k x^k.</math>
The functions are simply
<math display="block">
\begin{align}
\phi_0(x) &= 1, \\
\phi_k(x) &= x^k = x\phi_{k-1}(x)
\end{align}
</math>
and are produced by the recurrence coefficients <math>\alpha(x) = x</math> and <math>\beta = 0</math>.

In this case, the recurrence formula to compute the sum is
<math display="block">b_k(x) = a_k + x b_{k+1}(x)</math>
and the sum is simply
<math display="block">S(x) = a_0 + x b_1(x) = b_0(x),</math>
which is exactly the usual [[Horner's method]].

===Special case for Chebyshev series===
Consider a truncated [[Chebyshev series]]
<math display="block">p_n(x) = a_0 + a_1 T_1(x) + a_2 T_2(x) + \cdots + a_n T_n(x).</math>
The coefficients in the recursion relation for the [[Chebyshev polynomials]] are
<math display="block">\alpha(x) = 2x, \quad \beta = -1,</math>
with the initial conditions
<math display="block">T_0(x) = 1, \quad T_1(x) = x.</math>
Thus, the recurrence is
<math display="block">b_k(x) = a_k + 2xb_{k+1}(x) - b_{k+2}(x)</math>
and the final results are
<math display="block">b_0(x) = a_0 + 2xb_1(x) - b_2(x),</math>
<math display="block">p_n(x) = \tfrac{1}{2} \left[a_0+b_0(x) - b_2(x)\right].</math>
An equivalent expression for the sum is given by
<math display="block">p_n(x) = a_0 + xb_1(x) - b_2(x).</math>
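A minimal Python sketch of this special case (for illustration only; the function name is arbitrary and the coefficients are assumed to be supplied as a list <math>a_0, \ldots, a_n</math>):

<syntaxhighlight lang="python">
def chebyshev_clenshaw(x, a):
    """Evaluate p_n(x) = a[0] + a[1]*T_1(x) + ... + a[n]*T_n(x)."""
    b_k1 = b_k2 = 0.0               # b_{n+1}(x) = b_{n+2}(x) = 0
    for a_k in reversed(a[1:]):     # k = n, n-1, ..., 1
        b_k = a_k + 2.0 * x * b_k1 - b_k2
        b_k2, b_k1 = b_k1, b_k
    return a[0] + x * b_k1 - b_k2   # p_n(x) = a_0 + x*b_1(x) - b_2(x)
</syntaxhighlight>

The result can be checked against a library routine such as <code>numpy.polynomial.chebyshev.chebval</code>, which performs an equivalent evaluation.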
===Meridian arc length on the ellipsoid===
Clenshaw summation is extensively used in [[geodesy|geodetic]] applications.<ref name="Tscherning82">{{Citation | last1=Tscherning | first1=C. C. | last2=Poder | first2=K. | year=1982 | title=Some Geodetic applications of Clenshaw Summation | journal=Bolletino di Geodesia e Scienze Affini | volume=41 | number=4 | pages=349–375 | url=http://cct.gfy.ku.dk/publ_cct/cct80.pdf | access-date=2012-08-02 | archive-url=https://web.archive.org/web/20070612091533/http://cct.gfy.ku.dk/publ_cct/cct80.pdf | archive-date=2007-06-12 | url-status=dead }}</ref> A simple application is summing the trigonometric series to compute the [[meridian arc]] distance on the surface of an ellipsoid. These have the form
<math display="block">m(\theta) = C_0\,\theta + C_1\sin \theta + C_2\sin 2\theta + \cdots + C_n\sin n\theta.</math>
Leaving off the initial <math>C_0\,\theta</math> term, the remainder is a summation of the appropriate form. There is no leading term because <math>\phi_0(\theta) = \sin 0\theta = \sin 0 = 0</math>.

The [[List of trigonometric identities#Chebyshev method|recurrence relation for <math>\sin k\theta</math>]] is
<math display="block">\sin (k+1)\theta = 2 \cos\theta \sin k\theta - \sin (k-1)\theta,</math>
making the coefficients in the recursion relation
<math display="block">\alpha_k(\theta) = 2\cos\theta, \quad \beta_k = -1,</math>
and the evaluation of the series is given by
<math display="block">\begin{align}
b_{n+1}(\theta) &= b_{n+2}(\theta) = 0, \\
b_k(\theta) &= C_k + 2\cos \theta \,b_{k+1}(\theta) - b_{k+2}(\theta),\quad\mathrm{for\ } n\ge k \ge 1.
\end{align}</math>
The final step is made particularly simple because <math>\phi_0(\theta) = \sin 0 = 0</math>, so the end of the recurrence is simply <math>b_1(\theta)\sin\theta</math>; the <math>C_0\,\theta</math> term is added separately:
<math display="block">m(\theta) = C_0\,\theta + b_1(\theta)\sin \theta.</math>
Note that the algorithm requires only the evaluation of two trigonometric quantities <math>\cos \theta</math> and <math>\sin \theta</math>.
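A minimal Python sketch of this summation (for illustration only; the function name is arbitrary and the coefficients are assumed to be supplied as a list <math>C_0, \ldots, C_n</math>):

<syntaxhighlight lang="python">
import math

def meridian_clenshaw(theta, C):
    """Evaluate m(theta) = C[0]*theta + sum_{k=1}^{n} C[k]*sin(k*theta)."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)  # the only trig evaluations needed
    b_k1 = b_k2 = 0.0                                # b_{n+1} = b_{n+2} = 0
    for C_k in reversed(C[1:]):                      # k = n, n-1, ..., 1
        b_k = C_k + 2.0 * cos_t * b_k1 - b_k2
        b_k2, b_k1 = b_k1, b_k
    return C[0] * theta + b_k1 * sin_t               # phi_0 = sin 0 = 0, so only b_1*sin(theta) survives
</syntaxhighlight>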
===Difference in meridian arc lengths===
Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write
<math display="block">
m(\theta_1)-m(\theta_2) = C_0(\theta_1-\theta_2) + \sum_{k=1}^n 2 C_k \sin\bigl({\textstyle\frac12}k(\theta_1-\theta_2)\bigr) \cos\bigl({\textstyle\frac12}k(\theta_1+\theta_2)\bigr).
</math>
Clenshaw summation can be applied in this case<ref>{{cite journal | last = Karney | first = C. F. F. | year = 2024 | volume = 68 | number = 3–4 | pages = 99–120 | title = The area of rhumb polygons | journal = Stud. Geophys. Geod. | doi = 10.1007/s11200-024-0709-z | doi-access = free | postscript = Appendix B | arxiv = 2303.03219 }}</ref> provided we simultaneously compute <math>m(\theta_1)+m(\theta_2)</math> and perform a matrix summation,
<math display="block">
\mathsf M(\theta_1,\theta_2) = \begin{bmatrix} (m(\theta_1) + m(\theta_2)) / 2\\ (m(\theta_1) - m(\theta_2)) / (\theta_1 - \theta_2) \end{bmatrix} = C_0 \begin{bmatrix} \mu \\ 1 \end{bmatrix} + \sum_{k=1}^n C_k \mathsf F_k(\theta_1,\theta_2),
</math>
where
<math display="block">
\begin{align}
\delta &= \tfrac{1}{2}(\theta_1-\theta_2), \\[1ex]
\mu &= \tfrac{1}{2}(\theta_1+\theta_2), \\[1ex]
\mathsf F_k(\theta_1,\theta_2) &= \begin{bmatrix} \cos k \delta \sin k \mu \\ \dfrac{\sin k \delta}\delta \cos k \mu \end{bmatrix}.
\end{align}
</math>
The first element of <math>\mathsf M(\theta_1,\theta_2)</math> is the average value of <math>m</math> and the second element is the average slope. <math>\mathsf F_k(\theta_1,\theta_2)</math> satisfies the recurrence relation
<math display="block">
\mathsf F_{k+1}(\theta_1,\theta_2) = \mathsf A(\theta_1,\theta_2) \mathsf F_k(\theta_1,\theta_2) - \mathsf F_{k-1}(\theta_1,\theta_2),
</math>
where
<math display="block">
\mathsf A(\theta_1,\theta_2) = 2\begin{bmatrix} \cos \delta \cos \mu & -\delta\sin \delta \sin \mu \\ - \displaystyle\frac{\sin \delta}\delta \sin \mu & \cos \delta \cos \mu \end{bmatrix}
</math>
takes the place of <math>\alpha</math> in the recurrence relation, and <math>\beta=-1</math>. The standard Clenshaw algorithm can now be applied to yield
<math display="block">
\begin{align}
\mathsf B_{n+1} &= \mathsf B_{n+2} = \mathsf 0, \\[1ex]
\mathsf B_k &= C_k \mathsf I + \mathsf A \mathsf B_{k+1} - \mathsf B_{k+2}, \qquad\mathrm{for\ } n\ge k \ge 1, \\[1ex]
\mathsf M(\theta_1,\theta_2) &= C_0 \begin{bmatrix}\mu\\1\end{bmatrix} + \mathsf B_1 \mathsf F_1(\theta_1,\theta_2),
\end{align}</math>
where <math>\mathsf B_k</math> are 2×2 matrices. Finally we have
<math display="block">
\frac{m(\theta_1) - m(\theta_2)}{\theta_1 - \theta_2} = \mathsf M_2(\theta_1, \theta_2).
</math>
This technique can be used in the [[limit (mathematics)|limit]] <math>\theta_2 = \theta_1 = \mu</math> and <math>\delta = 0</math> to simultaneously compute <math>m(\mu)</math> and the [[derivative]] <math>dm(\mu)/d\mu</math>, provided that, in evaluating <math>\mathsf F_1</math> and <math>\mathsf A</math>, we take <math>\lim_{\delta \to 0} (\sin k \delta)/\delta = k</math>.

==See also==
*[[Horner scheme]] to evaluate polynomials in [[monomial form]]
*[[De Casteljau's algorithm]] to evaluate polynomials in [[Bézier form]]

==References==
{{reflist|30em}}

{{DEFAULTSORT:Clenshaw Algorithm}}
[[Category:Numerical analysis]]