Laguerre's method
{{short description|Polynomial root-finding algorithm}}
In [[numerical analysis]], '''Laguerre's method''' is a [[root-finding algorithm]] tailored to [[polynomial]]s. In other words, Laguerre's method can be used to numerically solve the equation {{math|''p''(''x'') {{=}} 0}} for a given polynomial {{math|''p''(''x'')}}. One of the most useful properties of this method is that, judging from extensive empirical study, it is very close to being a "sure-fire" method: it is almost guaranteed to converge to ''some'' root of the polynomial, no matter what initial guess is chosen. However, for [[computer]] computation more efficient methods are known, which are guaranteed to find all roots (see {{slink|Root-finding algorithm|Roots of polynomials}}) or all real roots (see [[Real-root isolation]]).

The method is named in honour of the French mathematician [[Edmond Laguerre]].

==Definition==
The algorithm of the Laguerre method to find one root of a polynomial {{math|''p''(''x'')}} of degree {{mvar|n}} is:
* Choose an initial guess {{math|''x''<sub>0</sub>}}
* For {{math|''k'' {{=}} 0, 1, 2, ...}}
** If <math>p(x_k)</math> is very small, exit the loop
** Calculate <math>G = \frac{p'(x_k)}{p(x_k)}</math>
** Calculate <math>H = G^2 - \frac{p''(x_k)}{p(x_k)}</math>
** Calculate <math>a = \frac{n}{G \pm \sqrt{(n-1)(nH - G^2)}}</math>, where the sign is chosen to give the denominator with the larger absolute value, to avoid [[catastrophic cancellation]]
** Set <math>x_{k+1} = x_k - a</math>
* Repeat until ''a'' is small enough or the maximum number of iterations has been reached.

If a root has been found, the corresponding linear factor can be removed from {{mvar|p}}. This deflation step reduces the degree of the polynomial by one, so that eventually approximations for all roots of {{mvar|p}} can be found. Note however that deflation can lead to approximate factors that differ significantly from the corresponding exact factors.
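The iteration above can be sketched in Python. This is a minimal illustration, not a reference implementation; the function name <code>laguerre</code> and the convention of listing coefficients with the highest degree first are choices made here.

```python
import cmath

def laguerre(coeffs, x0, tol=1e-12, max_iter=100):
    """Find one root of the polynomial with the given coefficients
    (highest degree first), using Laguerre's method.
    Works in complex arithmetic so complex roots can be reached."""
    n = len(coeffs) - 1  # degree of p

    def eval_p(x):
        # Evaluate p, p', and p'' at x in one Horner pass.
        p = dp = h = 0
        for c in coeffs:
            h = h * x + dp
            dp = dp * x + p
            p = p * x + c
        return p, dp, 2 * h

    x = complex(x0)
    for _ in range(max_iter):
        p, dp, d2p = eval_p(x)
        if abs(p) < tol:
            return x
        G = dp / p
        H = G * G - d2p / p
        root = cmath.sqrt((n - 1) * (n * H - G * G))
        # Choose the sign giving the denominator of larger magnitude,
        # avoiding catastrophic cancellation.
        d1, d2 = G + root, G - root
        a = n / (d1 if abs(d1) >= abs(d2) else d2)
        x = x - a
        if abs(a) < tol:
            return x
    return x
```

For example, <code>laguerre([1, 0, -1], 2.0)</code> (that is, {{math|''p''(''x'') {{=}} ''x''<sup>2</sup> − 1}} started at 2) converges to the root 1, and <code>laguerre([1, 0, 1], 1.0)</code> converges to the complex root {{mvar|i}} even though the starting guess is real.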
This error is least if the roots are found in the order of increasing magnitude.

==Derivation==
The [[fundamental theorem of algebra]] states that every {{mvar|n}}th degree polynomial <math>p</math> can be written in the form
:<math>p(x) = C \left( x - x_1 \right) \left( x - x_2 \right) \cdots \left( x - x_n \right),</math>
where <math>x_1,\ x_2,\ \ldots,\ x_n</math> are the roots of the polynomial. If we take the [[natural logarithm]] of the absolute value of both sides, we find that
:<math>\ln \bigl| p(x) \bigr| = \ln \bigl| C \bigr| + \ln \bigl| x - x_1 \bigr| + \ln \bigl| x - x_2 \bigr| + \cdots + \ln \bigl| x - x_n \bigr| .</math>
Denote the [[logarithmic derivative]] by
:<math>\begin{align} G &= \frac{\mathrm{d}}{\mathrm{d}x} \ln \bigl| p(x) \bigr| = \frac{1}{x - x_1} + \frac{1}{x - x_2} + \cdots + \frac{1}{x - x_n} \\ &= \frac{p'(x)}{p(x)}, \end{align}</math>
and the negated second derivative by
:<math>\begin{align} H &= -\frac{\mathrm{d}^2}{\mathrm{d}x^2} \ln \bigl| p(x) \bigr| = \frac{1}{(x - x_1)^2} + \frac{1}{(x - x_2)^2} + \cdots + \frac{1}{(x - x_n)^2} \\ &= \left( \frac{p'(x)}{p(x)} \right)^2 - \frac{p''(x)}{p(x)}. \end{align}</math>
We then make what {{harvp|Acton|1970}}{{page needed|date=August 2024}} calls a "drastic set of assumptions": that the root we are looking for, say <math>x_1,</math> is a short distance <math>a</math> away from our guess <math>x,</math> and that all the other roots are clustered together at some further distance <math>b.</math> If we denote these distances by
:<math>a \equiv x - x_1</math>
and
:<math>b \approx x - x_2 \approx x - x_3 \approx \cdots \approx x - x_n,</math>
or exactly,
:<math>b \equiv \operatorname{harmonic\ mean}\bigl\{ x - x_2,\ x - x_3,\ \ldots,\ x - x_n \bigr\},</math>
then our equation for <math>G</math> may be written as
:<math>G = \frac{1}{a} + \frac{n - 1}{b}
</math>
and the expression for <math>H</math> becomes
:<math>H = \frac{1}{a^2} + \frac{n - 1}{b^2}.</math>
Solving these equations for <math>a,</math> we find that
:<math>a = \frac{n}{G \pm \sqrt{\bigl( n - 1 \bigr)\bigl( n H - G^2 \bigr)}},</math>
where in this case the square root of the (possibly) [[complex number]] is chosen to produce the largest absolute value of the denominator, and so make <math>a</math> as small as possible; equivalently, it satisfies
:<math>\operatorname{Re} \Bigl\{ \overline{G} \sqrt{\left( n - 1 \right) \left( n H - G^2 \right)} \Bigr\} > 0,</math>
where <math>\operatorname{Re}</math> denotes the real part of a complex number, and <math>\overline{G}</math> is the complex conjugate of <math>G;</math> or
:<math>a = \frac{p(x)}{p'(x)} \cdot \Biggl\{ \frac{1}{n} + \frac{n - 1}{n} \sqrt{1 - \frac{n}{n - 1} \frac{p(x)\, p''(x)}{p'(x)^2}} \Biggr\}^{-1},</math>
where the square root of a complex number is chosen to have a non-negative real part. For small values of <math>p(x)</math> this formula differs from the offset of the third-order [[Halley's method]] by an error of <math>O\bigl( p(x)^3 \bigr),</math> so convergence close to a root will be cubic as well.

===Fallback===
If the "drastic set of assumptions" does not work well for some particular polynomial {{math|''p''(''x'')}}, then {{math|''p''(''x'')}} can be transformed into a related polynomial {{mvar|r}} for which the assumptions are viable; e.g. by first shifting the origin towards a suitable complex number {{mvar|w}}, giving a second polynomial {{math|''q''(''x'') {{=}} ''p''(''x'' − ''w'')}} whose distinct roots have clearly distinct magnitudes, if necessary (which it will be if some roots are complex conjugates, since these share the same magnitude).
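The origin shift just described can be carried out directly on the coefficient list. A sketch follows, again assuming coefficients are stored with the highest degree first; the helper name <code>taylor_shift</code> is a label chosen here, not standard terminology.

```python
def taylor_shift(coeffs, s):
    """Return the coefficients (highest degree first) of q(x) = p(x + s),
    computed by repeated Horner passes at s (a "Taylor shift").
    To form q(x) = p(x - w) as in the text, call taylor_shift(coeffs, -w)."""
    c = list(coeffs)
    n = len(c) - 1
    for i in range(n):
        # Each pass performs one synthetic division, accumulating
        # the Taylor coefficients of p about the point s.
        for j in range(1, n - i + 1):
            c[j] += s * c[j - 1]
    return c
```

For example, shifting {{math|''p''(''x'') {{=}} ''x''<sup>2</sup>}} by {{math|''s'' {{=}} 1}} yields {{math|(''x'' + 1)<sup>2</sup> {{=}} ''x''<sup>2</sup> + 2''x'' + 1}}, i.e. <code>taylor_shift([1, 0, 0], 1)</code> returns <code>[1, 2, 1]</code>.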
After that, a third polynomial {{mvar|r}} is obtained from {{math|''q''(''x'')}} by repeatedly applying the root-squaring transformation from [[Graeffe's method]], enough times to make the smaller roots significantly smaller than the largest root (and so clustered comparatively nearer to zero). The approximate root from Graeffe's method can then be used to start the new iteration for Laguerre's method on {{mvar|r}}. An approximate root for {{math|''p''(''x'')}} may then be obtained straightforwardly from that for {{mvar|r}}.

If we make the even more extreme assumption that the terms in <math>G</math> corresponding to the roots <math>x_2,\ x_3,\ \ldots,\ x_n</math> are negligibly small compared to the term corresponding to the root <math>x_1,</math> this leads to [[Newton's method]].

==Properties==
[[Image:Attraction zones of Laguerre's.png|250px|right|thumb|Attraction zones of Laguerre's method for the polynomial <math>p(x) = x^4 + 2 x^3 + 3 x^2 + 4 x + 1.</math>]]
If {{mvar|x}} is a simple root of the polynomial <math>p(x),</math> then Laguerre's method converges [[rate of convergence|cubically]] whenever the initial guess <math>x^{(0)}</math> is close enough to the root <math>x_1.</math> On the other hand, when <math>x_1</math> is a [[multiple root]], convergence is merely linear, with the penalty of calculating values for the polynomial and its first and second derivatives at each stage of the iteration.

A major advantage of Laguerre's method is that it is almost guaranteed to converge to ''some'' root of the polynomial ''no matter where the initial approximation is chosen''. This is in contrast to other methods such as the [[Newton's method|Newton–Raphson method]] and [[Steffensen's method]], which notoriously fail to converge for poorly chosen initial guesses.
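This fragility of Newton–Raphson can be seen on a classic textbook example (chosen here for illustration, not taken from the sources above): for {{math|''p''(''x'') {{=}} ''x''<sup>3</sup> − 2''x'' + 2}}, the Newton iteration started at 0 oscillates between 0 and 1 forever and never reaches the real root near −1.769.

```python
def newton_step(x):
    """One Newton-Raphson step for p(x) = x**3 - 2*x + 2."""
    p = x**3 - 2*x + 2
    dp = 3*x**2 - 2
    return x - p / dp

# Starting from 0, the iterates fall into a two-cycle: 0 -> 1 -> 0 -> ...
x = 0.0
orbit = []
for _ in range(6):
    x = newton_step(x)
    orbit.append(x)
# orbit == [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```

Laguerre's method applied to the same polynomial from the same starting point escapes such cycles, which is precisely the robustness property described above.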
Laguerre's method may even converge to a complex root of the polynomial, because the radicand of the square root in the formula for the correction <math>a</math> given above may be negative; this is manageable so long as complex numbers can be conveniently accommodated in the calculation. This may be considered an advantage or a liability depending on the application to which the method is applied. Empirical evidence shows that convergence failure is extremely rare, making this a good candidate for a general-purpose polynomial root-finding algorithm. However, given the fairly limited theoretical understanding of the algorithm, many numerical analysts are hesitant to use it as a default, and prefer better-understood methods such as the [[Jenkins–Traub algorithm]], for which more solid theory has been developed and whose limits are known.

The algorithm is fairly simple to use compared to other "sure-fire" methods, and simple enough for hand calculation, aided by a pocket calculator, if a computer is not available. The speed at which the method converges means that one is only very rarely required to compute more than a few iterations to obtain high accuracy.

==References==
{{refbegin|25em|small=yes}}
*{{cite book |first=Forman S. |last=Acton |author-link=Forman S. Acton |year=1970 |title=Numerical Methods that {{silver|'''{{sc|usually}}'''}} Work |publisher=Harper & Row |isbn=0-88385-450-3 |url=https://archive.org/details/numericalmethods00form |url-access=registration |via=Internet Archive (archive.org) }}
*{{cite journal |first=S. |last=Goedecker |year=1994 |title=Remark on algorithms to find roots of polynomials |journal=[[SIAM Journal on Scientific Computing]] |volume=15 |issue=5 |pages=1059–1063 |doi=10.1137/0915064 |bibcode=1994SJSC...15.1059G }}
*{{cite thesis |first=Wankere R.
|last=Mekwi |year=2001 |title=Iterative methods for roots of polynomials |degree=Master's |publisher=University of Oxford |place=Oxford, UK |series=Mathematics |url=http://eprints.maths.ox.ac.uk/archive/00000016/ |url-status=dead |archive-url=https://archive.today/20121223120853/http://eprints.maths.ox.ac.uk/archive/00000016/ |archive-date=2012-12-23 |df=dmy-all }}
*{{cite journal |first=V.Y. |last=Pan |year=1997 |title=Solving a polynomial equation: Some history and recent progress |journal=[[SIAM Review]] |volume=39 |issue=2 |pages=187–220 |doi=10.1137/S0036144595288554 |bibcode=1997SIAMR..39..187P }}
*{{cite book |first1=W.H. |last1=Press |first2=S.A. |last2=Teukolsky |first3=W.T. |last3=Vetterling |first4=B.P. |last4=Flannery |year=2007 |section=Section 9.5.3. Laguerre's method |title=[[Numerical Recipes]]: The art of scientific computing |edition=3rd |place=New York, NY |publisher=Cambridge University Press |isbn=978-0-521-88068-8 |section-url=http://apps.nrbook.com/empanel/index.html#pg=466 }}
*{{cite book |first1=Anthony |last1=Ralston |first2=Philip |last2=Rabinowitz |year=1978 |title=A First Course in Numerical Analysis |publisher=McGraw-Hill |isbn=0-07-051158-6 }}
{{refend}}

{{Root-finding algorithms}}

[[Category:Polynomial factorization algorithms]]