==Applications==

=== Expanding an inverse distance potential ===
{{main|Laplace expansion (potential)}}
The Legendre polynomials were first introduced in 1782 by [[Adrien-Marie Legendre]]<ref>{{cite book |first1=A.-M. |last1=Legendre |chapter=Recherches sur l'attraction des sphéroïdes homogènes |title=Mémoires de Mathématiques et de Physique, présentés à l'Académie Royale des Sciences, par divers savans, et lus dans ses Assemblées |volume=X |pages=411–435 |location=Paris |date=1785 |orig-year=1782 |language=fr |chapter-url=http://edocs.ub.uni-frankfurt.de/volltexte/2007/3757/pdf/A009566090.pdf |url-status=dead |archive-url=https://web.archive.org/web/20090920070434/http://edocs.ub.uni-frankfurt.de/volltexte/2007/3757/pdf/A009566090.pdf |archive-date=2009-09-20 }}</ref> as the coefficients in the expansion of the [[Newtonian potential]]
<math display="block">\frac{1}{\left| \mathbf{x}-\mathbf{x}' \right|} = \frac{1}{\sqrt{r^2+{r'}^2-2r{r'}\cos\gamma}} = \sum_{\ell=0}^\infty \frac{{r'}^\ell}{r^{\ell+1}} P_\ell(\cos \gamma),</math>
where {{math|''r''}} and {{math|''r''′}} are the lengths of the vectors {{math|'''x'''}} and {{math|'''x'''′}} respectively and {{math|''γ''}} is the angle between those two vectors. The series converges when {{math|''r'' > ''r''′}}. The expression gives the [[gravitational potential]] associated with a [[point mass]] or the [[Coulomb potential]] associated with a [[point charge]]. The expansion in Legendre polynomials is useful, for instance, when integrating this expression over a continuous mass or charge distribution.

Legendre polynomials occur in the solution of [[Laplace's equation]] for the static [[electric potential|potential]], {{math|1=∇<sup>2</sup> Φ('''x''') = 0}}, in a charge-free region of space, using the method of [[separation of variables]], where the [[boundary conditions]] have axial symmetry (no dependence on an [[azimuth|azimuthal angle]]). When {{math|'''ẑ'''}} is the axis of symmetry and {{math|''θ''}} is the angle between the position of the observer and the {{math|'''ẑ'''}} axis (the zenith angle), the solution for the potential is
<math display="block">\Phi(r,\theta) = \sum_{\ell=0}^\infty \left( A_\ell r^\ell + B_\ell r^{-(\ell+1)} \right) P_\ell(\cos\theta) \,.</math>
The coefficients {{math|''A''<sub>''ℓ''</sub>}} and {{math|''B''<sub>''ℓ''</sub>}} are determined by the boundary conditions of each problem.<ref>{{cite book |last=Jackson |first=J. D. |title=Classical Electrodynamics |url=https://archive.org/details/classicalelectro00jack_449 |url-access=limited |edition=3rd |publisher=John Wiley & Sons |date=1999 |page=[https://archive.org/details/classicalelectro00jack_449/page/n102 103] |isbn=978-0-471-30932-1}}</ref> Legendre polynomials also appear when solving the [[Schrödinger equation]] in three dimensions for a central force.

=== In multipole expansions ===
[[File:Point axial multipole.svg|right|Diagram for the multipole expansion of electric potential.]]
Legendre polynomials are also useful in expanding functions of the form (this is the same expansion as above, written slightly differently)
<math display="block">\frac{1}{\sqrt{1 + \eta^2 - 2\eta x}} = \sum_{k=0}^\infty \eta^k P_k(x),</math>
which arise naturally in [[multipole expansion]]s. The left-hand side of the equation is the [[generating function]] for the Legendre polynomials.
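This identity is easy to check numerically. The following minimal sketch (an illustration added here, assuming [[NumPy]] and [[SciPy]], whose routine <code>scipy.special.eval_legendre</code> evaluates {{math|''P<sub>k</sub>''(''x'')}}) compares a truncated sum of the series with the closed form:

<syntaxhighlight lang="python">
# Numerical check of the generating-function identity
# 1/sqrt(1 + eta^2 - 2*eta*x) = sum_k eta^k P_k(x), valid for |eta| < 1.
import numpy as np
from scipy.special import eval_legendre

eta = 0.5
x = np.cos(0.3)                     # any point with -1 <= x <= 1
lhs = 1.0 / np.sqrt(1.0 + eta**2 - 2.0 * eta * x)
rhs = sum(eta**k * eval_legendre(k, x) for k in range(50))
print(lhs, rhs)                     # the two values agree to ~1e-15
</syntaxhighlight>

Since the Legendre polynomials are bounded by 1 in absolute value on {{math|[−1, 1]}}, the tail of the series after fifty terms is bounded by {{math|''η''<sup>50</sup>/(1 − ''η'')}}, so the truncation error is already near machine precision for {{math|1=''η'' = 0.5}}.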
As an example, the [[electric potential]] {{math|Φ(''r'',''θ'')}} (in [[spherical coordinates]]) due to a [[point charge]] located on the {{math|''z''}}-axis at {{math|1=''z'' = ''a''}} (see diagram right) varies as
<math display="block">\Phi (r, \theta ) \propto \frac{1}{R} = \frac{1}{\sqrt{r^2 + a^2 - 2ar \cos\theta}}.</math>
If the radius {{math|''r''}} of the observation point {{math|P}} is greater than {{math|''a''}}, the potential may be expanded in the Legendre polynomials
<math display="block">\Phi(r, \theta) \propto \frac{1}{r} \sum_{k=0}^\infty \left( \frac{a}{r} \right)^k P_k(\cos \theta),</math>
where we have defined {{math|1=''η'' = {{sfrac|''a''|''r''}} < 1}} and {{math|1=''x'' = cos ''θ''}}. This expansion is used to develop the usual [[multipole expansion]]. Conversely, if the radius {{math|''r''}} of the observation point {{math|P}} is smaller than {{math|''a''}}, the potential may still be expanded in the Legendre polynomials as above, but with {{math|''a''}} and {{math|''r''}} exchanged. This expansion is the basis of the [[interior multipole expansion]].

=== In trigonometry ===
The trigonometric functions {{math|cos ''nθ''}}, also denoted as the [[Chebyshev polynomials]] {{math|''T<sub>n</sub>''(cos ''θ'') ≡ cos ''nθ''}}, can also be expanded in the Legendre polynomials {{math|''P<sub>n</sub>''(cos ''θ'')}}. The first several orders are as follows:
<math display="block">\begin{alignat}{2}
T_0(\cos\theta)&=1 &&=P_0(\cos\theta),\\[4pt]
T_1(\cos\theta)&=\cos \theta&&=P_1(\cos\theta),\\[4pt]
T_2(\cos\theta)&=\cos 2\theta&&=\tfrac{1}{3}\bigl(4P_2(\cos\theta)-P_0(\cos\theta)\bigr),\\[4pt]
T_3(\cos\theta)&=\cos 3\theta&&=\tfrac{1}{5}\bigl(8P_3(\cos\theta)-3P_1(\cos\theta)\bigr),\\[4pt]
T_4(\cos\theta)&=\cos 4\theta&&=\tfrac{1}{105}\bigl(192P_4(\cos\theta)-80P_2(\cos\theta)-7P_0(\cos\theta)\bigr),\\[4pt]
T_5(\cos\theta)&=\cos 5\theta&&=\tfrac{1}{63}\bigl(128P_5(\cos\theta)-56P_3(\cos\theta)-9P_1(\cos\theta)\bigr),\\[4pt]
T_6(\cos\theta)&=\cos 6\theta&&=\tfrac{1}{1155}\bigl(2560P_6(\cos\theta)-1152P_4(\cos\theta)-220P_2(\cos\theta)-33P_0(\cos\theta)\bigr).
\end{alignat}</math>
This can be summarized for <math>n>0</math> as
<math display="block">T_n(x)=2^{2n-n'}\hat n!\sum_{t=0}^{\hat n} (n-2t+1/2) \frac{(n-t-1)!}{2^{2t}t!(n-1)!} \times \frac{(-1)\cdot 1\cdot 3\cdots (2t-3)}{(1+2n')(3+2n')\cdots (2n-2t+1)}P_{n-2t}(x),</math>
where <math>\hat n\equiv \lfloor n/2\rfloor</math>, <math>n'\equiv \lfloor (n+1)/2\rfloor</math>, and where the products in steps of two in the numerator and denominator are to be interpreted as 1 if they are empty, i.e., if the last factor is smaller than the first factor.

Another property is the expression for {{math|sin (''n'' + 1)''θ''}}, which is
<math display="block">\frac{\sin (n+1)\theta}{\sin\theta}=\sum_{\ell=0}^n P_\ell(\cos\theta) P_{n-\ell}(\cos\theta).</math>
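Both kinds of identity are straightforward to verify numerically. The sketch below (an illustration added here, assuming SciPy's <code>eval_chebyt</code> and <code>eval_legendre</code>) checks the {{math|''T''<sub>4</sub>}} expansion and the {{math|sin (''n'' + 1)''θ''}} identity at a sample angle:

<syntaxhighlight lang="python">
# Check two of the identities above at an arbitrary sample angle.
import numpy as np
from scipy.special import eval_chebyt, eval_legendre

theta = 0.7
x = np.cos(theta)

# T_4(cos t) = (192 P_4 - 80 P_2 - 7 P_0)(cos t) / 105, with P_0 = 1
t4_lhs = eval_chebyt(4, x)
t4_rhs = (192 * eval_legendre(4, x) - 80 * eval_legendre(2, x) - 7) / 105
print(t4_lhs, t4_rhs)               # equal to machine precision

# sin((n+1)t)/sin t = sum over l of P_l(cos t) * P_{n-l}(cos t)
n = 6
s_lhs = np.sin((n + 1) * theta) / np.sin(theta)
s_rhs = sum(eval_legendre(l, x) * eval_legendre(n - l, x) for l in range(n + 1))
print(s_lhs, s_rhs)                 # equal to machine precision
</syntaxhighlight>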
=== In recurrent neural networks ===
A [[recurrent neural network]] that contains a {{math|''d''}}-dimensional memory vector, <math>\mathbf{m} \in \R^d</math>, can be optimized such that its neural activities obey the [[linear time-invariant system]] given by the following [[state-space representation]]:
<math display="block">\theta \dot{\mathbf{m}}(t) = A\mathbf{m}(t) + Bu(t),</math>
<math display="block">\begin{align}
A &= \left[ a \right]_{ij} \in \R^{d \times d} \text{,} \quad && a_{ij} = \left(2i + 1\right) \begin{cases} -1 & i < j \\ (-1)^{i-j+1} & i \ge j \end{cases},\\
B &= \left[ b \right]_i \in \R^{d \times 1} \text{,} \quad && b_i = (2i + 1) (-1)^i .
\end{align}</math>

In this case, the sliding window of <math>u</math> across the past <math>\theta</math> units of time is [[Approximation theory|best approximated]] by a linear combination of the first <math>d</math> shifted Legendre polynomials, weighted together by the elements of <math>\mathbf{m}</math> at time <math>t</math>:
<math display="block">u(t - \theta') \approx \sum_{\ell=0}^{d-1} \widetilde{P}_\ell \left(\frac{\theta'}{\theta} \right) \, m_{\ell}(t) , \quad 0 \le \theta' \le \theta .</math>

When combined with [[deep learning]] methods, these networks can be trained to outperform [[long short-term memory]] units and related architectures, while using fewer computational resources.<ref>{{cite conference |last1=Voelker |first1=Aaron R. |last2=Kajić |first2=Ivana |last3=Eliasmith |first3=Chris |title=Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks |url=http://compneuro.uwaterloo.ca/files/publications/voelker.2019.lmu.pdf |conference=Advances in Neural Information Processing Systems |conference-url=https://neurips.cc |year=2019 }}</ref>
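As an illustrative sketch (not code from the cited paper; the memory dimension, window length, forward-Euler time step, and test signal below are arbitrary choices), the matrices {{math|''A''}} and {{math|''B''}} can be built directly from the formulas above and integrated against an input signal, after which the memory vector decodes delayed samples of the input:

<syntaxhighlight lang="python">
# Build the state-space matrices (A, B) defined above, integrate
# theta * m'(t) = A m(t) + B u(t) with a crude forward-Euler scheme,
# and decode the sliding window with the shifted Legendre polynomials
# P~_l(x) = P_l(2x - 1).
import numpy as np
from scipy.special import eval_legendre

d, theta = 12, 1.0                        # memory dimension, window length
i, j = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
A = (2 * i + 1) * np.where(i < j, -1.0, (-1.0) ** (i - j + 1))
B = (2 * np.arange(d) + 1) * (-1.0) ** np.arange(d)

u = lambda t: np.sin(2 * np.pi * t)       # arbitrary test input
dt, T = 1e-4, 2.0
m = np.zeros(d)
for step in range(int(T / dt)):           # forward-Euler integration
    m += (dt / theta) * (A @ m + B * u(step * dt))

for frac in (0.0, 0.25, 0.5):             # decode u(T - theta') from m(T)
    tp = frac * theta
    est = sum(eval_legendre(l, 2 * tp / theta - 1) * m[l] for l in range(d))
    print(f"u(T - {tp:.2f}) = {u(T - tp):+.3f}, decoded ~ {est:+.3f}")
</syntaxhighlight>

The decoded values only approximate the delayed input; the quality of the approximation improves with a larger memory dimension <math>d</math> and a finer integration step.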