==Statement==
Let <math>(X,L)</math> be a [[real dynamical system]] with <math>n</math> degrees of freedom. Here <math>X</math> is the [[configuration space (physics)|configuration space]] and <math>L=L(t,{\boldsymbol q}(t), {\boldsymbol v}(t))</math> the ''[[Lagrangian mechanics#Lagrangian|Lagrangian]]'', i.e. a smooth [[real-valued function]] such that <math>{\boldsymbol q}(t) \in X,</math> and <math>{\boldsymbol v}(t)</math> is an <math>n</math>-dimensional "vector of speed". (For those familiar with [[differential geometry]], <math>X</math> is a [[smooth manifold]], and <math>L : {\mathbb R}_t \times TX \to {\mathbb R},</math> where <math>TX</math> is the [[tangent bundle]] of <math>X).</math>

Let <math>{\cal P}(a,b,\boldsymbol x_a,\boldsymbol x_b)</math> be the set of smooth paths <math>\boldsymbol q: [a,b] \to X</math> for which <math>\boldsymbol q(a) = \boldsymbol x_a</math> and <math>\boldsymbol q(b) = \boldsymbol x_b.</math> The [[action (physics)|action functional]] <math>S : {\cal P}(a,b,\boldsymbol x_a,\boldsymbol x_b) \to \mathbb{R}</math> is defined via
<math display="block"> S[\boldsymbol q] = \int_a^b L(t,\boldsymbol q(t),\dot{\boldsymbol q}(t))\, dt.</math>

A path <math>\boldsymbol q \in {\cal P}(a,b,\boldsymbol x_a,\boldsymbol x_b)</math> is a [[stationary point]] of <math>S</math> if and only if
{{Equation box 1
|indent =:
|equation = <math>\frac{\partial L}{\partial q^i}(t,\boldsymbol q(t),\dot{\boldsymbol q}(t))-\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot q^i}(t,\boldsymbol q(t),\dot{\boldsymbol q}(t)) = 0, \quad i = 1, \dots, n.</math>
|border colour = #50C878
|background colour = #ECFCF4}}
Here, <math>\dot{\boldsymbol q}(t)</math> is the [[time derivative]] of <math>\boldsymbol q(t).</math> When we say stationary point, we mean a stationary point of <math>S</math> with respect to any small perturbation in <math>\boldsymbol q</math>. See the proofs below for more rigorous detail.
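For example (an illustrative special case, not part of the statement itself), take <math>n = 1</math> and the harmonic-oscillator Lagrangian with mass <math>m</math> and spring constant <math>k</math>,
<math display="block">L(t, q, \dot q) = \tfrac{1}{2} m \dot q^2 - \tfrac{1}{2} k q^2 .</math>
Then <math>\partial L/\partial q = -k q</math> and <math>\partial L/\partial \dot q = m \dot q</math>, so the Euler–Lagrange equation becomes
<math display="block">-k q - \frac{\mathrm{d}}{\mathrm{d}t}\left(m \dot q\right) = 0, \qquad \text{i.e.} \qquad m \ddot q = -k q,</math>
which is Newton's second law for the harmonic oscillator.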
{{math proof|title=Derivation of the one-dimensional Euler–Lagrange equation|proof=
The derivation of the one-dimensional Euler–Lagrange equation is one of the classic proofs in [[mathematics]]. It relies on the [[fundamental lemma of calculus of variations]].

We wish to find a function <math>f</math> which satisfies the boundary conditions <math>f(a) = A</math>, <math>f(b) = B</math>, and which extremizes the functional
<math display="block"> J[f] = \int_a^b L(x,f(x),f'(x))\, \mathrm{d}x\ . </math>
We assume that <math>L</math> is twice continuously differentiable.<ref name='CourantP184'>{{harvnb|Courant|Hilbert|1953|p=184}}</ref> A weaker assumption can be used, but the proof becomes more difficult.{{Citation needed|date=September 2013}}

If <math>f</math> extremizes the functional subject to the boundary conditions, then any slight perturbation of <math>f</math> that preserves the boundary values must either increase <math>J</math> (if <math>f</math> is a minimizer) or decrease <math>J</math> (if <math>f</math> is a maximizer).

Let <math>f + \varepsilon \eta</math> be the result of such a perturbation <math>\varepsilon \eta</math> of <math>f</math>, where <math>\varepsilon</math> is small and <math>\eta</math> is a differentiable function satisfying <math>\eta (a) = \eta (b) = 0</math>. Then define
<math display="block"> \Phi(\varepsilon) = J[f+\varepsilon\eta] = \int_a^b L(x,f(x)+\varepsilon\eta(x), f'(x)+\varepsilon\eta'(x))\, \mathrm{d}x \ .</math>

We now wish to calculate the [[total derivative]] of <math>\Phi</math> with respect to ''ε'':
<math display="block">\begin{align}\frac{\mathrm{d} \Phi}{\mathrm{d} \varepsilon} &= \frac{\mathrm d}{\mathrm d\varepsilon}\int_a^b L(x,f(x)+\varepsilon\eta(x), f'(x)+\varepsilon\eta'(x)) \, \mathrm{d}x \\ &= \int_a^b \frac{\mathrm d}{\mathrm d\varepsilon} L(x,f(x)+\varepsilon\eta(x), f'(x)+\varepsilon\eta'(x)) \, \mathrm{d}x \\ &= \int_a^b \left[\eta(x)\frac{\partial L}{\partial {f} }(x,f(x)+\varepsilon\eta(x),f'(x)+\varepsilon\eta'(x)) + \eta'(x)\frac{\partial L}{\partial f'}(x,f(x)+\varepsilon\eta(x),f'(x)+\varepsilon\eta'(x))\right] \mathrm{d}x \ . \end{align}</math>
The third line follows from the chain rule together with the fact that <math>x</math> does not depend on <math>\varepsilon</math>, i.e. <math>\frac{\mathrm{d} x}{\mathrm{d} \varepsilon} = 0</math>.

When <math>\varepsilon = 0</math>, <math>\Phi</math> has an [[extremum]] value, so that
<math display="block"> \left.\frac{\mathrm d \Phi}{\mathrm d\varepsilon}\right|_{\varepsilon=0} = \int_a^b \left[ \eta(x) \frac{\partial L}{\partial f}(x,f(x),f'(x)) + \eta'(x) \frac{\partial L}{\partial f'}(x,f(x),f'(x)) \,\right]\,\mathrm{d}x = 0 \ .</math>
The next step is to use [[integration by parts]] on the second term of the integrand, yielding
<math display="block"> \int_a^b \left[ \frac{\partial L}{\partial f}(x,f(x),f'(x)) - \frac{\mathrm{d}}{\mathrm{d}x} \frac{\partial L}{\partial f'}(x,f(x),f'(x)) \right] \eta(x)\,\mathrm{d}x + \left[ \eta(x) \frac{\partial L}{\partial f'}(x,f(x),f'(x)) \right]_a^b = 0 \ . </math>
Using the boundary conditions <math>\eta (a) = \eta (b) = 0</math>,
<math display="block"> \int_a^b \left[ \frac{\partial L}{\partial f}(x,f(x),f'(x)) - \frac{\mathrm{d}}{\mathrm{d}x} \frac{\partial L}{\partial f'}(x,f(x),f'(x)) \right] \eta(x)\,\mathrm{d}x = 0 \, . </math>
Applying the [[fundamental lemma of calculus of variations]] now yields the Euler–Lagrange equation
<math display="block"> \frac{\partial L}{\partial f}(x,f(x),f'(x)) - \frac{\mathrm{d}}{\mathrm{d}x} \frac{\partial L}{\partial f'}(x,f(x),f'(x)) = 0 \, . </math>
}}
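A standard illustration of this derivation is the problem of the shortest curve between the points <math>(a, A)</math> and <math>(b, B)</math> in the plane, for which the arc-length functional has integrand <math>L(x, f, f') = \sqrt{1 + f'(x)^2}</math>. Here <math>\partial L/\partial f = 0</math> and <math>\partial L/\partial f' = f'/\sqrt{1 + f'^2}</math>, so the Euler–Lagrange equation reduces to
<math display="block">\frac{\mathrm{d}}{\mathrm{d}x}\,\frac{f'(x)}{\sqrt{1 + f'(x)^2}} = 0,</math>
which forces <math>f'</math> to be constant: the extremal is the straight line through the two endpoints.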
{{math proof|title=Alternative derivation of the one-dimensional Euler–Lagrange equation|proof=
Given a functional
<math display="block">J = \int^b_a L(t, y(t), y'(t))\,\mathrm{d}t</math>
on <math>C^1([a, b])</math> with the boundary conditions <math>y(a) = A</math> and <math>y(b) = B</math>, we proceed by approximating the extremal curve by a polygonal line with <math>n</math> segments and passing to the limit as the number of segments grows arbitrarily large.

Divide the interval <math>[a, b]</math> into <math>n</math> equal segments with endpoints <math>t_0 = a, t_1, t_2, \ldots, t_n = b</math> and let <math>\Delta t = t_k - t_{k - 1}</math>. Rather than a smooth function <math>y(t)</math> we consider the polygonal line with vertices <math>(t_0, y_0),\ldots,(t_n, y_n)</math>, where <math>y_0 = A</math> and <math>y_n = B</math>. Accordingly, our functional becomes a real function of the <math>n - 1</math> variables <math>y_1, \ldots, y_{n-1}</math>, given by
<math display="block">J(y_1, \ldots, y_{n - 1}) \approx \sum^{n - 1}_{k = 0}L\left(t_k, y_k, \frac{y_{k + 1} - y_k}{\Delta t}\right)\Delta t.</math>
Extremals of this new functional defined on the discrete points <math>t_0,\ldots,t_n</math> correspond to points where
<math display="block">\frac{\partial J(y_1,\ldots,y_{n-1})}{\partial y_m} = 0.</math>
Note that a change in <math>y_m</math> affects <math>L</math> not only in the term with index <math>k = m</math> but also, through the third argument, in the term with index <math>k = m - 1</math>: expanding to first order in <math>\Delta y_m</math> and displaying only the third argument of <math>L</math>,
<math display="block"> L\left( \frac{y_{m+1} - (y_{m} + \Delta y_{m})}{\Delta t} \right) = L \left(\frac{y_{m+1} - y_{m}}{\Delta t}\right) - \frac{\partial L}{\partial y'} \frac{\Delta y_m}{\Delta t}\, ,</math>
<math display="block">L \left( \frac{(y_{m} + \Delta y_{m}) - y_{m-1}}{\Delta t} \right) = L \left(\frac{y_{m} - y_{m-1}}{\Delta t}\right) + \frac{\partial L}{\partial y'} \frac{\Delta y_m}{\Delta t}\, .</math>
Evaluating the partial derivative gives
<math display="block">\frac{\partial J}{\partial y_m} = L_y\left(t_m, y_m, \frac{y_{m + 1} - y_m}{\Delta t}\right)\Delta t + L_{y'}\left(t_{m - 1}, y_{m - 1}, \frac{y_m - y_{m - 1}}{\Delta t}\right) - L_{y'}\left(t_m, y_m, \frac{y_{m + 1} - y_m}{\Delta t}\right).</math>
Dividing this equation by <math>\Delta t</math> gives
<math display="block">\frac{1}{\Delta t}\frac{\partial J}{\partial y_m} = L_y\left(t_m, y_m, \frac{y_{m + 1} - y_m}{\Delta t}\right) - \frac{1}{\Delta t}\left[L_{y'}\left(t_m, y_m, \frac{y_{m + 1} - y_m}{\Delta t}\right) - L_{y'}\left(t_{m - 1}, y_{m - 1}, \frac{y_m - y_{m - 1}}{\Delta t}\right)\right],</math>
and taking the limit as <math>\Delta t \to 0</math> of the right-hand side of this expression yields
<math display="block">L_y - \frac{\mathrm{d}}{\mathrm{d}t}L_{y'} = 0.</math>
The left-hand side of the previous equation is the [[functional derivative]] <math>\delta J/\delta y</math> of the functional <math>J</math>. A necessary condition for a differentiable functional to have an extremum on some function is that its functional derivative at that function vanishes, which is granted by the last equation.
}}
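As a concrete illustration of the discretized argument, take <math>L(t, y, y') = \tfrac{1}{2}(y')^2</math>. The discretized functional is
<math display="block">J(y_1, \ldots, y_{n-1}) \approx \sum^{n-1}_{k = 0} \frac{1}{2}\left(\frac{y_{k+1} - y_k}{\Delta t}\right)^2 \Delta t,</math>
and setting <math>\partial J/\partial y_m = 0</math> for <math>m = 1, \ldots, n-1</math> gives
<math display="block">\frac{y_m - y_{m-1}}{\Delta t} - \frac{y_{m+1} - y_m}{\Delta t} = 0, \qquad \text{i.e.} \qquad \frac{y_{m+1} - 2y_m + y_{m-1}}{(\Delta t)^2} = 0,</math>
the second-difference analogue of the Euler–Lagrange equation <math>y'' = 0</math> for this Lagrangian; the solutions through the prescribed endpoints are again straight lines.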