{{Short description|Differential equations that are linear with respect to the unknown function and its derivatives}}
{{About|linear differential equations with one independent variable|similar equations with two or more independent variables|Partial differential equation#Linear equations of second order}}
{{Differential equations}}

In [[mathematics]], a '''linear differential equation''' is a [[differential equation]] that is [[linear equation|linear]] in the unknown function and its derivatives, so it can be written in the form
<math display="block">a_0(x)y + a_1(x)y' + a_2(x)y'' + \cdots + a_n(x)y^{(n)} = b(x)</math>
where {{nowrap|1={{math|''a''<sub>0</sub>(''x'')}}, ..., {{math|''a''<sub>''n''</sub>(''x'')}}}} and {{math|''b''(''x'')}} are arbitrary [[differentiable function]]s that do not need to be linear, and {{math|''y''′, ..., ''y''<sup>(''n'')</sup>}} are the successive derivatives of an unknown function {{mvar|y}} of the variable {{mvar|x}}.

Such an equation is an [[ordinary differential equation]] (ODE). A ''linear differential equation'' may also be a linear [[partial differential equation]] (PDE), if the unknown function depends on several variables, and the derivatives that appear in the equation are [[partial derivative]]s.

==Types of solution==
{{anchor|solving by quadrature}}
A linear differential equation or a system of linear equations such that the associated homogeneous equations have constant coefficients may be '''solved by quadrature''', which means that the solutions may be expressed in terms of [[antiderivative|integrals]]. This is also true for a linear equation of order one, with non-constant coefficients. An equation of order two or higher with non-constant coefficients cannot, in general, be solved by quadrature. For order two, [[Kovacic's algorithm]] allows deciding whether there are solutions in terms of integrals, and computing them if any.

The solutions of homogeneous linear differential equations with [[polynomial]] coefficients are called [[holonomic function]]s. This class of functions is stable under sums, products, [[derivative|differentiation]], and [[antiderivative|integration]], and contains many common functions and [[special function]]s, such as the [[exponential function]], [[logarithm]], [[sine]], [[cosine]], [[inverse trigonometric functions]], [[error function]], [[Bessel function]]s and [[hypergeometric function]]s. Representing them by their defining differential equations and initial conditions allows most operations of [[calculus]] to be performed algorithmically on these functions, such as computation of [[antiderivative]]s, [[limit (mathematics)|limits]], [[asymptotic expansion]]s, and numerical evaluation to any precision, with a certified error bound.

==Basic terminology==
The highest [[order of derivation]] that appears in a (linear) differential equation is the ''order'' of the equation. The term {{math|''b''(''x'')}}, which does not depend on the unknown function and its derivatives, is sometimes called the ''constant term'' of the equation (by analogy with [[algebraic equation]]s), even when this term is a non-constant function. If the constant term is the [[zero function]], then the differential equation is said to be ''[[Homogeneous differential equation|homogeneous]]'', as it is a [[homogeneous polynomial]] in the unknown function and its derivatives. The equation obtained by replacing, in a linear differential equation, the constant term by the zero function is the ''{{visible anchor|associated homogeneous equation}}''. A differential equation has ''constant coefficients'' if only [[constant function]]s appear as coefficients in the associated homogeneous equation.

A ''{{visible anchor|solution|Solution of a differential equation}}'' of a differential equation is a function that satisfies the equation. The solutions of a homogeneous linear differential equation form a [[vector space]]. In the ordinary case, this vector space has finite dimension, equal to the order of the equation. All solutions of a linear differential equation are found by adding to a particular solution any solution of the associated homogeneous equation.
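As an illustration of this decomposition, the following minimal sketch uses the SymPy computer algebra library (an assumed environment; the equation {{math|1=''y''″ + ''y'' = ''x''}} is chosen purely for illustration):

<syntaxhighlight lang="python">
from sympy import Function, dsolve, Eq, symbols

x = symbols('x')
y = Function('y')

# Non-homogeneous linear equation y'' + y = x (illustrative choice).
sol = dsolve(Eq(y(x).diff(x, 2) + y(x), x), y(x))
print(sol)
# Eq(y(x), C1*sin(x) + C2*cos(x) + x): the term x is a particular
# solution, and C1*sin(x) + C2*cos(x) is the general solution of the
# associated homogeneous equation y'' + y = 0.
</syntaxhighlight>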
==Linear differential operator==
{{Main|Differential operator}}
A ''basic differential operator'' of order {{mvar|i}} is a mapping that maps any [[differentiable function]] to its [[higher derivative|{{mvar|i}}th derivative]], or, in the case of several variables, to one of its [[partial derivative]]s of order {{mvar|i}}. It is commonly denoted
<math display="block">\frac{d^i}{dx^i}</math>
in the case of [[univariate]] functions, and
<math display="block">\frac{\partial^{i_1+\cdots +i_n}}{\partial x_1^{i_1}\cdots \partial x_n^{i_n}}</math>
in the case of functions of {{mvar|n}} variables. The basic differential operators include the derivative of order 0, which is the identity mapping.

A '''linear differential operator''' (abbreviated, in this article, as ''linear operator'' or, simply, ''operator'') is a [[linear combination]] of basic differential operators, with differentiable functions as coefficients. In the univariate case, a linear operator thus has the form<ref>Gershenfeld 1999, p.9</ref>
<math display="block">a_0(x)+a_1(x)\frac{d}{dx} + \cdots +a_n(x)\frac{d^n}{dx^n},</math>
where {{math|''a''<sub>0</sub>(''x''), ..., ''a''<sub>''n''</sub>(''x'')}} are differentiable functions, and the nonnegative integer {{mvar|n}} is the ''order'' of the operator (if {{math|''a''<sub>''n''</sub>(''x'')}} is not the [[zero function]]).

Let {{mvar|L}} be a linear differential operator. The application of {{mvar|L}} to a function {{mvar|f}} is usually denoted {{math|''Lf''}} or {{math|''Lf''(''x'')}}, if one needs to specify the variable (this must not be confused with a multiplication). A linear differential operator is a [[linear operator]], since it maps sums to sums and the product by a [[scalar (mathematics)|scalar]] to the product by the same scalar.

As the sum of two linear operators is a linear operator, as well as the product (on the left) of a linear operator by a differentiable function, the linear differential operators form a [[vector space]] over the [[real number]]s or the [[complex number]]s (depending on the nature of the functions that are considered). They also form a [[free module]] over the [[ring (mathematics)|ring]] of differentiable functions.

The language of operators allows a compact notation for differential equations: if
<math display="block">L = a_0(x)+a_1(x)\frac{d}{dx} + \cdots +a_n(x)\frac{d^n}{dx^n}</math>
is a linear differential operator, then the equation
<math display="block">a_0(x)y +a_1(x)y' + a_2(x)y'' +\cdots +a_n(x)y^{(n)}=b(x)</math>
may be rewritten
<math display="block">Ly=b(x).</math>
There are several variants of this notation; in particular, the variable of differentiation may appear explicitly or not in {{mvar|y}} and the right-hand side of the equation, such as {{math|1=''Ly''(''x'') = ''b''(''x'')}} or {{math|1=''Ly'' = ''b''}}.
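A linear differential operator is easy to realize in a computer algebra system. The following minimal sketch, assuming SymPy and the arbitrary illustrative operator {{math|1=''L'' = ''x'' + 2 ''d''/''dx''}}, applies it to a function and checks linearity on an example:

<syntaxhighlight lang="python">
from sympy import symbols, diff, exp, sin, expand

x = symbols('x')

def L(f):
    """Apply the illustrative operator L = x + 2*d/dx to a function of x."""
    return x*f + 2*diff(f, x)

print(L(exp(3*x)))  # x*exp(3*x) + 6*exp(3*x)

# Linearity: L maps sums to sums and scalar multiples to scalar multiples.
lin_check = L(exp(3*x) + 5*sin(x)) - L(exp(3*x)) - 5*L(sin(x))
print(expand(lin_check))  # 0
</syntaxhighlight>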
The ''kernel'' of a linear differential operator is its [[kernel (linear algebra)|kernel]] as a linear mapping, that is, the [[vector space]] of the solutions of the (homogeneous) differential equation {{math|1=''Ly'' = 0}}. In the case of an ordinary differential operator of order {{mvar|n}}, [[Carathéodory's existence theorem]] implies that, under very mild conditions, the kernel of {{mvar|L}} is a vector space of dimension {{mvar|n}}, and that the solutions of the equation {{math|1=''Ly''(''x'') = ''b''(''x'')}} have the form
<math display="block">S_0(x) + c_1S_1(x) + \cdots + c_n S_n(x),</math>
where {{math|''S''<sub>0</sub>}} is a particular solution, {{math|''S''<sub>1</sub>, ..., ''S''<sub>''n''</sub>}} form a basis of the kernel, and {{math|''c''<sub>1</sub>, ..., ''c''<sub>''n''</sub>}} are arbitrary numbers. Typically, the hypotheses of Carathéodory's theorem are satisfied in an interval {{mvar|I}}, if the functions {{math|''b'', ''a''<sub>0</sub>, ..., ''a''<sub>''n''</sub>}} are continuous in {{mvar|I}}, and there is a positive real number {{mvar|k}} such that {{math|1={{abs|''a''<sub>''n''</sub>(''x'')}} > ''k''}} for every {{mvar|x}} in {{mvar|I}}.

==Homogeneous equation with constant coefficients==
A homogeneous linear differential equation has ''constant coefficients'' if it has the form
<math display="block">a_0y + a_1y' + a_2y'' + \cdots + a_n y^{(n)} = 0</math>
where {{math|''a''<sub>0</sub>, ..., ''a''<sub>''n''</sub>}} are (real or complex) numbers. In other words, it has constant coefficients if it is defined by a linear operator with constant coefficients.

The study of these differential equations with constant coefficients dates back to [[Leonhard Euler]], who introduced the [[exponential function]] {{math|''e''<sup>''x''</sup>}}, which is the unique solution of the equation {{math|1=''f''′ = ''f''}} such that {{math|1=''f''(0) = 1}}. It follows that the {{mvar|n}}th derivative of {{math|''e''<sup>''cx''</sup>}} is {{math|''c''<sup>''n''</sup>''e''<sup>''cx''</sup>}}, and this allows solving homogeneous linear differential equations rather easily.

Let
<math display="block">a_0y + a_1y' + a_2y'' + \cdots + a_ny^{(n)} = 0</math>
be a homogeneous linear differential equation with constant coefficients (that is, {{math|''a''<sub>0</sub>, ..., ''a''<sub>''n''</sub>}} are real or complex numbers).

Searching for solutions of this equation that have the form {{math|''e''<sup>''αx''</sup>}} is equivalent to searching for the constants {{mvar|α}} such that
<math display="block">a_0e^{\alpha x} + a_1\alpha e^{\alpha x} + a_2\alpha^2 e^{\alpha x}+\cdots + a_n\alpha^n e^{\alpha x} = 0.</math>
Factoring out {{math|''e''<sup>''αx''</sup>}} (which is never zero) shows that {{mvar|α}} must be a root of the ''characteristic polynomial''
<math display="block">a_0 + a_1t + a_2 t^2 + \cdots + a_nt^n</math>
of the differential equation, which is the left-hand side of the [[Characteristic equation (calculus)|characteristic equation]]
<math display="block">a_0 + a_1t + a_2 t^2 + \cdots + a_nt^n = 0.</math>

When these roots are all [[distinct roots|distinct]], one has {{mvar|n}} distinct solutions that are not necessarily real, even if the coefficients of the equation are real. These solutions can be shown to be [[linearly independent]], by considering the [[Vandermonde determinant]] of the values of these solutions at {{math|1=''x'' = 0, ..., ''n'' – 1}}. Together they form a [[Basis (linear algebra)|basis]] of the [[vector space]] of solutions of the differential equation (that is, the kernel of the differential operator).
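The passage from the characteristic polynomial to a basis of exponential solutions is mechanical when the roots are simple. A minimal sketch, assuming SymPy and an illustrative third-order equation:

<syntaxhighlight lang="python">
from sympy import symbols, roots, exp, Poly

x, t = symbols('x t')

# Characteristic polynomial of y''' - 6y'' + 11y' - 6y = 0 (illustrative).
p = Poly(t**3 - 6*t**2 + 11*t - 6, t)
print(roots(p))                        # {1: 1, 2: 1, 3: 1}: three simple roots
basis = [exp(r*x) for r in roots(p)]
print(basis)                           # [exp(x), exp(2*x), exp(3*x)]
</syntaxhighlight>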
style="background:#ffffaa; padding: 3px 5px 3px 5px; font-size:larger;" | Example |- | style="font-size:100%; padding:0 5px 0 5px;" | <math display="block">y''''-2y'''+2y''-2y'+y=0</math> has the characteristic equation <math display="block">z^4-2z^3+2z^2-2z+1=0.</math> This has zeros, {{mvar|i}}, {{math|−''i''}}, and {{math|1}} (multiplicity 2). The solution basis is thus <math display="block">e^{ix},\; e^{-ix},\; e^x,\; xe^x.</math> A real basis of solution is thus <math display="block">\cos x,\; \sin x,\; e^x,\; xe^x.</math> |} In the case where the characteristic polynomial has only [[simple root]]s, the preceding provides a complete basis of the solutions vector space. In the case of [[multiple root]]s, more linearly independent solutions are needed for having a basis. These have the form <math display="block">x^ke^{\alpha x},</math> where {{mvar|k}} is a nonnegative integer, {{mvar|α}} is a root of the characteristic polynomial of multiplicity {{mvar|m}}, and {{math|''k'' < ''m''}}. For proving that these functions are solutions, one may remark that if {{mvar|α}} is a root of the characteristic polynomial of multiplicity {{mvar|m}}, the characteristic polynomial may be factored as {{math|''P''(''t'')(''t'' − ''α'')<sup>''m''</sup>}}. Thus, applying the differential operator of the equation is equivalent with applying first {{mvar|m}} times the operator {{nowrap|<math display="inline"> \frac{d}{dx} - \alpha </math>,}} and then the operator that has {{mvar|P}} as characteristic polynomial. By the [[Shift theorem|exponential shift theorem]], <math display="block">\left(\frac{d}{dx}-\alpha\right)\left(x^ke^{\alpha x}\right)= kx^{k-1}e^{\alpha x},</math> and thus one gets zero after {{math|''k'' + 1}} application of {{nowrap|1=<math display="inline"> \frac{d}{dx} - \alpha </math>.}} As, by the [[fundamental theorem of algebra]], the sum of the multiplicities of the roots of a polynomial equals the degree of the polynomial, the number of above solutions equals the order of the differential equation, and these solutions form a basis of the vector space of the solutions. In the common case where the coefficients of the equation are real, it is generally more convenient to have a basis of the solutions consisting of [[real-valued function]]s. Such a basis may be obtained from the preceding basis by remarking that, if {{math|''a'' + ''ib''}} is a root of the characteristic polynomial, then {{math|''a'' – ''ib''}} is also a root, of the same multiplicity. Thus a real basis is obtained by using [[Euler's formula]], and replacing <math>x^ke^{(a+ib)x}</math> and <math>x^ke^{(a-ib)x}</math> by <math>x^ke^{ax} \cos(bx)</math> and <math>x^ke^{ax} \sin(bx)</math>. ===Second-order case=== A homogeneous linear differential equation of the second order may be written <math display="block">y'' + ay' + by = 0,</math> and its characteristic polynomial is <math display="block">r^2 + ar + b.</math> If {{mvar|a}} and {{mvar|b}} are [[real number|real]], there are three cases for the solutions, depending on the discriminant {{math|1=''D'' = ''a''<sup>2</sup> − 4''b''}}. In all three cases, the general solution depends on two arbitrary constants {{math|''c''<sub>1</sub>}} and {{math|''c''<sub>2</sub>}}. * If {{math|''D'' > 0}}, the characteristic polynomial has two distinct real roots {{mvar|α}}, and {{mvar|β}}. 
===Second-order case===
A homogeneous linear differential equation of the second order may be written
<math display="block">y'' + ay' + by = 0,</math>
and its characteristic polynomial is
<math display="block">r^2 + ar + b.</math>
If {{mvar|a}} and {{mvar|b}} are [[real number|real]], there are three cases for the solutions, depending on the discriminant {{math|1=''D'' = ''a''<sup>2</sup> − 4''b''}}. In all three cases, the general solution depends on two arbitrary constants {{math|''c''<sub>1</sub>}} and {{math|''c''<sub>2</sub>}}.
* If {{math|''D'' > 0}}, the characteristic polynomial has two distinct real roots {{mvar|α}} and {{mvar|β}}. In this case, the general solution is <math display="block">c_1 e^{\alpha x} + c_2 e^{\beta x}.</math>
* If {{math|1=''D'' = 0}}, the characteristic polynomial has a double root {{math|−''a''/2}}, and the general solution is <math display="block">(c_1 + c_2 x) e^{-ax/2}.</math>
* If {{math|''D'' < 0}}, the characteristic polynomial has two [[complex conjugate]] roots {{math|''α'' ± ''βi''}}, and the general solution is <math display="block">c_1 e^{(\alpha + \beta i)x} + c_2 e^{(\alpha - \beta i)x},</math> which may be rewritten in real terms, using [[Euler's formula]], as <math display="block"> e^{\alpha x} (c_1\cos(\beta x) + c_2 \sin(\beta x)).</math>
To find the solution {{math|''y''(''x'')}} satisfying {{math|1=''y''(0) = ''d''<sub>1</sub>}} and {{math|1=''y''′(0) = ''d''<sub>2</sub>}}, one equates the values of the above general solution at {{math|0}} and its derivative there to {{math|''d''<sub>1</sub>}} and {{math|''d''<sub>2</sub>}}, respectively. This results in a system of two linear equations in the two unknowns {{math|''c''<sub>1</sub>}} and {{math|''c''<sub>2</sub>}}. Solving this system gives the solution for a so-called [[Cauchy boundary condition|Cauchy problem]], in which the values of the solution of the differential equation and of its derivative at {{math|0}} are specified.
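Such a Cauchy problem can be solved directly in a computer algebra system. A minimal sketch, assuming SymPy and the illustrative values {{math|1=''a'' = 3}}, {{math|1=''b'' = 2}}, {{math|1=''d''<sub>1</sub> = 1}}, {{math|1=''d''<sub>2</sub> = 0}}:

<syntaxhighlight lang="python">
from sympy import Function, dsolve, Eq, symbols

x = symbols('x')
y = Function('y')

# y'' + 3y' + 2y = 0 with y(0) = 1, y'(0) = 0; here D = 9 - 8 > 0.
eq = Eq(y(x).diff(x, 2) + 3*y(x).diff(x) + 2*y(x), 0)
sol = dsolve(eq, y(x), ics={y(0): 1, y(x).diff(x).subs(x, 0): 0})
print(sol)  # Eq(y(x), 2*exp(-x) - exp(-2*x))
</syntaxhighlight>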
==Non-homogeneous equation with constant coefficients==
A non-homogeneous equation of order {{mvar|n}} with constant coefficients may be written
<math display="block">y^{(n)}(x) + a_1 y^{(n-1)}(x) + \cdots + a_{n-1} y'(x)+ a_ny(x) = f(x),</math>
where {{math|''a''<sub>1</sub>, ..., ''a''<sub>''n''</sub>}} are real or complex numbers, {{mvar|f}} is a given function of {{mvar|x}}, and {{mvar|y}} is the unknown function (for the sake of simplicity, "{{math|(''x'')}}" will be omitted in the following).

There are several methods for solving such an equation. The best method depends on the nature of the function {{mvar|f}} that makes the equation non-homogeneous. If {{mvar|f}} is a linear combination of exponential and sinusoidal functions, then the [[exponential response formula]] may be used. If, more generally, {{mvar|f}} is a linear combination of functions of the form {{math|''x''<sup>''n''</sup>''e''<sup>''ax''</sup>}}, {{math|''x''<sup>''n''</sup> cos(''ax'')}}, and {{math|''x''<sup>''n''</sup> sin(''ax'')}}, where {{mvar|n}} is a nonnegative integer, and {{mvar|a}} a constant (which need not be the same in each term), then the [[method of undetermined coefficients]] may be used. Still more generally, the [[annihilator method]] applies when {{mvar|f}} satisfies a homogeneous linear differential equation, typically when {{mvar|f}} is a [[holonomic function]].

The most general method is the [[variation of constants]], which is presented here. The general solution of the associated homogeneous equation
<math display="block">y^{(n)} + a_1 y^{(n-1)} + \cdots + a_{n-1} y'+ a_ny = 0</math>
is
<math display="block">y=u_1y_1+\cdots+ u_ny_n,</math>
where {{math|(''y''<sub>1</sub>, ..., ''y''<sub>''n''</sub>)}} is a basis of the vector space of the solutions and {{math|''u''<sub>1</sub>, ..., ''u''<sub>''n''</sub>}} are arbitrary constants. The method of variation of constants takes its name from the following idea. Instead of considering {{math|''u''<sub>1</sub>, ..., ''u''<sub>''n''</sub>}} as constants, they can be considered as unknown functions that have to be determined for making {{mvar|y}} a solution of the non-homogeneous equation. For this purpose, one adds the constraints
<math display="block">\begin{align} 0 &= u'_1y_1 + u'_2y_2 + \cdots+u'_ny_n \\ 0 &= u'_1y'_1 + u'_2y'_2 + \cdots + u'_n y'_n \\ &\;\;\vdots \\ 0 &= u'_1y^{(n-2)}_1+u'_2y^{(n-2)}_2 + \cdots + u'_n y^{(n-2)}_n, \end{align}</math>
which imply (by the [[product rule]] and [[mathematical induction|induction]])
<math display="block">y^{(i)} = u_1 y_1^{(i)} + \cdots + u_n y_n^{(i)}</math>
for {{math|1=''i'' = 1, ..., ''n'' – 1}}, and
<math display="block">y^{(n)} = u_1 y_1^{(n)} + \cdots + u_n y_n^{(n)} +u'_1y_1^{(n-1)}+u'_2y_2^{(n-1)}+\cdots+u'_ny_n^{(n-1)}.</math>

Replacing {{mvar|y}} and its derivatives in the original equation by these expressions, and using the fact that {{math|''y''<sub>1</sub>, ..., ''y''<sub>''n''</sub>}} are solutions of the original homogeneous equation, one gets
<math display="block">f=u'_1y_1^{(n-1)} + \cdots + u'_ny_n^{(n-1)}.</math>
This equation and the above ones with {{math|0}} as left-hand side form a system of {{mvar|n}} linear equations in {{math|''u''′<sub>1</sub>, ..., ''u''′<sub>''n''</sub>}} whose coefficients are known functions ({{mvar|f}}, the {{math|''y''{{sub|i}}}}, and their derivatives). This system can be solved by any method of [[linear algebra]]. The computation of [[antiderivative]]s gives {{math|''u''<sub>1</sub>, ..., ''u''<sub>''n''</sub>}}, and then {{math|1=''y'' = ''u''<sub>1</sub>''y''<sub>1</sub> + ⋯ + ''u''<sub>''n''</sub>''y''<sub>''n''</sub>}}.

As antiderivatives are defined up to the addition of a constant, one finds again that the general solution of the non-homogeneous equation is the sum of a particular solution and the general solution of the associated homogeneous equation.
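For {{math|1=''n'' = 2}}, the method reduces to solving a 2×2 linear system whose coefficient matrix is the [[Wronskian]] of the basis. A minimal sketch, assuming SymPy and the illustrative equation {{math|1=''y''″ + ''y'' = tan ''x''}} (for which undetermined coefficients would not apply):

<syntaxhighlight lang="python">
from sympy import symbols, cos, sin, tan, Matrix, integrate, simplify

x = symbols('x')

# Variation of constants for y'' + y = tan(x) (illustrative choice).
y1, y2 = cos(x), sin(x)          # basis of solutions of y'' + y = 0
f = tan(x)

# Linear system for u1', u2':  u1'*y1  + u2'*y2  = 0
#                              u1'*y1' + u2'*y2' = f
W = Matrix([[y1, y2], [y1.diff(x), y2.diff(x)]])
u1p, u2p = W.solve(Matrix([0, f]))

# Antiderivatives give u1, u2, hence a particular solution u1*y1 + u2*y2.
y_p = simplify(integrate(u1p, x)*y1 + integrate(u2p, x)*y2)
print(y_p)
</syntaxhighlight>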
==First-order equation with variable coefficients==
The general form of a linear ordinary differential equation of order 1, after dividing out the coefficient of {{math|''y''′(''x'')}}, is:
<math display="block">y'(x) = f(x) y(x) + g(x).</math>
If the equation is homogeneous, i.e. {{math|1=''g''(''x'') = 0}}, one may rewrite and integrate:
<math display="block">\frac{y'}{y}= f, \qquad \log y = k +F, </math>
where {{mvar|k}} is an arbitrary [[constant of integration]] and <math>F=\textstyle\int f\,dx</math> is any [[antiderivative]] of {{mvar|f}}. Thus, the general solution of the homogeneous equation is
<math display="block">y=ce^F,</math>
where {{math|1=''c'' = ''e''<sup>''k''</sup>}} is an arbitrary constant.

For the general non-homogeneous equation, it is useful to multiply both sides of the equation by the [[multiplicative inverse|reciprocal]] {{math|''e''<sup>−''F''</sup>}} of a solution of the homogeneous equation.<ref>Motivation: In analogy with the [[completing the square]] technique, we write the equation as {{math|1=''y''′ − ''fy'' = ''g''}}, and try to modify the left side so it becomes a derivative. Specifically, we seek an "integrating factor" {{math|1=''h'' = ''h''(''x'')}} such that multiplying by it makes the left side equal to the derivative of {{math|''hy''}}, namely {{math|1=''hy''′ − ''hfy'' = (''hy'')′}}. This means {{math|1=''h''′ = −''hf''}}, so that {{math|1=''h'' = ''e''<sup>−∫ ''f'' ''dx''</sup> = ''e''<sup>−''F''</sup>}}, as in the text.</ref> This gives
<math display="block">y'e^{-F}-yfe^{-F}= ge^{-F}.</math>
As {{tmath|1=-fe^{-F} = \tfrac{d}{dx} \left(e^{-F}\right),}} the [[product rule]] allows rewriting the equation as
<math display="block">\frac{d}{dx}\left(ye^{-F}\right)= ge^{-F}.</math>
Thus, the general solution is
<math display="block">y=ce^F + e^F\int ge^{-F}dx,</math>
where {{mvar|c}} is a constant of integration, and {{mvar|F}} is any antiderivative of {{mvar|f}} (changing the antiderivative amounts to changing the constant of integration).

===Example===
Consider the equation
<math display="block">y'(x) + \frac{y(x)}{x} = 3x.</math>
The associated homogeneous equation <math>y'(x) + \frac{y(x)}{x} = 0</math> gives
<math display="block">\frac{y'}{y}=-\frac{1}{x},</math>
that is,
<math display="block">y=\frac{c}{x}.</math>
Dividing the original equation by one of these solutions gives
<math display="block">xy'+y=3x^2.</math>
That is,
<math display="block">(xy)'=3x^2,</math>
<math display="block">xy=x^3 +c,</math>
and
<math display="block">y(x)=x^2+c/x.</math>
For the initial condition
<math display="block">y(1)=\alpha,</math>
one gets the particular solution
<math display="block">y(x)=x^2+\frac{\alpha-1}{x}.</math>
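This example can be checked with a computer algebra system. A minimal sketch assuming SymPy:

<syntaxhighlight lang="python">
from sympy import Function, dsolve, Eq, symbols

x, alpha = symbols('x alpha')
y = Function('y')

# y' + y/x = 3x with the initial condition y(1) = alpha.
eq = Eq(y(x).diff(x) + y(x)/x, 3*x)
print(dsolve(eq, y(x), ics={y(1): alpha}))
# Eq(y(x), x**2 + (alpha - 1)/x), as computed above.
</syntaxhighlight>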
==System of linear differential equations==
{{Main|Matrix differential equation}}
{{see also|system of differential equations}}
A system of linear differential equations consists of several linear differential equations that involve several unknown functions. In general one restricts the study to systems such that the number of unknown functions equals the number of equations.

An arbitrary linear ordinary differential equation and a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. That is, if {{tmath| y', y'', \ldots, y^{(k)} }} appear in an equation, one may replace them by new unknown functions {{tmath|y_1, \ldots, y_k }} that must satisfy the equations {{tmath|1=y'=y_1}} and {{tmath|1=y_i'=y_{i+1},}} for {{math|1=''i'' = 1, ..., ''k'' – 1}}.

A linear system of the first order, which has {{mvar|n}} unknown functions and {{mvar|n}} differential equations, may normally be solved for the derivatives of the unknown functions. If this is not the case, one has a [[differential-algebraic system of equations|differential-algebraic system]], which belongs to a different theory. Therefore, the systems that are considered here have the form
<math display="block">\begin{align} y_1'(x) &= b_1(x) +a_{1,1}(x)y_1+\cdots+a_{1,n}(x)y_n\\[1ex] &\;\;\vdots\\[1ex] y_n'(x) &= b_n(x) +a_{n,1}(x)y_1+\cdots+a_{n,n}(x)y_n, \end{align}</math>
where the {{tmath|b_i}} and the {{tmath|a_{i,j} }} are functions of {{mvar|x}}. In matrix notation, this system may be written (omitting "{{math|(''x'')}}")
<math display="block">\mathbf{y}' = A\mathbf{y}+\mathbf{b}.</math>

The solving method is similar to that of a single first-order linear differential equation, but with complications stemming from noncommutativity of matrix multiplication.

Let
<math display="block">\mathbf{u}' = A\mathbf{u}</math>
be the homogeneous equation associated with the above matrix equation. Its solutions form a [[vector space]] of dimension {{mvar|n}}, and are therefore the columns of a [[square matrix]] of functions {{tmath|U(x)}}, whose [[determinant]] is not the zero function. If {{math|1=''n'' = 1}}, or {{mvar|A}} is a matrix of constants, or, more generally, if {{mvar|A}} commutes with its [[antiderivative]] {{tmath|1=\textstyle B=\int Adx}}, then one may choose {{mvar|U}} equal to the [[matrix exponential|exponential]] of {{mvar|B}}. In fact, in these cases, one has
<math display="block">\frac{d}{dx}\exp(B) = A\exp (B).</math>
In the general case there is no closed-form solution for the homogeneous equation, and one has to use either a [[numerical method]], or an approximation method such as the [[Magnus expansion]].

Knowing the matrix {{mvar|U}}, the general solution of the non-homogeneous equation is
<math display="block">\mathbf{y}(x) = U(x)\mathbf{y_0} + U(x)\int U^{-1}(x)\mathbf{b}(x)\,dx,</math>
where the column matrix <math>\mathbf{y_0}</math> is an arbitrary [[constant of integration]]. If initial conditions are given as
<math display="block">\mathbf y(x_0)=\mathbf y_0,</math>
the solution that satisfies these initial conditions is
<math display="block">\mathbf{y}(x) = U(x)U^{-1}(x_0)\mathbf{y_0} + U(x)\int_{x_0}^x U^{-1}(t)\mathbf{b}(t)\,dt.</math>
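For a constant matrix {{mvar|A}}, the fundamental matrix {{math|1=''U''(''x'') = exp(''xA'')}} can be computed symbolically. A minimal sketch, assuming SymPy and an illustrative 2×2 matrix:

<syntaxhighlight lang="python">
from sympy import Matrix, symbols, simplify

x = symbols('x')

# Constant-coefficient system y' = A y with an illustrative matrix A.
A = Matrix([[0, 1], [-2, -3]])
U = (x*A).exp()              # fundamental matrix U(x) = exp(xA), U(0) = I

# Check that U satisfies the homogeneous matrix equation U' = A U.
print(simplify(U.diff(x) - A*U))  # zero 2x2 matrix
</syntaxhighlight>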
==Higher order with variable coefficients==
A linear ordinary differential equation of order one with variable coefficients may be solved by [[quadrature (mathematics)|quadrature]], which means that the solutions may be expressed in terms of [[antiderivative|integrals]]. This is not the case for order at least two. This is the main result of [[Picard–Vessiot theory]], which was initiated by [[Émile Picard]] and [[Ernest Vessiot]], and whose recent developments are called [[differential Galois theory]].

The impossibility of solving by quadrature can be compared with the [[Abel–Ruffini theorem]], which states that an [[algebraic equation]] of degree at least five cannot, in general, be solved by radicals. This analogy extends to the proof methods and motivates the name ''differential Galois theory''.

Similarly to the algebraic case, the theory allows deciding which equations may be solved by quadrature, and if possible solving them. However, for both theories, the necessary computations are extremely difficult, even with the most powerful computers. Nevertheless, the case of order two with rational coefficients has been completely solved by [[Kovacic's algorithm]].

===Cauchy–Euler equation===
[[Cauchy–Euler equation]]s are examples of equations of any order, with variable coefficients, that can be solved explicitly. These are the equations of the form
<math display="block">x^n y^{(n)}(x) + a_{n-1} x^{n-1} y^{(n-1)}(x) + \cdots + a_0 y(x) = 0,</math>
where {{tmath|a_0, \ldots, a_{n-1} }} are constant coefficients.

==Holonomic functions==
{{Main|holonomic function}}
A [[holonomic function]], also called a ''D-finite function'', is a function that is a solution of a homogeneous linear differential equation with polynomial coefficients.

Most functions that are commonly considered in mathematics are holonomic or quotients of holonomic functions. In fact, holonomic functions include [[polynomial]]s, [[algebraic function]]s, [[logarithm]], [[exponential function]], [[sine]], [[cosine]], [[hyperbolic sine]], [[hyperbolic cosine]], [[inverse trigonometric functions|inverse trigonometric]] and [[inverse hyperbolic functions]], and many [[special function]]s such as [[Bessel function]]s and [[hypergeometric function]]s.

Holonomic functions have several [[closure property|closure properties]]; in particular, sums, products, [[derivative]]s, and [[antiderivative|integrals]] of holonomic functions are holonomic. Moreover, these closure properties are effective, in the sense that there are [[algorithm]]s for computing the differential equation of the result of any of these operations, knowing the differential equations of the input.<ref name=zeilberger>Zeilberger, Doron. ''[https://www.sciencedirect.com/science/article/pii/037704279090042X/pdf?md5=8b21c545d20a52a50dffdf6808bba4a8&isDTMRedir=Y&pid=1-s2.0-037704279090042X-main.pdf A holonomic systems approach to special functions identities]''. Journal of Computational and Applied Mathematics 32.3 (1990): 321–368.</ref>

The usefulness of the concept of holonomic functions results from Zeilberger's theorem, which follows.<ref name=zeilberger/>

A ''holonomic sequence'' is a sequence of numbers that may be generated by a [[recurrence relation]] with polynomial coefficients. The coefficients of the [[Taylor series]] at a point of a holonomic function form a holonomic sequence. Conversely, if the sequence of the coefficients of a [[power series]] is holonomic, then the series defines a holonomic function (even if the [[radius of convergence]] is zero). There are efficient algorithms for both conversions, that is, for computing the recurrence relation from the differential equation, and ''vice versa''.<ref name=zeilberger/>

It follows that, if one represents (in a computer) holonomic functions by their defining differential equations and initial conditions, most [[calculus]] operations can be done automatically on these functions, such as [[derivative]], [[indefinite integral|indefinite]] and [[definite integral]]s, fast computation of Taylor series (thanks to the recurrence relation on its coefficients), evaluation to a high precision with a certified bound on the approximation error, [[limit (mathematics)|limit]]s, localization of [[singularity (mathematics)|singularities]], [[asymptotic behavior]] at infinity and near singularities, proofs of identities, etc.<ref>Benoit, A., Chyzak, F., Darrasse, A., Gerhold, S., Mezzarobba, M., & Salvy, B. (2010, September). ''[https://hal.inria.fr/docs/00/78/30/48/PDF/ddmf.pdf The dynamic dictionary of mathematical functions (DDMF)]''. In International Congress on Mathematical Software (pp. 35–41). Springer, Berlin, Heidelberg.</ref>

==See also==
* [[Continuous-repayment mortgage#Ordinary time differential equation|Continuous-repayment mortgage]]
* [[Fourier transform]]
* [[Laplace transform]]
* [[Linear difference equation]]
* [[Variation of parameters]]

==References==
{{Reflist}}
*{{Citation |author1=Birkhoff, Garrett |author2=Rota, Gian-Carlo |name-list-style=amp |year=1978 |title=Ordinary Differential Equations |isbn=0-471-07411-X |publisher=John Wiley and Sons, Inc. |location=New York}}
*{{Citation |author=Gershenfeld, Neil |year=1999 |title=The Nature of Mathematical Modeling |isbn=978-0-521-57095-4 |publisher=Cambridge University Press |location=Cambridge, UK}}
*{{Citation |author=Robinson, James C. |year=2004 |title=An Introduction to Ordinary Differential Equations |isbn=0-521-82650-0 |publisher=Cambridge University Press |location=Cambridge, UK}}

==External links==
* http://eqworld.ipmnet.ru/en/solutions/ode.htm
* [http://ddmf.msr-inria.inria.fr Dynamic Dictionary of Mathematical Functions]. Automatic and interactive study of many holonomic functions.

{{Differential equations topics}}
{{Authority control}}

{{DEFAULTSORT:Linear Differential Equation}}
[[Category:Differential equations]]