Partial differential equation
== Analytical solutions ==

===Separation of variables===
{{main|Separable partial differential equation}}
Linear PDEs can be reduced to systems of ordinary differential equations by the important technique of separation of variables. This technique rests on a feature of solutions to differential equations: if one can find any solution that solves the equation and satisfies the boundary conditions, then it is ''the'' solution (this also applies to ODEs). We assume as an [[ansatz]] that the dependence of a solution on the independent variables, such as space and time, can be written as a product of terms that each depend on a single variable, and then see whether this can be made to solve the problem.<ref>{{cite book |last1=Gershenfeld |first1=Neil |title=The nature of mathematical modeling |url=https://archive.org/details/naturemathematic00gers_334 |url-access=limited|date=2000|publisher=Cambridge University Press|location=Cambridge|isbn=0521570956|page=[https://archive.org/details/naturemathematic00gers_334/page/n32 27]|edition=Reprinted (with corr.)}}</ref>

In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ordinary differential equation if in one variable – these are in turn easier to solve. This is possible for simple PDEs, which are called [[separable partial differential equation]]s, and the domain is generally a rectangle (a product of intervals). Separable PDEs correspond to [[diagonal matrices]] – thinking of "the value for fixed {{mvar|x}}" as a coordinate, each coordinate can be understood separately. This generalizes to the [[method of characteristics]], and is also used in [[integral transform]]s.
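As a minimal sketch of this technique, one can verify symbolically that a product ansatz solves the one-dimensional heat equation {{math|1=''u<sub>t</sub>'' = ''u<sub>xx</sub>''}} on {{math|[0, 1]}} with zero boundary values. The example below uses SymPy; the choice of equation and the variable names are illustrative, not drawn from the article's sources.

```python
# Sketch: verify the product ansatz u(x, t) = X(x) T(t) for the heat
# equation u_t = u_xx on [0, 1] with u(0, t) = u(1, t) = 0.
import sympy as sp

x, t = sp.symbols("x t")
n = sp.symbols("n", integer=True, positive=True)

# Separated pieces: X(x) = sin(n*pi*x) satisfies the boundary conditions,
# and T(t) = exp(-(n*pi)**2 * t) solves the resulting ODE T' = -(n*pi)**2 T.
u = sp.sin(n * sp.pi * x) * sp.exp(-(n * sp.pi) ** 2 * t)

# The residual u_t - u_xx should vanish identically.
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2))
print(residual)  # 0
```

Because the heat equation is linear, sums of such separated solutions over {{mvar|n}} are again solutions, which is what makes Fourier-series constructions work on this domain.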
===Method of characteristics===
{{main|Method of characteristics}}
The characteristic surface in {{math|1=''n'' = 2}}-dimensional space is called a '''characteristic curve'''.{{sfn|Zachmanoglou|Thoe|1986|pp=115–116}} In special cases, one can find characteristic curves on which a first-order PDE reduces to an ODE – changing coordinates in the domain to straighten these curves allows separation of variables, and is called the [[method of characteristics]]. More generally, applying the method to first-order PDEs in higher dimensions, one may find characteristic surfaces.

===Integral transform===
An [[integral transform]] may transform the PDE to a simpler one, in particular a separable PDE. This corresponds to diagonalizing an operator. An important example is [[Fourier analysis]], which diagonalizes the heat equation using the [[eigenbasis]] of sinusoidal waves. If the domain is finite or periodic, an infinite sum of solutions such as a [[Fourier series]] is appropriate, but an integral of solutions such as a [[Fourier integral]] is generally required for infinite domains. The solution for a point source for the heat equation given above is an example of the use of a Fourier integral.

===Change of variables===
Often a PDE can be reduced to a simpler form with a known solution by a suitable [[Change of variables (PDE)|change of variables]].
For example, the [[Black–Scholes equation]]
<math display="block"> \frac{\partial V}{\partial t} + \tfrac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS \frac{\partial V}{\partial S} - rV = 0 </math>
is reducible to the [[heat equation]]
<math display="block"> \frac{\partial u}{\partial \tau} = \frac{\partial^2 u}{\partial x^2}</math>
by the change of variables<ref>{{cite book |first1=Paul |last1=Wilmott |first2=Sam |last2=Howison |first3=Jeff |last3=Dewynne |title=The Mathematics of Financial Derivatives |location= |publisher=Cambridge University Press |year=1995 |isbn=0-521-49789-2 |pages=76–81 |url=https://books.google.com/books?id=VYVhnC3fIVEC&pg=PA76 }}</ref>
<math display="block">\begin{align} V(S,t) &= v(x,\tau),\\[5px] x &= \ln\left(S \right),\\[5px] \tau &= \tfrac{1}{2} \sigma^2 (T - t),\\[5px] v(x,\tau) &= e^{-\alpha x-\beta\tau} u(x,\tau). \end{align}</math>

===Fundamental solution===
{{main|Fundamental solution}}
Inhomogeneous equations{{clarification needed|date=July 2020}} can often be solved (for constant-coefficient PDEs, always be solved) by finding the [[fundamental solution]] (the solution for a point source <math>P(D)u=\delta</math>), then taking the [[convolution]] with the boundary conditions to get the solution. This is analogous in [[signal processing]] to understanding a filter by its [[impulse response]].

===Superposition principle===
{{further|Superposition principle}}
The superposition principle applies to any linear system, including linear systems of PDEs. A common visualization of this concept is the interaction of two waves in phase combining to produce a greater amplitude, for example {{math|1=sin ''x'' + sin ''x'' = 2 sin ''x''}}. The same principle can be observed in PDEs where the solutions may be real or complex and additive.
If {{math|''u''<sub>1</sub>}} and {{math|''u''<sub>2</sub>}} are solutions of a linear PDE in some function space {{mvar|R}}, then {{math|1=''u'' = ''c''<sub>1</sub>''u''<sub>1</sub> + ''c''<sub>2</sub>''u''<sub>2</sub>}} with any constants {{math|''c''<sub>1</sub>}} and {{math|''c''<sub>2</sub>}} is also a solution of that PDE in the same function space.

===Methods for non-linear equations===
{{see also|Nonlinear partial differential equation}}
There are no generally applicable analytical methods to solve nonlinear PDEs. Still, existence and uniqueness results (such as the [[Cauchy–Kowalevski theorem]]) are often possible, as are proofs of important qualitative and quantitative properties of solutions (obtaining these results is a major part of [[mathematical analysis|analysis]]). Nevertheless, some techniques can be used for several types of equations. The [[h-principle|{{mvar|h}}-principle]] is the most powerful method to solve [[Underdetermined system|underdetermined]] equations. The [[Riquier–Janet theory]] is an effective method for obtaining information about many analytic [[Overdetermined system|overdetermined]] systems. The [[method of characteristics]] can be used in some very special cases to solve nonlinear partial differential equations.<ref>{{cite book |first=J. David |last=Logan |title=An Introduction to Nonlinear Partial Differential Equations |location=New York |publisher=John Wiley & Sons |year=1994 |isbn=0-471-59916-6 |chapter=First Order Equations and Characteristics |pages=51–79 }}</ref>

In some cases, a PDE can be solved via [[perturbation analysis]], in which the solution is considered to be a correction to an equation with a known solution. Alternatives are [[numerical analysis]] techniques, from simple [[finite difference]] schemes to the more mature [[multigrid]] and [[finite element method]]s. Many interesting problems in science and engineering are solved in this way using [[computer]]s, sometimes high-performance [[supercomputer]]s.
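The method of characteristics described above can be checked on its simplest instance, the linear advection equation {{math|1=''u<sub>t</sub>'' + ''cu<sub>x</sub>'' = 0}}: along each characteristic line {{math|1=''x'' − ''ct'' = const}} the PDE reduces to the ODE {{math|1=d''u''/d''t'' = 0}}, so any differentiable {{math|''f''(''x'' − ''ct'')}} is a solution. A symbolic sketch (using SymPy, an illustrative choice rather than part of the article's sources):

```python
# Sketch: the method of characteristics for the linear advection
# equation u_t + c u_x = 0.  Along the characteristic curves
# x - c t = const the PDE reduces to du/dt = 0, so the solution is
# constant along each curve: u(x, t) = f(x - c t) for arbitrary f.
import sympy as sp

x, t, c = sp.symbols("x t c")
f = sp.Function("f")

u = f(x - c * t)  # constant along each characteristic x - c t = const

# The residual u_t + c u_x should vanish for any profile f.
residual = sp.simplify(sp.diff(u, t) + c * sp.diff(u, x))
print(residual)  # 0
```

The same coordinate-straightening idea underlies the nonlinear first-order cases mentioned above, although there the characteristics may cross and the solution can cease to be single-valued.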
===Lie group method===
From 1870, [[Sophus Lie]]'s work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called [[Lie group]]s, be referred to a common source, and that ordinary differential equations which admit the same [[infinitesimal transformation]]s present comparable difficulties of integration. He also emphasized the subject of [[contact transformation|transformations of contact]].

A general approach to solving PDEs uses the symmetry property of differential equations: the continuous [[infinitesimal transformation]]s of solutions to solutions ([[Lie theory]]). Continuous [[group theory]], [[Lie algebras]] and [[differential geometry]] are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, to find their [[Lax pair]]s, recursion operators and [[Bäcklund transform]]s, and finally to find exact analytic solutions to the PDE. Symmetry methods have been used to study differential equations arising in mathematics, physics, engineering, and many other disciplines.

===Semi-analytical methods===
The [[Adomian decomposition method]],<ref>{{cite book |title=Solving Frontier problems of Physics: The decomposition method |first=G. |last=Adomian|author-link=George Adomian|publisher=Kluwer Academic Publishers |year=1994 |isbn=9789401582896 |url=https://books.google.com/books?id=UKPqCAAAQBAJ&q=%22partial+differential%22}}</ref> the [[Aleksandr Lyapunov|Lyapunov]] artificial small parameter method, and his [[homotopy perturbation method]] are all special cases of the more general [[homotopy analysis method]].<ref>{{cite book | last=Liao | first=S. J. |author-link=Liao Shijun| title=Beyond Perturbation: Introduction to the Homotopy Analysis Method | publisher=Chapman & Hall/ CRC Press | location=Boca Raton | year=2003 | isbn=1-58488-407-X }}</ref> These are series expansion methods and, except for the Lyapunov method, are independent of small physical parameters, in contrast to the well-known [[perturbation theory]], giving these methods greater flexibility and solution generality.
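An elementary illustration of the Lie symmetry viewpoint described above: the scaling {{math|(''x'', ''t'') → (''λx'', ''λ''²''t'')}} is a classical point symmetry of the heat equation {{math|1=''u<sub>t</sub>'' = ''u<sub>xx</sub>''}}, mapping solutions to solutions for every value of the group parameter. The check below is a sketch using SymPy; the particular solution chosen is a standard textbook example, not taken from this article's sources.

```python
# Sketch: the one-parameter scaling group (x, t) -> (lam*x, lam**2 * t)
# is a Lie point symmetry of the heat equation u_t = u_xx.
import sympy as sp

x, t, lam = sp.symbols("x t lam", positive=True)

heat = lambda w: sp.diff(w, t) - sp.diff(w, x, 2)

# A known solution of the heat equation: u = exp(-t) * sin(x).
u = sp.exp(-t) * sp.sin(x)
assert sp.simplify(heat(u)) == 0

# Apply the scaling symmetry; the transformed function is again a
# solution (here exp(-lam**2 * t) * sin(lam * x)), for every lam > 0.
v = u.subs({x: lam * x, t: lam**2 * t}, simultaneous=True)
residual = sp.simplify(heat(v))
print(residual)  # 0
```

Starting from one solution, the symmetry thus generates a whole one-parameter family of solutions; the systematic search for all such infinitesimal symmetries is what the Lie group method automates.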