==Areas of study==
The field of numerical analysis includes many sub-disciplines. Some of the major ones are:

===Computing values of functions===
{| class="wikitable" style="float: right; width: 250px; clear: right; margin-left: 1em;"
| Interpolation: Observing that the temperature varies from 20 degrees Celsius at 1:00 to 14 degrees at 3:00, a linear interpolation of this data would conclude that it was 17 degrees at 2:00 and 18.5 degrees at 1:30.
|-
| Extrapolation: If the [[gross domestic product]] of a country has been growing an average of 5% per year and was 100 billion last year, it might be extrapolated that it will be 105 billion this year.
|-
| [[Image:Linear-regression.svg|right|100px|A line through 20 points]]
Regression: In linear regression, given ''n'' points, a line is computed that passes as close as possible to those ''n'' points.
|-
| [[Image:LemonadeJuly2006.JPG|right|100px|How much for a glass of lemonade?]]
Optimization: Suppose lemonade is sold at a [[lemonade stand]], at $1.00 per glass, that 197 glasses of lemonade can be sold per day, and that for each increase of $0.01, one less glass will be sold per day. If $1.485 could be charged, income would be maximized, but due to the constraint of having to charge a whole-cent amount, charging $1.48 or $1.49 per glass will both yield the maximum income of $220.52 per day.
|-
| [[Image:Wind-particle.png|right|100px|Wind direction in blue, true trajectory in black, Euler method in red]]
Differential equation: If 100 fans are set up to blow air from one end of the room to the other and then a feather is dropped into the wind, what happens? The feather will follow the air currents, which may be very complex. One approximation is to measure the speed at which the air is blowing near the feather every second, and advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again. This is called the [[Euler method]] for solving an ordinary differential equation.
|}

One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is to use the [[Horner scheme]], since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control [[round-off error]]s arising from the use of [[floating-point arithmetic]].
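The saving is easy to see in code. A minimal Python sketch of the Horner scheme (the function name and coefficient ordering here are illustrative choices, not a standard API):

<syntaxhighlight lang="python">
def horner(coeffs, x):
    """Evaluate a polynomial at x with Horner's scheme.

    coeffs lists the coefficients from the highest power down to the
    constant term, so [2, 0, 3, 1] represents 2x^3 + 3x + 1.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c  # one multiplication and one addition per coefficient
    return result

print(horner([2, 0, 3, 1], 2.0))  # 2*8 + 3*2 + 1 = 23.0
</syntaxhighlight>

Evaluating a degree-<math>n</math> polynomial this way uses <math>n</math> multiplications and <math>n</math> additions, compared with about <math>2n</math> multiplications if the powers of <math>x</math> are built up separately.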
===Interpolation, extrapolation, and regression===
[[Interpolation]] solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?

[[Extrapolation]] is very similar to interpolation, except that now the value of the unknown function must be found at a point outside the given points.<ref>{{cite book |last1=Brezinski |first1=C. |last2=Zaglia |first2=M.R. |title=Extrapolation methods: theory and practice |publisher=Elsevier |date=2013 |isbn=978-0-08-050622-7 |url={{GBurl|WGviBQAAQBAJ|pg=PR7}}}}</ref>

[[Regression analysis|Regression]] is also similar, but it takes into account that the data are imprecise. Given some points and a measurement of the value of some function at these points (with an error), the unknown function can be estimated. The [[numerical methods for linear least squares|least-squares]] method is one way to achieve this.

===Solving equations and systems of equations===
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation <math>2x+5=3</math> is linear while <math>2x^2+5=3</math> is not.

Much effort has been put into the development of methods for solving [[systems of linear equations]]. Standard direct methods, i.e., methods that use some [[matrix decomposition]], are [[Gaussian elimination]], [[LU decomposition]], [[Cholesky decomposition]] for [[symmetric matrix|symmetric]] (or [[hermitian matrix|Hermitian]]) [[positive-definite matrix|positive-definite matrices]], and [[QR decomposition]] for non-square matrices. Iterative methods such as the [[Jacobi method]], [[Gauss–Seidel method]], [[successive over-relaxation]] and the [[conjugate gradient method]]<ref>{{cite journal |last1=Hestenes |first1=Magnus R. |last2=Stiefel |first2=Eduard |title=Methods of Conjugate Gradients for Solving Linear Systems |journal=Journal of Research of the National Bureau of Standards |volume=49 |issue=6 |pages=409– |date=December 1952 |doi=10.6028/jres.049.044 |url=https://nvlpubs.nist.gov/nistpubs/jres/049/jresv49n6p409_A1b.pdf}}</ref> are usually preferred for large systems. General iterative methods can be developed using a [[matrix splitting]].

[[Root-finding algorithm]]s are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is [[derivative|differentiable]] and the derivative is known, then Newton's method is a popular choice.<ref>{{cite book |last1=Ezquerro Fernández |first1=J.A. |last2=Hernández Verón |first2=M.Á. |title=Newton's method: An updated approach of Kantorovich's theory |publisher=Birkhäuser |date=2017 |isbn=978-3-319-55976-6 |url={{GBurl|A3orDwAAQBAJ|pg=PR11}}}}</ref><ref>{{cite book |first=Peter |last=Deuflhard |title=Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms |publisher=Springer |edition=2nd |series=Computational Mathematics |volume=35 |date=2006 |isbn=978-3-540-21099-3 |url={{GBurl|l20xK__HG_kC|pg=PP1}} }}</ref> [[Linearization]] is another technique for solving nonlinear equations.
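As a rough illustration of the iterative methods mentioned above, a minimal Python sketch of the Jacobi method (the test matrix and fixed iteration count are illustrative; convergence is assumed here via diagonal dominance):

<syntaxhighlight lang="python">
import numpy as np

def jacobi(A, b, iterations=50):
    """Approximate the solution of Ax = b by Jacobi iteration."""
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)           # diagonal entries of A
    R = A - np.diagflat(D)   # off-diagonal part of A
    for _ in range(iterations):
        x = (b - R @ x) / D  # update all components from the previous iterate
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])  # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])
print(jacobi(A, b))  # tends to the exact solution [1/6, 1/3]
</syntaxhighlight>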
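A similarly minimal sketch of Newton's method for a single nonlinear equation (the starting point and tolerance are arbitrary illustrative choices):

<syntaxhighlight lang="python">
def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Find a root of f near x0, given the derivative f_prime."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)  # Newton step f(x)/f'(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2, i.e. the square root of 2
print(newton(lambda x: x*x - 2, lambda x: 2*x, x0=1.0))  # 1.4142135623...
</syntaxhighlight>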
===Solving eigenvalue or singular value problems===
Several important problems can be phrased in terms of [[eigenvalue decomposition]]s or [[singular value decomposition]]s. For instance, the [[image compression|spectral image compression]] algorithm<ref>{{cite web |first1=C.J. |last1=Ogden |first2=T. |last2=Huff |title=The Singular Value Decomposition and Its Applications in Image Compression |date=1997 |work=Math 45 |publisher=College of the Redwoods |url=http://online.redwoods.cc.ca.us/instruct/darnold/laproj/Fall97/Tammie/tammie.pdf |archive-url=https://web.archive.org/web/20060925193348/http://online.redwoods.cc.ca.us/instruct/darnold/laproj/Fall97/Tammie/tammie.pdf |archive-date=25 September 2006 }}</ref> is based on the singular value decomposition. The corresponding tool in statistics is called [[principal component analysis]].

===Optimization===
{{Main|Mathematical optimization}}
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some [[Constraint (mathematics)|constraint]]s.

The field of optimization is further split into several subfields, depending on the form of the [[objective function]] and the constraints. For instance, [[linear programming]] deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the [[simplex algorithm|simplex method]]. The method of [[Lagrange multipliers]] can be used to reduce optimization problems with constraints to unconstrained optimization problems.

===Evaluating integrals===
{{Main|Numerical integration}}
Numerical integration, in some instances also known as numerical [[quadrature (mathematics)|quadrature]], asks for the value of a definite [[integral]].<ref>{{cite book |last1=Davis |first1=P.J. |last2=Rabinowitz |first2=P. |title=Methods of numerical integration |publisher=Courier Corporation |date=2007 |isbn=978-0-486-45339-2 |url={{GBurl|gGCKdqka0HAC|pg=PR5}}}}</ref> Popular methods use one of the [[Newton–Cotes formulas]] (like the midpoint rule or [[Simpson's rule]]) or [[Gaussian quadrature]].<ref>{{MathWorld|author=Weisstein, Eric W. |title=Gaussian Quadrature |id=GaussianQuadrature}}</ref> These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use [[Monte Carlo method|Monte Carlo]] or [[quasi-Monte Carlo method]]s (see [[Monte Carlo integration]]<ref>{{cite book |first=John |last=Geweke |chapter=15. Monte Carlo simulation and numerical integration |chapter-url=https://www.sciencedirect.com/science/article/pii/S1574002196010179 |doi=10.1016/S1574-0021(96)01017-9 |title=Handbook of Computational Economics |publisher=Elsevier |volume=1 |date=1996 |isbn=9780444898579 |pages=731–800 }}</ref>), or, in modestly large dimensions, the method of [[sparse grid]]s.
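To make the contrast concrete, a minimal Python sketch of a Newton–Cotes-style rule (the composite trapezoidal rule) next to a plain Monte Carlo estimate of the same integral (the subinterval and sample counts are arbitrary illustrative choices):

<syntaxhighlight lang="python">
import random

def trapezoid(f, a, b, n=1000):
    """Composite trapezoidal rule on n subintervals of [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

def monte_carlo(f, a, b, samples=100_000):
    """Monte Carlo estimate: (b - a) times the average of f at random points."""
    total = sum(f(random.uniform(a, b)) for _ in range(samples))
    return (b - a) * total / samples

f = lambda x: x * x  # the integral of x^2 over [0, 1] is exactly 1/3
print(trapezoid(f, 0.0, 1.0))    # ~0.3333335, deterministic error O(h^2)
print(monte_carlo(f, 0.0, 1.0))  # ~0.333, statistical error O(1/sqrt(samples))
</syntaxhighlight>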
===Differential equations===
{{Main|Numerical ordinary differential equations|Numerical partial differential equations}}
Numerical analysis is also concerned with computing (in an approximate way) the solution of [[differential equations]], both [[ordinary differential equations]] and [[partial differential equations]].<ref>{{cite book |first=A. |last=Iserles |title=A first course in the numerical analysis of differential equations |publisher=Cambridge University Press |edition=2nd |date=2009 |isbn=978-0-521-73490-5 |url={{GBurl|M0tkw4oUucoC|pg=PR5}} }}</ref>

Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace.<ref>{{cite book |first=W.F. |last=Ames |title=Numerical methods for partial differential equations |publisher=Academic Press |edition=3rd |date=2014 |isbn=978-0-08-057130-0 |url={{GBurl|KmjiBQAAQBAJ|pg=PP7}} }}</ref> This can be done by a [[finite element method]],<ref>{{cite book |first=C. |last=Johnson |title=Numerical solution of partial differential equations by the finite element method |publisher=Courier Corporation |date=2012 |isbn=978-0-486-46900-3 |url={{GBurl|0IFCAwAAQBAJ|p=2}} }}</ref><ref>{{cite book |last1=Brenner |first1=S. |last2=Scott |first2=R. |title=The mathematical theory of finite element methods |publisher=Springer |edition=2nd |date=2013 |isbn=978-1-4757-3658-8 |url={{GBurl|ServBwAAQBAJ|pg=PR11}}}}</ref><ref>{{cite book |last1=Strang |first1=G. |last2=Fix |first2=G.J. |orig-year=1973 |title=An analysis of the finite element method |publisher=Wellesley-Cambridge Press |date=2018 |isbn=9780980232783 |url=https://archive.org/details/analysisoffinite0000stra |oclc=1145780513 |edition=2nd}}</ref> a [[finite difference]] method,<ref>{{cite book |last=Strikwerda |first=J.C. |title=Finite difference schemes and partial differential equations |publisher=SIAM |edition=2nd |date=2004 |isbn=978-0-89871-793-8 |url={{GBurl|mbdt5XT25AsC|pg=PP5}}}}</ref> or (particularly in engineering) a [[finite volume method]].<ref>{{cite book |first=Randall |last=LeVeque |title=Finite Volume Methods for Hyperbolic Problems |publisher=Cambridge University Press |date=2002 |isbn=978-1-139-43418-8 |url={{GBurl|mfAfAwAAQBAJ|pg=PT6}}}}</ref> This discretization reduces the problem to the solution of an algebraic equation; the theoretical justification of these methods often involves theorems from [[functional analysis]].
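Returning to the feather example in the table above, a minimal Python sketch of the Euler method for an ordinary differential equation <math>x'(t) = f(t, x)</math> (the equation and step size are illustrative choices):

<syntaxhighlight lang="python">
def euler(f, t0, x0, h, steps):
    """Advance x' = f(t, x) from (t0, x0) using fixed steps of size h."""
    t, x = t0, x0
    for _ in range(steps):
        x += h * f(t, x)  # move in a straight line at the current slope
        t += h
    return x

# x' = -x with x(0) = 1; the exact solution is exp(-t)
print(euler(lambda t, x: -x, t0=0.0, x0=1.0, h=0.01, steps=100))
# ~0.3660, close to exp(-1) = 0.3679...; the error shrinks as h does
</syntaxhighlight>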