{{Short description|Methods for numerical approximations}}
{{Use dmy dates|date=October 2020}}
[[Image:Ybc7289-bw.jpg|thumb|250px|right|Babylonian clay tablet [[YBC 7289]] (c. 1800–1600 BCE) with annotations. The approximation of the [[square root of 2]] is four [[sexagesimal]] figures, which is about six [[decimal]] figures. 1 + 24/60 + 51/60<sup>2</sup> + 10/60<sup>3</sup> = 1.41421296...<ref>{{Cite web |url=http://it.stlawu.edu/%7Edmelvill/mesomath/tablets/YBC7289.html |title=Photograph, illustration, and description of the ''root(2)'' tablet from the Yale Babylonian Collection |access-date=2 October 2006 |archive-date=13 August 2012 |archive-url=https://web.archive.org/web/20120813054036/http://it.stlawu.edu/%7Edmelvill/mesomath/tablets/YBC7289.html |url-status=dead }}</ref>]]

'''Numerical analysis''' is the study of [[algorithm]]s that use numerical [[approximation]] (as opposed to [[symbolic computation|symbolic manipulations]]) for the problems of [[mathematical analysis]] (as distinguished from [[discrete mathematics]]). It is the study of numerical methods that attempt to find approximate solutions of problems rather than exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also in the life and social sciences, such as economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: [[ordinary differential equation]]s as found in [[celestial mechanics]] (predicting the motions of planets, stars and galaxies), [[numerical linear algebra]] in data analysis,<ref>{{cite book |first=J.W. |last=Demmel |title=Applied numerical linear algebra |publisher=[[Society for Industrial and Applied Mathematics|SIAM]] |date=1997 |isbn=978-1-61197-144-6 |doi=10.1137/1.9781611971446 |url=https://epubs.siam.org/doi/epdf/10.1137/1.9781611971446.fm}}</ref><ref>{{cite book |last1=Ciarlet |first1=P.G. |last2=Miara |first2=B. |last3=Thomas |first3=J.M. |title=Introduction to numerical linear algebra and optimization |publisher=Cambridge University Press |date=1989 |isbn=9780521327886 |oclc=877155729 }}</ref><ref>{{cite book |last1=Trefethen |first1=Lloyd |last2=Bau III |first2=David |title=Numerical Linear Algebra |publisher=SIAM |date=1997 |isbn=978-0-89871-361-9 |url={{GBurl|4Mou5YpRD_kC|pg=PR7}}}}</ref> and [[stochastic differential equation]]s and [[Markov chain]]s for simulating living cells in medicine and biology.

Before modern computers, [[numerical method]]s often relied on hand [[interpolation]] formulas, using data from large printed tables. Since the mid-20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms.<ref name="20c">{{cite book |last1=Brezinski |first1=C. |last2=Wuytack |first2=L. |title=Numerical analysis: Historical developments in the 20th century |publisher=Elsevier |date=2012 |isbn=978-0-444-59858-5 |url={{GBurl|dt3Z1yu2VxwC|pg=PP6}}}}</ref>

The numerical point of view goes back to the earliest mathematical writings. A tablet from the [[Yale Babylonian Collection]] ([[YBC 7289]]) gives a [[sexagesimal]] numerical approximation of the [[square root of 2]], the length of the [[diagonal]] in a [[unit square]].
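The tablet's value can be checked in a few lines; a minimal Python sketch (only the standard library's <code>math</code> module is used, and the printed values are indicative):

<syntaxhighlight lang="python">
# Check the Babylonian sexagesimal approximation of sqrt(2) from YBC 7289.
import math

approx = 1 + 24/60 + 51/60**2 + 10/60**3
print(approx)                      # 1.4142129629...
print(math.sqrt(2))                # 1.4142135623...
print(abs(approx - math.sqrt(2)))  # about 6e-7, so about six decimal figures agree
</syntaxhighlight>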
Numerical analysis continues this long tradition: rather than giving exact symbolic answers, which can be applied to real-world measurements only by translating them into digits, it produces approximate solutions within specified error bounds.

==Applications==
The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically:
* Advanced numerical methods are essential in making [[numerical weather prediction]] feasible.
* Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
* Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving [[partial differential equation]]s numerically.
* In the financial field, [[hedge fund]]s (private investment funds) and other financial institutions use [[quantitative finance]] tools from numerical analysis to attempt to calculate the value of [[share capital|stock]]s and [[Derivative (finance)|derivatives]] more precisely than other market participants.<ref>Stephen Blyth. [https://books.google.com/books?id=SXbcAAAAQBAJ ''An Introduction to Quantitative Finance'']. 2013. p. VII.</ref>
* Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of [[operations research]].
* Insurance companies use numerical programs for [[Actuary|actuarial]] analysis.

==History==
The field of numerical analysis predates the invention of modern computers by many centuries. [[Linear interpolation]] was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis,<ref name="20c"/> as is obvious from the names of important algorithms like [[Newton's method]], [[Lagrange polynomial|Lagrange interpolation polynomial]], [[Gaussian elimination]], or [[Euler's method]]. The origins of modern numerical analysis are often linked to a 1947 paper by [[John von Neumann]] and [[Herman Goldstine]],<ref name="watson" /><ref>{{cite book |editor1-link=Adhemar Bultheel |editor1-first=Adhemar |editor1-last=Bultheel |editor2-first=Ronald |editor2-last=Cools |title=The Birth of Numerical Analysis |volume=10 |publisher=World Scientific |date=2010 |isbn=978-981-283-625-0 |url={{GBurl|pKZpDQAAQBAJ|pg=PR17}} }}</ref> but others consider modern numerical analysis to go back to work by [[E. T. Whittaker]] in 1912.<ref name="watson">{{cite book |first=G.A. |last=Watson |chapter=The history and development of numerical analysis in Scotland: a personal perspective |chapter-url=https://core.ac.uk/download/pdf/206717434.pdf |title=The Birth of Numerical Analysis |publisher=World Scientific |date=2010 |isbn=9789814469456 |pages=161–177 }}</ref>

[[File:Handbook of Mathematical Functions, by Abramowitz and Stegun, cover.jpg|right|thumb|128px|NIST publication]]
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions.
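The basic hand-computation step was linear interpolation between adjacent table entries; a minimal Python sketch (the function name and the table values for sin&nbsp;''x'' are illustrative):

<syntaxhighlight lang="python">
# Linear interpolation between two tabulated points (x0, y0) and (x1, y1),
# the step once performed by hand with printed function tables.
def lerp(x, x0, y0, x1, y1):
    t = (x - x0) / (x1 - x0)
    return (1 - t) * y0 + t * y1

# Illustrative table entries for sin(x), rounded as in a printed table.
print(lerp(0.15, 0.1, 0.0998334, 0.2, 0.1986693))  # 0.1492514 (sin(0.15) = 0.1494381)
</syntaxhighlight>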
The canonical work in the field is the [[NIST]] publication edited by [[Abramowitz and Stegun]], a 1000-plus page book containing a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.

The [[mechanical calculator]] was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis,<ref name="20c"/> since now longer and more complicated calculations could be done.

The [[Leslie Fox Prize for Numerical Analysis]] was initiated in 1985 by the [[Institute of Mathematics and its Applications]].

==Key concepts==
===Direct and iterative methods===
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in [[Arbitrary-precision arithmetic|infinite precision arithmetic]]. Examples include [[Gaussian elimination]], the [[QR decomposition|QR factorization]] method for solving [[system of linear equations|systems of linear equations]], and the [[simplex method]] of [[linear programming]]. In practice, [[floating-point arithmetic|finite precision]] is used and the result is an approximation of the true solution (assuming [[numerically stable|stability]]).

In contrast to direct methods, [[iterative method]]s are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that [[Limit of a sequence|converge]] to the exact solution only in the limit. A convergence test, often involving [[Residual (numerical analysis)|the residual]], is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not, in general, reach the solution within a finite number of steps. Examples include Newton's method, the [[bisection method]], and [[Jacobi iteration]]. In computational matrix algebra, iterative methods are generally needed for large problems.<ref>{{cite book |first=Y. |last=Saad |title=Iterative methods for sparse linear systems |publisher=SIAM |date=2003 |isbn=978-0-89871-534-7 |url={{GBurl|qtzmkzzqFmcC|pg=PR5}} }}</ref><ref>{{cite book |last1=Hageman |first1=L.A. |last2=Young |first2=D.M. |title=Applied iterative methods |publisher=Courier Corporation |edition=2nd |date=2012 |isbn=978-0-8284-0312-2 |url={{GBurl|se3YdgFgz4YC|pg=PR4}} }}</ref><ref>{{cite book |first=J.F. |last=Traub |title=Iterative methods for the solution of equations |publisher=American Mathematical Society |edition=2nd |date=1982 |isbn=978-0-8284-0312-2 |url={{GBurl|se3YdgFgz4YC|pg=PR4}}}}</ref><ref>{{cite book |first=A. |last=Greenbaum |title=Iterative methods for solving linear systems |publisher=SIAM |date=1997 |isbn=978-0-89871-396-1 |url={{GBurl|QpVpvE4gWZwC|pg=PP6}}}}</ref>

Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. [[GMRES]] and the [[conjugate gradient method]]. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
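A minimal Python sketch of such an iteration (the function name and tolerance are illustrative choices, not a standard API); the worked example that follows applies the same bisection iteration to 3''x''<sup>3</sup>&nbsp;+&nbsp;4&nbsp;=&nbsp;28:

<syntaxhighlight lang="python">
# Minimal bisection: an iterative method that halves a sign-changing
# bracket [a, b] until it is shorter than a tolerance.
def bisect(f, a, b, tol=1e-6):
    assert f(a) * f(b) < 0, "f must change sign on [a, b]"
    while b - a > tol:
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:  # root lies in the left half
            b = mid
        else:                   # root lies in the right half
            a = mid
    return (a + b) / 2

print(bisect(lambda x: 3*x**3 - 24, 0, 3))  # approx 2.0, the root of 3x^3 + 4 = 28
</syntaxhighlight>

Since each iteration halves the bracket, the error after ''n'' steps is at most (''b''&nbsp;−&nbsp;''a'')/2<sup>''n''</sup>, which is the convergence test used here.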
As an example, consider the problem of solving

:3''x''<sup>3</sup> + 4 = 28

for the unknown quantity ''x''.

{| style="margin:auto; text-align:right"
|+ Direct method
|-
| || 3''x''<sup>3</sup> + 4 = 28.
|-
| ''Subtract 4'' || 3''x''<sup>3</sup> = 24.
|-
| ''Divide by 3'' || ''x''<sup>3</sup> = 8.
|-
| ''Take cube roots'' || ''x'' = 2.
|}

For the iterative method, apply the [[bisection method]] to ''f''(''x'') = 3''x''<sup>3</sup> − 24. The initial values are ''a'' = 0, ''b'' = 3, ''f''(''a'') = −24, ''f''(''b'') = 57.

{| style="margin:auto;" class="wikitable"
|+ Iterative method
|-
! ''a'' !! ''b'' !! mid !! ''f''(mid)
|-
| 0 || 3 || 1.5 || −13.875
|-
| 1.5 || 3 || 2.25 || 10.17...
|-
| 1.5 || 2.25 || 1.875 || −4.22...
|-
| 1.875 || 2.25 || 2.0625 || 2.32...
|}

From this table it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.

===Conditioning===
Ill-conditioned problem: Take the function {{math|size=100%|1=''f''(''x'') = 1/(''x'' − 1)}}. Note that ''f''(1.1) = 10 and ''f''(1.001) = 1000: a change in ''x'' of less than 0.1 turns into a change in ''f''(''x'') of nearly 1000. Evaluating ''f''(''x'') near ''x'' = 1 is an ill-conditioned problem.

Well-conditioned problem: By contrast, evaluating the same function {{math|size=100%|1=''f''(''x'') = 1/(''x'' − 1)}} near ''x'' = 10 is a well-conditioned problem. For instance, ''f''(10) = 1/9 ≈ 0.111 and ''f''(11) = 0.1: a modest change in ''x'' leads to a modest change in ''f''(''x'').

===Discretization===
Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called '[[discretization]]'. For example, the solution of a [[differential equation]] is a [[function (mathematics)|function]]. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a [[Continuum (set theory)|continuum]].

==Generation and propagation of errors==
{{further|Error propagation}}
The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of a problem.

===Round-off===
[[Round-off error]]s arise because it is impossible to represent all [[real number]]s exactly on a machine with finite memory (which is what all practical [[digital computer]]s are).

===Truncation and discretization error===
[[Truncation error]]s are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a [[discretization error]] because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above to compute the solution of <math>3x^3+4=28</math>, after ten iterations, the calculated root is roughly 1.99. Therefore, the truncation error is roughly 0.01.

Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation of the type {{tmath|a+b+c+d+e}} is even more inexact.

A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be found, and hence the approximation of the exact solution.
Similarly, to differentiate a function, the differential element approaches zero, but numerically only a nonzero value of the differential element can be chosen.

===Numerical stability and well-posed problems===
An algorithm is called ''[[numerically stable]]'' if an error, whatever its cause, does not grow to be much larger during the calculation.<ref name="stab">{{harvnb|Higham|2002}}</ref> This happens if the problem is ''[[well-conditioned]]'', meaning that the solution changes by only a small amount if the problem data are changed by a small amount.<ref name="stab"/> To the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error.<ref name="stab"/>

Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible. So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. The art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem.

==Areas of study==
The field of numerical analysis includes many sub-disciplines. Some of the major ones are:

===Computing values of functions===
{| class="wikitable" style="float: right; width: 250px; clear: right; margin-left: 1em;"
| Interpolation: Observing that the temperature varies from 20 degrees Celsius at 1:00 to 14 degrees at 3:00, a linear interpolation of this data would conclude that it was 17 degrees at 2:00 and 18.5 degrees at 1:30.

Extrapolation: If the [[gross domestic product]] of a country has been growing an average of 5% per year and was 100 billion last year, it might be extrapolated that it will be 105 billion this year.

[[Image:Linear-regression.svg|right|100px|A line through 20 points]]
Regression: In linear regression, given ''n'' points, a line is computed that passes as close as possible to those ''n'' points.

[[Image:LemonadeJuly2006.JPG|right|100px|How much for a glass of lemonade?]]
Optimization: Suppose lemonade is sold at a [[lemonade stand]], at $1.00 per glass, that 197 glasses of lemonade can be sold per day, and that for each increase of $0.01, one less glass of lemonade will be sold per day. If $1.485 could be charged, profit would be maximized, but due to the constraint of having to charge a whole-cent amount, charging $1.48 or $1.49 per glass will both yield the maximum income of $220.52 per day.

[[Image:Wind-particle.png|right|Wind direction in blue, true trajectory in black, Euler method in red]]
Differential equation: If 100 fans are set up to blow air from one end of the room to the other and then a feather is dropped into the wind, what happens? The feather will follow the air currents, which may be very complex. One approximation is to measure the speed at which the air is blowing near the feather every second, and advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again. This is called the [[Euler method]] for solving an ordinary differential equation.
|}
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging in the number in the formula, is sometimes not very efficient. For polynomials, a better approach is to use the [[Horner scheme]], since it reduces the necessary number of multiplications and additions.
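A minimal Python sketch of the scheme (the function name and the coefficient ordering are illustrative choices):

<syntaxhighlight lang="python">
# Horner's scheme: evaluate a degree-n polynomial with n multiplications
# and n additions, instead of computing each power of x separately.
def horner(coeffs, x):
    """coeffs = [a_n, ..., a_1, a_0] for a_n*x^n + ... + a_1*x + a_0."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([3, 0, 0, 4], 2))  # 3*2**3 + 4 = 28.0, the polynomial from the example above
</syntaxhighlight>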
Generally, it is important to estimate and control [[round-off error]]s arising from the use of [[floating-point arithmetic]].

===Interpolation, extrapolation, and regression===
[[Interpolation]] solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?

[[Extrapolation]] is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found.<ref>{{cite book |last1=Brezinski |first1=C. |last2=Zaglia |first2=M.R. |title=Extrapolation methods: theory and practice |publisher=Elsevier |date=2013 |isbn=978-0-08-050622-7 |url={{GBurl|WGviBQAAQBAJ|pg=PR7}}}}</ref>

[[Regression analysis|Regression]] is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be estimated. The [[numerical methods for linear least squares|least squares]] method is one way to achieve this.

===Solving equations and systems of equations===
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation <math>2x+5=3</math> is linear while <math>2x^2+5=3</math> is not.

Much effort has been put into the development of methods for solving [[systems of linear equations]]. Standard direct methods, i.e., methods that use some [[matrix decomposition]], are [[Gaussian elimination]], [[LU decomposition]], [[Cholesky decomposition]] for [[symmetric matrix|symmetric]] (or [[hermitian matrix|hermitian]]) [[positive-definite matrix|positive-definite matrices]], and [[QR decomposition]] for non-square matrices. Iterative methods such as the [[Jacobi method]], [[Gauss–Seidel method]], [[successive over-relaxation]] and the [[conjugate gradient method]]<ref>{{cite journal |last1=Hestenes |first1=Magnus R. |last2=Stiefel |first2=Eduard |title=Methods of Conjugate Gradients for Solving Linear Systems |journal=Journal of Research of the National Bureau of Standards |volume=49 |issue=6 |pages=409– |date=December 1952 |doi=10.6028/jres.049.044 |url=https://nvlpubs.nist.gov/nistpubs/jres/049/jresv49n6p409_A1b.pdf}}</ref> are usually preferred for large systems. General iterative methods can be developed using a [[matrix splitting]].

[[Root-finding algorithm]]s are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is [[derivative|differentiable]] and the derivative is known, then Newton's method is a popular choice.<ref>{{cite book |last1=Ezquerro Fernández |first1=J.A. |last2=Hernández Verón |first2=M.Á. |title=Newton's method: An updated approach of Kantorovich's theory |publisher=Birkhäuser |date=2017 |isbn=978-3-319-55976-6 |url={{GBurl|A3orDwAAQBAJ|pg=PR11}}}}</ref><ref>{{cite book |first=Peter |last=Deuflhard |title=Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms |publisher=Springer |edition=2nd |series=Computational Mathematics |volume=35 |date=2006 |isbn=978-3-540-21099-3 |url={{GBurl|l20xK__HG_kC|pg=PP1}} }}</ref> [[Linearization]] is another technique for solving nonlinear equations.

===Solving eigenvalue or singular value problems===
Several important problems can be phrased in terms of [[eigenvalue decomposition]]s or [[singular value decomposition]]s.
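Such decompositions are directly available in numerical libraries; a minimal sketch (assuming the [[NumPy]] library; the matrix and the rank are illustrative) of a singular value decomposition and the low-rank approximation behind the application mentioned next:

<syntaxhighlight lang="python">
# Compute an SVD and a rank-k approximation, the operation underlying
# SVD-based compression.
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])
U, s, Vh = np.linalg.svd(A, full_matrices=False)

k = 1  # keep only the largest singular value
A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]
print(s)    # singular values in decreasing order
print(A_k)  # best rank-1 approximation of A in the 2-norm
</syntaxhighlight>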
For instance, the [[image compression|spectral image compression]] algorithm<ref>{{cite web |first1=C.J. |last1=Ogden |first2=T. |last2=Huff |title=The Singular Value Decomposition and Its Applications in Image Compression |date=1997 |work=Math 45 |publisher=College of the Redwoods |url=http://online.redwoods.cc.ca.us/instruct/darnold/laproj/Fall97/Tammie/tammie.pdf |archive-url=https://web.archive.org/web/20060925193348/http://online.redwoods.cc.ca.us/instruct/darnold/laproj/Fall97/Tammie/tammie.pdf |archive-date=25 September 2006 }}</ref> is based on the singular value decomposition. The corresponding tool in statistics is called [[principal component analysis]].

===Optimization===
{{Main|Mathematical optimization}}
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some [[Constraint (mathematics)|constraint]]s.

The field of optimization is further split into several subfields, depending on the form of the [[objective function]] and the constraints. For instance, [[linear programming]] deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the [[simplex algorithm|simplex method]]. The method of [[Lagrange multipliers]] can be used to reduce optimization problems with constraints to unconstrained optimization problems.

===Evaluating integrals===
{{Main|Numerical integration}}
Numerical integration, in some instances also known as numerical [[quadrature (mathematics)|quadrature]], asks for the value of a definite [[integral]].<ref>{{cite book |last1=Davis |first1=P.J. |last2=Rabinowitz |first2=P. |title=Methods of numerical integration |publisher=Courier Corporation |date=2007 |isbn=978-0-486-45339-2 |url={{GBurl|gGCKdqka0HAC|pg=PR5}}}}</ref> Popular methods use one of the [[Newton–Cotes formulas]] (like the midpoint rule or [[Simpson's rule]]) or [[Gaussian quadrature]].<ref>{{MathWorld|author=Weisstein, Eric W. |title=Gaussian Quadrature |id=GaussianQuadrature}}</ref> These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use [[Monte Carlo method|Monte Carlo]] or [[quasi-Monte Carlo method]]s (see [[Monte Carlo integration]]<ref>{{cite book |first=John |last=Geweke |chapter=15. Monte carlo simulation and numerical integration |chapter-url=https://www.sciencedirect.com/science/article/pii/S1574002196010179 |doi=10.1016/S1574-0021(96)01017-9 |title=Handbook of Computational Economics |publisher=Elsevier |volume=1 |date=1996 |isbn=9780444898579 |pages=731–800 }}</ref>), or, in modestly large dimensions, the method of [[sparse grid]]s.

===Differential equations===
{{Main|Numerical ordinary differential equations|Numerical partial differential equations}}
Numerical analysis is also concerned with computing (in an approximate way) the solution of [[differential equations]], both [[ordinary differential equations]] and [[partial differential equations]].<ref>{{cite book |first=A. |last=Iserles |title=A first course in the numerical analysis of differential equations |publisher=Cambridge University Press |edition=2nd |date=2009 |isbn=978-0-521-73490-5 |url={{GBurl|M0tkw4oUucoC|pg=PR5}} }}</ref>

Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace.<ref>{{cite book |first=W.F. |last=Ames |title=Numerical methods for partial differential equations |publisher=Academic Press |edition=3rd |date=2014 |isbn=978-0-08-057130-0 |url={{GBurl|KmjiBQAAQBAJ|pg=PP7}} }}</ref> This can be done by a [[finite element method]],<ref>{{cite book |first=C. |last=Johnson |title=Numerical solution of partial differential equations by the finite element method |publisher=Courier Corporation |date=2012 |isbn=978-0-486-46900-3 |url={{GBurl|0IFCAwAAQBAJ|p=2}} }}</ref><ref>{{cite book |last1=Brenner |first1=S. |last2=Scott |first2=R. |title=The mathematical theory of finite element methods |publisher=Springer |edition=2nd |date=2013 |isbn=978-1-4757-3658-8 |url={{GBurl|ServBwAAQBAJ|pg=PR11}}}}</ref><ref>{{cite book |last1=Strang |first1=G. |last2=Fix |first2=G.J. |orig-year=1973 |title=An analysis of the finite element method |publisher=Wellesley-Cambridge Press |date=2018 |isbn=9780980232783 |url=https://archive.org/details/analysisoffinite0000stra |oclc=1145780513 |edition=2nd}}</ref> a [[finite difference]] method,<ref>{{cite book |last=Strikwerda |first=J.C. |title=Finite difference schemes and partial differential equations |publisher=SIAM |edition=2nd |date=2004 |isbn=978-0-89871-793-8 |url={{GBurl|mbdt5XT25AsC|pg=PP5}}}}</ref> or (particularly in engineering) a [[finite volume method]].<ref>{{cite book |first=Randall |last=LeVeque |title=Finite Volume Methods for Hyperbolic Problems |publisher=Cambridge University Press |date=2002 |isbn=978-1-139-43418-8 |url={{GBurl|mfAfAwAAQBAJ|pg=PT6}}}}</ref> The theoretical justification of these methods often involves theorems from [[functional analysis]]. This reduces the problem to the solution of an algebraic equation.

==Software==
{{Main|List of numerical-analysis software|Comparison of numerical-analysis software}}
Since the late twentieth century, most algorithms have been implemented in a variety of programming languages. The [[Netlib]] repository contains various collections of software routines for numerical problems, mostly in [[Fortran]] and [[C (programming language)|C]]. Commercial products implementing many different numerical algorithms include the [[IMSL Numerical Libraries|IMSL]] and [[Numerical Algorithms Group|NAG]] libraries; a [[free software|free-software]] alternative is the [[GNU Scientific Library]].

Over the years the [[Royal Statistical Society]] published numerous algorithms in its [[Journal of the Royal Statistical Society, Series C (Applied Statistics)|''Applied Statistics'']] (code for these "AS" functions is [https://jblevins.org/mirror/amiller/#apstat here]); [[Association for Computing Machinery|ACM]] similarly, in its ''[[Transactions on Mathematical Software]]'' ("TOMS" code is [https://jblevins.org/mirror/amiller/#toms here]). The [[Naval Surface Warfare Center]] several times published its [https://apps.dtic.mil/sti/pdfs/ADA476840.pdf ''Library of Mathematics Subroutines''] (code [https://jblevins.org/mirror/amiller/#nswc here]).

There are several popular numerical computing applications such as [[MATLAB]],<ref>{{cite book |last1=Quarteroni |first1=A. |last2=Saleri |first2=F. |last3=Gervasio |first3=P. |title=Scientific computing with MATLAB and Octave |publisher=Springer |edition=4th |date=2014 |isbn=978-3-642-45367-0 |url={{GBurl|_0m9BAAAQBAJ|pg=PR11}}}}</ref><ref name="gh">{{cite book |editor1-last=Gander |editor1-first=W. |editor2-last=Hrebicek |editor2-first=J.
|title=Solving problems in scientific computing using Maple and Matlab® |publisher=Springer |date=2011 |isbn=978-3-642-18873-2 |url={{GBurl|di2qCAAAQBAJ|pg=PR14}}}}</ref><ref name="bf">{{cite book |last1=Barnes |first1=B. |last2=Fulford |first2=G.R. |title=Mathematical modelling with case studies: a differential equations approach using Maple and MATLAB |publisher=CRC Press |edition=2nd |date=2011 |isbn=978-1-4200-8350-7 |oclc=1058138488 }}</ref> [[TK Solver]], [[S-PLUS]], and [[IDL (programming language)|IDL]]<ref>{{cite book |first=L.E. |last=Gumley |title=Practical IDL programming |publisher=Elsevier |date=2001 |isbn=978-0-08-051444-4 |url={{GBurl|1d-tNpm_x4gC|pg=PR9}}}}</ref> as well as free and open-source alternatives such as [[FreeMat]], [[Scilab]],<ref>{{cite book |last1=Bunks |first1=C. |last2=Chancelier |first2=J.P. |last3=Delebecque |first3=F. |last4=Goursat |first4=M. |last5=Nikoukhah |first5=R. |last6=Steer |first6=S. |title=Engineering and scientific computing with Scilab |publisher=Springer |date=2012 |isbn=978-1-4612-7204-5 }}</ref><ref>{{cite book |last1=Thanki |first1=R.M. |last2=Kothari |first2=A.M. |title=Digital image processing using SCILAB |publisher=Springer |date=2019 |isbn=978-3-319-89533-8 |url={{GBurl|VydaDwAAQBAJ|pg=PR9}}}}</ref> [[GNU Octave]] (similar to Matlab), and [[IT++]] (a C++ library). There are also programming languages such as [[R (programming language)|R]]<ref>{{cite journal |last1=Ihaka |first1=R. |last2=Gentleman |first2=R. |title=R: a language for data analysis and graphics |journal=Journal of Computational and Graphical Statistics |volume=5 |issue=3 |pages=299–314 |date=1996 |doi=10.1080/10618600.1996.10474713 |s2cid=60206680 |url=https://www.stat.auckland.ac.nz/~ihaka/downloads/R-paper.pdf}}</ref> (similar to S-PLUS), [[Julia (programming language)|Julia]],<ref>{{Cite journal|last1=Bezanson|first1=Jeff|last2=Edelman|first2=Alan|last3=Karpinski|first3=Stefan|last4=Shah|first4=Viral B.|date=2017-01-01|title=Julia: A Fresh Approach to Numerical Computing|url=https://epubs.siam.org/doi/abs/10.1137/141000671|journal=SIAM Review|volume=59|issue=1|pages=65–98|doi=10.1137/141000671|arxiv=1411.1607 |issn=0036-1445|hdl=1721.1/110125|s2cid=13026838 |hdl-access=free}}</ref> and [[Python (programming language)|Python]] with libraries such as [[NumPy]], [[SciPy]]<ref>Jones, E., Oliphant, T., & Peterson, P. (2001). SciPy: Open source scientific tools for Python.</ref><ref>{{cite book |first=E. |last=Bressert |title=SciPy and NumPy: an overview for developers |publisher=O'Reilly |date=2012 |isbn=9781306810395 }}</ref><ref>{{cite book |first=F.J. |last=Blanco-Silva |title=Learning SciPy for numerical and scientific computing |publisher=Packt |date=2013 |isbn=9781782161639 }}</ref> and [[SymPy]]. 
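In any of these environments, a classical method takes only a few lines. A minimal sketch (plain Python, purely illustrative) of [[Newton's method]] applied to the earlier example 3''x''<sup>3</sup> + 4 = 28:

<syntaxhighlight lang="python">
# Newton's method for f(x) = 3x^3 - 24, i.e. the equation 3x^3 + 4 = 28.
def newton(f, fprime, x, steps=6):
    for _ in range(steps):
        x = x - f(x) / fprime(x)  # solve the linearization of f at x
    return x

root = newton(lambda x: 3*x**3 - 24, lambda x: 9*x**2, x=3.0)
print(root)  # approx 2.0, reached in far fewer steps than bisection
</syntaxhighlight>

Each step solves the linearized equation at the current iterate, which is why convergence near a simple root is much faster than for bisection.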
Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.<ref>[http://www.sciviews.org/benchmark/ Speed comparison of various number crunching packages] {{webarchive |url=https://web.archive.org/web/20061005024002/http://www.sciviews.org/benchmark/ |date=5 October 2006 }}</ref><ref>[http://www.scientificweb.com/ncrunch/ncrunch5.pdf Comparison of mathematical programs for data analysis] {{Webarchive|url=http://arquivo.pt/wayback/20160518062220/http://www.scientificweb.com/ncrunch/ncrunch5.pdf |date=18 May 2016 }} Stefan Steinhaus, ScientificWeb.com</ref>

Many [[computer algebra system]]s such as [[Mathematica]] also benefit from the availability of [[arbitrary-precision arithmetic]], which can provide more accurate results.<ref>{{cite book |first=R.E. |last=Maeder |title=Programming in mathematica |publisher=Addison-Wesley |edition=3rd |date=1997 |isbn=9780201854497 |oclc=1311056676 |url=https://archive.org/details/programminginmat0000maed_l2m6}}</ref><ref>{{cite book |first=Stephen |last=Wolfram |date=1999 |title=The MATHEMATICA® book, version 4 |publisher=[[Cambridge University Press]] |url={{GBurl|Xny77v_QPkEC|pg=PR19}} |isbn=9781579550042 }}</ref><ref>{{cite book |last1=Shaw |first1=W.T. |last2=Tigg |first2=J. |title=Applied Mathematica: getting started, getting it done |publisher=Addison-Wesley |date=1993 |isbn=978-0-201-54217-2 |oclc=28149048 |url=http://www.gbv.de/dms/bowker/toc/9780201542172.pdf}}</ref><ref>{{cite book |last1=Marasco |first1=A. |last2=Romano |first2=A. |title=Scientific Computing with Mathematica: Mathematical Problems for Ordinary Differential Equations |publisher=Springer |date=2001 |isbn=978-0-8176-4205-1 |url={{GBurl|iFRqemnmMqUC|pg=PR7}}}}</ref>

Also, any [[spreadsheet]] [[software]] can be used to solve simple problems relating to numerical analysis. [[Microsoft Excel|Excel]], for example, has hundreds of [[Microsoft Excel#Functions|available functions]], including for matrices, which may be used in conjunction with its [[Microsoft Excel#Add-ins|built-in "solver"]].

==See also==
{{div col |colwidth=20em}}
*[[:Category:Numerical analysts]]
*[[Analysis of algorithms]]
*[[Approximation theory]]
*[[Computational science]]
*[[Computational physics]]
*[[Gordon Bell Prize]]
*[[Interval arithmetic]]
*[[List of numerical analysis topics]]
*[[Local linearization method]]
*[[Numerical differentiation]]
*[[Numerical Recipes]]
*[[Probabilistic numerics]]
*[[Symbolic-numeric computation]]
*[[Validated numerics]]
{{div col end}}

==Notes==
{{NoteFoot}}

==References==
===Citations===
{{Reflist}}

===Sources===
{{refbegin}}
* {{cite book |author1=Golub, Gene H. |author-link=Gene H. Golub |author2=Charles F. Van Loan |author2-link=Charles F. Van Loan |title=Matrix Computations |edition=3rd |publisher=Johns Hopkins University Press |isbn=0-8018-5413-X |year=1996 }}
* {{cite book |author1=Ralston, Anthony |author2=Rabinowitz, Philip |title=A First Course in Numerical Analysis |edition=2nd |publisher=Dover Publications |isbn=978-0486414546 |year=2001 }}
* {{cite book |first=Nicholas J. |last=Higham |author-link=Nicholas Higham |title=Accuracy and Stability of Numerical Algorithms |url=https://archive.org/details/accuracystabilit0000high |url-access=registration |publisher=Society for Industrial and Applied Mathematics |isbn=0-89871-355-2 |orig-year=1996 |year=2002}}
* {{cite book |last=Hildebrand |first=F. B. |author-link=Francis B. Hildebrand |title=Introduction to Numerical Analysis |edition=2nd |year=1974 |publisher=McGraw-Hill |isbn=0-07-028761-9 }}
* David Kincaid and Ward Cheney: ''Numerical Analysis: Mathematics of Scientific Computing'', 3rd Ed., AMS, ISBN 978-0-8218-4788-6 (2002).
* {{cite book |last=Leader |first=Jeffery J. |author-link=Jeffery J. Leader |title=Numerical Analysis and Scientific Computation |year=2004 |publisher=Addison Wesley |isbn=0-201-73499-0 }}
* {{cite book |last=Wilkinson |first=J.H. |author-link=James H. Wilkinson |title=The Algebraic Eigenvalue Problem |url=https://archive.org/details/algebraiceigenva0000wilk |url-access=registration |publisher=Clarendon Press |orig-year=1965 |year=1988 |isbn=978-0-19-853418-1 }}
* {{cite conference |last=Kahan |first=W. |author-link=William Kahan |title=A survey of error-analysis |journal=Info. Processing 71 |conference=Proc. IFIP Congress 71 in Ljubljana |volume=2 |pages=1214–39 |publisher=North-Holland |year=1972 |isbn=978-0-7204-2063-0 |oclc=25116949 }} (examples of the importance of accurate arithmetic).
* {{cite book |author-link=Lloyd N. Trefethen |last=Trefethen |first=Lloyd N. |chapter=IV.21 Numerical analysis |chapter-url=http://people.maths.ox.ac.uk/trefethen/NAessay.pdf |editor1-first=I. |editor1-last=Leader |editor2-first=T. |editor2-last=Gowers |editor3-first=J. |editor3-last=Barrow-Green |title=The Princeton Companion to Mathematics |publisher=Princeton University Press |date=2008 |isbn=978-0-691-11880-2 |pages=604–614 |url={{GBurl|GLumDwAAQBAJ|pg=PR5}}}}
{{refend}}

==External links==
{{Sister project links| wikt=no|c=Category:Numerical analysis | b=Numerical Methods | n=no | q=Numerical analysis | s=no | v=no | voy=no | species=no | d=no}}

===Journals===
*''[[Numerische Mathematik]]'', volumes 1–..., [https://www.springer.com/mathematics/numerical+and+computational+mathematics/journal/211 Springer], 1959–
**[http://www-gdz.sub.uni-goettingen.de/cgi-bin/digbib.cgi?PPN362160546 volumes 1–66, 1959–1994] (searchable; pages are images). {{in lang|en|de}}
*''[[SIAM Journal on Numerical Analysis]]'' [https://epubs.siam.org/journal/sjnaam (SINUM)], volumes 1–..., SIAM, 1964–

===Online texts===
*{{springer|title=Numerical analysis|id=p/n120130}}
*[https://web.archive.org/web/20150905141405/http://www.nr.com/oldverswitcher.html ''Numerical Recipes''], William H. Press (free, downloadable previous editions)
*[https://web.archive.org/web/20120225082123/http://kr.cs.ait.ac.th/~radok/math/mat7/stepsa.htm ''First Steps in Numerical Analysis''] ([[Internet Archive|archived]]), R.J. Hosking, S. Joe, D.C. Joyce, and J.C. Turner
*[https://web.archive.org/web/20170801213333/http://www.phy.ornl.gov/csep/CSEP/TEXTOC.html ''CSEP'' (Computational Science Education Project)], [[U.S. Department of Energy]] ([[Internet Archive|archived 2017-08-01]])
*[https://dlmf.nist.gov/3 Numerical Methods], ch. 3 in the ''[[Digital Library of Mathematical Functions]]''
*[https://personal.math.ubc.ca/~cbm/aands/page_875.htm Numerical Interpolation, Differentiation and Integration], ch. 25 in the ''Handbook of Mathematical Functions'' ([[Abramowitz and Stegun]])
*[https://fncbook.com/ Tobin A. Driscoll and Richard J. Braun: ''Fundamentals of Numerical Computation'' (free online version)]
===Online course material===
*[http://www.damtp.cam.ac.uk/user/fdl/people/sd103/lectures/nummeth98/index.htm#L_1_Title_Page Numerical Methods] ({{Webarchive|url=https://web.archive.org/web/20090728181209/http://www.damtp.cam.ac.uk/user/fdl/people/sd103/lectures/nummeth98/index.htm#L_1_Title_Page |date=28 July 2009 }}), Stuart Dalziel [[University of Cambridge]]
*[http://www.math.upenn.edu/~wilf/DeturckWilf.pdf Lectures on Numerical Analysis], Dennis Deturck and Herbert S. Wilf [[University of Pennsylvania]]
*[http://johndfenton.com/Lectures/Numerical-Methods/Numerical-Methods.pdf Numerical methods], John D. Fenton [[University of Karlsruhe]]
*[http://www-teaching.physics.ox.ac.uk/computing/NumericalMethods/NMfP.pdf Numerical Methods for Physicists], Anthony O’Hare [[Oxford University]]
*[https://web.archive.org/web/20120225082123/http://kr.cs.ait.ac.th/~radok/math/mat7/stepsa.htm Lectures in Numerical Analysis] ([[Internet Archive|archived]]), R. Radok [[Mahidol University]]
*[http://ocw.mit.edu/courses/mechanical-engineering/2-993j-introduction-to-numerical-analysis-for-engineering-13-002j-spring-2005/ Introduction to Numerical Analysis for Engineering], Henrik Schmidt [[Massachusetts Institute of Technology]]
*[http://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/ ''Numerical Analysis for Engineering''], D. W. Harder [[University of Waterloo]]
*[https://www.math.umd.edu/~diom/courses/AMSC466/Levy-notes.pdf Introduction to Numerical Analysis], Doron Levy [[University of Maryland]]
*[https://web.archive.org/web/20070310212643/http://math.fullerton.edu/mathews/n2003/NumericalUndergradMod.html Numerical Analysis – Numerical Methods] (archived), John H. Mathews [[California State University Fullerton]]

{{Areas of mathematics}}
{{Branches of physics}}
{{Computer science}}
{{Authority control}}

{{DEFAULTSORT:Numerical Analysis}}
[[Category:Numerical analysis| ]]
[[Category:Mathematical physics]]
[[Category:Computational science]]