==Key concepts==

===Direct and iterative methods===
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in [[Arbitrary-precision arithmetic|infinite precision arithmetic]]. Examples include [[Gaussian elimination]], the [[QR decomposition|QR factorization]] method for solving [[system of linear equations|systems of linear equations]], and the [[simplex method]] of [[linear programming]]. In practice, [[floating-point arithmetic|finite precision]] is used and the result is an approximation of the true solution (assuming [[numerically stable|stability]]).

In contrast to direct methods, [[iterative method]]s are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that [[Limit of a sequence|converge]] to the exact solution only in the limit. A convergence test, often involving [[Residual (numerical analysis)|the residual]], is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not, in general, reach the solution within a finite number of steps. Examples include [[Newton's method]], the [[bisection method]], and [[Jacobi iteration]]. In computational matrix algebra, iterative methods are generally needed for large problems.<ref>{{cite book |first=Y. |last=Saad |title=Iterative methods for sparse linear systems |publisher=SIAM |date=2003 |isbn=978-0-89871-534-7 |url={{GBurl|qtzmkzzqFmcC|pg=PR5}} }}</ref><ref>{{cite book |last1=Hageman |first1=L.A. |last2=Young |first2=D.M. |title=Applied iterative methods |publisher=Courier Corporation |edition=2nd |date=2012 |isbn=978-0-8284-0312-2 |url={{GBurl|se3YdgFgz4YC|pg=PR4}} }}</ref><ref>{{cite book |first=J.F. |last=Traub |title=Iterative methods for the solution of equations |publisher=American Mathematical Society |edition=2nd |date=1982 |isbn=978-0-8284-0312-2 |url={{GBurl|se3YdgFgz4YC|pg=PR4}} }}</ref><ref>{{cite book |first=A. |last=Greenbaum |title=Iterative methods for solving linear systems |publisher=SIAM |date=1997 |isbn=978-0-89871-396-1 |url={{GBurl|QpVpvE4gWZwC|pg=PP6}} }}</ref>

Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. [[GMRES]] and the [[conjugate gradient method]]. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.

As an example, consider the problem of solving

:3''x''<sup>3</sup> + 4 = 28

for the unknown quantity ''x''.

{| style="margin:auto; text-align:right"
|+ Direct method
|-
| || 3''x''<sup>3</sup> + 4 = 28.
|-
| ''Subtract 4'' || 3''x''<sup>3</sup> = 24.
|-
| ''Divide by 3'' || ''x''<sup>3</sup> = 8.
|-
| ''Take cube roots'' || ''x'' = 2.
|}

For the iterative method, apply the [[bisection method]] to ''f''(''x'') = 3''x''<sup>3</sup> − 24. The initial values are ''a'' = 0, ''b'' = 3, ''f''(''a'') = −24, ''f''(''b'') = 57.

{| style="margin:auto;" class="wikitable"
|+ Iterative method
|-
! ''a'' !! ''b'' !! mid !! ''f''(mid)
|-
| 0 || 3 || 1.5 || −13.875
|-
| 1.5 || 3 || 2.25 || 10.17...
|-
| 1.5 || 2.25 || 1.875 || −4.22...
|-
| 1.875 || 2.25 || 2.0625 || 2.32...
|}

From this table it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
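The loop that produced the table can be expressed in a few lines of code. The following is a minimal Python sketch, not taken from any library: the function name <code>bisect</code> and the stopping width are chosen here purely for illustration.

<syntaxhighlight lang="python">
def bisect(f, a, b, width=0.2):
    """Shrink [a, b] until it is narrower than `width`.

    Assumes f(a) and f(b) have opposite signs, so a root
    lies somewhere in the interval.
    """
    while b - a > width:
        mid = (a + b) / 2
        if f(a) * f(mid) < 0:
            b = mid  # sign change lies in [a, mid]
        else:
            a = mid  # sign change lies in [mid, b]
    return a, b

# f(x) = 3x^3 - 24, with the starting interval from the table.
print(bisect(lambda x: 3 * x**3 - 24, 0.0, 3.0))
# (1.875, 2.0625): any number in this range errs by less than 0.2
</syntaxhighlight>

Each pass halves the interval, so tightening the stopping width by a factor of two costs exactly one more iteration; this is the characteristic slow-but-steady convergence of bisection.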
===Conditioning===
Ill-conditioned problem: Take the function {{math|size=100%|1=''f''(''x'') = 1/(''x'' − 1)}}. Note that ''f''(1.1) = 10 and ''f''(1.001) = 1000: a change in ''x'' of less than 0.1 turns into a change in ''f''(''x'') of nearly 1000. Evaluating ''f''(''x'') near ''x'' = 1 is an ill-conditioned problem.

Well-conditioned problem: By contrast, evaluating the same function {{math|size=100%|1=''f''(''x'') = 1/(''x'' − 1)}} near ''x'' = 10 is a well-conditioned problem. For instance, ''f''(10) = 1/9 ≈ 0.111 and ''f''(11) = 0.1: a modest change in ''x'' leads to a modest change in ''f''(''x'').

===Discretization===
Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called '[[discretization]]'. For example, the solution of a [[differential equation]] is a [[function (mathematics)|function]]. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a [[Continuum (set theory)|continuum]].
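The contrast in the conditioning example above can be quantified. The sketch below uses the standard relative condition number ''κ''(''x'') = |''x'' · ''f''′(''x'')/''f''(''x'')|, a measure not spelled out in the section itself; for ''f''(''x'') = 1/(''x'' − 1) it simplifies to |''x''/(''x'' − 1)|.

<syntaxhighlight lang="python">
def f(x):
    return 1 / (x - 1)

def fprime(x):
    return -1 / (x - 1) ** 2

def condition(x):
    # Relative condition number of evaluating f at x:
    # kappa(x) = |x * f'(x) / f(x)| = |x / (x - 1)| for this f.
    return abs(x * fprime(x) / f(x))

print(condition(1.001))  # ~1001: ill-conditioned near x = 1
print(condition(10.0))   # ~1.11: well-conditioned near x = 10
</syntaxhighlight>

Discretization can likewise be made concrete. The sketch below replaces the continuous problem ''y''′ = −''y'', ''y''(0) = 1 (a hypothetical example chosen for illustration, not one discussed above) by its values on a finite grid, computed with the [[Euler method]]: the solution function is represented by finitely many samples, exactly as the section describes.

<syntaxhighlight lang="python">
# Discretize y' = -y, y(0) = 1 on [0, 5]: the continuous solution
# (the function e^{-t}) is represented by finitely many samples.
n = 50            # number of grid steps (the finite amount of data)
h = 5.0 / n       # step size
t, y = 0.0, 1.0
samples = [(t, y)]
for _ in range(n):
    y += h * (-y)  # Euler step: y_{k+1} = y_k + h * f(t_k, y_k)
    t += h
    samples.append((t, y))
print(samples[-1])  # y(5) ~ 0.0052; the exact e^-5 ~ 0.0067 is approached as n grows
</syntaxhighlight>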