List of algorithms
===Numerical algorithms===
{{further|Numerical analysis|List of numerical analysis topics}}

====Differential equation solving====
{{further|Differential equation}}
* [[Euler method]]
* [[Backward Euler method]]
* [[Trapezoidal rule (differential equations)]]
* [[Linear multistep method]]s
* [[Runge–Kutta methods]]
** [[Euler integration]]
* [[Multigrid method]]s (MG methods), a group of algorithms for solving differential equations using a hierarchy of discretizations
* [[Partial differential equation]]:
** [[Finite difference method]]
** [[Crank–Nicolson method]] for diffusion equations
** [[Lax–Wendroff method|Lax–Wendroff]] for wave equations
* [[Verlet integration]] ({{IPA|fr|vɛʁˈlɛ}}): integrates Newton's equations of motion

====Elementary and special functions====
{{further|Special functions}}
* [[Computing π|Computation of π]]:
** [[Borwein's algorithm]]: calculates the value of 1/π
** [[Gauss–Legendre algorithm]]: computes the digits of [[pi]]
** [[Chudnovsky algorithm]]: a fast method for calculating the digits of π
** [[Bailey–Borwein–Plouffe formula]] (BBP formula): a spigot algorithm for computing the ''n''th binary digit of π
* [[Division algorithm]]s: compute the quotient and/or remainder of two numbers
** [[Long division]]
** [[Restoring division]]
** [[Non-restoring division]]
** [[SRT division]]
** [[Newton–Raphson division]]: uses [[Newton's method]] to find the [[Multiplicative inverse|reciprocal]] of D, then multiplies that reciprocal by N to find the final quotient Q
** [[Goldschmidt division]]
* Hyperbolic and trigonometric functions:
** [[BKM algorithm]]: computes [[Elementary function (differential algebra)|elementary functions]] using a table of logarithms
** [[CORDIC]]: computes hyperbolic and trigonometric functions using a table of arctangents
* Exponentiation:
** [[Addition-chain exponentiation]]: exponentiation by positive integer powers using a minimal number of multiplications
** [[Exponentiating by squaring]]: fast computation of [[Arbitrary-precision arithmetic|large integer]] powers of a number
* [[Montgomery reduction]]: allows [[modular arithmetic]] to be performed efficiently when the modulus is large
* [[Multiplication algorithm]]s: fast multiplication of two numbers
** [[Booth's multiplication algorithm]]: multiplies two signed binary numbers in two's complement notation
** [[Fürer's algorithm]]: an integer multiplication algorithm for very large numbers with very low [[Computational complexity theory|asymptotic complexity]]
** [[Karatsuba algorithm]]: an efficient procedure for multiplying large numbers
** [[Schönhage–Strassen algorithm]]: an asymptotically fast multiplication algorithm for large integers
** [[Toom–Cook multiplication]] (Toom-3): a multiplication algorithm for large integers
* [[Multiplicative inverse#Algorithms|Multiplicative inverse algorithms]]: compute a number's multiplicative inverse (reciprocal)
** [[Newton's method#Multiplicative inverses of numbers and power series|Newton's method]]
* [[Rounding functions]]: the classic ways to round numbers
* [[Spigot algorithm]]: a way to compute the value of a [[mathematical constant]] without knowing preceding digits
* Square and ''n''th root of a number:
** [[Alpha max plus beta min algorithm]]: an approximation of the square root of the sum of two squares
** [[Methods of computing square roots]]
** [[Nth root algorithm|''n''th root algorithm]]
* Summation:
** [[Binary splitting]]: a [[Divide and conquer algorithm|divide-and-conquer]] technique that speeds up the numerical evaluation of many types of series with rational terms
** [[Kahan summation algorithm]]: a more accurate method of summing floating-point numbers
* [[Unrestricted algorithm]]

====Geometric====
* [[Radon transform#Radon inversion formula|Filtered back-projection]]: efficiently computes the inverse two-dimensional [[Radon transform]]
* [[Level set method]] (LSM): a numerical technique for tracking interfaces and shapes

====Interpolation and extrapolation====
{{further|Interpolation|Extrapolation}}
* [[Birkhoff interpolation]]: an extension of polynomial interpolation
* [[Cubic interpolation]]
* [[Hermite interpolation]]
* [[Lagrange interpolation]]: interpolation using [[Lagrange polynomial]]s
* [[Linear interpolation]]: a method of curve fitting using linear polynomials
* [[Monotone cubic interpolation]]: a variant of cubic interpolation that preserves monotonicity of the data set being interpolated
* [[Multivariate interpolation]]
** [[Bicubic interpolation]]: a generalization of [[cubic interpolation]] to two dimensions
** [[Bilinear interpolation]]: an extension of [[linear interpolation]] for interpolating functions of two variables on a regular grid
** [[Lanczos resampling]] ("Lanzosh"): a multivariate interpolation method used to compute new values for any digitally sampled data
** [[Nearest-neighbor interpolation]]
** [[Tricubic interpolation]]: a generalization of [[cubic interpolation]] to three dimensions
* [[Pareto interpolation]]: a method of estimating the median and other properties of a population that follows a [[Pareto distribution]]
* [[Polynomial interpolation]]
** [[Neville's algorithm]]
* [[Spline interpolation]]: reduces error due to [[Runge's phenomenon]]
** [[De Boor algorithm]]: evaluates [[B-spline]]s
** [[De Casteljau's algorithm]]: evaluates [[Bézier curve]]s
* [[Trigonometric interpolation]]

====Linear algebra====
{{further|Numerical linear algebra}}
* Krylov methods: for large sparse matrix problems; ranked by SISC as the third most important class of 20th-century numerical methods, after the fast Fourier transform and fast multipole methods
* [[Eigenvalue algorithm]]s
** [[Arnoldi iteration]]
** [[Inverse iteration]]
** [[Jacobi eigenvalue algorithm|Jacobi method]]
** [[Lanczos iteration]]
** [[Power iteration]]
** [[QR algorithm]]
** [[Rayleigh quotient iteration]]
* [[Gram–Schmidt process]]: orthogonalizes a set of vectors
* [[Matrix multiplication algorithm]]s
** [[Cannon's algorithm]]: a [[distributed algorithm]] for [[matrix multiplication]], especially suitable for computers laid out in an N × N mesh
** [[Coppersmith–Winograd algorithm]]: square [[matrix multiplication]]
** [[Freivalds' algorithm]]: a randomized algorithm for verifying matrix multiplication
** [[Strassen algorithm]]: faster [[matrix multiplication]]
{{anchor|Solving systems of linear equations}}
* Solving [[system of linear equations|systems of linear equations]]
** [[Biconjugate gradient method]]: solves systems of linear equations
** [[Conjugate gradient]]: an algorithm for the numerical solution of particular systems of linear equations
** [[Gaussian elimination]]
** [[Gauss–Jordan elimination]]: solves systems of linear equations
** [[Gauss–Seidel method]]: solves systems of linear equations iteratively
** [[Levinson recursion]]: solves equations involving a [[Toeplitz matrix]]
** [[Stone's method]] (strongly implicit procedure, SIP): solves a sparse linear system of equations
** [[Successive over-relaxation]] (SOR): speeds up convergence of the [[Gauss–Seidel method]]
** [[Tridiagonal matrix algorithm]] (Thomas algorithm): solves systems of tridiagonal equations
* [[Sparse matrix]] algorithms
** [[Cuthill–McKee algorithm]]: reduces the [[bandwidth (matrix theory)|bandwidth]] of a [[symmetric sparse matrix]]
** [[Minimum degree algorithm]]: permutes the rows and columns of a [[symmetric sparse matrix]] before applying the [[Cholesky decomposition]]
** [[Symbolic Cholesky decomposition]]: an efficient way of storing a [[sparse matrix]]

====Monte Carlo====
{{further|Monte Carlo method}}
* [[Gibbs sampling]]: generates a sequence of samples from the joint probability distribution of two or more random variables
* [[Hybrid Monte Carlo]]: generates a sequence of samples using [[Hamiltonian dynamics|Hamiltonian]]-weighted [[Markov chain Monte Carlo]] from a probability distribution that is difficult to sample directly
* [[Metropolis–Hastings algorithm]]: generates a sequence of samples from the [[probability distribution]] of one or more variables
* [[Wang and Landau algorithm]]: an extension of [[Metropolis–Hastings algorithm]] sampling

====Numerical integration====
{{further|Numerical integration}}
* [[MISER algorithm]]: Monte Carlo simulation, [[numerical integration]]

====Root finding====
{{main|Root-finding algorithm}}
* [[Bisection method]]
* [[False position method]] and Illinois method: 2-point, bracketing
* [[Halley's method]]: uses first and second derivatives
* [[ITP Method|ITP method]]: minmax optimal and superlinear convergence simultaneously
* [[Muller's method]]: 3-point, quadratic interpolation
* [[Newton's method]]: finds zeros of functions with [[calculus]]
* [[Ridder's method]]: 3-point, exponential scaling
* [[Secant method]]: 2-point, 1-sided
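Several of the entries above are compact enough to sketch directly. The [[Euler method]] that opens the differential-equation section advances ''y''&prime; = ''f''(''t'', ''y'') by repeatedly following the tangent line; a minimal illustrative sketch (function and variable names are ours, not from any library):

```python
def euler(f, t0, y0, h, steps):
    """Forward Euler: advance y' = f(t, y) by `steps` linear steps of size h."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)   # follow the tangent at the current point
        t += h
    return y

# y' = y with y(0) = 1: 100 steps of h = 0.01 give (1.01)^100 ≈ 2.70481,
# a first-order approximation of e.
approx_e = euler(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

Halving ''h'' roughly halves the error, which is why the higher-order Runge–Kutta and multistep methods listed above are preferred in practice.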
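[[Exponentiating by squaring]], listed under exponentiation, reduces ''b''<sup>''n''</sup> from ''n'' − 1 multiplications to O(log ''n'') by halving the exponent at each step; an illustrative sketch for non-negative integer exponents:

```python
def pow_by_squaring(base, exp):
    """Compute base**exp (exp >= 0) with O(log exp) multiplications."""
    result = 1
    while exp > 0:
        if exp & 1:          # odd exponent: fold the current power into the result
            result *= base
        base *= base         # square the base
        exp >>= 1            # halve the exponent
    return result
```

The same loop, with multiplication taken modulo some ''m'', is the core of modular exponentiation in cryptography, where [[Montgomery reduction]] (also listed above) makes each modular multiplication cheap.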
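The [[Kahan summation algorithm]] under the summation entries carries a small compensation term that recovers the low-order bits lost when a small addend meets a large running total; a sketch:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation of floating-point numbers."""
    total = 0.0
    c = 0.0                      # running compensation for lost low-order bits
    for x in values:
        y = x - c                # apply the correction from the previous step
        t = total + y            # big + small: low-order digits of y may be lost
        c = (t - total) - y      # algebraically zero; in floating point, the lost part
        total = t
    return total

# Summing 0.1 ten thousand times: naive left-to-right summation accumulates
# rounding error, while the compensated sum stays within a few ulps of 1000.
total = kahan_sum([0.1] * 10000)
```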
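Among the eigenvalue algorithms, [[power iteration]] is the simplest: repeated multiplication by the matrix amplifies the dominant eigenvector. A dependency-free sketch (it assumes a unique dominant eigenvalue and a starting vector not orthogonal to its eigenvector):

```python
def power_iteration(A, iters=100):
    """Estimate the dominant eigenvalue of a square matrix given as a list of rows."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A·v
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]          # renormalize so v cannot overflow
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(a * b for a, b in zip(Av, v)) / sum(b * b for b in v)   # Rayleigh quotient

# [[2, 1], [1, 2]] has eigenvalues 3 and 1; the iteration converges to 3.
dominant = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```

Convergence is linear with rate |λ₂/λ₁|, which is why the listed Lanczos, Arnoldi, and QR methods are used when that ratio is close to 1.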
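The [[Metropolis–Hastings algorithm]] in the Monte Carlo section needs the target density only up to a normalizing constant. A sketch of the random-walk special case (with a symmetric Gaussian proposal, so the Hastings correction cancels; all names are illustrative):

```python
import math
import random

def metropolis_hastings(log_density, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis sampler for an unnormalized log-density."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)              # symmetric proposal
        log_ratio = log_density(proposal) - log_density(x)
        if rng.random() < math.exp(min(0.0, log_ratio)):  # accept with prob min(1, ratio)
            x = proposal                                  # otherwise keep the current state
        samples.append(x)
    return samples

# Target: standard normal, whose log-density is -x**2/2 up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

The listed Gibbs sampling and Hamiltonian (hybrid) Monte Carlo variants replace the blind random-walk proposal to reduce the correlation between successive samples.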
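Finally, the [[bisection method]] that opens the root-finding section illustrates the "2-point, bracketing" pattern shared by the false-position, Illinois, and ITP entries: keep an interval whose endpoints have opposite signs and shrink it. An illustrative sketch:

```python
def bisect_root(f, a, b, tol=1e-12):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:      # sign change in [a, m]: keep the left half
            b = m
        else:                # sign change in [m, b]: keep the right half
            a, fa = m, fm
    return (a + b) / 2

# sqrt(2) is the positive root of x**2 - 2 on [0, 2].
root = bisect_root(lambda x: x * x - 2, 0.0, 2.0)
```

Each iteration halves the bracket, so convergence is guaranteed but only linear; Newton's and Halley's methods trade that guarantee for faster convergence near the root.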