{{short description|Algorithm to multiply two numbers}}
{{Use dmy dates|date=May 2019|cs1-dates=y}}

A '''multiplication algorithm''' is an [[algorithm]] (or method) to [[multiplication|multiply]] two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Numerous algorithms are known, and there has been much research into the topic.

The oldest and simplest method, known since [[Ancient history|antiquity]] as '''long multiplication''' or '''grade-school multiplication''', consists of multiplying every digit in the first number by every digit in the second and adding the results. This has a [[time complexity]] of <math>O(n^2)</math>, where ''n'' is the number of digits. When done by hand, this may also be reframed as [[grid method multiplication]] or [[lattice multiplication]]. In software, this may be called "shift and add" due to [[bitshifts]] and addition being the only two operations needed.

In 1960, [[Anatoly Karatsuba]] discovered [[Karatsuba multiplication]], unleashing a flood of research into fast multiplication algorithms. This method uses three multiplications rather than four to multiply two two-digit numbers. (A variant of this can also be used to multiply [[complex numbers]] quickly.) Done [[recursion|recursively]], this has a time complexity of <math>O(n^{\log_2 3})</math>. Splitting numbers into more than two parts results in [[Toom–Cook multiplication]]; for example, using three parts results in the '''Toom-3''' algorithm. Using many parts can set the exponent arbitrarily close to 1, but the constant factor also grows, making it impractical.

In 1968, the [[Schönhage–Strassen algorithm]], which makes use of a [[Fourier transform]] over a [[Modulus (modular arithmetic)|modulus]], was discovered. It has a time complexity of <math>O(n\log n\log\log n)</math>. In 2007, [[Martin Fürer]] proposed an algorithm with complexity <math>O(n\log n\, 2^{\Theta(\log^* n)})</math>. In 2014, Harvey, [[Joris van der Hoeven]], and Lecerf proposed one with complexity <math>O(n\log n\, 2^{3\log^* n})</math>, thus making the [[implicit constant]] explicit; this was improved to <math>O(n\log n\, 2^{2\log^* n})</math> in 2018. Lastly, in 2019, Harvey and van der Hoeven arrived at a [[galactic algorithm]] with complexity <math>O(n\log n)</math>. This matches a guess by Schönhage and Strassen that this would be the optimal bound, although this remains a [[conjecture]] today.

Integer multiplication algorithms can also be used to multiply polynomials by means of the method of [[Kronecker substitution]].
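The "shift and add" form of grade-school multiplication mentioned above can be sketched as follows; this is an illustrative example rather than any particular library's implementation, using only bit shifts and addition:

```python
def shift_and_add(x, y):
    """Multiply non-negative integers x and y using only shifts and addition.

    This is the binary analogue of long multiplication: for each set bit
    of y, add a correspondingly shifted copy of x. Quadratic in the
    number of bits.
    """
    result = 0
    while y:
        if y & 1:          # current bit of y is set,
            result += x    # so add the shifted copy of x
        x <<= 1            # shift x one bit left for the next position
        y >>= 1            # consume the next bit of y
    return result
```

For example, `shift_and_add(6, 5)` adds `6` (for the 1s bit of 5) and `24` (for the 4s bit), giving 30.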