==Fused multiply–add==
A '''fused multiply–add''' ('''FMA''' or '''fmadd''')<ref>{{cite web|title=fmadd instrs|website=[[IBM]] |url=https://www.ibm.com/support/knowledgecenter/ssw_aix_61/com.ibm.aix.alangref/idalangref_fmadd_instrs.htm}}</ref> is a floating-point multiply–add operation performed in one step ([[fused operation]]), with a single rounding. That is, where an unfused multiply–add would compute the product {{math|''b'' × ''c''}}, round it to ''N'' significant bits, add the result to ''a'', and round back to ''N'' significant bits, a fused multiply–add computes the entire expression {{math|''a'' + (''b'' × ''c'')}} to its full precision before rounding the final result to ''N'' significant bits.

A fast FMA can speed up and improve the accuracy of many computations that involve the accumulation of products:
* [[Dot product]]
* [[Matrix multiplication]]
* [[Polynomial evaluation]] (e.g., with [[Horner's rule]])
* [[Newton's method]] for evaluating functions (from the inverse function)
* [[Convolutions]] and [[artificial neural networks]]
* Multiplication in [[Quadruple-precision floating-point format#Double-double arithmetic|double-double arithmetic]]

Fused multiply–add can usually be relied on to give more accurate results. However, [[William Morton Kahan|William Kahan]] has pointed out that it can cause problems if used unthinkingly.<ref>{{cite web |title=IEEE Standard 754 for Binary Floating-Point Arithmetic |author-first=William |author-last=Kahan |author-link=William Morton Kahan |url=http://www.cs.berkeley.edu/~wkahan/ieee754status/ieee754.ps |date=1996-05-31}}</ref> If {{math|''x''<sup>2</sup> − ''y''<sup>2</sup>}} is evaluated as {{math|((''x'' × ''x'') − ''y'' × ''y'')}} (following Kahan's suggested notation, in which the redundant parentheses direct the compiler to round the {{math|(''x'' × ''x'')}} term first) using fused multiply–add, then the result may be negative even when {{math|''x'' {{=}} ''y''}}, because the first multiplication discards low-significance bits while the fused operation retains them. This could then lead to an error if, for instance, the square root of the result is evaluated.
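This effect can be demonstrated in C99, whose <code>fmaf()</code> function (see the Support discussion below) evaluates a fused multiply–add with a single rounding. The constant here is one illustrative choice that makes {{math|''x'' × ''x''}} round upward in single precision; contraction should be disabled (e.g., GCC's {{code|-ffp-contract=off}}) so that the compiler does not also fuse the "unfused" expression:

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* x == y, chosen so that x*x rounds upward in single precision */
    float x = 1.0f + 2049.0f * 0x1p-23f;
    float y = x;

    float unfused = x * x - y * y;        /* both products rounded: exactly 0 */
    float fused   = fmaf(x, x, -(y * y)); /* only y*y rounded: about -2^-24  */

    printf("unfused = %g, sqrtf = %g\n", unfused, sqrtf(unfused));
    printf("fused   = %g, sqrtf = %g\n", fused, sqrtf(fused)); /* NaN */
    return 0;
}
</syntaxhighlight>

The unfused expression is exactly zero and has square root zero, while the fused expression comes out slightly negative and its square root is NaN.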
When implemented inside a [[microprocessor]], an FMA can be faster than a multiply operation followed by an add. However, standard industrial implementations based on the original IBM RS/6000 design require a 2''N''-bit adder to compute the sum properly.<ref>{{cite thesis |url=http://repositories.lib.utexas.edu/bitstream/handle/2152/3082/quinnelle60861.pdf |date=May 2007 |title=Floating-Point Fused Multiply–Add Architectures |author-first=Eric |author-last=Quinnell |degree=PhD |access-date=2011-03-28}}</ref>

Another benefit of including this instruction is that it allows an efficient software implementation of [[division (mathematics)|division]] (see [[division algorithm]]) and [[square root]] (see [[methods of computing square roots]]) operations, thus eliminating the need for dedicated hardware.<ref name="goldschmidt_algo">{{cite conference |citeseerx=10.1.1.85.9648 |title=Software Division and Square Root Using Goldschmidt's Algorithms |author-first=Peter |author-last=Markstein |date=November 2004 |conference=6th Conference on Real Numbers and Computers |url=http://www.informatik.uni-trier.de/Reports/TR-08-2004/rnc6_12_markstein.pdf}}</ref>
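The idea behind such software division can be sketched with Newton–Raphson refinement of a reciprocal (a simplified illustration, not Markstein's full algorithm; the single-precision division below is a stand-in for the hardware's initial reciprocal-estimate instruction). The fused evaluation of the residual {{math|1 − ''b'' × ''x''}} is what keeps the correction from being lost to rounding:

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

/* Newton-Raphson reciprocal refinement, x' = x*(2 - b*x), written with
   FMAs. Each iteration roughly doubles the number of correct bits, so a
   single-precision starting estimate converges to double precision in
   two to three iterations. */
static double reciprocal(double b)
{
    double x = 1.0f / (float)b;     /* stand-in for a hardware estimate */
    for (int i = 0; i < 3; i++) {
        double e = fma(-b, x, 1.0); /* residual e = 1 - b*x, one rounding */
        x = fma(x, e, x);           /* x = x + x*e = x*(2 - b*x) */
    }
    return x;
}

int main(void)
{
    printf("%.17g\n", 355.0 * reciprocal(113.0)); /* approximately 355/113 */
    return 0;
}
</syntaxhighlight>

The last bit of the quotient may still differ from that of a correctly rounded division; the cited Markstein paper describes the additional FMA-based steps used to recover correct rounding.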
===Dot product instruction===
Some machines combine multiple fused multiply–add operations into a single step, e.g., performing a four-element dot product on two 128-bit [[Single instruction, multiple data|SIMD]] registers, <code>a0×b0 + a1×b1 + a2×b2 + a3×b3</code>, with single-cycle throughput.

===Support===
The FMA operation is included in [[IEEE 754-2008]].

The [[C99|1999 standard]] of the [[C (programming language)|C programming language]] supports the FMA operation through the <code>fma()</code> standard math library function and through automatic transformation of a multiplication followed by an addition (contraction of floating-point expressions), which can be explicitly enabled or disabled with standard pragmas ({{code|#pragma STDC FP_CONTRACT}}). The [[GNU Compiler Collection|GCC]] and [[Clang]] C compilers perform such transformations by default for processor architectures that support FMA instructions. With GCC, which does not support the aforementioned pragma,<ref>{{cite web |title=Bug 20785 - Pragma STDC * (C99 FP) unimplemented |url=https://gcc.gnu.org/bugzilla/show_bug.cgi?id=20785 |access-date=2022-02-02 |website=gcc.gnu.org}}</ref> this behavior can be controlled globally with the <code>-ffp-contract</code> command-line option.<ref>{{Cite web|title=Optimize Options (Using the GNU Compiler Collection (GCC))|url=https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html|access-date=2022-02-02|website=gcc.gnu.org}}</ref> A short usage sketch appears after the list of processors below.

The fused multiply–add operation was introduced as "multiply–add fused" in the IBM [[POWER1]] (1990) processor,<ref>{{cite journal |last1=Montoye |first1=R. K. |last2=Hokenek |first2=E. |last3=Runyon |first3=S. L. |title=Design of the IBM RISC System/6000 floating-point execution unit |journal=IBM Journal of Research and Development |date=January 1990 |volume=34 |issue=1 |pages=59–70 |doi=10.1147/rd.341.0059}}{{closed access}}</ref> but has since been added to numerous processors:
* IBM [[POWER1]] (1990)
* [[Hewlett-Packard|HP]] [[PA-8000]] (1996) and above
* [[Hitachi, Ltd.|Hitachi]] [[SuperH#SH-4|SuperH SH-4]] (1998)
* [[IBM]] [[z/Architecture]] (since 1998)
* [[Sony Computer Entertainment|SCE]]-[[Toshiba]] [[Emotion Engine]] (1999)
* Intel [[Itanium]] (2001)
* STI [[Cell (microprocessor)|Cell]] (2006)
* [[Fujitsu]] [[SPARC64 VI]] (2007) and above
* ([[MIPS architecture|MIPS]]-compatible) [[Loongson]]-2F (2008)<ref>{{cite web |url=http://www.mdronline.com/mpr/h/2008/1103/224401.html |title=Godson-3 Emulates x86: New MIPS-Compatible Chinese Processor Has Extensions for x86 Translation}}</ref>
* [[RISC-V]] instruction set (2010)
* ARM processors with VFPv4 and/or NEONv2:
** [[ARM Cortex-M4F]] (2010)
** STM32 Cortex-M33 (VFMA operation)<ref>{{cite web |url=https://www.st.com/resource/en/programming_manual/pm0264-stm32-cortexm33-mcus-programming-manual-stmicroelectronics.pdf |title=STM32 Cortex-M33 MCUs programming manual |work=ST |access-date=2024-05-06}}</ref>
** [[ARM Cortex-A5]] (2012)
** [[ARM Cortex-A7 MPCore|ARM Cortex-A7]] (2013)
** [[ARM Cortex-A15 MPCore|ARM Cortex-A15]] (2012)
** [[Krait (CPU)|Qualcomm Krait]] (2012)
** [[Apple A6]] (2012)
** All [[ARM architecture#Armv8-A|ARMv8]] processors
*** [[Fujitsu A64FX]] has "Four-operand FMA with Prefix Instruction".
* x86 processors with [[FMA instruction set|FMA3 and/or FMA4 instruction set]]
** AMD [[Bulldozer (processor)|Bulldozer]] (2011, FMA4 only)
** AMD [[Piledriver (microarchitecture)|Piledriver]] (2012, FMA3 and FMA4)<ref>{{cite web |last1=Hollingsworth |first1=Brent |title=New "Bulldozer" and "Piledriver" Instructions |url=https://developer.amd.com/resources/developer-guides-manuals/new-bulldozer-and-piledriver-instructions/ |publisher=AMD Developer Central |date=October 2012}}</ref>
** [[Intel Haswell]] (2013, FMA3 only)<ref>{{cite web |url=http://www.reghardware.co.uk/2008/08/19/idf_intel_architecture_roadmap/ |title=Intel adds 22nm octo-core 'Haswell' to CPU design roadmap |work=The Register |access-date=2008-08-19 |archive-url=https://web.archive.org/web/20120217051330/http://www.reghardware.com/2008/08/19/idf_intel_architecture_roadmap/ |archive-date=2012-02-17 |url-status=dead }}</ref>
** AMD [[Steamroller (microarchitecture)|Steamroller]] (2014, FMA3 and FMA4)
** AMD [[Excavator (microarchitecture)|Excavator]] (2015, FMA3 and FMA4)
** Intel [[Skylake (microarchitecture)|Skylake]] (2015, FMA3 only)
** AMD [[Zen (microarchitecture)|Zen]] (2017, FMA3 only)
* [[Elbrus-8S|Elbrus-8SV]] (2018)
* GPUs and GPGPU boards:
** [[List of AMD graphics processing units|AMD GPUs]] (2009) and newer
*** [[TeraScale (microarchitecture)#TeraScale 2|TeraScale 2 "Evergreen"]]-series based
*** [[Graphics Core Next]]-based
** [[List of Nvidia graphics processing units|Nvidia GPUs]] (2010) and newer
*** [[Fermi (microarchitecture)|Fermi]]-based (2010)
*** [[Kepler (microarchitecture)|Kepler]]-based (2012)
*** [[Maxwell (microarchitecture)|Maxwell]]-based (2014)
*** [[Pascal (microarchitecture)|Pascal]]-based (2016)
*** [[Volta (microarchitecture)|Volta]]-based (2017)
** Intel GPUs since [[Intel HD and Iris Graphics#Sandy Bridge|Sandy Bridge]]
** [[Intel MIC]] (2012)
** [[Mali (GPU)|ARM Mali T600 Series]] (2012) and above
* Vector processors:
** [[NEC SX-Aurora TSUBASA]]
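As noted in the Support discussion above, C99 exposes the operation both through contraction and through the <code>fma()</code> library function. The following minimal sketch contrasts the two; the constants are an illustrative choice whose low product bits would otherwise round away, and the pragma takes effect only on compilers that implement it:

<syntaxhighlight lang="c">
#include <math.h>
#include <stdio.h>

/* Forbid implicit contraction of a*b + c into an FMA (C99 pragma;
   GCC does not implement it and relies on -ffp-contract instead). */
#pragma STDC FP_CONTRACT OFF

int main(void)
{
    double a = 1.0 + 0x1p-30;
    double b = a;
    double c = -(a * a);             /* the rounded product, negated */

    printf("%.17g\n", a * b + c);    /* two roundings: prints 0 */
    printf("%.17g\n", fma(a, b, c)); /* one rounding: prints 2^-60 */
    return 0;
}
</syntaxhighlight>

With contraction enabled instead (e.g., GCC's {{code|-ffp-contract=fast}} on an FMA-capable target), the first expression may also be compiled to a fused multiply–add and print the same nonzero value as the second.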