=== Computers ===
[[File:Opampsumming2.svg|right|frame|Addition with an op-amp. See [[Operational amplifier applications#Summing amplifier|Summing amplifier]] for details.]]
[[Analog computer]]s work directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with an [[arithmetic mean|averaging]] [[lever]]. If the addends are the rotation speeds of two [[axle|shafts]], they can be added with a [[differential (mechanics)|differential]]. A hydraulic adder can add the [[pressure]]s in two chambers by exploiting [[Newton's laws of motion|Newton's second law]] to balance forces on an assembly of [[piston]]s. The most common situation for a general-purpose analog computer is to add two [[voltage]]s (referenced to [[ground (electricity)|ground]]); this can be accomplished roughly with a [[resistor]] [[Electronic circuit|network]], but a better design exploits an [[operational amplifier]].{{sfnp|Truitt|Rogers|1960|pp=1;44–49, 2;77–78}}

Addition is also fundamental to the operation of [[computer|digital computers]], where the efficiency of addition, in particular the [[carry (arithmetic)|carry]] mechanism, is an important limitation to overall performance.{{sfnp|Gschwind|McCluskey|1975|p=[http://books.google.com/books?id=VLmrCAAAQBAJ&pg=PA233 233]}}

[[File:BabbageDifferenceEngine.jpg|left|thumb|Part of Charles Babbage's [[Difference Engine]], including the addition and carry mechanisms]]
The [[abacus]], also called a counting frame, is a calculating tool that was in use centuries before the adoption of the written modern numeral system and is still widely used by merchants, traders and clerks in [[Asia]], [[Africa]], and elsewhere; it dates back to at least 2700–2300 BC, when it was used in [[Sumer]].<ref>{{cite book |last=Ifrah |first=Georges |year=2001 |title=The Universal History of Computing: From the Abacus to the Quantum Computer |publisher=John Wiley & Sons, Inc. |location=New York |isbn=978-0-471-39671-0 |url=https://archive.org/details/unset0000unse_w3q2 }} p. 11</ref>

[[Blaise Pascal]] invented the mechanical calculator in 1642;<ref>{{harvtxt|Marguin|1994}}, p. 48. Quoting {{harvtxt|Taton|1963}}.</ref> it was the first operational [[adding machine]], and it made use of a gravity-assisted carry mechanism. It was the only operational mechanical calculator in the 17th century<ref>See [[Pascal's calculator#Competing designs|Competing designs]] in the Pascal's calculator article.</ref> and the earliest automatic digital computer. [[Pascal's calculator]] was limited by its carry mechanism, which forced its wheels to turn only one way, so that it could only add. To subtract, the operator had to use the [[method of complements]], which required as many steps as an addition. [[Giovanni Poleni]] followed Pascal, building the second functional mechanical calculator in 1709, a calculating clock made of wood that, once set up, could multiply two numbers automatically.

[[File:Full-adder.svg|thumb|"[[Adder (electronics)|Full adder]]" logic circuit that adds two binary digits, ''A'' and ''B'', along with a carry input ''C<sub>in</sub>'', producing the sum bit, ''S'', and a carry output, ''C<sub>out</sub>''.]]
[[Adder (electronics)|Adders]] execute integer addition in electronic digital computers, usually using [[binary arithmetic]]. The simplest architecture is the ripple-carry adder, which follows the standard multi-digit algorithm.
One slight improvement is the [[Carry bypass adder|carry skip]] design, again following human intuition; one does not perform all the carries in computing {{nowrap|999 + 1}}, but one bypasses the group of 9s and skips to the answer.{{sfnp|Flynn|Oberman|2001|pp=2, 8}} In practice, computational addition may be achieved via [[Exclusive or|XOR]] and [[Bitwise operation#AND|AND]] bitwise logical operations in conjunction with bitshift operations, as shown in the [[pseudocode]] below. Both XOR and AND gates are straightforward to realize in digital logic, allowing the realization of [[Adder (electronics)|full adder]] circuits, which in turn may be combined into more complex logical operations.

In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all [[floating-point arithmetic|floating-point operations]] as well as such basic tasks as [[memory address|address]] generation during [[memory (computers)|memory]] access and fetching [[instruction (computer science)|instructions]] during [[control flow|branching]]. To increase speed, modern designs calculate digits in [[parallel algorithm|parallel]]; these schemes go by such names as carry select, [[carry lookahead adder|carry lookahead]], and the [[Ling adder|Ling]] pseudocarry. Many implementations are, in fact, hybrids of these last three designs.{{sfnmp | 1a1 = Flynn | 1a2 = Oberman | 1y = 2001 | 1pp = 1–9 | 2a1 = Liu | 2a2 = Tan | 2a3 = Song | 2a4 = Chen | 2y = 2010 | 2p = 194 }}

Unlike addition on paper, addition on a computer often changes the addends. Both addends are destroyed on the ancient [[abacus]] and adding board, leaving only the sum.
The influence of the abacus on mathematical thinking was strong enough that early [[Latin]] texts often claimed that in the process of adding "a number to a number", both numbers vanish.{{sfnp|Karpinski|1925|pp=102–103}} In modern times, the ADD instruction of a [[microprocessor]] often replaces the augend with the sum but preserves the addend.<ref>The identity of the augend and addend varies with architecture. For ADD in [[x86]] see {{harvtxt|Horowitz|Hill|2001}}, p. 679. For ADD in [[68k]] see {{harvtxt|Horowitz|Hill|2001}}, p. 767.</ref> In a [[high-level programming language]], evaluating <math> a + b </math> does not change either <math> a </math> or <math> b </math>; if the goal is to replace <math> a </math> with the sum, this must be explicitly requested, typically with the statement <math> a = a + b </math>. Some languages like [[C (programming language)|C]] or [[C++]] allow this to be abbreviated as {{nowrap|1=<code>''a'' += ''b''</code>}}.

<syntaxhighlight lang="c">
// Iterative algorithm
int add(int x, int y) {
    int carry = 0;
    while (y != 0) {
        carry = x & y;   // bitwise AND: positions where both operands have a 1 bit
        x = x ^ y;       // bitwise XOR: partial sum without the carries
        y = carry << 1;  // shift the carries into the next higher position
    }
    return x;
}

// Recursive algorithm
int add(int x, int y) {
    return (y == 0) ? x : add(x ^ y, (x & y) << 1);
}
</syntaxhighlight>

On a computer, if the result of an addition is too large to store, an [[arithmetic overflow]] occurs, resulting in an incorrect answer. Unanticipated arithmetic overflow is a fairly common cause of [[software bug|program errors]].
Such overflow bugs may be hard to discover and diagnose because they may manifest themselves only for very large input data sets, which are less likely to be used in validation tests.<ref>Joshua Bloch, [http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html "Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken"] {{Webarchive|url=https://web.archive.org/web/20160401140544/http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html |date=2016-04-01 }}. Official Google Research Blog, June 2, 2006.</ref> The [[Year 2000 problem]] was a series of bugs where overflow errors occurred due to the use of a 2-digit format for years.{{sfnp|Neumann|1987}}

Computers have another way of representing numbers, called ''[[floating-point arithmetic]]'', which is similar to the scientific notation described above and which reduces the overflow problem. Each floating-point number has two parts, an exponent and a mantissa. To add two floating-point numbers, the exponents must match, which typically means shifting the mantissa of the smaller number. If the disparity between the larger and smaller numbers is too great, a loss of precision may result. If many smaller numbers are to be added to a large number, it is best to add the smaller numbers together first and then add the total to the larger number, rather than adding small numbers to the large number one at a time. This makes floating-point addition non-associative in general. See [[floating-point arithmetic#Accuracy problems]].