Editing Floating-point arithmetic (section)
== History ==
{{see also|IEEE 754#History}}
[[File:Quevedo 1917.jpg|thumb|upright=0.7|[[Leonardo Torres Quevedo]], in 1914, published an analysis of floating point based on the [[analytical engine]].]]
In 1914, the Spanish engineer [[Leonardo Torres Quevedo]] published ''Essays on Automatics'',<ref>Torres Quevedo, Leonardo. [https://quickclick.es/rop/pdf/publico/1914/1914_tomoI_2043_01.pdf Automática: Complemento de la Teoría de las Máquinas, (pdf)], pp. 575–583, Revista de Obras Públicas, 19 November 1914.</ref> in which he designed a special-purpose electromechanical calculator based on [[Charles Babbage]]'s [[analytical engine]] and described a consistent way to store floating-point numbers. He stated that numbers would be stored in exponential format as ''n'' × 10<math>^m</math>, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, "''n'' will always be the same number of [[Numerical digit|digits]] (e.g. six), the first digit of ''n'' will be of order of tenths, the second of hundredths, etc., and one will write each quantity in the form: ''n''; ''m''." The format he proposed shows the need for a fixed-sized significand, as is presently used for floating-point data: it fixes the location of the decimal point in the significand so that each representation is unique, and it specifies a syntax for such numbers that could be entered through a [[typewriter]], as was the case of his [[Leonardo Torres y Quevedo#Analytical machines|Electromechanical Arithmometer]] in 1920.<ref>Ronald T. Kneusel. ''[https://books.google.com/books?id=eq4ZDgAAQBAJ&dq=leonardo+torres+quevedo++electromechanical+machine+essays&pg=PA84 Numbers and Computers],'' Springer, pp. 84–85, 2017. {{ISBN|978-3319505084}}</ref>{{Sfn|Randell|1982|pp=6, 11–13}}<ref>Randell, Brian. [https://dl.acm.org/doi/pdf/10.5555/1074100.1074334 Digital Computers, History of Origins, (pdf)], p.
545, Digital Computers: Origins, Encyclopedia of Computer Science, January 2003.</ref>

[[File:Konrad Zuse (1992).jpg|thumb|upright=0.7|right|[[Konrad Zuse]], architect of the [[Z3 (computer)|Z3]] computer, which uses a 22-bit binary floating-point representation]]
In 1938, [[Konrad Zuse]] of Berlin completed the [[Z1 (computer)|Z1]], the first binary, programmable [[mechanical computer]];<ref name="Rojas_1997"/> it uses a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand (including one implicit bit), and a sign bit.<ref name="Rojas_2014"/> The more reliable [[relay]]-based [[Z3 (computer)|Z3]], completed in 1941, has representations for both positive and negative infinities; in particular, it implements defined operations with infinity, such as <math>^1/_\infty = 0</math>, and it stops on undefined operations, such as <math>0 \times \infty</math>. Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that includes <math>\pm\infty</math> and NaN representations, anticipating features of the IEEE Standard by four decades.<ref name="Kahan_1997_JVNL"/> In contrast, [[John von Neumann|von Neumann]] recommended against floating-point numbers for the 1951 [[IAS machine]], arguing that fixed-point arithmetic is preferable.<ref name="Kahan_1997_JVNL"/>

The first ''commercial'' computer with floating-point hardware was Zuse's [[Z4 (computer)|Z4]] computer, designed in 1942–1945. In 1946, Bell Laboratories introduced the [[Model V]], which implemented [[decimal floating point|decimal floating-point numbers]].<ref name="Randell_1982_2"/> The [[Pilot ACE]] has binary floating-point arithmetic, and it became operational in 1950 at the [[National Physical Laboratory, UK|National Physical Laboratory]]. Thirty-three were later sold commercially as the [[English Electric DEUCE]].
The arithmetic is actually implemented in software, but with a one-megahertz clock rate, the speed of floating-point and fixed-point operations in this machine was initially faster than that of many competing computers. The mass-produced [[IBM 704]] followed in 1954; it introduced the use of a [[Exponent bias|biased exponent]]. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "[[scientific computation]]" (SC) capability (see also [[Extensions for Scientific Computation]] (XSC)). It was not until the launch of the Intel i486 in 1989 that ''general-purpose'' personal computers had floating-point capability in hardware as a standard feature.

The [[UNIVAC 1100/2200 series]], introduced in 1962, supported two floating-point representations:
* ''Single precision'': 36 bits, organized as a 1-bit sign, an 8-bit exponent, and a 27-bit significand.
* ''Double precision'': 72 bits, organized as a 1-bit sign, an 11-bit exponent, and a 60-bit significand.

The [[IBM 7094]], also introduced in 1962, supported single-precision and double-precision representations, but with no relation to the UNIVAC's representations. Indeed, in 1964, IBM introduced [[IBM hexadecimal floating-point|hexadecimal floating-point representations]] in its [[System/360]] mainframes; these same representations are still available for use in modern [[z/Architecture]] systems. In 1998, IBM implemented IEEE-compatible binary floating-point arithmetic in its mainframes; in 2005, IBM also added IEEE-compatible decimal floating-point arithmetic.

Initially, computers used many different representations for floating-point numbers.
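The biased-exponent idea that the IBM 704 introduced survives in modern hardware. As an illustrative sketch (my own, not drawn from the article's sources), the 11-bit biased exponent of an IEEE 754 double (bias 1023, rather than the 704's own format) can be extracted like this:

```python
import struct

def exponent_fields(x: float):
    # Reinterpret the float's 8 bytes as a 64-bit integer, then pull out
    # the 11-bit exponent field stored just below the sign bit.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    biased = (bits >> 52) & 0x7FF    # stored (biased) exponent
    unbiased = biased - 1023         # remove the bias of 2**10 - 1
    return biased, unbiased

print(exponent_fields(1.0))  # (1023, 0): 1.0 = 2**0
print(exponent_fields(8.0))  # (1026, 3): 8.0 = 2**3
```

Storing the exponent with a bias keeps the field non-negative, so floats with the same sign compare in the same order as their raw bit patterns.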
The lack of standardization at the mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in word sizes, in representations, and in the rounding behavior and general accuracy of operations. Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to the creation of the [[IEEE 754]] standard once the 32-bit (or 64-bit) [[Word (computer architecture)|word]] had become commonplace. This standard was significantly based on a proposal from Intel, which was designing the [[Intel 8087|i8087]] numerical coprocessor; Motorola, which was designing the [[68000]] around the same time, gave significant input as well.

[[File:William Kahan 2008.jpg|thumb|upright=0.7|right|[[William Kahan]], principal architect of the [[IEEE 754]] floating-point standard]]
In 1989, mathematician and computer scientist [[William Kahan]] was honored with the [[Turing Award]] for being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor, [[Harold S. Stone|Harold Stone]].<ref name="Severance_1998"/>

Among the x86 innovations are these:
* A precisely specified floating-point representation at the bit-string level, so that all compliant computers interpret bit patterns the same way. This makes it possible to accurately and efficiently transfer floating-point numbers from one computer to another (after accounting for [[endianness]]).
* A precisely specified behavior for the arithmetic operations: a result is required to be produced as if infinitely precise arithmetic were used to yield a value that is then rounded according to specific rules. This means that a compliant computer program would always produce the same result when given a particular input, thus mitigating the almost mystical reputation that floating-point computation had developed for its hitherto seemingly non-deterministic behavior.
* The ability of [[IEEE 754#Exception handling|exceptional conditions]] (overflow, [[Division by zero|divide by zero]], etc.) to propagate through a computation in a benign manner and then be handled by the software in a controlled fashion.
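The first two of these properties can be demonstrated with a short Python sketch of my own (using today's IEEE 754 doubles; this is an illustration, not material from the article's sources):

```python
import struct

# Bit-level interchange: because IEEE 754 fixes the encoding exactly,
# a double can be shipped between compliant systems as 8 raw bytes
# (after agreeing on endianness) and reinterpreted bit-for-bit.
x = 0.1
raw = struct.pack("<d", x)        # serialize as 8 little-endian bytes
y = struct.unpack("<d", raw)[0]   # "receive" and decode the same bytes
assert x == y                     # exact round trip, no loss

# Correct rounding: each operation behaves as if computed with infinite
# precision and then rounded once, so results are fully deterministic
# across compliant implementations -- including the famous:
assert 0.1 + 0.2 == 0.30000000000000004
```

The determinism is the point: the "error" in 0.1 + 0.2 is not noise but the uniquely defined correctly rounded result, identical on every conforming system.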