Character encoding
== History ==
The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and ciphering systems, such as [[Bacon's cipher]], [[Braille]], [[international maritime signal flags]], and the 4-digit encoding of Chinese characters for a [[Chinese telegraph code]] ([[Hans Schjellerup]], 1869). With the adoption of electrical and electro-mechanical techniques, these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, [[Morse code]], introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a [[telegraph key]] and decipherable by ear, and it persists in [[amateur radio]] and [[Non-directional beacon|aeronautical]] use. Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. [[Unicode]]).<ref>{{cite web | url=http://blog.smartbear.com/development/ancient-computer-character-code-tables-and-why-theyre-still-relevant/ | title=Ancient Computer Character Code Tables – and Why They're Still Relevant | publisher=Smartbear | date=17 April 2014 | access-date=29 April 2014 | author=Tom Henderson |url-status=dead |archive-url=https://web.archive.org/web/20140430000312/http://blog.smartbear.com/development/ancient-computer-character-code-tables-and-why-theyre-still-relevant/ |archive-date= Apr 30, 2014 }}</ref> Common examples of character encoding systems include Morse code, the [[Baudot code]], the [[American Standard Code for Information Interchange]] (ASCII) and Unicode.
Unicode, a well-defined and extensible encoding system, has replaced most earlier character encodings, but the path of code development to the present is fairly well known. The Baudot code, a five-[[bit]] encoding, was created by [[Émile Baudot]] in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name ''baudot'' has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often improved by equipment manufacturers, sometimes creating compatibility issues. [[File:Blue-punch-card-front-horiz.png|thumb|Hollerith 80-column punch card with EBCDIC character set]] [[Herman Hollerith]] invented punch card data encoding in the late 19th century to analyze census data. Initially, each hole position represented a different data element, but later, numeric information was encoded by numbering the lower rows 0 to 9, with a punch in a column representing its row number. Later, alphabetic data was encoded by allowing more than one punch per column. Electromechanical [[tabulating machine]]s represented data internally by the timing of pulses relative to the motion of the cards through the machine. When [[IBM]] went to electronic processing, starting with the [[IBM 603]] Electronic Multiplier, it used a variety of binary encoding schemes that were tied to the punch card code.
IBM used several [[BCD (character encoding)|binary-coded decimal]] (BCD) six-bit character encoding schemes, starting as early as 1953 in its [[IBM 702|702]]<ref>{{cite web|url=http://www.bitsavers.org/pdf/ibm/702/22-6173-1_702prelim_Feb56.pdf |via=bitsavers.org |archive-url=https://ghostarchive.org/archive/20221009/http://www.bitsavers.org/pdf/ibm/702/22-6173-1_702prelim_Feb56.pdf |archive-date=2022-10-09 |url-status=live|title=IBM Electronic Data-Processing Machines Type 702 Preliminary Manual of Information|date=1954|id=22-6173-1|page=80}}</ref> and [[IBM 704|704]] computers, and in its later [[IBM 700/7000 series|7000 Series]] and [[IBM 1400 series|1400 series]], as well as in associated peripherals. Since the punched card code then in use was limited to digits, upper-case English letters and a few special characters, six bits were sufficient. These BCD encodings extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping them easily to punch-card encoding which was already in widespread use. IBM's codes were used primarily with IBM equipment. Other computer vendors of the era had their own character codes, often six-bit, such as the encoding used by the {{nobreak|[[UNIVAC I]]}}.<ref>{{cite web|url=http://www.bitsavers.org/pdf/univac/univac1/UnivacI_RefCard.pdf|title=UNIVAC System|type=reference card}}</ref> They usually had the ability to read tapes produced on IBM equipment. IBM's BCD encodings were the precursors of their [[Extended Binary-Coded Decimal Interchange Code]] (usually abbreviated as EBCDIC), an eight-bit encoding scheme developed in 1963 for the [[IBM System/360]] that featured a larger character set, including lower-case letters. In 1959 the U.S. military defined its [[Fieldata]] code, a six- or seven-bit code introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g.
letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), which addressed most of the shortcomings of Fieldata, using a simpler seven-bit code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some "control code" issues) ASCII67 was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European [[ECMA-6]] standard.<ref>{{cite web | url=https://www.sr-ix.com/Archive/CharCodeHist/index.html | title=An annotated history of some character codes | date= 20 April 2016 |website=Sensitive Research | access-date=1 November 2018 | author=Tom Jennings}}</ref> Eight-bit [[extended ASCII]] encodings, such as various vendor extensions and the [[ISO/IEC 8859]] series, supported all ASCII characters as well as additional non-ASCII characters. While trying to develop universally interchangeable character encodings, researchers in the 1980s faced the dilemma that, on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). 
In 1985, the average personal computer user's [[hard disk drive]] could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much higher if purchased separately at retail),<ref name="Strelho">{{cite news |last1=Strelho |first1=Kevin |title=IBM Drives Hard Disks to New Standards |url=https://books.google.com/books?id=zC4EAAAAMBAJ&pg=PA29 |access-date=November 10, 2020 |work=InfoWorld |publisher=Popular Computing Inc. |date=April 15, 1985 |pages=29–33}}</ref> so it was very important at the time to make every bit count. The compromise solution that was eventually found and {{vague|text=developed into Unicode|reason=Became Unicode or was added to Unicode? See talk page.|date=April 2023}} was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called [[code point]]s. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points beyond the range of a single code unit, such as 256 and above for eight-bit units, the solution was to implement [[variable-width encoding|variable-length encodings]] in which an escape sequence would signal that subsequent bits should be parsed as a higher code point.
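The code point/code unit split described above can be sketched with UTF-8's published encoding rules, where a lead byte's high bits signal how many eight-bit code units follow. This is an illustrative sketch, not part of the article; the function name `encode_utf8` is chosen here for clarity.

```python
def encode_utf8(code_point: int) -> bytes:
    """Encode one Unicode code point as a variable-length sequence
    of 8-bit code units, following UTF-8's rules."""
    if code_point < 0x80:
        # ASCII range: a single code unit, 0xxxxxxx
        return bytes([code_point])
    elif code_point < 0x800:
        # Two code units: 110xxxxx 10xxxxxx
        return bytes([0xC0 | (code_point >> 6),
                      0x80 | (code_point & 0x3F)])
    elif code_point < 0x10000:
        # Three code units: 1110xxxx 10xxxxxx 10xxxxxx
        return bytes([0xE0 | (code_point >> 12),
                      0x80 | ((code_point >> 6) & 0x3F),
                      0x80 | (code_point & 0x3F)])
    else:
        # Four code units: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
        return bytes([0xF0 | (code_point >> 18),
                      0x80 | ((code_point >> 12) & 0x3F),
                      0x80 | ((code_point >> 6) & 0x3F),
                      0x80 | (code_point & 0x3F)])

print(encode_utf8(0x41).hex())    # U+0041 'A' -> "41" (one byte, as in ASCII)
print(encode_utf8(0xE9).hex())    # U+00E9 'é' -> "c3a9" (two bytes)
print(encode_utf8(0x20AC).hex())  # U+20AC '€' -> "e282ac" (three bytes)
```

Note how code points below 128 keep their single-byte ASCII form, so the "colossal waste" for Latin-alphabet users is avoided, while higher code points pay extra bytes only when they actually occur.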