==Legacy==
Existing frequency-division multiplexing carrier systems worked well for connections between distant cities, but required expensive modulators, demodulators and filters for every voice channel. In the late 1950s, [[Bell Labs]] sought cheaper terminal equipment for connections within metropolitan areas. Pulse-code modulation allowed sharing a coder and decoder among several voice trunks, so this method was chosen for the T1 system introduced into local use in 1961. In later decades, the cost of digital electronics declined to the point that an individual [[codec]] per voice channel became commonplace, but by then the other advantages of digital transmission had become entrenched.

The T1 format carried 24 pulse-code modulated, time-division multiplexed speech signals, each encoded as a 64 kbit/s stream, leaving 8 kbit/s of [[Frame synchronization|framing information]] to facilitate synchronization and demultiplexing at the receiver. The T2 and T3 circuits carry multiple multiplexed T1 channels, resulting in transmission rates of 6.312 and 44.736 Mbit/s, respectively.<ref>{{FOLDOC|T-carrier+system}}</ref>

A T3 line comprises 28 T1 lines, each operating at a total signaling rate of 1.544 Mbit/s. It is possible to get a '''fractional T3''' line,<ref>{{cite web |url=https://community.cisco.com/t5/routing/fractional-t3/td-p/952360 |title=fractional T3 |date=29 May 2008}}</ref><ref>{{cite magazine |magazine=[[Network World]] |title=Fractional T-3 |date=Aug 16, 1993 |page=40}}</ref> meaning a T3 line with some of the 28 lines turned off, resulting in a slower transfer rate but typically at reduced cost.

Supposedly, the 1.544 Mbit/s rate was chosen because tests by [[AT&T Long Lines]] in [[Chicago]] were conducted underground.{{citation needed|date=March 2013}} The test site was typical of Bell System [[outside plant]] of the time in that, to accommodate [[loading coil]]s, [[Utility tunnel|cable vault]] manholes were physically {{convert|2000|m|ft|abbr=off|sp=us}} apart, which determined the repeater spacing. The optimum [[bit rate]] was chosen [[Empiricism|empirically]]: the capacity was increased until the failure rate was unacceptable, then reduced to leave a margin.

[[Companding]] allowed acceptable audio performance with only seven bits per PCM sample in this original T1/D1 system (see the sketch below). The later D3 and D4 channel banks had an extended frame format, allowing eight bits per sample, reduced to seven every sixth sample or frame when one bit was "robbed" for signaling the state of the channel. The standard does not allow an all-zero sample, which would produce a long string of binary zeros and cause the repeaters to lose bit synchronization. However, when carrying data (Switched 56) there could be long strings of zeros, so one bit per sample is set to "1" (jam bit 7), leaving 7 bits × 8,000 frames per second, or 56 kbit/s, for data.

A more detailed understanding of the development of the 1.544 Mbit/s rate and its division into channels is as follows. Given that the telephone system's nominal [[voiceband]] (including [[Guard band|guardband]]) is 4,000 [[Hertz|Hz]], the required digital sampling rate is 8,000 Hz (see [[Nyquist rate]]). Since each T1 frame contains one byte of voice data for each of the 24 channels, the system needs 8,000 frames per second to maintain those 24 simultaneous voice channels.
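As an illustration of the companding mentioned above, the following is a minimal sketch of an 8-bit μ-law encoder of the kind used by the later D3/D4-generation channel banks (the G.711 μ-law curve, in its common 16-bit linear-input form); it is illustrative Python, not any standard's reference implementation, and the original 7-bit D1 code differed in detail:

<syntaxhighlight lang="python">
BIAS = 0x84      # 132: shifts every input into a power-of-two segment
CLIP = 32_635    # largest magnitude the 8-bit code can represent

def linear_to_ulaw(sample: int) -> int:
    """Compress a 16-bit signed PCM sample to one 8-bit mu-law byte.

    Small signals get fine quantization steps and loud signals coarse
    ones, which is how companding squeezes acceptable voice quality
    into so few bits per sample.
    """
    sign = 0x80 if sample < 0 else 0x00
    magnitude = min(abs(sample), CLIP) + BIAS
    exponent = magnitude.bit_length() - 8            # segment number, 0..7
    mantissa = (magnitude >> (exponent + 3)) & 0x0F  # 4-bit step in segment
    return ~(sign | (exponent << 4) | mantissa) & 0xFF  # bytes sent inverted
</syntaxhighlight>

For example, a silent sample encodes as <code>linear_to_ulaw(0) == 0xFF</code>; inverting the output bytes keeps quiet lines rich in ones, which suits the repeaters' need for pulse density.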
Because each frame of a T1 is 193 bits in length (24 channels × 8 bits per channel + 1 framing bit = 193 bits), multiplying 8,000 frames per second by 193 bits yields a transfer rate of 1.544 Mbit/s (8,000 × 193 = 1,544,000); this arithmetic is collected in the first sketch below.

Initially, T1 used [[Alternate Mark Inversion]] (AMI) to reduce the frequency [[Bandwidth (signal processing)|bandwidth]] and eliminate the [[Direct current|DC]] component of the signal. Later, [[B8ZS]] became common practice. In AMI, each mark pulse has the opposite polarity of the previous one and each space is at a level of zero, resulting in a three-level signal that carries only binary data (see the second sketch below). Similar 1970s British 23-channel systems operating at 1.536 [[megabaud]] were equipped with [[ternary signal]] repeaters, in anticipation of using a 3B2T or [[4B3T]] code to increase the number of voice channels in the future. In the 1980s, however, those systems were simply replaced with European-standard ones. American T-carriers can only work in AMI or B8ZS mode.

The AMI or B8ZS signal allowed a simple error-rate measurement: the D bank in the central office could detect a pulse with the wrong polarity, a "[[Bipolar violation|bipolar violation]]", and sound an alarm. Later systems could count the number of violations and reframes, measure signal quality in other ways, and support a more sophisticated [[alarm indication signal]] system.

The decision to use a 193-bit frame was made in 1958. To allow the identification of information bits within a [[Framing (telecommunication)|frame]], two alternatives were considered: assign (a) just one extra bit per frame, or (b) eight extra bits per frame. The 8-bit choice is cleaner, resulting in a 200-bit frame of twenty-five 8-bit '''[[Time-division multiple access|channels]]''', of which 24 carry traffic and one is available for operations, administration, and maintenance ([[OA&M]]). AT&T chose the single bit per frame not to reduce the required bit rate (1.544 vs. 1.6 Mbit/s), but because AT&T Marketing worried that "if 8 bits were chosen for OA&M function, someone would then try to sell this as a voice channel and you wind up with nothing."{{citation needed|date=July 2015}}

Soon after the commercial success of T1 in 1962, the T1 engineering team realized the mistake of having only one bit to serve the increasing demand for [[Extended Super Frame|housekeeping]] functions. They petitioned AT&T management to change to 8-bit framing, but this was flatly turned down because it would make installed systems obsolete. With this hindsight, some ten years later, [[European Conference of Postal and Telecommunications Administrations|CEPT]] chose eight bits for framing the European [[E-carrier|E1]], although, as feared, the extra channel is sometimes appropriated for voice or data.
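The rate arithmetic above can be collected into a short sketch; this is illustrative Python using only the figures quoted in this section, with variable names of our own choosing:

<syntaxhighlight lang="python">
CHANNELS = 24              # voice channels per T1 frame
BITS_PER_SAMPLE = 8        # one PCM byte per channel per frame
FRAMING_BITS = 1           # the single framing bit AT&T chose in 1958
FRAMES_PER_SECOND = 8_000  # Nyquist sampling rate for a 4,000 Hz voiceband

frame_bits = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS
assert frame_bits == 193

t1_rate = frame_bits * FRAMES_PER_SECOND
assert t1_rate == 1_544_000                          # 1.544 Mbit/s

per_channel = BITS_PER_SAMPLE * FRAMES_PER_SECOND    # 64 kbit/s per channel
framing = FRAMING_BITS * FRAMES_PER_SECOND           # 8 kbit/s of framing

# Switched 56 data: jam bit 7 leaves seven usable bits per sample.
switched_56 = 7 * FRAMES_PER_SECOND                  # 56 kbit/s

# The rejected 1958 alternative: eight framing/OA&M bits per frame.
alt_rate = (CHANNELS * BITS_PER_SAMPLE + 8) * FRAMES_PER_SECOND
assert alt_rate == 1_600_000                         # 1.6 Mbit/s
</syntaxhighlight>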
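The line coding and the violation-based error measurement can likewise be sketched. The following is illustrative Python, not drawn from any standard's reference code; it assumes the standard B8ZS substitution pattern 000VB0VB, where V deliberately repeats the previous pulse's polarity and B is a normal alternating mark:

<syntaxhighlight lang="python">
def ami_encode(bits, b8zs=False):
    """Encode a bit sequence as AMI pulses: a space is 0, and each mark
    takes the opposite polarity of the previous mark, giving the
    three-level, DC-free signal described above.

    With b8zs=True, every run of eight zeros is replaced by 000VB0VB
    (V = deliberate bipolar violation, B = normal mark), so the line
    never goes quiet long enough for repeaters to lose bit sync.
    """
    out, last, zeros = [], -1, 0   # last = -1 makes the first mark +1
    for bit in bits:
        if bit:
            last = -last
            out.append(last)
            zeros = 0
        else:
            out.append(0)
            zeros += 1
            if b8zs and zeros == 8:
                # 000VB0VB relative to the polarity of the last real
                # mark; the final B restores the alternation state.
                out[-8:] = [0, 0, 0, last, -last, 0, -last, last]
                zeros = 0
    return out

def bipolar_violations(pulses):
    """Count marks that repeat the previous mark's polarity. On a plain
    AMI line these indicate transmission errors -- the simple error-rate
    measurement behind the D bank's alarm."""
    count, last = 0, 0
    for p in pulses:
        if p:
            if p == last:
                count += 1
            last = p
    return count
</syntaxhighlight>

For example, <code>ami_encode([1, 1, 0, 1])</code> yields <code>[1, -1, 0, 1]</code>, and a received sequence that fails to alternate raises the violation count a monitor would alarm on.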