==Congestive collapse==
Congestive collapse (or congestion collapse) is the condition in which congestion prevents or limits useful communication. Congestion collapse generally occurs at choke points in the network, where incoming traffic exceeds outgoing bandwidth. Connection points between a [[local area network]] and a [[wide area network]] are common choke points. When a network is in this condition, it settles into a stable state where traffic demand is high but little useful throughput is available, during which [[packet delay]] and loss occur and [[quality of service]] is extremely poor.

Congestive collapse was identified as a possible problem by 1984.<ref>{{IETF RFC|896}}</ref> It was first observed on the early Internet in October 1986,<ref>{{cite book |title=TCP/IP Illustrated, Volume 1: The Protocols |author1=Fall, K.R. |author2=Stevens, W.R. |isbn=9780132808187 |url=https://books.google.com/books?id=a23OAn5i8R0C |date=2011 |edition=2 |publisher=Pearson Education |page=739}}</ref> when the [[NSFNET]] phase-I backbone dropped three orders of magnitude from its capacity of 32 kbit/s to 40 bit/s,<ref>{{citation |author1=Van Jacobson |author2=Michael J. Karels |date=November 1988 |title=Congestion Avoidance and Control |url=https://ee.lbl.gov/papers/congavoid.pdf |quote=In October of ’86, the Internet had the first of what became a series of ‘congestion collapses’. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps. We were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. In particular, we wondered if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was “yes”.}}</ref> which continued until end nodes started implementing [[Van Jacobson]] and [[Sally Floyd]]'s congestion control between 1987 and 1988.<ref>{{cite news |last1=Hafner |first1=Katie |title=Sally Floyd, Who Helped Things Run Smoothly Online, Dies at 69 |url=https://www.nytimes.com/2019/09/04/science/sally-floyd-dead.html |work=New York Times |date=4 September 2019 |access-date=5 September 2019 |ref=floyd69}}</ref>

When more [[Packet (information technology)|packets]] were sent than could be handled by intermediate routers, the intermediate routers discarded many packets, expecting the endpoints of the network to retransmit the information. However, early TCP implementations had poor retransmission behavior. When this packet loss occurred, the endpoints sent extra packets that repeated the information lost, doubling the incoming rate.
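The feedback loop described above, in which over-eager retransmission turns loss and delay into still more traffic, can be sketched with a toy simulation. This is an illustrative sketch only: the link capacity, retransmit timer, buffer size, and offered load below are arbitrary assumptions, not figures from the cited sources.

<syntaxhighlight lang="python">
# Toy model of congestion collapse from over-eager retransmission (illustrative only).
# A single bottleneck forwards CAPACITY packets per tick; senders offer NEW fresh
# packets per tick and blindly retransmit anything unacknowledged after RTO ticks,
# even if the original copy is merely queued. All numbers are assumptions.
from collections import deque

CAPACITY = 100   # packets the bottleneck link forwards per tick
NEW = 120        # fresh packets generated per tick (demand exceeds capacity)
RTO = 3          # retransmit timeout in ticks (too short once queueing delay grows)
BUFFER = 1000    # router queue limit in packets
TICKS = 60

queue = deque()   # router buffer: holds data ids awaiting forwarding
last_sent = {}    # data id -> tick of most recent (re)transmission
delivered = set() # data ids already received by the endpoint
next_id = 0

for tick in range(TICKS):
    # Senders: retransmit every unacknowledged item older than RTO, then add fresh data.
    arrivals = [d for d, t in last_sent.items()
                if d not in delivered and tick - t >= RTO]
    for _ in range(NEW):
        arrivals.append(next_id)
        next_id += 1
    for d in arrivals:
        last_sent[d] = tick
        if len(queue) < BUFFER:
            queue.append(d)   # queued at the router
        # else: dropped at the router; the sender will retransmit it later

    # Router: forward up to CAPACITY packets this tick.
    goodput = duplicates = 0
    for _ in range(min(CAPACITY, len(queue))):
        d = queue.popleft()
        if d in delivered:
            duplicates += 1   # wasted capacity: a copy of data already received
        else:
            delivered.add(d)
            goodput += 1
    if tick % 10 == 0:
        print(f"tick {tick:2d}: queue={len(queue):4d} "
              f"goodput={goodput:3d}/tick duplicates={duplicates:3d}/tick")
</syntaxhighlight>

In this sketch the link stays fully busy, but once queueing delay exceeds the fixed retransmit timer a growing share of the forwarded packets are redundant copies, so useful throughput falls well below the link's capacity while delay and loss remain high, which is the collapse behavior the section describes.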