==Purpose==
Originally, [[Transmission Control Protocol|TCP]] was designed for unreliable, low-speed networks (such as early [[dial-up]] [[modem]]s), but with the growth of the Internet in terms of [[Internet backbone|backbone]] transmission speeds (using [[Optical Carrier]], [[Gigabit Ethernet]] and [[10 Gigabit Ethernet]] links) and faster, more reliable access mechanisms (such as [[digital subscriber line|DSL]] and [[cable modem]]s), it is frequently used in [[data center]]s and desktop [[personal computer|PC]] environments at speeds of over 1 gigabit per second. At these speeds the TCP software implementations on host systems require significant computing power. In the early 2000s, full-duplex gigabit TCP communication could consume more than 80% of a 2.4 GHz [[Pentium 4]] processor,<ref name="Foong"/> leaving little or no processing capacity for the applications running on the system.

TCP is a [[connection-oriented protocol]], which adds complexity and processing overhead. These aspects include:
* [[Transmission Control Protocol#Connection establishment|Connection establishment]] using the "3-way handshake" (SYNchronize; SYNchronize-ACKnowledge; ACKnowledge).
* Acknowledgment of packets as they are received by the far end, adding to the message flow between the endpoints and thus the protocol load.
* [[Checksum]] and sequence number calculations, which are a further burden on a general-purpose CPU (a code sketch of the checksum calculation appears below).
* [[Sliding window]] calculations for packet acknowledgement and [[congestion control]].
* [[Transmission Control Protocol|Connection termination]].

Moving some or all of these functions to dedicated hardware, a TCP offload engine, frees the system's main [[CPU]] for other tasks.

===Freed-up CPU cycles===
A generally accepted rule of thumb is that 1 hertz of CPU processing is required to send or receive {{val|1|ul=bit/s}} of TCP/IP.<ref name="Foong">{{cite conference |author1=Annie P. Foong |author2=Thomas R. Huff |author3=Herbert H. Hum |author4=Jaidev P. Patwardhan |author5=Greg J. Regnier |date=2003-04-02 |title=TCP performance re-visited |conference=Proceedings of the International Symposium on Performance Analysis of Systems and Software (ISPASS) |location=Austin, Texas |url=http://www.nanogrids.org/jaidev/papers/ispass03.pdf}}</ref> For example, 5 Gbit/s (625 MB/s) of network traffic requires 5 GHz of CPU processing, which implies that two entire cores of a 2.5 GHz [[multi-core processor]] are needed just to handle the TCP/IP processing associated with 5 Gbit/s of TCP/IP traffic. Since Ethernet (10GE in this example) is bidirectional, it is possible to send and receive 10 Gbit/s simultaneously (for an aggregate throughput of 20 Gbit/s); using the 1 Hz per bit/s rule, this equates to eight 2.5 GHz cores.

Many of the CPU cycles used for TCP/IP processing are ''freed up'' by TCP/IP offload and may be used by the CPU (usually a [[Server (computing)|server]] CPU) to perform other tasks such as file system processing (in a file server) or indexing (in a backup media server). In other words, a server with TCP/IP offload can do more '''server''' work than a server without TCP/IP offload NICs.
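For illustration, the rule-of-thumb sizing above can be restated as a minimal C sketch. The 1 Hz per bit/s constant is the cited rule of thumb, not a measured value, and the function name is illustrative only; real per-bit cost varies with CPU, NIC and protocol stack.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Rule of thumb from the text: ~1 Hz of CPU per 1 bit/s of TCP/IP.
   Illustrative only; actual cost depends on CPU, NIC and stack. */
#define HZ_PER_BPS 1.0

/* Cores consumed processing `bps` bits/s of TCP/IP on cores of `core_hz` Hz. */
static double cores_needed(double bps, double core_hz)
{
    return bps * HZ_PER_BPS / core_hz;
}

int main(void)
{
    double core_hz = 2.5e9;   /* 2.5 GHz core, as in the example above */
    printf("5 Gbit/s one way: %.1f cores\n", cores_needed(5e9, core_hz));
    printf("10GE both ways:   %.1f cores\n", cores_needed(20e9, core_hz));
    return 0;                 /* prints 2.0 and 8.0 */
}
</syntaxhighlight>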
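Among the per-packet costs itemized under ''Purpose'' above, the checksum is the easiest to show in code. The following is a minimal sketch of the standard Internet one's-complement checksum ([https://datatracker.ietf.org/doc/html/rfc1071 RFC 1071]) that TCP applies over its segments; a real stack also covers the TCP pseudo-header and is heavily optimized.

<syntaxhighlight lang="c">
#include <stddef.h>
#include <stdint.h>

/* Internet checksum (RFC 1071): one's-complement sum of 16-bit words.
   Minimal sketch; omits the TCP pseudo-header and alignment handling. */
uint16_t internet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {                    /* sum 16-bit big-endian words */
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len  -= 2;
    }
    if (len == 1)                        /* odd trailing byte, zero-padded */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)                    /* fold carries back into 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;               /* one's complement of the sum */
}
</syntaxhighlight>

Touching every payload byte in a loop like this is precisely the kind of per-byte work that a TOE moves off the host CPU.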
===Reduction of PCI traffic===
In addition to the protocol overhead that a TOE can address, it can also address some architectural issues that affect a large percentage of host-based (server and PC) endpoints.

Many older endpoint hosts are [[Peripheral Component Interconnect|PCI]] bus based; PCI provides a standard interface for adding [[peripherals]] such as network interfaces to [[Server (computing)|servers]] and PCs. PCI is inefficient for transferring small bursts of data from main memory across the PCI bus to the network interface ICs, but its efficiency improves as the data burst size increases. Within the TCP protocol, a large number of small packets are created (e.g. acknowledgements), and as these are typically generated on the host CPU and transmitted across the PCI bus and out the network physical interface, they reduce the host computer's I/O throughput. A TOE, located on the network interface on the other side of the PCI bus from the host CPU, can address this I/O efficiency issue: the data to be sent across the TCP connection is passed to the TOE across the PCI bus in large bursts, and none of the small TCP packets has to traverse the bus.
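The burst-size effect described above follows from the fixed per-transaction overhead of a bus transfer. The toy C model below makes this explicit; the setup and per-byte figures are invented placeholders, not measured PCI timings, but they show why one large burst beats many packet-sized transfers of the same payload.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Toy cost model for bus transfers: each transaction pays a fixed
   setup overhead plus a per-byte streaming cost. The numbers are
   illustrative placeholders, not real PCI timings. */
#define SETUP_NS    1000.0   /* assumed fixed cost per bus transaction */
#define NS_PER_BYTE    0.5   /* assumed streaming cost per byte        */

static double transfer_ns(double total_bytes, double burst_bytes)
{
    double transactions = total_bytes / burst_bytes;
    return transactions * SETUP_NS + total_bytes * NS_PER_BYTE;
}

int main(void)
{
    double payload = 1e6;    /* move 1 MB across the bus */

    /* Many packet-sized bursts vs. a few large TOE-style bursts: */
    printf("64 B bursts:  %.0f us\n", transfer_ns(payload, 64.0)    / 1e3);
    printf("64 KB bursts: %.0f us\n", transfer_ns(payload, 65536.0) / 1e3);
    return 0;
}
</syntaxhighlight>

Under this model the per-transaction setup cost dominates at packet-sized bursts (roughly 16,000 µs versus about 515 µs for 64 KB bursts); batching amortizes it, which is the saving the TOE arrangement exploits.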