Network interface controller
== {{Anchor|RSS|XPS|MULTIQUEUE|NPAR|FLOW-DIRECTOR}}Performance and advanced functionality ==

[[File:ForeRunnerLE 25 ATM Network Interface (1).jpg|thumb|right|An [[Asynchronous Transfer Mode]] (ATM) network interface]]
[[File:An Intel 82574L Gigabit Ethernet NIC, PCI Express x1 card.jpg|thumb|right|[[Intel]] 82574L [[Gigabit Ethernet]] NIC, a PCI Express ×1 card, which provides two hardware receive queues<ref>{{cite web | url = http://www.intel.com/content/dam/doc/datasheet/82574l-gbe-controller-datasheet.pdf | title = Intel 82574 Gigabit Ethernet Controller Family Datasheet | date = June 2014 | access-date = November 16, 2014 | publisher = [[Intel]] | page = 1 }}</ref>]]

'''Multiqueue NICs''' provide multiple transmit and receive [[Queue (abstract data type)|queues]], allowing packets received by the NIC to be assigned to one of its receive queues. The NIC may distribute incoming traffic between the receive queues using a [[hash function]]. Each receive queue is assigned to a separate [[interrupt]]; by routing each of those interrupts to different [[CPU]]s or [[Multi-core processor|CPU cores]], processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed among them, improving performance.<ref name="linux-net-scaling">{{cite web | url = https://www.kernel.org/doc/Documentation/networking/scaling.txt | title = Linux kernel documentation: Documentation/networking/scaling.txt | date = May 9, 2014 | access-date = November 16, 2014 | author1 = Tom Herbert | author2 = Willem de Bruijn | publisher = [[kernel.org]] }}</ref><ref>{{cite web | url = http://www.mouser.com/pdfdocs/i210brief.pdf | title = Intel Ethernet Controller i210 Family Product Brief | year = 2012 | access-date = November 16, 2014 | publisher = [[Intel]] }}</ref> The hardware-based distribution of the interrupts, described above, is referred to as '''receive-side scaling''' (RSS).<ref name="intel-grantley">{{cite web | url = http://www.intel.com/content/dam/technology-provider/secure/us/en/documents/product-marketing-information/tst-grantley-launch-presentation-2014.pdf | title = Intel Look Inside: Intel Ethernet | work = Xeon E5 v3 (Grantley) Launch | date = November 27, 2014 | access-date = March 26, 2015 | publisher = [[Intel]] | archive-url = https://web.archive.org/web/20150326095816/http://www.intel.com/content/dam/technology-provider/secure/us/en/documents/product-marketing-information/tst-grantley-launch-presentation-2014.pdf | archive-date = March 26, 2015 }}</ref>{{rp|82}} Purely software implementations also exist, such as [[receive packet steering]] (RPS), [[receive flow steering]] (RFS),<ref name="linux-net-scaling" /> and [[Intel]] ''Flow Director''.<ref name="intel-grantley" />{{rp|98,99}}<ref>{{cite web | url = https://www.kernel.org/doc/Documentation/networking/ixgbe.txt | title = Linux kernel documentation: Documentation/networking/ixgbe.txt | date = December 15, 2014 | access-date = March 26, 2015 | publisher = [[kernel.org]] }}</ref><ref>{{cite web | url = http://www.intel.com/content/www/us/en/ethernet-controllers/ethernet-flow-director-video.html | title = Intel Ethernet Flow Director | date = February 16, 2015 | access-date = March 26, 2015 | publisher = [[Intel]] }}</ref><ref>{{cite web | url = http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/intel-ethernet-flow-director.pdf | title = Introduction to Intel Ethernet Flow Director and Memcached Performance | date = October 14, 2014 | access-date = October 11, 2015 | publisher = [[Intel]] }}</ref> Further performance improvements can be achieved by routing the interrupt requests to the CPUs or cores executing the applications that are the ultimate destinations for the [[network packet]]s that generated the interrupts.
This technique improves [[locality of reference]] and results in higher overall performance, reduced latency and better hardware utilization, owing to higher [[CPU cache]] utilization and fewer required [[context switch]]es.

With multiqueue NICs, additional performance improvements can be achieved by distributing outgoing traffic among the different transmit queues. Assigning different transmit queues to different CPUs or CPU cores avoids contention inside the operating system; this approach is usually referred to as '''transmit packet steering''' (XPS).<ref name="linux-net-scaling" />

Some products feature '''NIC partitioning''' ('''NPAR''', also known as '''port partitioning'''), which uses [[SR-IOV]] virtualization to divide a single 10 Gigabit Ethernet NIC into multiple discrete virtual NICs with dedicated bandwidth; these are presented to the firmware and operating system as separate [[PCI device function]]s.<ref name="Dell">{{cite web | url = http://www.dell.com/downloads/global/products/pedge/en/Dell-Broadcom-NPAR-White-Paper.pdf | title = Enhancing Scalability Through Network Interface Card Partitioning | date = April 2011 | access-date = May 12, 2014 | publisher = [[Dell]] }}</ref><ref>{{cite web | url = http://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/10-gbe-ethernet-flexible-port-partitioning-brief.pdf | title = An Introduction to Intel Flexible Port Partitioning Using SR-IOV Technology | date = September 2011 | access-date = September 24, 2015 | author1 = Patrick Kutch | author2 = Brian Johnson | author3 = Greg Rose | publisher = [[Intel]] }}</ref>

Some NICs provide a [[TCP offload engine]] to offload processing of the entire [[TCP/IP]] stack to the network controller.
It is primarily used with high-speed network interfaces, such as Gigabit Ethernet and 10 Gigabit Ethernet, for which the processing overhead of the network stack becomes significant.<ref>{{cite web | url = https://lwn.net/Articles/243949/ | title = Large receive offload | date = August 1, 2007 | access-date = May 2, 2015 | author = Jonathan Corbet | publisher = [[LWN.net]] }}</ref>

{{Anchor|SOLARFLARE|OPENONLOAD|USER-LEVEL-NETWORKING}} Some NICs offer integrated [[field-programmable gate array]]s (FPGAs) for user-programmable processing of network traffic before it reaches the host computer, allowing for significantly reduced [[Latency (engineering)|latencies]] in time-sensitive workloads.<ref>{{cite web|title=High Performance Solutions for Cyber Security|url=http://newwavedv.com/markets/defense/cyber-security/|website=New Wave Design & Verification|publisher=New Wave DV}}</ref> Moreover, some NICs offer complete low-latency [[TCP/IP stack]]s running on integrated FPGAs, in combination with [[userspace]] libraries that intercept networking operations usually performed by the [[operating system kernel]]; Solarflare's open-source '''OpenOnload''' network stack, which runs on [[Linux]], is an example.
This kind of functionality is usually referred to as '''user-level networking'''.<ref>{{cite web | url = https://www.theregister.co.uk/2012/02/08/solarflare_application_onload_engine/ | title = Solarflare turns network adapters into servers: When a CPU just isn't fast enough | date = 2012-02-08 | access-date = 2014-05-08 | author = Timothy Prickett Morgan | website = [[The Register]] }}</ref><ref>{{cite web | url = http://www.openonload.org/ | title = OpenOnload | date = 2013-12-03 | access-date = 2014-05-08 | website = openonload.org }}</ref><ref>{{cite web | url = http://www.openonload.org/openonload-google-talk.pdf | title = OpenOnload: A user-level network stack | date = 2008-03-21 | access-date = 2014-05-08 | author1 = Steve Pope | author2 = David Riddoch | website = openonload.org }}</ref>
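The RSS, RPS and XPS mechanisms described above are configured on Linux through procfs and sysfs knobs documented in the kernel's <code>scaling.txt</code> cited in this section. The sketch below assumes a hypothetical two-queue NIC named <code>eth0</code> with IRQ numbers 30 and 31; real interface names, IRQ numbers and CPU masks vary per system, and the commands require root privileges.

```shell
# Show how many hardware queues the NIC exposes.
ethtool -l eth0

# RSS: pin each receive queue's interrupt to its own core so interrupt
# processing is spread across CPUs. The IRQ numbers (30, 31) are
# hypothetical; the real ones appear in /proc/interrupts.
echo 1 > /proc/irq/30/smp_affinity    # rx queue 0 -> CPU 0 (mask 0x1)
echo 2 > /proc/irq/31/smp_affinity    # rx queue 1 -> CPU 1 (mask 0x2)

# RPS (software equivalent): spread packets from rx queue 0 over
# CPUs 0-3 using a hexadecimal CPU bitmask (f = binary 1111).
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

# XPS: dedicate each transmit queue to one CPU, avoiding contention
# between cores transmitting on the same queue.
echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus
```

The masks are per-queue, so a queue can also be shared by several CPUs by setting multiple bits in its mask.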
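As an illustration of user-level networking, OpenOnload interposes on an application's BSD-socket calls without requiring source changes; the application name below is hypothetical.

```shell
# With OpenOnload installed on a system with a supported Solarflare NIC,
# an unmodified sockets application is run under the user-level stack by
# prefixing it with the onload launcher ("my_server" is hypothetical):
onload ./my_server
# Socket operations made by my_server are then intercepted in userspace
# and handled by the accelerated stack, bypassing the kernel's
# network stack for supported traffic.
```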