== Applications ==
[[File:ASUS GTX-650 Ti TOP Cu-II PCI Express 3.0 x16 graphics card.jpg|thumb|[[Asus]] Nvidia GeForce GTX 650 Ti, a PCI Express 3.0 x16 graphics card]]
[[File:NVIDIA-GTX-1070-FoundersEdition-FL.jpg|thumb|The [[Nvidia]] GeForce GTX 1070, a PCI Express 3.0 x16 graphics card]]
[[File:An Intel 82574L Gigabit Ethernet NIC, PCI Express x1 card.jpg|thumb|[[Intel]] 82574L Gigabit Ethernet [[Network interface controller|NIC]], a PCI Express x1 card]]
[[File:SATA 6 Gbit-s controller, in form of a PCI Express card.jpg|thumb|A [[Marvell Technology|Marvell]]-based [[SATA 3.0]] controller, as a PCI Express x1 card]]
PCI Express operates in consumer, server, and industrial applications as a motherboard-level interconnect (linking motherboard-mounted peripherals), as a passive backplane interconnect, and as an [[expansion card]] interface for add-in boards.

In virtually all modern ({{as of|2012|lc=on}}) PCs, from consumer laptops and desktops to enterprise data servers, the PCIe bus serves as the primary motherboard-level interconnect, connecting the host system processor with both integrated peripherals (surface-mounted ICs) and add-on peripherals (expansion cards). In most of these systems, the PCIe bus co-exists with one or more legacy PCI buses, for backward compatibility with the large body of legacy PCI peripherals.

{{As of|2013}}, PCI Express has replaced [[Accelerated Graphics Port|AGP]] as the default interface for graphics cards on new systems. Almost all models of [[graphics card]]s released since 2010 by [[AMD Graphics|AMD]] (ATI) and [[Nvidia]] use PCI Express. Nvidia used the high-bandwidth data transfer of PCIe for its [[Scalable Link Interface]] (SLI) technology, which allowed multiple graphics cards of the same chipset and model number to run in tandem for increased performance.{{Citation needed |reason=Needs cables running across the top connecting the cards, unsure if this is something proprietary or PCIe 1x based.|date=September 2021}} This interface has since been discontinued. AMD has also developed a multi-GPU system based on PCIe called [[ATI CrossFire|CrossFire]].{{Citation needed |reason=Needs cables running across the top connecting the cards, unsure if this is something proprietary or PCIe 1x based.|date=September 2021}} AMD, Nvidia, and Intel have released motherboard chipsets that support as many as four PCIe x16 slots, allowing tri-GPU and quad-GPU card configurations.

=== External GPUs ===
Theoretically, external PCIe could give a notebook the graphics power of a desktop by connecting it to any PCIe desktop video card (enclosed in its own external housing, with a power supply and cooling); this is possible with an ExpressCard or [[Thunderbolt (interface)|Thunderbolt]] interface. An ExpressCard interface provides [[bit rate]]s of 5 Gbit/s (0.5 GB/s throughput), whereas a Thunderbolt interface provides bit rates of up to 40 Gbit/s (5 GB/s throughput).
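These throughput figures follow from the quoted bit rates, assuming the ExpressCard link is a single PCIe 2.0 lane (which uses [[8b/10b encoding]], so only 8 of every 10 transmitted bits carry data) and taking the Thunderbolt figure as a raw aggregate bit rate divided by 8 bits per byte:

<math>5\ \text{Gbit/s} \times \tfrac{8}{10} \div 8 = 0.5\ \text{GB/s}, \qquad 40\ \text{Gbit/s} \div 8 = 5\ \text{GB/s}</math>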
In 2006, [[Nvidia]] developed the [[Nvidia Quadro Plex|Quadro Plex]] external PCIe family of [[Graphics processing unit|GPUs]] that can be used for advanced graphics applications in the professional market.<ref name="gxHZT" /> These video cards require a PCI Express x8 or x16 slot for the host-side card, which connects to the Plex via a [[VHDCI]] carrying eight PCIe lanes.<ref name="zJYg8" />

In 2008, AMD announced the [[ATI XGP]] technology, based on a proprietary cabling system that is compatible with PCIe x8 signal transmissions.<ref name="HgxXj" /> This connector is available on the Fujitsu Amilo and the Acer Ferrari One notebooks. Fujitsu launched its AMILO GraphicBooster enclosure for XGP soon thereafter.<ref name="WHU07" /> Around 2010, Acer launched the Dynavivid graphics dock for XGP.<ref name="mvR09" />

In 2010, external card hubs were introduced that can connect to a laptop or desktop through a PCI ExpressCard slot. These hubs can accept full-sized graphics cards. Examples include the MSI GUS,<ref name="ZAJ0y" /> Village Instrument's ViDock,<ref name="J5UtH" /> the Asus [[XG Station]], and the Bplus PE4H V3.2 adapter,<ref name="jWWJt" /> as well as more improvised DIY devices.<ref name="ERk1e" /> However, such solutions are limited by the size (often only x1) and version of the available PCIe slot on a laptop.

The Intel Thunderbolt interface has provided a new option for connecting to a PCIe card externally. Magma has released the ExpressBox 3T, which can hold up to three PCIe cards (two at x8 and one at x4).<ref name="CvJwZ" /> MSI also released the Thunderbolt GUS II, a PCIe chassis dedicated to video cards.<ref name="OLzu7" /> Other products, such as Sonnet's Echo Express<ref name="5LOrR" /> and mLogic's mLink, are Thunderbolt PCIe chassis in a smaller form factor.<ref name="apXPa" />

In 2017, more fully featured external card hubs were introduced, such as the Razer Core, which has a full-length PCIe x16 interface.<ref name="PXbHS" />

=== Storage devices ===
[[File:PCIe card full height.jpg|thumb|An [[OCZ]] RevoDrive [[Solid-state drive|SSD]], a full-height x4 PCI Express card]]
{{See also|SATA Express|NVMe}}
The PCI Express protocol can be used as a data interface to [[flash memory]] devices, such as [[memory card]]s and [[solid-state drive]]s (SSDs).
The [[XQD card]] is a memory card format utilizing PCI Express, developed by the CompactFlash Association, with transfer rates of up to 1 GB/s.<ref name="49Gx4" />

Many high-performance, enterprise-class SSDs are designed as PCI Express [[RAID controller]] cards.{{Citation needed|date=September 2021}} Before NVMe was standardized, many of these cards utilized proprietary interfaces and custom drivers to communicate with the operating system; they had much higher transfer rates (over 1 GB/s) and IOPS (over one million I/O operations per second) when compared to Serial ATA or [[Serial attached SCSI|SAS]] drives.{{Quantify |reason=Listing numbers isn't a comparison.|date=September 2021}}<ref name="P3Feb" /><ref name="NeWKh" /> For example, in 2011 OCZ and Marvell co-developed a native PCI Express solid-state drive controller for a PCI Express 3.0 x16 slot, with a maximum capacity of 12 TB and a performance of up to 7.2 GB/s in sequential transfers and up to 2.52 million IOPS in random transfers.<ref name="VLf63" />{{Relevance inline|reason=Can't find any info on whether this product was even sold, and I can't imagine it was when the other thing OCZ was in the news for at the time were their PCIe based SSDs being constantly failing nightmares that were impossible to recover because of the proprietary interface and required drivers... Enterprise would have rightfully scoffed at those numbers, thrown a couple thousand more regular SSDs in their giant storage arrays if they needed more than the billions of IOPS they were already achieving, maybe updated from 40 Gbps to 100 Gbps runs between the storage servers, and called it a day.|date=September 2021}}

[[SATA Express]] was an interface for connecting SSDs through SATA-compatible ports, optionally providing multiple PCI Express lanes as a pure PCI Express connection to the attached storage device.<ref name="ymSig" /> [[M.2]] is a specification for internally mounted computer [[expansion card]]s and associated connectors, which also uses multiple PCI Express lanes.<ref name="SNLQe" />

PCI Express storage devices can implement both the [[AHCI]] logical interface, for backward compatibility, and the [[NVM Express]] logical interface, which provides much faster I/O operations by exploiting the internal parallelism of such devices (NVMe allows many deep command queues, whereas AHCI exposes only a single command queue). Enterprise-class SSDs can also implement [[SCSI over PCI Express]].<ref name="FJvMX" />

=== Cluster interconnect ===
Certain [[data-center]] applications (such as large [[computer cluster]]s) require the use of fiber-optic interconnects due to the distance limitations inherent in copper cabling. Typically, a network-oriented standard such as Ethernet or [[Fibre Channel]] suffices for these applications, but in some cases the overhead introduced by [[routing|routable]] protocols is undesirable and a lower-level interconnect, such as [[InfiniBand]], [[RapidIO]], or [[NUMAlink]], is needed. Local-bus standards such as PCIe and [[HyperTransport]] can in principle be used for this purpose,<ref name="YUum6" /> but {{as of|2015|lc=on}}, solutions are only available from niche vendors such as [[Dolphin Interconnect Solutions|Dolphin ICS]] and TTTech Auto.