{{short description|Computer processor interconnection technology first introduced in 2001}}
{{distinguish |text=[[Hyper-Threading]], which is also sometimes abbreviated "HT"}}
{{Use mdy dates|date=November 2022}}
[[File:HyperTransport Consortium logo.svg|thumb|Logo of the HyperTransport Consortium]]

'''HyperTransport''' ('''HT'''), formerly known as '''Lightning Data Transport''', is a technology for interconnection of computer [[Processor (computing)|processor]]s. It is a bidirectional [[Serial communication|serial]]/[[Parallel communication|parallel]] high-[[Bandwidth (computing)|bandwidth]], low-[[Memory latency|latency]] [[point-to-point link]] that was introduced on April 2, 2001.<ref>{{cite press release |title=API NetWorks Accelerates Use of HyperTransport Technology With Launch of Industry's First HyperTransport Technology-to-PCI Bridge Chip |url=http://www.hypertransport.org/consortium/cons_pressrelease.cfm?RecordID=62 |website=HyperTransport Consortium |date=April 2, 2001 |url-status=dead |archive-url=https://web.archive.org/web/20061010070210/http://www.hypertransport.org/consortium/cons_pressrelease.cfm?RecordID=62 |archive-date=October 10, 2006}}</ref> The [[HyperTransport Consortium]] promotes and develops HyperTransport technology.

HyperTransport is best known as the [[system bus]] architecture of [[AMD]] [[central processing unit]]s (CPUs) from [[Athlon 64]] through [[AMD FX]] and the associated [[motherboard]] chipsets. HyperTransport has also been used by [[IBM]] and [[Apple Inc.|Apple]] for the [[Power Mac G5]] machines, as well as a number of modern [[MIPS architecture|MIPS]] systems.

The most recent specification, HyperTransport 3.1, remained competitive with 2014 high-speed [[DDR4]] RAM (2666 and 3200 [[megatransfer|MT]]/s, or about 10.4 GB/s and 12.8 GB/s) and with slower technology (around 1 GB/s,[http://www.extremetech.com/computing/175283-sandisk-announces-ulltra-dimms-terabytes-of-low-latency-flash-storage-directly-off-the-ram-channel] similar to high-end [[Solid-state drive#Standard card form factors|PCIe SSDs]] and [[ULLtraDIMM]] flash storage){{clarify|date=June 2015}}—a wider range of RAM speeds on a common CPU bus than any Intel [[front-side bus]]. Intel technologies require each speed range of RAM to have its own interface, resulting in a more complex motherboard layout but with fewer bottlenecks. HyperTransport 3.1 at 25.6 GB/s per direction can serve as a unified bus for as many as four DDR4 modules running at the fastest proposed speeds. Beyond that, DDR4 RAM may require two or more HyperTransport 3.1 buses, diminishing its value as a unified transport.

== Overview ==

=== Links and rates ===
HyperTransport comes in four versions—1.x, 2.0, 3.0, and 3.1—which run from 200{{nbsp}}[[MHz]] to 3.2 GHz. It is also a DDR or "[[double data rate]]" connection, meaning it sends data on both the rising and falling edges of the [[clock signal]]. This allows for a maximum data rate of 6400 MT/s when running at 3.2 GHz. The operating frequency is autonegotiated with the motherboard chipset (northbridge). HyperTransport supports an autonegotiated bit width, ranging from 2 to 32 bits per link; there are two unidirectional links per HyperTransport bus.
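All of the headline figures in this article follow from one formula: clock rate × 2 transfers per clock cycle (double data rate) × link width in bytes, per direction. The following minimal Python sketch—the function and variable names are illustrative, not taken from the specification—reproduces the values in the frequency table later in this article:

<syntaxhighlight lang="python">
# Illustrative sketch only: names are not part of the HyperTransport
# specification; this simply reproduces the bandwidth arithmetic.

def ht_bandwidth_gbs(clock_ghz: float, link_width_bits: int) -> float:
    """Per-direction bandwidth of one link, in GB/s.

    HyperTransport is double data rate: 2 transfers per clock cycle.
    """
    transfers_per_second_g = clock_ghz * 2                 # GT/s
    return transfers_per_second_g * link_width_bits / 8    # bytes per transfer

# Reproduce the rows of the frequency table later in this article:
for version, clock_ghz in [("1.0", 0.8), ("1.1", 0.8), ("2.0", 1.4),
                           ("3.0", 2.6), ("3.1", 3.2)]:
    uni16 = ht_bandwidth_gbs(clock_ghz, 16)
    uni32 = ht_bandwidth_gbs(clock_ghz, 32)
    print(f"HT {version}: {uni16:.1f} GB/s (16-bit uni), "
          f"{uni32:.1f} GB/s (32-bit uni), {2 * uni32:.1f} GB/s aggregate")
</syntaxhighlight>

At 3.2 GHz the loop prints 25.6 GB/s per direction for a 32-bit link, matching the HyperTransport 3.1 figures discussed next.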
With version 3.1, using full [[32-bit]] links at the specification's maximum operating frequency, the theoretical transfer rate is 25.6 [[Gigabyte|GB]]/s (3.2 GHz × 2 transfers per clock cycle × 32 bits per link) per direction, or 51.2 GB/s aggregated throughput, making it faster than most existing bus standards for PC workstations and servers, as well as most bus standards for high-performance computing and networking.

Links of various widths can be mixed in a single system configuration, as in one [[16-bit]] link to another CPU and one [[8-bit]] link to a peripheral device, which allows for a wider interconnect between [[CPU]]s and a lower-bandwidth interconnect to [[peripheral]]s as appropriate. It also supports link splitting, where a single 16-bit link can be divided into two 8-bit links. The technology typically has lower latency than other solutions due to its lower overhead.

Electrically, HyperTransport is similar to [[low-voltage differential signaling]] (LVDS) operating at 1.2 V.<ref>{{Cite web |title=Overview |url=http://www.hypertransport.org/docs/wp/HT_Overview.pdf |website=HyperTransport Consortium |url-status=dead |archive-url=https://web.archive.org/web/20110716171022/http://www.hypertransport.org/docs/wp/HT_Overview.pdf |archive-date=July 16, 2011}}</ref> HyperTransport 2.0 added post-cursor transmitter [[deemphasis]]. HyperTransport 3.0 added scrambling and receiver phase alignment, as well as optional transmitter precursor deemphasis.

=== Packet-oriented ===
HyperTransport is [[Packet (information technology)|packet]]-based, where each packet consists of a set of [[32-bit]] words, regardless of the physical width of the link. The first word in a packet always contains a command field. Many packets contain a 40-bit address. An additional 32-bit control packet is prepended when 64-bit addressing is required. The data payload is sent after the control packet. Transfers are always padded to a multiple of 32 bits, regardless of their actual length. HyperTransport packets enter the interconnect in segments known as bit times; the number of bit times required depends on the link width.

HyperTransport also supports system management messaging, signaling interrupts, issuing probes to adjacent devices or processors, [[I/O]] transactions, and general data transactions. Two kinds of write commands are supported: posted and non-posted. Posted writes do not require a response from the target; they are usually used for high-bandwidth traffic such as [[uniform memory access]] traffic or [[direct memory access]] transfers. Non-posted writes require a response from the receiver in the form of a "target done" response. Reads also require a response, containing the read data. HyperTransport supports the PCI consumer/producer ordering model.
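The interaction between 32-bit padding and link width described above can be illustrated with a simplified model—this shows the scaling only, and is not the specification's actual framing or field encoding:

<syntaxhighlight lang="python">
# Simplified model of the padding and bit-time rules described above:
# transfers are padded to a multiple of 32 bits, and each bit time moves
# link_width_bits across the link in parallel. Illustration only; not the
# spec's actual framing.

def bit_times(content_bits: int, link_width_bits: int) -> int:
    """Bit times needed to move one padded transfer across a link."""
    padded_bits = ((content_bits + 31) // 32) * 32  # pad to 32-bit boundary
    return padded_bits // link_width_bits

# Example: 72 bits of content pad to 96 bits (three 32-bit words), and a
# wider link moves the same padded transfer in proportionally fewer bit times:
for width in (2, 4, 8, 16, 32):
    print(f"{width:>2}-bit link: {bit_times(72, width)} bit times")
</syntaxhighlight>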
=== Power-managed ===
HyperTransport also facilitates [[power management]], as it is compliant with the [[Advanced Configuration and Power Interface]] specification. This means that changes in processor sleep states (C states) can signal changes in device states (D states), e.g. powering off disks when the CPU goes to sleep. HyperTransport 3.0 added further capabilities to allow a centralized power management controller to implement power management policies.

== Applications ==

=== Front-side bus replacement ===
The primary use for HyperTransport is to replace the Intel-defined [[front-side bus]], which is different for every type of Intel processor. For instance, a [[Pentium compatible processor|Pentium]] cannot be plugged into a [[PCI Express]] bus directly, but must first go through an adapter to expand the system. The proprietary front-side bus must connect through adapters for the various standard buses, like [[Accelerated Graphics Port|AGP]] or PCI Express. These are typically included in the respective controller functions, namely the ''[[Northbridge (computing)|northbridge]]'' and ''[[Southbridge (computing)|southbridge]]''.

In contrast, HyperTransport is an open specification, published by a multi-company consortium. A single HyperTransport adapter chip will work with a wide spectrum of HyperTransport-enabled microprocessors. [[AMD]] used HyperTransport to replace the [[front-side bus]] in their [[Opteron]], [[Athlon 64]], [[Athlon II]], [[Sempron 64]], [[Turion 64]], [[Phenom (processor)|Phenom]], [[Phenom II]] and [[Bulldozer (microarchitecture)|FX]] families of microprocessors.

=== Multiprocessor interconnect ===
Another use for HyperTransport is as an interconnect for [[Non-Uniform Memory Access|NUMA]] [[multiprocessor]] computers. AMD used HyperTransport with a proprietary [[cache coherency]] extension as part of their Direct Connect Architecture in their [[Opteron]] and [[Athlon 64 FX]] ([[AMD Quad FX platform|Dual Socket Direct Connect (DSDC) Architecture]]) line of processors. [[#Infinity Fabric|Infinity Fabric]], used with the [[EPYC]] server CPUs, is a superset of HyperTransport. The [[HORUS interconnect]] from [[Newisys]] extends this concept to larger clusters. The Aqua device from 3Leaf Systems virtualizes and interconnects CPUs, memory, and I/O.

=== Router or switch bus replacement ===
HyperTransport can also be used as a bus in [[Router (computing)|router]]s and [[Network switch|switches]]. Routers and switches have multiple network interfaces, and must forward data between these ports as fast as possible. For example, a four-port, 1000 [[Mbit]]/s [[Ethernet]] router needs a maximum of 8000 Mbit/s of internal bandwidth (1000 Mbit/s × 4 ports × 2 directions)—HyperTransport greatly exceeds the bandwidth this application requires. However, a 4 + 1 port 10 Gb router would require 100 Gbit/s of internal bandwidth. Add to that 802.11ac with 8 antennas and the 60 GHz WiGig standard (802.11ad), and HyperTransport becomes more feasible (with anywhere between 20 and 24 lanes used for the needed bandwidth).
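The internal-bandwidth figures above are simple worst-case products of port count, port speed, and the two directions of full-duplex traffic, as this back-of-the-envelope sketch (with illustrative names) shows:

<syntaxhighlight lang="python">
# Back-of-the-envelope check of the router figures above (names illustrative).

def internal_bandwidth_mbps(ports: int, port_speed_mbps: int) -> int:
    """Worst case: every port sending and receiving at line rate."""
    return ports * port_speed_mbps * 2   # x2 for the two directions

print(internal_bandwidth_mbps(4, 1_000))    # 8000 Mbit/s: four GbE ports
print(internal_bandwidth_mbps(5, 10_000))   # 100000 Mbit/s: 4 + 1 ports of 10 GbE
</syntaxhighlight>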
=== Co-processor interconnect ===
The issue of latency and bandwidth between CPUs and co-processors has usually been the major stumbling block to their practical implementation. Co-processors such as [[FPGA]]s have appeared that can access the HyperTransport bus and become integrated on the motherboard. Current-generation FPGAs from both main manufacturers ([[Altera]] and [[Xilinx]]) directly support the HyperTransport interface and have [[Semiconductor intellectual property core|IP cores]] available. Companies such as XtremeData, Inc. and DRC take these FPGAs (Xilinx in DRC's case) and create a module that allows FPGAs to plug directly into the Opteron socket.

AMD started an initiative named [[Torrenza]] on September 21, 2006, to further promote the usage of HyperTransport for plug-in cards and [[coprocessor]]s. This initiative opened their "Socket F" to plug-in boards such as those from XtremeData and DRC.

=== Add-on card connector (HTX and HTX3) ===
[[File:HyperTransport16 pcie8riser pcie16.jpg|thumb|Connectors from top to bottom: HTX, PCI-Express for riser card, PCI-Express]]
A connector specification that allows a slot-based peripheral to have a direct connection to a microprocessor using a HyperTransport interface was released by the HyperTransport Consortium. It is known as '''H'''yper'''T'''ransport e'''X'''pansion ('''HTX'''). Using a reversed instance of the same mechanical connector as a 16-lane [[PCI Express]] slot (plus an x1 connector for power pins), HTX allows development of plug-in cards that support direct access to a CPU and [[Direct memory access|DMA]] to the system [[RAM]]. The initial card for this slot was the [[QLogic]] InfiniPath InfiniBand HCA. IBM and [[Hewlett-Packard|HP]], among others, have released HTX-compliant systems.

The original HTX standard is limited to 16{{nbsp}}bits and 800{{nbsp}}MHz.<ref>{{cite web |last1=Emberson |first1=David |last2=Holden |first2=Brian |date=December 12, 2007 |title=HTX specification |url=http://www.hypertransport.org/docs/uploads/HTX_Specifications.pdf |website=HyperTransport Consortium |page=4 |access-date=January 30, 2008 |url-status=dead |archive-url=https://web.archive.org/web/20120308085021/http://www.hypertransport.org/docs/uploads/HTX_Specifications.pdf |archive-date=March 8, 2012}}</ref> In August 2008, the HyperTransport Consortium released HTX3, which extends the clock rate of HTX to 2.6{{nbsp}}GHz (5.2{{nbsp}}GT/s) and retains backwards compatibility.<ref>{{cite web |last=Emberson |first=David |date=June 25, 2008 |title=HTX3 specification |url=http://www.hypertransport.org/docs/uploads/HTX3_Specifications.pdf |website=HyperTransport Consortium |page=4 |access-date=August 17, 2008 |url-status=dead |archive-url=https://web.archive.org/web/20120308085016/http://www.hypertransport.org/docs/uploads/HTX3_Specifications.pdf |archive-date=March 8, 2012}}</ref>

=== Testing ===
The "DUT" test connector<ref>{{cite web |last1=Holden |first1=Brian |last2=Meschke |first2=Mike |last3=Abu-Lebdeh |first3=Ziad |last4=D'Orfani |first4=Renato |title=DUT Connector and Test Environment for HyperTransport |url=http://www.hypertransport.org/docs/spec/HTC20021219-0017-0001.pdf |website=HyperTransport Consortium |language=en-US |access-date=November 12, 2022 |url-status=dead |archive-url=https://web.archive.org/web/20060903165421/http://www.hypertransport.org/docs/spec/HTC20021219-0017-0001.pdf |archive-date=September 3, 2006}}</ref> is defined to enable standardized functional test system interconnection.
== Implementations ==
* [[AMD]] [[AMD64]] and Direct Connect Architecture based CPUs
* [[AMD]] chipsets
** AMD-8000 series
** [[AMD 580 chipset series|AMD 480 series]]
** [[AMD 580 chipset series|AMD 580 series]]
** [[AMD 690 chipset series|AMD 690 series]]
** [[AMD 700 chipset series|AMD 700 series]]
** [[AMD 800 chipset series|AMD 800 series]]
** [[AMD 900 chipset series|AMD 900 series]]
* [[ATI Technologies|ATI]] chipsets
** ATI [[Xpress 200|Radeon Xpress 200]] for AMD processors
** ATI [[Xpress 3200|Radeon Xpress 3200]] for AMD processors
* [[Broadcom]] (then [[ServerWorks]]) HyperTransport SystemI/O controllers
** HT-2000
** HT-2100
* [[Cisco]] QuantumFlow Processors
* ht_tunnel from the [[OpenCores]] project (MPL licence)
* [[IBM]] CPC925 and CPC945 ([[PowerPC G5#Northbridges|PowerPC 970 northbridges]]) chipsets
* [[Loongson]]-3 [[MIPS architecture|MIPS]] processor
* [[Nvidia]] nForce chipsets
** [[nForce]] and [[nForce2]] series (link between northbridge and southbridge)
** nForce Professional MCPs (Media and Communication Processor)
** [[nForce 3]] series
** [[nForce 4]] series
** [[nForce 500|nForce 500 series]]
** [[nForce 600|nForce 600 series]]
** [[nForce 700|nForce 700 series]]
** [[nForce 900|nForce 900 series]]
* [[PMC-Sierra]] RM9000X2 [[MIPS architecture|MIPS]] CPU
* [[Power Mac G5]]<ref>{{cite web |author=Apple |date=June 25, 2003 |title=WWDC 2003 Keynote |url=https://www.youtube.com/watch?v=iwsn27J_tlo |website=YouTube |language=en-US |access-date=October 16, 2009 |archive-url=https://web.archive.org/web/20120708180002/http://www.youtube.com/watch?v=iwsn27J_tlo&gl=US&hl=en |archive-date=July 8, 2012}}</ref>
* [[Raza Microelectronics Inc|Raza]] Thread Processors
* SiByte [[MIPS architecture|MIPS]] CPUs from [[Broadcom]]
* [[Transmeta]] TM8000 Efficeon CPUs
* [[VIA Technologies|VIA]] K8 series chipsets

== Frequency specifications ==
{| class="wikitable" style="text-align:center; vertical-align:center;"
|-
! rowspan=2 | HyperTransport<br />version
! rowspan=2 | Year
! rowspan=2 | Max. HT frequency
! rowspan=2 | Max. link width
! colspan=3 | Max. aggregate bandwidth (GB/s)
|-
! bi-directional
! 16-bit unidirectional
! 32-bit unidirectional*
|-
! 1.0
| 2001
| 800 MHz
| 32-bit
| 12.8
| 3.2
| 6.4
|-
! 1.1
| 2002
| 800 MHz
| 32-bit
| 12.8
| 3.2
| 6.4
|-
! 2.0
| 2004
| 1.4 GHz
| 32-bit
| 22.4
| 5.6
| 11.2
|-
! 3.0
| 2006
| 2.6 GHz
| 32-bit
| 41.6
| 10.4
| 20.8
|-
! 3.1
| 2008
| 3.2 GHz
| 32-bit
| 51.2
| 12.8
| 25.6
|}

<nowiki>*</nowiki> AMD [[Athlon 64]], Athlon 64 FX, [[Athlon 64 X2]], Athlon X2, [[Athlon II]], Phenom, [[Phenom II]], [[Sempron]], [[AMD Turion|Turion]] series and later use one 16-bit HyperTransport link. AMD Athlon 64 FX ([[Socket F|1207]]) and [[Opteron]] use up to three 16-bit HyperTransport links. Common clock rates for these processor links are 800 MHz to 1 GHz (older single- and multi-socket systems on 754/939/940 links) and 1.6 GHz to 2.0 GHz (newer single-socket systems on AM2+/AM3 links—most newer CPUs use 2.0{{nbsp}}GHz). While HyperTransport itself is capable of 32-bit-wide links, that width is not currently utilized by any AMD processors. Some chipsets do not even utilize the 16-bit width used by the processors: the Nvidia [[nForce3]] 150, nForce3 Pro 150, and the [[ULi]] M1689 use a 16-bit HyperTransport downstream link but limit the HyperTransport upstream link to 8 bits.
== Name ==
There has been some marketing confusion{{citation needed|date=August 2023}} between the use of '''HT''' referring to '''H'''yper'''T'''ransport and the later use of '''HT''' to refer to [[Intel]]'s [[Hyper-Threading]] feature on some [[Pentium 4]]-based and the newer Nehalem- and Westmere-based [[Intel Core]] microprocessors. Hyper-Threading is officially known as '''H'''yper-'''T'''hreading '''T'''echnology ('''HTT''') or '''HT Technology'''. Because of this potential for confusion, the HyperTransport Consortium always uses the written-out form: "HyperTransport."

== Infinity Fabric ==
'''Infinity Fabric''' ('''IF''') is a superset of HyperTransport announced by AMD in 2016 as an interconnect for its GPUs and CPUs. It is also usable as an interchip interconnect for communication between CPUs and GPUs (for [[Heterogeneous System Architecture]]), an arrangement known as '''Infinity Architecture'''.<ref>{{cite web |author=AMD |title=AMD_presentation_EPYC |url=https://s14.postimg.org/rgf0wv38x/image.jpg |access-date=May 24, 2017 |ref=AMD_presentation_EPYC |archive-url=https://web.archive.org/web/20170821125859/https://s14.postimg.org/rgf0wv38x/image.jpg |archive-date=August 21, 2017 |url-status=dead}}</ref><ref>{{cite web |last1=Merritt |first1=Rick |date=December 13, 2016 |title=AMD Clocks Ryzen at 3.4 GHz+ |url=http://www.eetimes.com/document.asp?doc_id=1330981&page_number=2 |website=EE Times |language=en-US |access-date=January 17, 2017 |url-status=dead |archive-url=https://web.archive.org/web/20190808171653/https://www.eetimes.com/document.asp?doc_id=1330981&page_number=2 |archive-date=August 8, 2019}}</ref><ref>{{cite web |last1=Alcorn |first1=Paul |date=March 5, 2020 |title=AMD's CPU-to-GPU Infinity Fabric Detailed |url=https://www.tomshardware.com/news/amd-infinity-fabric-cpu-to-gpu |website=Tom's Hardware |language=en-US |access-date=November 12, 2022}}</ref> The company said the Infinity Fabric would scale from 30{{nbsp}}GB/s to 512{{nbsp}}GB/s, and be used in the [[Zen (microarchitecture)|Zen]]-based CPUs and [[Graphics Core Next#Graphics Core Next 5|Vega]] GPUs, which were subsequently released in 2017.

On Zen and [[Zen+]] CPUs, the "SDF" data interconnects run at the same frequency as the DRAM memory clock (MEMCLK), a decision made to remove the latency caused by different clock speeds. As a result, using a faster RAM module makes the entire bus faster. The links are 32 bits wide, as in HT, but 8 transfers are done per cycle (128-bit packets), compared to the original 2. Electrical changes are made for higher power efficiency.<ref>{{cite web |title=Infinity Fabric (IF) - AMD |url=https://en.wikichip.org/wiki/amd/infinity_fabric |website=WikiChip |language=en-US}}</ref>
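Under the Zen/Zen+ arrangement just described—fabric clock equal to MEMCLK, 32-bit links, 8 transfers per cycle—per-link bandwidth can be estimated as follows. This is a sketch under those stated assumptions; the names are not AMD's, and the printed figures are estimates rather than published numbers:

<syntaxhighlight lang="python">
# Estimate only, based on the parameters stated above (fabric clock = MEMCLK,
# 32-bit links, 8 transfers per cycle). Not official AMD figures.

def if_link_gbs(memclk_mhz: int, width_bits: int = 32,
                transfers_per_cycle: int = 8) -> float:
    """Per-direction Infinity Fabric link bandwidth estimate, in GB/s."""
    bits_per_second = memclk_mhz * 1e6 * transfers_per_cycle * width_bits
    return bits_per_second / 8 / 1e9    # bits -> bytes, then to GB/s

# DDR4 transfers twice per memory clock, so DDR4-2666 means MEMCLK = 1333 MHz;
# a faster DIMM therefore speeds up the whole fabric:
for ddr_rate, memclk in [(2133, 1066), (2666, 1333), (3200, 1600)]:
    print(f"DDR4-{ddr_rate} (MEMCLK {memclk} MHz): "
          f"{if_link_gbs(memclk):.1f} GB/s per link direction")
</syntaxhighlight>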
On [[Zen 2]] and [[Zen 3]] CPUs, the IF bus runs on a separate clock, in either a 1:1 or a 2:1 ratio to the DRAM clock. This avoids a limitation on desktop platforms, where maximum DRAM speeds were in practice limited by the IF speed. The bus width has also been doubled.<ref>{{Cite web |last=Cutress |first=Ian |date=June 10, 2019 |title=AMD Zen 2 Microarchitecture Analysis: Ryzen 3000 and EPYC Rome |url=https://www.anandtech.com/show/14525/amd-zen-2-microarchitecture-analysis-ryzen-3000-and-epyc-rome/11 |website=AnandTech |language=en-US |access-date=November 12, 2022}}</ref> On [[Zen 4]] and later CPUs, the IF bus can run at an asynchronous clock to the DRAM, to allow the higher clock speeds that DDR5 is capable of.<ref>{{cite web |last1=Killian |first1=Zak |title=AMD Addresses Zen 4 Ryzen 7000 Series Memory Overclocking And Configuration Details |url=https://hothardware.com/news/amd-addresses-zen-4-memory-oc-details |website=HotHardware |access-date=April 4, 2024 |language=en-us |date=September 1, 2022}}</ref>

[[UALink]] will utilize Infinity Fabric as the primary shared memory protocol.

== See also ==
* [[Elastic interface bus]]
* [[Fibre Channel]]
* [[Front-side bus]]
* [[Intel QuickPath Interconnect]]
* [[List of interface bit rates]]
* [[PCI Express]]
* [[RapidIO]]
* [[AGESA]]

== References ==
{{Reflist |32em}}

== External links ==
* {{Citation |title=HyperTransport Consortium |url=http://www.hypertransport.org/ |type=home |access-date=November 2, 2002 |archive-date=August 22, 2008 |archive-url=https://web.archive.org/web/20080822010544/http://www.hypertransport.org/default.cfm?page=HyperTransportSpecifications |url-status=dead}}
* {{Citation |title=Technology |url=http://www.hypertransport.org/default.cfm?page=TechnologyHyperTransportOverview |website=HyperTransport}}{{dead link|date=April 2017 |bot=InternetArchiveBot |fix-attempted=yes}}
* {{Citation |title=Technical Specifications |url=http://www.hypertransport.org/default.cfm?page=HyperTransportSpecifications |website=HyperTransport |url-status=dead |archive-url=https://web.archive.org/web/20080822010544/http://www.hypertransport.org/default.cfm?page=HyperTransportSpecifications |archive-date=August 22, 2008}}
* {{Citation |title=Center of Excellence for HyperTransport |url=http://htce.uni-hd.de/ |publisher=Uni HD |language=de |access-date=September 4, 2008 |url-status=dead |archive-url=https://web.archive.org/web/20081029020858/http://www.htce.uni-hd.de/ |archive-date=October 29, 2008}}

{{Computer-bus}}

[[Category:Computer buses]]
[[Category:Macintosh internals]]
[[Category:Serial buses]]