== Examples ==

=== ISA ===
In the original [[IBM PC]] (and the follow-up [[PC/XT]]), there was only one [[Intel 8237]] DMA controller, capable of providing four DMA channels (numbered 0–3). These DMA channels performed 8-bit transfers (the 8237 being an 8-bit device, ideally matched to the PC's [[i8088]] CPU/bus architecture), could only address the first ([[i8086]]/8088-standard) megabyte of RAM, and were limited to addressing single 64 [[kilobyte|kB]] segments within that space (although the source and destination channels could address different segments). Additionally, the controller could only be used for transfers to, from or between expansion-bus I/O devices, as the 8237 could only perform memory-to-memory transfers using channels 0 and 1, of which channel 0 in the PC (and XT) was dedicated to [[dynamic memory]] [[memory refresh|refresh]]. This prevented it from being used as a general-purpose "[[blitter]]", and consequently block memory moves in the PC, limited by the general PIO speed of the CPU, were very slow.

With the [[IBM PC/AT]], the enhanced [[AT bus]] (more familiarly retronymed the [[Industry Standard Architecture]] (ISA)) added a second 8237 DMA controller to provide three additional channels (5–7; channel 4 is used as a cascade to the first 8237). As resource clashes arising from the XT's greater expandability over the original PC had highlighted, these extra channels were much needed. ISA DMA's extended 24-bit address bus allows it to access the first 16 MB of memory.<ref>{{Cite web |title=ISA DMA - OSDev Wiki |url=https://wiki.osdev.org/ISA_DMA |access-date=2025-04-20 |website=wiki.osdev.org}}</ref> The page register was also rewired to address the full 16 MB memory address space of the 80286 CPU. The second controller was integrated in a way that made it capable of performing 16-bit transfers when an I/O device is used as the data source and/or destination (it only processes data itself during memory-to-memory transfers; otherwise it simply ''controls'' the data flow between other parts of the 16-bit system, making its own data-bus width relatively immaterial), doubling data throughput when the upper three channels are used. For compatibility, the lower four DMA channels remained limited to 8-bit transfers, and whilst memory-to-memory transfers were now technically possible thanks to the freeing of channel 0 from DRAM-refresh duty, they were of limited practical value because of the controller's consequently low throughput compared to what the CPU could now achieve (i.e., a 16-bit, more optimised [[80286]] running at a minimum of 6 MHz, versus an 8-bit controller locked at 4.77 MHz). In both cases, the 64 kB [[x86 memory segmentation|segment boundary]] issue remained, with individual transfers unable to cross segments (instead "wrapping around" to the start of the same segment) even in 16-bit mode, although in practice this was more a problem of programming complexity than of performance, since the continued need for DRAM refresh (however handled) to monopolise the bus approximately every 15 [[μs]] prevented the use of large (and fast, but uninterruptible) block transfers.
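The segment constraint means a driver must place its buffer so that the whole transfer fits inside one 64 kB physical page. A minimal sketch of the address split and boundary check such a driver might perform (the helper name is hypothetical):

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch (hypothetical helper): split a physical address
   into the 8237 page-register value and 16-bit offset, and verify that
   a transfer of 'count' bytes will not wrap within its 64 kB page. */
static bool isa_dma_addr_ok(uint32_t phys, uint32_t count,
                            uint8_t *page, uint16_t *offset)
{
    if (phys > 0xFFFFFFu)               /* ISA DMA reaches only 16 MB */
        return false;
    *page   = (uint8_t)(phys >> 16);    /* value for the page register */
    *offset = (uint16_t)(phys & 0xFFFFu);
    /* The 8237 increments only the low 16 bits of the address, so the
       whole transfer must fit between 'offset' and the page's end. */
    return count != 0 && count <= 0x10000u - *offset;
}
</syntaxhighlight>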
Due to their lagging performance (1.6 [[megabyte|MB]]/s maximum 8-bit transfer capability at 5 MHz,<ref name="i8237sheet">{{cite web |title=Intel 8237 & 8237-2 Datasheet |url=http://www.jbox.dk/rc702/hardware/intel-8237.pdf |website=JKbox RC702 subsite |access-date=20 April 2019}}</ref> but no more than 0.9 MB/s in the PC/XT and 1.6 MB/s for 16-bit transfers in the AT, owing to ISA bus overheads and other interference such as memory-refresh interruptions<ref name="DMAfundamentals">{{cite web |title=DMA Fundamentals on various PC platforms, National Instruments, pages 6 & 7 |url=https://cires1.colorado.edu/jimenez-group/QAMSResources/Docs/DMAFundamentals.pdf |access-date=26 April 2025 |website=University of Colorado Boulder}}</ref>) and the unavailability of any speed grades that would allow installation of direct replacements operating faster than the original PC's standard 4.77 MHz clock, these devices have been effectively obsolete since the late 1980s. In particular, the advent of the [[80386]] processor in 1985 and its capacity for 32-bit transfers (although great improvements in the efficiency of address calculation and block memory moves in Intel CPUs after the [[80186]] meant that PIO transfers even by the 16-bit-bus [[80286|286]] and [[80386SX|386SX]] could still easily outstrip the 8237), as well as the development of evolutions of ([[Extended Industry Standard Architecture|EISA]]) or replacements for ([[Micro Channel architecture|MCA]], [[VESA local bus|VLB]] and [[Peripheral Component Interconnect|PCI]]) the ISA bus with their own much higher-performance DMA subsystems (up to a maximum of 33 MB/s for EISA, 40 MB/s for MCA, and typically 133 MB/s for VLB/PCI), made the original DMA controllers seem more of a performance millstone than a booster. They remained supported only to the extent required by built-in legacy PC hardware on later machines.

The pieces of legacy hardware that continued to use ISA DMA after 32-bit expansion buses became common were [[Sound Blaster]] cards, which needed to maintain full hardware compatibility with the [[Sound Blaster standard]], and [[Super I/O]] devices on motherboards, which often integrated a built-in [[floppy disk]] controller, an [[IrDA]] infrared controller when FIR (fast infrared) mode is selected, and an [[IEEE 1284]] parallel-port controller when ECP mode is selected. Where original 8237s or direct compatibles were still used, transfers to or from these devices may still be limited to the first 16 MB of main [[RAM]] regardless of the system's actual address space or amount of installed memory.

Each DMA channel has a 16-bit address register and a 16-bit count register associated with it. To initiate a data transfer, the device driver sets up the DMA channel's address and count registers together with the direction of the data transfer (read or write), then instructs the DMA hardware to begin the transfer. When the transfer is complete, the device [[interrupt]]s the CPU.
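On the PC this programming sequence maps onto a handful of well-documented I/O ports. Below is a minimal sketch of arming channel 2 (conventionally the floppy controller) for a device-to-memory transfer; the <code>outb()</code> port-output helper is assumed to be supplied by the platform:

<syntaxhighlight lang="c">
#include <stdint.h>

/* Assumed platform helper: write 'value' to x86 I/O port 'port'. */
extern void outb(uint16_t port, uint8_t value);

/* Sketch: arm 8237 channel 2 for a single-cycle device-to-memory
   ("write") transfer of 'count' bytes (1 to 65536) starting at the
   physical address 'phys', which must not cross a 64 kB boundary. */
void isa_dma_setup_ch2(uint32_t phys, uint32_t count)
{
    uint32_t n = count - 1;           /* the 8237 is programmed with bytes - 1 */

    outb(0x0A, 0x04 | 2);             /* mask channel 2 while reprogramming */
    outb(0x0B, 0x46);                 /* mode: single cycle, increment, write, ch 2 */
    outb(0x0C, 0x00);                 /* clear the byte flip-flop */
    outb(0x04, phys & 0xFF);          /* address: low byte, then high byte */
    outb(0x04, (phys >> 8) & 0xFF);
    outb(0x81, (phys >> 16) & 0xFF);  /* page register for channel 2 */
    outb(0x05, n & 0xFF);             /* count: low byte, then high byte */
    outb(0x05, (n >> 8) & 0xFF);
    outb(0x0A, 2);                    /* unmask channel 2; DRQ may now proceed */
}
</syntaxhighlight>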
Scatter-gather or [[vectored I/O]] DMA allows the transfer of data to and from multiple memory areas in a single DMA transaction. It is equivalent to chaining together multiple simple DMA requests. The motivation is to off-load multiple [[input/output]] interrupt and data-copy tasks from the CPU.

DRQ stands for ''data request''; DACK for ''data acknowledge''. These symbols, seen on hardware [[schematic]]s of computer systems with DMA functionality, represent electronic signaling lines between the CPU and the DMA controller. Each DMA channel has one Request and one Acknowledge line. A device that uses DMA must be configured to use both lines of the assigned DMA channel. 16-bit ISA permitted bus mastering.<ref>{{Citation |title=PC Architecture for Technicians: Level 1 |contribution=Chapter 12: ISA Bus |contribution-url=http://faculty.chemeketa.edu/csekafet/elt256/pcarch-full_isa-bus.pdf |author=Intel Corp. |date=2003-04-25 |access-date=2015-01-27}}</ref>

Standard ISA DMA assignments:{{cn|date=November 2024}}
{{ordered list|start=0
| [[DRAM]] refresh (obsolete)
| User hardware, usually an ISA sound card
| [[Floppy disk]] controller
| [[WDMA (computer)|WDMA]] for [[hard disk]] controller (replaced by [[UDMA]] modes), parallel port (ECP-capable port), or certain Sound Blaster clones such as the OPTi 928
| [[8237]] DMA controller (cascade)
| Hard disk controller ([[PS/2]] only), or user hardware, usually an ISA sound card
| User hardware
| User hardware
}}

=== PCI ===
A [[Peripheral Component Interconnect|PCI]] architecture has no central DMA controller, unlike ISA. Instead, a PCI device can request control of the bus ("become the [[bus master]]") and request to read from and write to system memory. More precisely, a PCI component requests bus ownership from the PCI bus controller (usually the PCI host bridge or a PCI-to-PCI bridge<ref>{{Cite web|title=Bus Specifics - Writing Device Drivers for Oracle® Solaris 11.3|url=https://docs.oracle.com/cd/E53394_01/html/E54850/hwovr-25520.html|access-date=2020-12-18|website=docs.oracle.com}}</ref>), which [[Arbiter (electronics)|arbitrates]] if several devices request bus ownership simultaneously, since there can only be one bus master at a time. When the component is granted ownership, it issues normal read and write commands on the PCI bus, which are claimed by the PCI bus controller.

As an example, on an [[Intel Core]]-based PC, the southbridge forwards the transactions to the [[memory controller]] (which is [[Integrated circuit design|integrated]] on the CPU die) using [[Direct Media Interface|DMI]], which in turn converts them to DDR operations and sends them out on the memory bus. As a result, there are quite a number of steps involved in a PCI DMA transfer; however, this poses little problem, since the PCI device or the PCI bus itself is an order of magnitude slower than the rest of the components (see [[list of device bandwidths]]).

A modern x86 CPU may use more than 4 GB of memory, utilizing either the native 64-bit mode of an [[x86-64]] CPU or the [[Physical Address Extension]] (PAE), a 36-bit addressing mode. In such a case, a device using DMA with a 32-bit address bus is unable to address memory above the 4 GB line. The [[Double Address Cycle]] (DAC) mechanism, if implemented on both the PCI bus and the device itself,<ref>{{cite web|url=http://www.microsoft.com/whdc/system/platform/server/PAE/PAEdrv.mspx#E2D|title=Physical Address Extension — PAE Memory and Windows|publisher=Microsoft Windows Hardware Development Central|year=2005|access-date=2008-04-07}}</ref> enables 64-bit DMA addressing. Otherwise, the operating system has to work around the problem either by using costly [[double buffering (DMA)|double buffers]] (DOS/Windows nomenclature), also known as [[bounce buffer]]s ([[FreeBSD]]/Linux), or by using an [[IOMMU]] to provide address-translation services, if one is present.
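Modern operating systems hide these addressing constraints behind a mapping API. As a brief sketch using the Linux kernel's DMA-mapping interface (the surrounding driver context is assumed), the kernel transparently substitutes a bounce buffer or programs the IOMMU when the device's address mask cannot cover the buffer:

<syntaxhighlight lang="c">
#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Sketch: map a driver-owned buffer so a device may read it. If 'dev'
   can only address 32 bits and 'buf' lies above 4 GB, the kernel
   either bounces the data through a low buffer or maps it via an IOMMU. */
static int example_dma_to_device(struct device *dev, void *buf, size_t len)
{
    dma_addr_t handle;

    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
        return -EIO;                    /* device cannot do DMA at all */

    handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, handle))
        return -ENOMEM;

    /* Program the device with the bus address 'handle' here
       (device-specific register writes), run the transfer, and on
       completion unmap to flush any bounce buffer that was used. */
    dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
    return 0;
}
</syntaxhighlight>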
=== I/OAT ===
As an example of a DMA engine incorporated in a general-purpose CPU, some Intel [[Xeon]] chipsets include a DMA engine called [[I/O Acceleration Technology]] (I/OAT), which can offload memory copying from the main CPU, freeing it to do other work.<ref>{{cite web | last = Corbet | first = Jonathan | title = Memory copies in hardware | work = [[LWN.net]] | date = December 8, 2005 | url = https://lwn.net/Articles/162966/ }}</ref> In 2006, Intel's [[Linux kernel]] developer Andrew Grover performed benchmarks using I/OAT to offload network-traffic copies and found no more than a 10% improvement in CPU utilization with receiving workloads.<ref name="linuxnet-ioat">{{cite web |first=Andrew |last=Grover |title=I/OAT on LinuxNet wiki |work=Overview of I/OAT on Linux, with links to several benchmarks |date=2006-06-01 |url=http://www.linuxfoundation.org/collaborate/workgroups/networking/i/oat |access-date=2006-12-12 |archive-date=2016-05-05 |archive-url=https://web.archive.org/web/20160505034410/http://www.linuxfoundation.org/collaborate/workgroups/networking/i/oat |url-status=dead }}</ref>

=== DDIO ===
Further performance-oriented enhancements to the DMA mechanism were introduced in Intel [[Xeon E5]] processors with the '''Data Direct I/O''' ('''DDIO''') feature, which allows DMA "windows" to reside within [[CPU cache]]s instead of system RAM. CPU caches are used as the primary source and destination for I/O, allowing [[network interface controller]]s (NICs) to DMA directly to the last-level cache (L3 cache) of the local CPU and avoid costly fetching of I/O data from system RAM. As a result, DDIO reduces overall I/O processing latency, allows I/O processing to be performed entirely in-cache, prevents the available RAM bandwidth/latency from becoming a performance bottleneck, and may lower power consumption by allowing RAM to remain longer in a low-powered state.<ref>{{cite web | url = http://www.intel.com/content/dam/www/public/us/en/documents/faqs/data-direct-i-o-faq.pdf | title = Intel Data Direct I/O (Intel DDIO): Frequently Asked Questions | date = March 2012 | access-date = 2015-10-11 | publisher = [[Intel]] }}</ref><ref>{{cite web | url = http://rhelblog.redhat.com/2015/09/29/pushing-the-limits-of-kernel-networking/ | title = Pushing the Limits of Kernel Networking | date = 2015-09-29 | access-date = 2015-10-11 | author = Rashid Khan | website = redhat.com }}</ref><ref>{{cite web | url = http://www.solarflare.com/content/userfiles/documents/intel_solarflare_webinar_paper.pdf | title = Achieving Lowest Latencies at Highest Message Rates with Intel Xeon Processor E5-2600 and Solarflare SFN6122F 10 GbE Server Adapter | date = 2012-06-07 | access-date = 2015-10-11 | website = solarflare.com }}</ref><ref>{{cite web | url = https://events.static.linuxfound.org/sites/events/files/slides/pushing-kernel-networking.pdf | title = Pushing the Limits of Kernel Networking | date = 2015-08-19 | access-date = 2015-10-11 | author = Alexander Duyck | website = linuxfoundation.org | page = 5 }}</ref>

=== AHB ===
{{main|Advanced Microcontroller Bus Architecture}}
In [[System-on-a-chip|systems-on-a-chip]] and [[embedded system]]s, the typical system-bus infrastructure is a complex on-chip bus such as the [[Advanced Microcontroller Bus Architecture#High-performance Bus|AMBA High-performance Bus]] (AHB). AMBA defines two kinds of AHB components: master and slave. A slave interface is similar to programmed I/O, through which the software (running on an embedded CPU, e.g. [[ARM architecture|ARM]]) can write/read I/O registers or (less commonly) local memory blocks inside the device. A master interface can be used by the device to perform DMA transactions to/from system memory without heavily loading the CPU. Therefore, high-bandwidth devices such as network controllers that need to transfer huge amounts of data to/from system memory will have two interface adapters to the AHB: a master and a slave interface. This is because on-chip buses like AHB do not support [[Three-state logic|tri-stating]] the bus or alternating the direction of any line on the bus. Like PCI, no central DMA controller is required, since the DMA is bus-mastering, but an [[Arbiter (electronics)|arbiter]] is required when multiple masters are present on the system. Internally, a multichannel DMA engine is usually present in the device to perform multiple concurrent [[scatter-gather]] operations as programmed by the software.
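Concretely, software drives such a master-interface engine through memory-mapped channel registers. The register layout below is entirely hypothetical (real engines such as Arm's PL080 differ in detail) but illustrates the typical programming model:

<syntaxhighlight lang="c">
#include <stdint.h>

/* Hypothetical per-channel register block of a memory-mapped
   multichannel AHB DMA engine (layouts of real engines differ). */
struct dma_chan_regs {
    volatile uint32_t src;    /* source bus address */
    volatile uint32_t dst;    /* destination bus address */
    volatile uint32_t len;    /* transfer length in bytes */
    volatile uint32_t ctrl;   /* bit 0: start, bit 1: interrupt enable */
    volatile uint32_t status; /* bit 0: busy, bit 1: done */
};

/* Assumed base address of the engine's channel array. */
#define DMA_BASE ((struct dma_chan_regs *)0x40010000u)

/* Sketch: start a memory-to-memory copy on one channel and poll. */
static void dma_copy(unsigned chan, uint32_t src, uint32_t dst, uint32_t len)
{
    struct dma_chan_regs *ch = &DMA_BASE[chan];

    ch->src  = src;
    ch->dst  = dst;
    ch->len  = len;
    ch->ctrl = 0x1;             /* kick off the transfer */
    while (ch->status & 0x1)    /* spin until the channel goes idle */
        ;
}
</syntaxhighlight>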
=== Cell ===
{{main|Cell (microprocessor)}}
As an example of DMA usage in a [[multiprocessor-system-on-chip]], IBM/Sony/Toshiba's [[Cell processor]] incorporates a DMA engine for each of its nine processing elements: one Power processor element (PPE) and eight synergistic processor elements (SPEs). Since an SPE's load/store instructions can read/write only its own local memory, an SPE depends entirely on DMA to transfer data to and from the main memory and the local memories of other SPEs. Thus DMA acts as a primary means of data transfer among cores inside this [[CPU]] (in contrast to cache-coherent CMP architectures such as Intel's cancelled [[GPGPU|general-purpose GPU]], [[Larrabee (microarchitecture)|Larrabee]]).

DMA in Cell is fully [[#Cache coherency|cache coherent]] (note, however, that local stores of SPEs operated upon by DMA do not act as a globally coherent cache in the [[CPU cache|standard sense]]). In both read ("get") and write ("put") directions, a DMA command can transfer either a single block of up to 16 KB in size, or a list of 2 to 2048 such blocks. The DMA command is issued by specifying a pair of a local address and a remote address: for example, when an SPE program issues a put DMA command, it specifies an address of its own local memory as the source and a virtual-memory address (pointing either to the main memory or to the local memory of another SPE) as the target, together with a block size. According to an experiment, the effective peak performance of DMA in Cell (3 GHz, under uniform traffic) reaches 200 GB per second.<ref name="petrini-cell">{{cite journal |first=Michael |last=Kistler |title=Cell Multiprocessor Communication Network: Built for Speed |journal=[[IEEE Micro]] |date=May 2006 |volume=26 |issue=3 |pages=10–23 |doi=10.1109/MM.2006.49 |s2cid=7735690 |url=http://portal.acm.org/citation.cfm?id=1158825.1159067 |url-access=subscription }}</ref>
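On the SPE side, such transfers are issued through the memory flow controller (MFC). A minimal sketch of a ''put'' from local store to an effective address, using the channel intrinsics from the IBM SPE SDK's <code>spu_mfcio.h</code> (buffer name and sizes here are illustrative):

<syntaxhighlight lang="c">
#include <stdint.h>
#include <spu_mfcio.h>

/* Local-store buffer; Cell DMA requires 16-byte alignment and performs
   best with 128-byte alignment. */
static uint8_t ls_buf[16384] __attribute__((aligned(128)));

/* Sketch: push the local buffer to main memory at effective address
   'ea' (up to 16 KB per command) and wait on MFC tag group 0. */
void spe_put_block(uint64_t ea, uint32_t size)
{
    const uint32_t tag = 0;

    mfc_put(ls_buf, ea, size, tag, 0, 0);  /* enqueue the put command */
    mfc_write_tag_mask(1 << tag);          /* select tag group 0 */
    mfc_read_tag_status_all();             /* block until the DMA completes */
}
</syntaxhighlight>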