=== Graphics Execution Manager ===
Due to the increasing size of [[video memory]] and the growing complexity of graphics APIs such as [[OpenGL]], the strategy of reinitializing the graphics card state at each [[context switch]] became too expensive in terms of performance. Modern [[Linux desktop]]s also needed an efficient way to share off-screen buffers with the [[compositing manager]]. These requirements led to the development of new methods to manage graphics [[Data buffer|buffers]] inside the kernel. The ''Graphics Execution Manager'' (GEM) emerged as one of these methods.{{r|Packard Anholt 2008}}

GEM provides an API with explicit [[memory management]] primitives.{{r|Packard Anholt 2008}} Through GEM, a user-space program can create, handle and destroy memory objects living in the GPU video memory. These objects, called "GEM objects",{{r|drmbook-mm}} are persistent from the user-space program's perspective and do not need to be reloaded every time the program regains control of the GPU. When a user-space program needs a chunk of video memory (to store a [[framebuffer]], [[Texture mapping|texture]] or any other data required by the GPU{{r|Vetter 2012}}), it requests the allocation from the DRM driver using the GEM API. The DRM driver keeps track of the used video memory and can satisfy the request if free memory is available, returning a "handle" to user space with which to refer to the allocated memory in subsequent operations.{{r|Packard Anholt 2008}}{{r|drmbook-mm}} The GEM API also provides operations to populate the buffer and to release it when it is no longer needed.
Memory from unreleased GEM handles is recovered when the user-space process closes the DRM device [[file descriptor]], whether intentionally or because it terminates.{{r|Vetter 2011}}

GEM also allows two or more user-space [[Process (computing)|processes]] using the same DRM device (and hence the same DRM driver) to share a GEM object.{{r|Vetter 2011}} GEM handles are local 32-bit integers that are unique to a process but repeatable in other processes, and therefore unsuitable for sharing. What is needed is a global namespace, which GEM provides through global handles called ''GEM names''. A GEM name refers to one, and only one, GEM object created within the same DRM device by the same DRM driver, using a unique 32-bit [[Integer (computer science)|integer]]. GEM provides the ''flink'' operation to obtain a GEM name from a GEM handle.{{r|Vetter 2011}}{{r|Peres Ravier 2013|p=16}} The process can then pass this GEM name (a 32-bit integer) to another process using any available [[Inter-process communication|IPC]] mechanism.{{r|Peres Ravier 2013|p=15}} The recipient process can use the GEM name to obtain a local GEM handle pointing to the original GEM object.

Unfortunately, the use of GEM names to share buffers is not secure.{{r|Peres Ravier 2013|p=16}}{{r|Packard 2012}}{{r|Herrmann 2013 XDC}} A malicious third-party process accessing the same DRM device can try to guess the GEM name of a buffer shared by two other processes simply by probing 32-bit integers.{{r|Kerrisk 2012}}{{r|Herrmann 2013 XDC}} Once a GEM name is found, its contents can be accessed and modified, violating the [[CIA triad|confidentiality and integrity]] of the information in the buffer. This drawback was later overcome by the introduction of [[DMA-BUF]] support into DRM, as DMA-BUF represents buffers in user space as file descriptors, which [[File descriptor#File descriptors as capabilities|may be shared securely]].
Besides managing the video-memory space, another important task of any video-memory management system is handling memory synchronization between the GPU and the CPU. Current [[memory architecture]]s are very complex and usually involve various levels of [[CPU cache|caches]] for the system memory and sometimes for the video memory as well. Therefore, video-memory managers should also handle [[cache coherence]] to ensure that the data shared between CPU and GPU is consistent.{{r|Packard 2008 GEM}} This means that the internals of video-memory management are often highly dependent on hardware details of the GPU and memory architecture, and are thus driver-specific.{{r|drm-mem manpage}}

GEM was initially developed by [[Intel]] engineers to provide a video-memory manager for its i915 driver.{{r|Packard 2008 GEM}} The [[Intel GMA#Intel GPU based|Intel GMA 9xx]] family are [[GPU#Integrated graphics|integrated GPUs]] with a Uniform Memory Architecture (UMA), in which the GPU and CPU share the physical memory and there is no dedicated VRAM.{{r|Intel UMA}} GEM defines "memory domains" for memory synchronization, and while these memory domains are GPU-independent,{{r|Packard Anholt 2008}} they are designed specifically with a UMA memory architecture in mind, which makes them less suitable for other memory architectures, such as those with separate VRAM. For this reason, other DRM drivers have decided to expose the GEM API to user-space programs while internally implementing a different memory manager better suited to their particular hardware and memory architecture.{{r|Larabel 2008 gem-ttm}}

The GEM API also provides ioctls for control of the execution flow (command buffers), but they are Intel-specific, to be used with Intel i915 and later GPUs.{{r|Packard Anholt 2008}} No other DRM driver has attempted to implement any part of the GEM API beyond the memory-management-specific ioctls.
==== Translation Table Maps ====
''Translation Table Maps'' (TTM) is the name of the generic memory manager for GPUs that was developed before GEM.{{r|Corbet 2007}}{{r|drmbook-mm}} It was specifically designed to manage the different types of memory that a GPU might access, including dedicated [[Video RAM]] (commonly installed on the video card) and [[main memory|system memory]] accessible through an [[Input–output memory management unit|I/O memory management unit]] called the [[Graphics Address Remapping Table]] (GART).{{r|Corbet 2007}} TTM should also handle the portions of video RAM that are not directly addressable by the CPU, and do so with the best possible performance, considering that user-space graphics applications typically work with large amounts of video data. Another important matter was maintaining consistency between the different memories and caches involved.

The central concept of TTM is the ''buffer object'', a region of video memory that at some point must be addressable by the GPU.{{r|Corbet 2007}} When a user-space graphics application wants access to a certain buffer object (usually to fill it with content), TTM may require relocating it to a type of memory addressable by the CPU. Further relocations, or GART mapping operations, can happen when the GPU needs access to a buffer object that is not yet in the GPU's address space. Each of these relocation operations must handle any related data and cache-coherency issues.{{r|Corbet 2007}}

Another important TTM concept is that of ''[[Memory barrier|fences]]''.
Fences are essentially a mechanism to manage concurrency between the CPU and the GPU.{{r|Corbet 2008}} A fence tracks when a buffer object is no longer in use by the GPU, generally to notify any user-space process that has access to it.{{r|Corbet 2007}}

The fact that TTM tried to manage all kinds of memory architectures, including those with and without dedicated VRAM, in a suitable way, and to provide every conceivable feature in a memory manager for use with any type of hardware, led to an overly complex solution with an API far larger than needed.{{r|Corbet 2008}}{{r|drmbook-mm}} Some DRM developers thought that it would not fit well with any specific driver, especially its API. When GEM emerged as a simpler memory manager, its API was preferred over TTM's. But some driver developers considered that the approach taken by TTM was more suitable for discrete video cards with dedicated video memory and IOMMUs, so they decided to use TTM internally while exposing their buffer objects as GEM objects, thus supporting the GEM API.{{r|Larabel 2008 gem-ttm}} Examples of current drivers using TTM as an internal memory manager while providing a GEM API are the radeon driver for AMD video cards and the [[nouveau (software)|nouveau]] driver for NVIDIA video cards.

==== {{Anchor|PRIME}} DMA Buffer Sharing and PRIME ====
The ''DMA Buffer Sharing API'' (often abbreviated as DMA-BUF) is a [[Linux kernel]] internal [[Application programming interface|API]] designed to provide a generic mechanism for sharing [[Direct memory access|DMA]] buffers across multiple devices, possibly managed by different types of device drivers.{{r|Corbet 2012 dmabuf}}{{r|Clark Semwal 2012}} For example, a [[Video4Linux]] device and a graphics adapter could share buffers through DMA-BUF to achieve [[zero-copy]] handling of the data of a video stream produced by the former and consumed by the latter. Any Linux [[device driver]] can implement this API as an exporter, as a user (consumer), or both.
This feature was exploited for the first time in DRM to implement PRIME, a solution for [[GPU offloading]] that uses DMA-BUF to share the resulting framebuffers between the DRM drivers of the discrete and the integrated GPU.{{r|Peres 2014|p=13}} An important feature of DMA-BUF is that a shared buffer is presented to user space as a [[file descriptor]].{{r|drmbook-mm}}{{r|Peres Ravier 2013|p=17}} For the development of PRIME, two new ioctls were added to the DRM API: one to convert a local GEM handle to a DMA-BUF file descriptor, and another for the exact opposite operation.

These two new ioctls were later reused as a way to fix the inherent insecurity of GEM buffer sharing.{{r|Peres Ravier 2013|p=17}} Unlike GEM names, file descriptors cannot be guessed (they are not a global namespace), and Unix operating systems provide a safe way to pass them through a [[Unix domain socket]] using the SCM_RIGHTS semantics.{{r|drmbook-mm}}{{r|Pinchart 2013|p=11}} A process that wants to share a GEM object with another process can convert its local GEM handle to a DMA-BUF file descriptor and pass it to the recipient, which in turn can obtain its own GEM handle from the received file descriptor.{{r|Peres Ravier 2013|p=16}} This method is used by [[Direct Rendering Infrastructure|DRI3]] to share buffers between the client and the X Server{{r|Edge 2013}} and also by [[Wayland (display server protocol)|Wayland]].