== Software architecture ==
[[File:High level Overview of DRM.svg|thumb|A process using the Direct Rendering Manager of the Linux kernel to access a 3D accelerated graphics card]]

The Direct Rendering Manager resides in [[kernel space]], so user-space programs must use kernel [[system call]]s to request its services. However, DRM doesn't define its own customized system calls. Instead, it follows the [[Unix]] principle of "[[everything is a file]]" to expose the [[GPU]]s through the filesystem name space, using [[device file]]s under the <code>/dev</code> hierarchy. Each GPU detected by DRM is referred to as a ''DRM device'', and a device file <code>/dev/dri/card''X''</code> (where ''X'' is a sequential number) is created to interface with it.{{r|Kitching 2012}}{{r|Herrmann 2013 DRM split}} User-space programs that want to talk to the GPU must [[open (system call)|open]] this file and use [[ioctl]] calls to communicate with DRM. Different ioctls correspond to different functions of the DRM [[Application programming interface|API]].

A [[Library (computing)|library]] called ''libdrm'' was created to facilitate the interface of user-space programs with the DRM subsystem. This library is merely a [[Wrapper library|wrapper]] that provides a [[Subroutine|function]] written in [[C (programming language)|C]] for every ioctl of the DRM API, as well as constants, structures and other helper elements.{{r|libdrm README}} The use of libdrm not only avoids exposing the kernel interface directly to applications, but presents the usual advantages of [[Code reuse|reusing]] and sharing code between programs.

[[File:DRM architecture.svg|thumb|Direct Rendering Manager architecture details: DRM core and DRM driver (including GEM and KMS) interfaced by libdrm]]

DRM consists of two parts: a generic "DRM core" and a specific one ("DRM driver") for each type of supported hardware.{{r|Airlie redesign}} DRM core provides the basic framework where different DRM drivers can register, and also provides to user space a minimal set of ioctls with common, hardware-independent functionality.{{r|Kitching 2012}} A DRM driver, on the other hand, implements the hardware-dependent part of the API, specific to the type of GPU it supports; it should provide the implementation of the remaining ioctls not covered by DRM core, but it may also extend the API, offering additional ioctls with extra functionality only available on such hardware.{{r|Kitching 2012}} When a specific DRM driver provides an enhanced API, user-space libdrm is also extended by an extra library libdrm-''driver'' that can be used by user space to interface with the additional ioctls.

=== API ===
The DRM core exports several interfaces to user-space applications, generally intended to be used through corresponding <code>libdrm</code> wrapper functions. In addition, drivers export device-specific interfaces for use by user-space drivers and device-aware applications through [[ioctl]]s and [[sysfs]] files. External interfaces include: memory mapping, context management, [[Direct memory access|DMA]] operations, [[Accelerated Graphics Port|AGP]] management, [[Vertical blanking interval|vblank]] control, fence management, memory management, and output management.
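
The following minimal sketch (illustrative only, not taken from the cited sources) shows how a user-space program typically opens a DRM device and issues an ioctl through its libdrm wrapper; here <code>drmGetVersion()</code> wraps the version-query ioctl, <code>card0</code> is assumed to be the first DRM device on the machine, and error handling is kept to a minimum:

<syntaxhighlight lang="c">
/* Open a DRM device file and query the driver behind it through libdrm.
   Build with: cc example.c $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    /* "card0" is an assumption: the first DRM device enumerated on this machine. */
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* drmGetVersion() is the libdrm wrapper around the version-query ioctl. */
    drmVersionPtr ver = drmGetVersion(fd);
    if (ver) {
        printf("DRM driver: %s (%d.%d.%d)\n", ver->name,
               ver->version_major, ver->version_minor, ver->version_patchlevel);
        drmFreeVersion(ver);
    }

    close(fd);
    return 0;
}
</syntaxhighlight>
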
==== DRM-Master and DRM-Auth ====
There are several operations (ioctls) in the DRM API that, either for security purposes or for concurrency reasons, must be restricted to a single user-space process per device.{{r|Kitching 2012}} To implement this restriction, DRM limits such ioctls to be invoked only by the process considered the "master" of a DRM device, usually called ''DRM-Master''. Only one of all the processes that have the device node <code>/dev/dri/card''X''</code> opened will have its [[file handle]] marked as master, specifically the first one calling the {{mono|SET_MASTER}} ioctl. Any attempt to use one of these restricted ioctls without being the DRM-Master will return an error. A process can also give up its master role—and let another process acquire it—by calling the {{mono|DROP_MASTER}} ioctl.

The [[X Window System|X Server]]—or any other [[display server]]—is commonly the process that acquires the DRM-Master status in every DRM device it manages, usually when it opens the corresponding device node during its startup, and keeps these privileges for the entire graphical session until it finishes or dies.

For the remaining user-space processes there is another way to gain the privilege to invoke some restricted operations on the DRM device, called ''DRM-Auth''. It is basically a method of authentication against the DRM device, in order to prove to it that the process has the DRM-Master's approval to get such privileges. The procedure consists of:{{r|Peres Ravier 2013|p=13}}
* The client gets a unique token—a 32-bit integer—from the DRM device using the {{mono|GET_MAGIC}} ioctl and passes it to the DRM-Master process by whatever means (normally some sort of [[Inter-process communication|IPC]]; for example, in [[DRI2]] there is a {{mono|DRI2Authenticate}} request that any X client can send to the X Server{{r|DRI2 spec}}).
* The DRM-Master process, in turn, sends back the token to the DRM device by invoking the {{mono|AUTH_MAGIC}} ioctl.
* The device grants special rights to the process whose file handle's auth token matches the token received from the DRM-Master.
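
In terms of libdrm wrappers, this handshake corresponds to <code>drmGetMagic()</code> on the client side and <code>drmAuthMagic()</code> on the DRM-Master side. The following fragment is an illustrative sketch; the IPC channel carrying the token between client and master is assumed to exist and is not shown:

<syntaxhighlight lang="c">
#include <xf86drm.h>

/* Client side: obtain a magic token for its own open file handle. */
int client_get_token(int client_fd, drm_magic_t *magic)
{
    /* Wraps the GET_MAGIC ioctl. The token is then sent to the
       DRM-Master over some IPC channel (socket, X request, ...). */
    return drmGetMagic(client_fd, magic);
}

/* DRM-Master side: approve the token received from the client. */
int master_authenticate(int master_fd, drm_magic_t magic)
{
    /* Wraps the AUTH_MAGIC ioctl; once this succeeds, the client's
       file handle is authenticated against the device. */
    return drmAuthMagic(master_fd, magic);
}
</syntaxhighlight>
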
=== Graphics Execution Manager ===
Due to the increasing size of [[video memory]] and the growing complexity of graphics APIs such as [[OpenGL]], the strategy of reinitializing the graphics card state at each [[context switch]] was too expensive, performance-wise. Also, modern [[Linux desktop]]s needed an optimal way to share off-screen buffers with the [[compositing manager]]. These requirements led to the development of new methods to manage graphics [[Data buffer|buffers]] inside the kernel. The ''Graphics Execution Manager'' (GEM) emerged as one of these methods.{{r|Packard Anholt 2008}}

GEM provides an API with explicit [[memory management]] primitives.{{r|Packard Anholt 2008}} Through GEM, a user-space program can create, handle and destroy memory objects living in the GPU video memory. These objects, called "GEM objects",{{r|drmbook-mm}} are persistent from the user-space program's perspective and don't need to be reloaded every time the program regains control of the GPU. When a user-space program needs a chunk of video memory (to store a [[framebuffer]], [[Texture mapping|texture]] or any other data required by the GPU{{r|Vetter 2012}}), it requests the allocation from the DRM driver using the GEM API. The DRM driver keeps track of the used video memory and is able to comply with the request if there is free memory available, returning a "handle" to user space with which to refer to the allocated memory in subsequent operations.{{r|Packard Anholt 2008}}{{r|drmbook-mm}} The GEM API also provides operations to populate the buffer and to release it when it is not needed anymore. Memory from unreleased GEM handles gets recovered when the user-space process closes the DRM device [[file descriptor]]—intentionally or because it terminates.{{r|Vetter 2011}}

GEM also allows two or more user-space [[Process (computing)|processes]] using the same DRM device (hence the same DRM driver) to share a GEM object.{{r|Vetter 2011}} GEM handles are local 32-bit integers, unique to a process but repeatable in other processes, and therefore not suitable for sharing. What is needed is a global namespace, and GEM provides one through the use of global handles called ''GEM names''. A GEM name refers to one, and only one, GEM object created within the same DRM device by the same DRM driver, by using a unique 32-bit [[Integer (computer science)|integer]]. GEM provides an operation, ''flink'', to obtain a GEM name from a GEM handle.{{r|Vetter 2011}}{{r|Peres Ravier 2013|p=16}} The process can then pass this GEM name (32-bit integer) to another process using any [[Inter-process communication|IPC]] mechanism available.{{r|Peres Ravier 2013|p=15}} The GEM name can be used by the recipient process to obtain a local GEM handle pointing to the original GEM object.
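
At the ioctl level, the flink operation and its counterpart can be sketched as follows. This is an illustrative fragment using the generic GEM structures from libdrm's <code>drm.h</code>; <code>fd</code> is assumed to be an open DRM device file descriptor and <code>handle</code> an already allocated GEM handle:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <xf86drm.h>   /* drmIoctl(); pulls in drm.h for the GEM structs and ioctl numbers */

/* Sharing process: derive a global GEM name from a local GEM handle. */
uint32_t gem_flink(int fd, uint32_t handle)
{
    struct drm_gem_flink flink = { .handle = handle };
    if (drmIoctl(fd, DRM_IOCTL_GEM_FLINK, &flink) < 0)
        return 0;
    return flink.name;   /* global 32-bit GEM name, passed to the peer over IPC */
}

/* Recipient process: turn the received GEM name back into a local handle. */
uint32_t gem_open(int fd, uint32_t name)
{
    struct drm_gem_open op = { .name = name };
    if (drmIoctl(fd, DRM_IOCTL_GEM_OPEN, &op) < 0)
        return 0;
    return op.handle;    /* local handle referring to the same GEM object */
}
</syntaxhighlight>
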
Unfortunately, the use of GEM names to share buffers is not secure.{{r|Peres Ravier 2013|p=16}}{{r|Packard 2012}}{{r|Herrmann 2013 XDC}} A malicious third-party process accessing the same DRM device could try to guess the GEM name of a buffer shared by two other processes, simply by probing 32-bit integers.{{r|Kerrisk 2012}}{{r|Herrmann 2013 XDC}} Once a GEM name is found, its contents can be accessed and modified, violating the [[CIA triad|confidentiality and integrity]] of the information in the buffer. This drawback was overcome later by the introduction of [[DMA-BUF]] support into DRM, as DMA-BUF represents buffers in user space as file descriptors, which [[File descriptor#File descriptors as capabilities|may be shared securely]].

Another important task for any video-memory management system, besides managing the video-memory space, is handling the memory synchronization between the GPU and the CPU. Current [[memory architecture]]s are very complex and usually involve various levels of [[CPU cache|caches]] for the system memory, and sometimes for the video memory too. Therefore, video-memory managers should also handle the [[cache coherence]] to ensure that the data shared between CPU and GPU is consistent.{{r|Packard 2008 GEM}} This means that video-memory management internals are often highly dependent on hardware details of the GPU and memory architecture, and thus driver-specific.{{r|drm-mem manpage}}

GEM was initially developed by [[Intel]] engineers to provide a video-memory manager for its i915 driver.{{r|Packard 2008 GEM}} The [[Intel GMA#Intel GPU based|Intel GMA 9xx]] family are [[GPU#Integrated graphics|integrated GPUs]] with a Uniform Memory Architecture (UMA), where the GPU and CPU share the physical memory and there is no dedicated VRAM.{{r|Intel UMA}} GEM defines "memory domains" for memory synchronization, and while these memory domains are GPU-independent,{{r|Packard Anholt 2008}} they are specifically designed with a UMA memory architecture in mind, making them less suitable for other memory architectures like those with a separate VRAM. For this reason, other DRM drivers have decided to expose the GEM API to user-space programs, while internally implementing a different memory manager better suited to their particular hardware and memory architecture.{{r|Larabel 2008 gem-ttm}}

The GEM API also provides ioctls for control of the execution flow (command buffers), but they are Intel-specific, to be used with Intel i915 and later GPUs.{{r|Packard Anholt 2008}} No other DRM driver has attempted to implement any part of the GEM API beyond the memory-management specific ioctls.

==== Translation Table Maps ====
''Translation Table Maps'' (TTM) is the name of the generic memory manager for GPUs that was developed before GEM.{{r|Corbet 2007}}{{r|drmbook-mm}} It was specifically designed to manage the different types of memory that a GPU might access, including dedicated [[Video RAM]] (commonly installed in the video card) and [[main memory|system memory]] accessible through an [[Input–output memory management unit|I/O memory management unit]] called the [[Graphics Address Remapping Table]] (GART).{{r|Corbet 2007}} TTM should also handle the portions of the video RAM that are not directly addressable by the CPU, and do it with the best possible performance, considering that user-space graphics applications typically work with large amounts of video data. Another important matter was to maintain the consistency between the different memories and caches involved.

The main concept of TTM is the "buffer object", a region of video memory that at some point must be addressable by the GPU.{{r|Corbet 2007}} When a user-space graphics application wants access to a certain buffer object (usually to fill it with content), TTM may require relocating it to a type of memory addressable by the CPU. Further relocations—or GART mapping operations—could happen when the GPU needs access to a buffer object but it isn't in the GPU's address space yet. Each of these relocation operations must handle any related data and cache-coherency issues.{{r|Corbet 2007}}

Another important TTM concept is ''[[Memory barrier|fences]]''. Fences are essentially a mechanism to manage concurrency between the CPU and the GPU.{{r|Corbet 2008}} A fence tracks when a buffer object is no longer used by the GPU, generally to notify any user-space process with access to it.{{r|Corbet 2007}}
The fact that TTM tried to manage all kinds of memory architectures, including those with and without a dedicated VRAM, in a suitable way, and to provide every conceivable feature in a memory manager for use with any type of hardware, led to an overly complex solution with an API far larger than needed.{{r|Corbet 2008}}{{r|drmbook-mm}} Some DRM developers thought that it wouldn't fit well with any specific driver, especially its API. When GEM emerged as a simpler memory manager, its API was preferred over the TTM one. But some driver developers considered that the approach taken by TTM was more suitable for discrete video cards with dedicated video memory and IOMMUs, so they decided to use TTM internally, while exposing their buffer objects as GEM objects and thus supporting the GEM API.{{r|Larabel 2008 gem-ttm}} Examples of current drivers using TTM as an internal memory manager while providing a GEM API are the radeon driver for AMD video cards and the [[nouveau (software)|nouveau]] driver for NVIDIA video cards.

==== {{Anchor|PRIME}} DMA Buffer Sharing and PRIME ====
The ''DMA Buffer Sharing API'' (often abbreviated as DMA-BUF) is a [[Linux kernel]] internal [[Application programming interface|API]] designed to provide a generic mechanism to share [[Direct memory access|DMA]] buffers across multiple devices, possibly managed by different types of device drivers.{{r|Corbet 2012 dmabuf}}{{r|Clark Semwal 2012}} For example, a [[Video4Linux]] device and a graphics adapter device could share buffers through DMA-BUF to achieve [[zero-copy]] of the data of a video stream produced by the former and consumed by the latter. Any Linux [[device driver]] can implement this API as exporter, as user (consumer), or both.

This feature was exploited for the first time in DRM to implement PRIME, a solution for [[GPU offloading]] that uses DMA-BUF to share the resulting framebuffers between the DRM drivers of the discrete and the integrated GPU.{{r|Peres 2014|p=13}} An important feature of DMA-BUF is that a shared buffer is presented to user space as a [[file descriptor]].{{r|drmbook-mm}}{{r|Peres Ravier 2013|p=17}} For the development of PRIME two new ioctls were added to the DRM API, one to convert a local GEM handle to a DMA-BUF file descriptor and another for the exact opposite operation.

These two new ioctls were later reused as a way to fix the inherent unsafety of GEM buffer sharing.{{r|Peres Ravier 2013|p=17}} Unlike GEM names, file descriptors cannot be guessed (they are not a global namespace), and Unix operating systems provide a safe way to pass them through a [[Unix domain socket]] using the SCM_RIGHTS semantics.{{r|drmbook-mm}}{{r|Pinchart 2013|p=11}} A process that wants to share a GEM object with another process can convert its local GEM handle to a DMA-BUF file descriptor and pass it to the recipient, which in turn can get its own GEM handle from the received file descriptor.{{r|Peres Ravier 2013|p=16}} This method is used by [[Direct Rendering Infrastructure|DRI3]] to share buffers between the client and the X Server{{r|Edge 2013}} and also by [[Wayland (display server protocol)|Wayland]].
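
In libdrm these two ioctls are exposed as <code>drmPrimeHandleToFD()</code> and <code>drmPrimeFDToHandle()</code>. A minimal illustrative sketch of the exchange follows; the passing of the file descriptor between processes over a Unix domain socket is assumed and not shown:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <xf86drm.h>

/* Exporting process: convert a local GEM handle into a DMA-BUF file descriptor. */
int export_buffer(int drm_fd, uint32_t gem_handle)
{
    int prime_fd = -1;
    /* DRM_CLOEXEC keeps the fd from leaking across exec(). The fd is then
       sent to the other process via SCM_RIGHTS over a Unix domain socket. */
    if (drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, &prime_fd) < 0)
        return -1;
    return prime_fd;
}

/* Importing process: convert the received DMA-BUF fd into a local GEM handle. */
int import_buffer(int drm_fd, int prime_fd, uint32_t *gem_handle)
{
    return drmPrimeFDToHandle(drm_fd, prime_fd, gem_handle);
}
</syntaxhighlight>
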
=== {{Anchor|Kernel mode setting|KMS driver}} Kernel Mode Setting ===
[[File:Linux kernel and daemons with exclusive access.svg|thumb|There must be a "DRM master" in user space; this program has exclusive access to KMS.]]

In order to work properly, a video card or graphics adapter must set a ''[[Framebuffer#Display modes|mode]]''—a combination of [[screen resolution]], [[color depth]] and [[refresh rate]]—that is within the range of values supported by itself and the attached [[Computer monitor|display screen]]. This operation is called [[mode-setting]],{{r|Kernelnewbies linux 2.6.29}} and it usually requires raw access to the graphics hardware—i.e. the ability to write to certain registers of the video card's [[display controller]].{{r|OSDev VGA}}{{r|Rathmann 2008}} A mode-setting operation must be performed before starting to use the [[framebuffer]], and also whenever an application or the user requires a mode change.

In the early days, [[User_space_and_kernel_space|user-space]] programs that wanted to use the graphical framebuffer were also responsible for providing the mode-setting operations.{{r|White}} Thus, they needed to run with privileged access to the video hardware. In Unix-type operating systems, the [[X Window System|X Server]] was the most prominent example; its mode-setting implementation lived in the [[Device Dependent X|DDX driver]] for each specific type of video card.{{r|Paalanen 2014}} This approach, later referred to as ''User space Mode-Setting'' (UMS),{{r|kms manpage}}{{r|Corbet 2010}} poses several issues.{{r|X wiki modesetting}}{{r|Kernelnewbies linux 2.6.29}} It not only breaks the isolation that operating systems should provide between programs and hardware, raising both stability and security concerns, but also could leave the graphics hardware in an inconsistent state if two or more user-space programs try to do the mode-setting at the same time. To avoid these conflicts, the X Server became in practice the only user-space program that performed mode-setting operations; the remaining user-space programs relied on the X Server to set the appropriate mode and to handle any other operation involving mode-setting. Initially the mode-setting was performed exclusively during the X Server startup process, but later the X Server gained the ability to do it while running.{{r|Corbet 2007 LCA}} The XFree86-VidModeExtension extension was introduced in [[XFree86]] 3.1.2 to let any X client request [[XFree86 Modeline|modeline]] (resolution) changes to the X Server.{{r|XF86vidmode man}}{{r|X11R6.1 relnotes}} The VidMode extension was later superseded by the more generic [[XRandR]] extension.

However, this was not the only code doing mode-setting in a [[Linux]] system.
During the system booting process, the Linux kernel must set a minimal [[text mode]] for the [[Linux console|virtual console]], based on the standard modes defined by the [[VESA BIOS]] extensions.{{r|Corbet 2004}} Also, the [[Linux framebuffer|Linux kernel framebuffer driver]] contained mode-setting code to configure framebuffer devices.{{r|fb doc}} To avoid mode-setting conflicts, the [[XFree86|XFree86 Server]]—and later the [[X.Org Server]]—handled the case when the user switched from the graphical environment to a text [[virtual console]] by saving its mode-setting state, and restoring it when the user switched back to X.{{r|Fedora KMS}} This process caused an annoying flicker in the transition, and could also fail, leading to a corrupted or unusable output display.{{r|Barnes modesetting}}

The user-space mode-setting approach also caused other issues:{{r|DRI Wiki modesetting}}{{r|Barnes modesetting}}
* The [[sleep mode|suspend/resume process]] has to rely on user-space tools to restore the previous mode. A single failure or crash of one of these programs could leave the system without a working display due to a modeset misconfiguration, and therefore unusable.
* It was also impossible for the kernel to show error or debug messages when the screen was in a graphics mode—for example when X was running—since the only modes the kernel knew about were the VESA BIOS standard text modes.
* A more pressing issue was the proliferation of graphical applications bypassing the X Server and the emergence of other graphics stack alternatives to X, extending the duplication of mode-setting code across the system even further.

To address these problems, the mode-setting code was moved to a single place inside the kernel, specifically to the existing DRM module.{{r|X wiki modesetting}}{{r|Corbet 2007 LCA}}{{r|Packard 2007}}{{r|Barnes modesetting}}{{r|DRI Wiki modesetting}} Then, every process—including the X Server—should be able to command the kernel to perform mode-setting operations, and the kernel would ensure that concurrent operations don't result in an inconsistent state. The new kernel API and code added to the DRM module to perform these mode-setting operations was called ''Kernel Mode-Setting'' (KMS).{{r|Kernelnewbies linux 2.6.29}}

Kernel Mode-Setting provides several benefits. The most immediate is of course the removal of duplicate mode-setting code, from both the kernel (Linux console, fbdev) and user space (X Server DDX drivers). KMS also makes it easier to write alternative graphics systems, which now don't need to implement their own mode-setting code.{{r|Barnes modesetting}}{{r|DRI Wiki modesetting}} By providing centralized mode management, KMS solves the flickering issues when changing between console and X, and also between different instances of X (fast user switching).{{r|Fedora KMS}}{{r|Packard 2007}} Since it is available in the kernel, it can also be used at the beginning of the boot process, avoiding flicker due to mode changes in these early stages.

The fact that KMS is part of the kernel allows it to use resources only available in kernel space, such as [[Hardware interrupt|interrupts]].{{r|Packard 2009}} For example, the mode recovery after a suspend/resume process becomes much simpler when managed by the kernel itself, and incidentally improves security (no more user-space tools requiring root permissions).
The kernel also makes it easy to [[hotplug]] new display devices, solving a longstanding problem.{{r|Packard 2009}}

Mode-setting is also closely related to memory management—since framebuffers are basically memory buffers—so a tight integration with the graphics memory manager is highly recommended. That's the main reason why the kernel mode-setting code was incorporated into DRM and not as a separate subsystem.{{r|Packard 2007}}

To avoid breaking backwards compatibility of the DRM API, Kernel Mode-Setting is provided as an additional ''driver feature'' of certain DRM drivers.{{r|drmbook-init}} Any DRM driver can choose to provide the {{mono|DRIVER_MODESET}} flag when it registers with the DRM core to indicate that it supports the KMS API.{{r|Kitching 2012}} Those drivers that implement Kernel Mode-Setting are often called ''KMS drivers'' as a way to differentiate them from the legacy—without KMS—DRM drivers.

KMS has been adopted to such an extent that certain drivers which lack 3D acceleration (or for which the hardware vendor doesn't want to expose or implement it) nevertheless implement the KMS API without the rest of the DRM API, allowing display servers (like [[Wayland (protocol)|Wayland]]) to run with ease.<ref>{{Cite web |date=2023-01-31 |title=q3k (@q3k@hackerspace.pl) |url=https://social.hackerspace.pl/@q3k/109783347119397449 |access-date=2023-02-13 |website=Warsaw Hackerspace Social Club |language=en |quote=DRM/KMS driver fully working now, although still without DMA. Oh, and it's written in Rust, although it's mostly just full of raw unsafe blocks.}}</ref><ref>{{Cite web |date=2023-01-31 |title=q3k (@q3k@hackerspace.pl) |url=https://social.hackerspace.pl/@q3k/109785149255867298 |access-date=2023-02-13 |website=Warsaw Hackerspace Social Club |language=en |quote=Cool thing is, since we have a 'normal' DRM/KMS driver (and help from @emersion@hackerspace.pl) we can just do things like... run Wayland! Weston on an iPod Nano 5G.}}</ref>

==== KMS device model ====
KMS models and manages the output devices as a series of abstract hardware blocks commonly found in the display output pipeline of a [[display controller]]. These blocks are:{{r|drmbook-kms}}
* '''CRTCs''': each CRTC (from [[Cathode-ray tube|CRT]] controller{{r|X wiki videocards}}{{r|Paalanen 2014}}) represents a scanout engine of the display controller, pointing to a ''scanout buffer'' ([[framebuffer]]).{{r|drmbook-kms}} The purpose of a CRTC is to read the pixel data currently in the scanout buffer and generate from it the [[Video signal|video mode timing signal]] with the help of a [[Phase-locked loop|PLL circuit]].{{r|Deucher 2010}} The number of CRTCs available determines how many independent output devices the hardware can handle at the same time, so in order to use ''[[Multi-monitor|multi-head]]'' configurations at least one CRTC per display device is required.{{r|drmbook-kms}} Two—or more—CRTCs can also work in ''clone mode'' if they scan out from the same framebuffer to send the same image to several output devices.{{r|Deucher 2010}}{{r|X wiki videocards}}
* '''Connectors''': a connector represents where the display controller sends the video signal from a scanout operation to be displayed. Usually, the KMS concept of a connector corresponds to a physical connector ([[VGA connector|VGA]], [[Digital Visual Interface|DVI]], [[FPD-Link]], [[HDMI]], [[DisplayPort]], [[S-Video]], ...) in the hardware where an output device ([[Computer monitor|monitor]], [[laptop]] panel, ...) is permanently attached or can be temporarily attached. Information related to the current physically attached output device—such as connection status, [[Extended Display Identification Data|EDID]] data, [[VESA Display Power Management Signaling|DPMS]] status or supported video modes—is also stored within the connector.{{r|drmbook-kms}}
* '''Encoders''': the display controller must encode the video mode timing signal from the CRTC in a format suitable for the intended connector.{{r|drmbook-kms}} An encoder represents the hardware block able to do one of these encodings. Examples of encodings—for digital outputs—are [[Transition-minimized differential signaling|TMDS]] and [[Low-voltage differential signaling|LVDS]]; for analog outputs such as [[Video Graphics Array|VGA]] and TV out, specific [[Digital-to-analog converter|DAC]] blocks are generally used. A connector can only receive the signal from one encoder at a time,{{r|drmbook-kms}} and each type of connector only supports some encodings. There also might be additional physical restrictions by which not every CRTC is connected to every available encoder, limiting the possible combinations of CRTC-encoder-connector.
* '''Planes''': a plane is not a hardware block but a memory object containing a buffer from which a scanout engine (a CRTC) is fed. The plane that holds the [[framebuffer]] is called the ''primary plane'', and each CRTC must have one associated with it,{{r|drmbook-kms}} since it is the source from which the CRTC determines the video mode—display resolution (width and height), pixel size, pixel format, refresh rate, etc. A CRTC might also have ''cursor planes'' associated with it if the display controller supports hardware cursor overlays, or ''secondary planes'' if it's able to scan out from additional [[hardware overlay]]s and compose or blend "on the fly" the final image sent to the output device.{{r|Paalanen 2014}}
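
libdrm exposes this device model to user space through its <code>drmMode*</code> family of functions. The following illustrative sketch (not from the cited sources; <code>card0</code> is an assumption and error handling is abbreviated) enumerates the connectors of a DRM device and prints the first listed mode of each connected one:

<syntaxhighlight lang="c">
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0)
        return 1;

    /* Fetch the lists of CRTCs, encoders and connectors of this device. */
    drmModeResPtr res = drmModeGetResources(fd);
    if (!res) {
        close(fd);
        return 1;
    }

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnectorPtr conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0) {
            /* modes[0] is typically the mode preferred by the attached display. */
            printf("connector %u: %s @ %u Hz\n", conn->connector_id,
                   conn->modes[0].name, conn->modes[0].vrefresh);
        }
        drmModeFreeConnector(conn);
    }

    drmModeFreeResources(res);
    close(fd);
    return 0;
}
</syntaxhighlight>
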
==== {{Anchor|Atomic mode setting|nuclear pageflip}}Atomic Display ====
In recent years there has been an ongoing effort to bring [[Atomicity (programming)|atomicity]] to some regular operations pertaining to the KMS API, specifically to the [[mode setting]] and [[page flipping]] operations.{{r|Paalanen 2014}}{{r|Vetter 2015 LWN p1}} This enhanced KMS API is what is called ''Atomic Display'' (formerly known as ''atomic mode-setting'' and ''atomic'' or ''nuclear pageflip'').

The purpose of atomic mode-setting is to ensure a correct change of mode in complex configurations with multiple restrictions, by avoiding intermediate steps which could lead to an inconsistent or invalid video state;{{r|Vetter 2015 LWN p1}} it also avoids risky video states when a failed mode-setting process has to be undone ("rollback").{{r|Reding 2015|p=9}} Atomic mode-setting makes it possible to know beforehand whether a certain mode configuration is appropriate, by providing mode-testing capabilities.{{r|Vetter 2015 LWN p1}} When an atomic mode is tested and its validity confirmed, it can be applied with a single indivisible (atomic) [[Atomic commit|commit]] operation. Both test and commit operations are provided by the same new [[ioctl]] with different flags.

''Atomic page flip'', on the other hand, allows multiple planes on the same output (for instance the primary plane, the cursor plane and maybe some overlays or secondary planes) to be updated, all synchronized within the same [[Vertical blanking interval|VBLANK]] interval, ensuring a proper display without tearing.{{r|Reding 2015|p=9,14}}{{r|Vetter 2015 LWN p1}} This requirement is especially relevant to mobile and embedded display controllers, which tend to use multiple planes/overlays to save power.

The new atomic API is built upon the old KMS API. It uses the same model and objects (CRTCs, encoders, connectors, planes, ...), but with an increasing number of object properties that can be modified.{{r|Vetter 2015 LWN p1}} The atomic procedure is based on changing the relevant properties to build the state to be tested or committed. The properties to modify depend on whether the operation is a mode-setting (mostly properties of CRTCs, encoders and connectors) or a page flip (usually properties of planes). The ioctl is the same in both cases, the difference being the list of properties passed with each one.{{r|Vetter 2015 LWN p2}}
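
In libdrm terms, a property update list is assembled with <code>drmModeAtomicAlloc()</code> and <code>drmModeAtomicAddProperty()</code> and submitted with <code>drmModeAtomicCommit()</code>, where the {{mono|DRM_MODE_ATOMIC_TEST_ONLY}} flag selects the test operation. The following is an illustrative sketch only; the object and property IDs (<code>plane_id</code>, <code>fb_prop_id</code>, <code>fb_id</code>) are hypothetical placeholders that real code discovers at runtime through the KMS property API:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Test, then commit, an atomic state update that points a plane at a new
   framebuffer. All three IDs are placeholders looked up at runtime. */
int atomic_flip(int fd, uint32_t plane_id, uint32_t fb_prop_id, uint32_t fb_id)
{
    /* Atomic support must be requested explicitly by the client. */
    if (drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1) < 0)
        return -1;

    drmModeAtomicReqPtr req = drmModeAtomicAlloc();
    if (!req)
        return -1;

    /* Build the desired state as a list of (object, property, value) updates. */
    int ret = drmModeAtomicAddProperty(req, plane_id, fb_prop_id, fb_id);
    if (ret >= 0) {
        /* First test the state without applying it... */
        ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
        /* ...and only commit it for real if the driver accepts it. */
        if (ret == 0)
            ret = drmModeAtomicCommit(fd, req, 0, NULL);
    }

    drmModeAtomicFree(req);
    return ret;
}
</syntaxhighlight>
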
=== Render nodes ===
In the original DRM API, the DRM device <code>/dev/dri/card''X''</code> is used for both privileged (modesetting, other display control) and non-privileged (rendering, [[GPGPU]] compute) operations.{{r|Herrmann 2013 DRM split}} For security reasons, opening the associated DRM device file requires special privileges "equivalent to root-privileges".{{r|Herrmann 2013 rendernodes}} This leads to an architecture where only some trusted user-space programs (the X server, a graphical compositor, ...) have full access to the DRM API, including the privileged parts like the modeset API. Other user-space applications that want to render or make GPGPU computations must first be granted access by the owner of the DRM device ("DRM-Master") through the use of a special authentication interface.{{r|drmbook-rendernodes}} The authenticated applications can then render or make computations using a restricted version of the DRM API without privileged operations. This design imposes a severe constraint: there must always be a running graphics server (the X Server, a Wayland compositor, ...) acting as DRM-Master of a DRM device so that other user-space programs can be granted the use of the device, even in cases not involving any graphics display, like GPGPU computations.{{r|Herrmann 2013 rendernodes}}{{r|drmbook-rendernodes}}

The "render nodes" concept tries to solve these scenarios by splitting the DRM user-space API into two interfaces – one privileged and one non-privileged – and using separate device files (or "nodes") for each one.{{r|Herrmann 2013 DRM split}} For every GPU found, its corresponding DRM driver—if it supports the render nodes feature—creates a device file <code>/dev/dri/renderD''X''</code>, called the ''render node'', in addition to the primary node <code>/dev/dri/card''X''</code>.{{r|drmbook-rendernodes}}{{r|Herrmann 2013 DRM split}} Clients that use a direct rendering model and applications that want to take advantage of the computing facilities of a GPU can do so without requiring additional privileges, by simply opening any existing render node and dispatching GPU operations using the limited subset of the DRM API supported by those nodes—provided they have [[file system permissions]] to open the device file. Display servers, compositors and any other program that requires the modeset API or any other privileged operation must open the standard primary node that grants access to the full DRM API, and use it as usual. Render nodes explicitly disallow the GEM ''flink'' operation to prevent buffer sharing using insecure GEM global names; only PRIME (DMA-BUF) [[file descriptor]]s can be used to share buffers with another client, including the graphics server.{{r|Herrmann 2013 DRM split}}{{r|drmbook-rendernodes}}
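
A compute or rendering client therefore needs nothing more than file system permission on the render node. A minimal illustrative sketch (<code>renderD128</code> is an assumption, being the conventional first render node minor number):

<syntaxhighlight lang="c">
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    /* Render nodes start at minor 128 by convention; renderD128 is an assumption. */
    int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open render node");
        return 1;
    }

    /* No DRM-Master, no DRM-Auth: the restricted (render/compute) subset of
       the DRM API is immediately available on this file descriptor. */
    drmVersionPtr ver = drmGetVersion(fd);
    if (ver) {
        printf("rendering via %s without any special privileges\n", ver->name);
        drmFreeVersion(ver);
    }

    close(fd);
    return 0;
}
</syntaxhighlight>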