{{Short description|Sub-field of computer graphics}}
{{More citations needed|date=September 2017}}
{{3D computer graphics}}
[[File:Riparian river view.jpg|thumb|[[Virtual reality]] render of a river from 2000]]
[[File:CAVE Crayoland.jpg|thumb|Virtual environment at [[University of Illinois]], 2001|alt=]]
[[File:Visions big.jpg|thumb|[[Music visualization]]s are generated in real time.]]
'''Real-time computer graphics''' or '''real-time rendering''' is the sub-field of [[computer graphics]] focused on producing and analyzing images in [[Real-time computing|real time]]. The term can refer to anything from rendering an application's graphical user interface ([[Graphical user interface|GUI]]) to real-time [[image analysis]], but it is most often used in reference to interactive [[3D computer graphics]], typically using a [[graphics processing unit]] (GPU). One example of this concept is a [[video game]] that rapidly renders changing 3D environments to produce an illusion of motion.

Computers have been capable of generating 2D images such as simple lines, images and [[polygon]]s in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional [[Von Neumann architecture]]-based systems. An early workaround to this problem was the use of [[sprite (computer graphics)|sprite]]s, [[2D computer graphics|2D images]] that could imitate 3D graphics.

Different techniques for [[rendering (computer graphics)|rendering]] now exist, such as [[Real-time ray tracing|ray tracing]] and [[Rasterisation|rasterization]]. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input. This means that the user can respond to rendered images in real time, producing an interactive experience.
== Principles of real-time 3D computer graphics ==
{{Main|3D computer graphics}}
The goal of computer graphics is to generate [[Computer-generated imagery|computer-generated images]], or [[Film frame|frame]]s, using certain desired metrics. One such metric is the number of [[Image-based modeling and rendering|frames generated]] in a given second. Real-time computer graphics systems differ from traditional (i.e., non-real-time) rendering systems in that non-real-time graphics typically rely on [[Ray tracing (graphics)|ray tracing]]. In this process, millions or billions of rays are traced from the [[Virtual camera system|camera]] to the [[Virtual world|world]] for detailed rendering; this expensive operation can take hours or days to render a single frame.

[[File:Real-time Raymarched Terrain.png|thumb|[[Terrain rendering]] made in 2014]]
Real-time graphics systems must instead render each image in less than 1/30th of a second. Ray tracing is far too slow for these systems; instead, they employ the technique of [[Z-buffering|z-buffer]] [[triangle rasterization]]. In this technique, every object is decomposed into individual primitives, usually triangles. Each triangle is [[Shader#Vertex shaders|positioned, rotated and scaled]] on the screen, and [[Rasterisation|rasterizer]] hardware (or a software emulator) generates pixels inside each triangle. These triangles are then decomposed into atomic units called [[Fragment (computer graphics)|fragments]] suitable for display on a [[Computer monitor|display screen]]. The fragments are drawn on the screen using a color that is computed in several steps. For example, a [[Texture mapping|texture]] can be used to "paint" a triangle based on a stored image, and then [[shadow mapping]] can alter that triangle's colors based on line-of-sight to light sources.
{{See also|Level of detail (computer graphics)}}

=== Video game graphics ===
Real-time graphics systems optimize image quality subject to time and hardware constraints.
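The z-buffer triangle rasterization described above can be sketched in a short Python program. The triangle coordinates, depth values and framebuffer size below are illustrative assumptions, not part of any real graphics API; real rasterizers perform this in hardware over millions of triangles per frame.

```python
# Minimal z-buffer triangle rasterization sketch (illustrative only).
# Every pixel covered by the triangle becomes a "fragment"; the z-buffer
# keeps only the fragment nearest to the camera (smallest depth).

def edge(ax, ay, bx, by, px, py):
    # Signed area term for edge (a -> b) against point p,
    # used to compute barycentric weights.
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize(tri, width, height, zbuf, framebuf, color):
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:
        return  # degenerate triangle covers no pixels
    for y in range(height):
        for x in range(width):
            # Barycentric weights of pixel (x, y) w.r.t. the triangle.
            w0 = edge(x1, y1, x2, y2, x, y) / area
            w1 = edge(x2, y2, x0, y0, x, y) / area
            w2 = edge(x0, y0, x1, y1, x, y) / area
            if w0 >= 0 and w1 >= 0 and w2 >= 0:   # pixel inside triangle
                z = w0 * z0 + w1 * z1 + w2 * z2   # interpolated depth
                if z < zbuf[y][x]:                # nearer than stored depth?
                    zbuf[y][x] = z
                    framebuf[y][x] = color

W, H = 8, 8
zbuf = [[float("inf")] * W for _ in range(H)]
framebuf = [[None] * W for _ in range(H)]
# One triangle covering the upper-left half of the tiny framebuffer.
rasterize(((0, 0, 0.5), (7, 0, 0.5), (0, 7, 0.5)), W, H, zbuf, framebuf, "red")
```

The depth comparison is what makes overlapping triangles resolve correctly: whichever fragment is nearest the camera at a pixel is the one that survives, regardless of draw order.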
GPUs and other hardware advances have increased the image quality that real-time graphics can produce. GPUs can handle millions of triangles per frame, and modern [[DirectX]]/[[OpenGL]]-class hardware can generate complex effects, such as [[shadow volume]]s, [[motion blur]]ring, and [[Shader#Geometry shaders|triangle generation]], in real time. The advancement of real-time graphics is evidenced by the steadily shrinking difference between actual [[gameplay]] graphics and the pre-rendered [[cutscene]]s traditionally found in video games.<ref name="hoso">{{cite book|url={{google books |plainurl=y |id=fvSbCgAAQBAJ|page=86}}|title=How Software Works: The Magic Behind Encryption, CGI, Search Engines and Other Everyday Technologies|last=Spraul|first=V. Anton|publisher=No Starch Press|year=2013|isbn=978-1593276669|page=86|access-date=24 September 2017}}</ref> Cutscenes are now often rendered in real time and may be [[interactivity|interactive]].<ref name="tvge">{{cite book|url={{google books |plainurl=y |id=XiM0ntMybNwC|page=86}}|title=The Video Game Explosion: A History from PONG to Playstation and Beyond|last=Wolf|first=Mark J. P.|publisher=ABC-CLIO|year=2008|isbn=9780313338687|page=86|access-date=24 September 2017}}</ref> Although the gap in quality between real-time graphics and traditional off-line graphics is narrowing, offline rendering remains much more accurate.

=== Advantages ===
[[File:FaceRig with full body.png|thumb|Real-time full body and [[face tracking]]]]
Real-time graphics are typically employed when interactivity (e.g., player feedback) is crucial. When graphics are used in films, by contrast, the director has complete control over what is drawn on each frame, which can involve lengthy decision-making, typically by teams of people. In real-time computer graphics, the user typically operates an input device to influence what is about to be drawn on the display.
For example, when the user wants to move a character on the screen, the system updates the character's position before drawing the next frame. Usually, the display responds far more slowly than the input device; this is acceptable because of the immense difference between the (fast) response time of human motion and the (slow) [[Persistence of vision|perceptual speed of the human visual system]]. This difference has other effects too: because input devices must be very fast to keep up with human motion response, advancements in input devices (e.g., the Wii Remote) typically take much longer to achieve than comparable advancements in display devices.

Another important factor in real-time computer graphics is the combination of [[Game physics|physics]] and [[Computer animation|animation]]. These techniques largely dictate what is to be drawn on the screen, especially ''where'' to draw objects in the scene. They help realistically imitate real-world behavior (in the [[Dimension#Time|temporal dimension]], not the [[Dimension|spatial dimensions]]), adding to the computer graphics' degree of realism.

Real-time previewing with [[graphics software]], especially when adjusting [[computer graphics lighting|lighting effects]], can increase work speed.<ref name="dili">{{cite book|url={{google books |plainurl=y |id=IRrsAQAAQBAJ|page=442}}|title=Digital Lighting and Rendering|edition=3rd|last=Birn|first=Jeremy|publisher=New Riders|year=2013|isbn=9780133439175|page=442|access-date=24 September 2017}}</ref> Some parameter adjustments in [[fractal generating software]] may be made while viewing changes to the image in real time.
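The update-then-draw cycle described above can be sketched as a fixed-timestep loop. The `Character` class, its speed, and the 30 Hz timestep are illustrative assumptions rather than any particular engine's API.

```python
# Sketch of a real-time loop: read input, advance physics/animation,
# then draw the frame.  At 30 frames per second, each full pass through
# the loop must finish in under ~33 ms.

DT = 1.0 / 30.0  # simulation timestep in seconds (30 Hz)

class Character:
    def __init__(self):
        self.x = 0.0
        self.velocity = 2.0  # units per second; an illustrative value

    def update(self, dt, move_right):
        # Physics/animation decide *where* the character is drawn.
        if move_right:
            self.x += self.velocity * dt

def run(frames, inputs):
    # inputs[i] is True when the player holds "right" on frame i.
    character = Character()
    positions = []
    for i in range(frames):
        character.update(DT, inputs[i])  # update state before drawing
        positions.append(character.x)    # stand-in for rendering the frame
    return positions

positions = run(3, [True, True, False])
```

Note that the position is updated before each frame is drawn, so the image the user sees always reflects the most recent input.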
== Rendering pipeline ==
The [[graphics pipeline|graphics rendering pipeline]] ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics.<ref name="retire">{{cite book|url={{google books |plainurl=y |id=g_PRBQAAQBAJ|page=11}}|title=Real-Time Rendering|edition=3rd|last=Akenine-Möller|first=Tomas|author2=Eric Haines|author3=Naty Hoffman|publisher=CRC Press|year=2008|isbn=9781439865293|page=11|access-date=22 September 2017}}</ref> Its main function is to generate a two-dimensional image from a virtual camera, three-dimensional objects (objects that have width, length, and depth), light sources, lighting models, textures and more.

=== Architecture ===
The architecture of the real-time rendering pipeline can be divided into three conceptual stages: application, geometry and [[rasterization]].

=== Application stage ===
The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. In addition to handling user input, this stage may perform processing such as [[collision detection]], speed-up techniques, animation and force feedback.

Collision detection uses algorithms to detect and respond to collisions between (virtual) objects. For example, the application may calculate new positions for the colliding objects and provide feedback via a force-feedback device such as a vibrating game controller.

The application stage also prepares graphics data for the next stage. This includes texture animation, animation of 3D models, animation via [[Geometric transformation|transforms]], and geometry morphing. Finally, it produces [[geometric primitive|primitives]] (points, lines, and triangles) based on scene information and feeds them into the geometry stage of the pipeline.
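As a sketch of application-stage collision detection, the following treats objects as bounding spheres that collide when their centres are closer than the sum of their radii. The sphere data, the crude bounce response, and the rumble flag are illustrative assumptions, not any engine's actual API.

```python
import math

# Application-stage collision detection sketch: two spheres collide
# when the distance between their centres is less than the sum of
# their radii.

def colliding(a, b):
    # a, b: dicts holding a centre (x, y, z) and a radius r.
    dx, dy, dz = a["x"] - b["x"], a["y"] - b["y"], a["z"] - b["z"]
    return math.sqrt(dx * dx + dy * dy + dz * dz) < a["r"] + b["r"]

def respond(a, b):
    # On collision the application computes new object state and may
    # trigger force feedback (e.g. controller vibration), modelled
    # here as the returned flag.
    if colliding(a, b):
        a["vx"], b["vx"] = -a["vx"], -b["vx"]  # crude elastic bounce on x
        return True   # signal: start controller vibration
    return False

ball = {"x": 0.0, "y": 0.0, "z": 0.0, "r": 1.0, "vx": 1.0}
wall = {"x": 1.5, "y": 0.0, "z": 0.0, "r": 1.0, "vx": 0.0}
rumble = respond(ball, wall)   # centres are 1.5 apart, radii sum to 2.0
```

Real engines use cheaper broad-phase tests and more detailed narrow-phase geometry, but the structure, detect then respond then feed back to the player, is the same.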
=== Geometry stage ===
{{Main|Polygonal modeling}}
The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it and where to draw it. Usually, these operations are performed by specialized hardware or GPUs.<ref name="frpi">{{cite book|url={{google books |plainurl=y |id=W4bSBQAAQBA|page=5}}|title=Computer Graphics: From Pixels to Programmable Graphics Hardware|last=Boreskov|first=Alexey|author2=Evgeniy Shikin|publisher=CRC Press|year=2013|isbn=9781482215571|page=5|access-date=22 September 2017}}{{dead link|date=September 2019}}</ref> Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages.

==== Model and view transformation ====
Before the final model is shown on the output device, it is transformed through multiple spaces or [[coordinate system]]s. Transformations move and manipulate objects by altering their vertices. ''Transformation'' is the general term for the four specific ways of manipulating the shape or position of a point, line or shape.

==== Lighting ====
To give the model a more realistic appearance, one or more light sources are usually established during transformation. This stage cannot be reached without first transforming the 3D scene into view space, in which the observer (camera) is placed at the origin. If a [[Cartesian coordinate system#Orientation and handedness|right-handed]] coordinate system is used (the common convention), the observer looks in the direction of the negative z-axis, with the y-axis pointing upwards and the x-axis pointing to the right.

==== Projection ====
{{Main|Graphical projection}}
Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are [[orthographic projection]] (also called parallel projection) and [[Perspective (graphical)|perspective projection]].
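The model and view transforms described above can be sketched as follows, using translations only for brevity (real pipelines use full 4×4 matrices that also rotate and scale); the camera position and vertex coordinates are illustrative assumptions.

```python
# Sketch of model and view transforms (translations only).

def translate(point, offset):
    # Move a point by a per-axis offset.
    return tuple(p + o for p, o in zip(point, offset))

def model_to_world(vertex, model_position):
    # Model transform: place the model's vertices in the world.
    return translate(vertex, model_position)

def world_to_view(point, camera_position):
    # View transform: shift the world so the camera sits at the origin.
    return translate(point, tuple(-c for c in camera_position))

camera = (0.0, 0.0, 5.0)   # illustrative camera, 5 units along +z
vertex = (0.0, 0.0, 0.0)   # a model-space vertex at the model's origin
world = model_to_world(vertex, (1.0, 0.0, 0.0))   # model placed at x = 1
view = world_to_view(world, camera)
# In the right-handed convention the camera looks down the negative
# z-axis, so this point, at view-space z = -5, lies in front of it.
```

Once vertices are in view space, lighting can be evaluated and the projection step can map them into 2D.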
The main characteristic of an orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection exploits the property that as the distance between the observer and the model increases, the model appears smaller. Essentially, perspective projection mimics human sight.

==== Clipping ====
[[Clipping (computer graphics)|Clipping]] is the process of removing primitives that lie outside of the view volume in order to ease the work of the rasterizer stage. Primitives that lie only partly outside are cut at the boundary of the view volume, producing new triangles that pass to the next stage.

==== Screen mapping ====
The purpose of screen mapping is to convert the coordinates of the primitives that survive clipping into screen coordinates.

==== Rasterizer stage ====
The rasterizer stage applies color and turns the graphic elements into pixels (picture elements).

== See also ==
{{Div col}}
* [[Bounding interval hierarchy]]
* [[Demoscene]]
* [[Geometry instancing]]
* [[Optical feedback]]
* [[Quartz Composer]]
* [[Real time (media)]]
* [[Real-time raytracing]]
* [[Tessellation (computer graphics)]]
* [[Video art]]
* [[Video display controller]]
{{Div col end}}

== References ==
{{Reflist}}

== Bibliography ==
* {{cite book|author1=Möller, Tomas|author2=Haines, Eric|author-link2=Eric Haines|title=Real-Time Rendering|edition=1st|location=Natick, MA|publisher=A K Peters, Ltd.|year=1999}}
* {{cite web|author=Salvator, Dave|title=3D Pipeline|website=Extremetech.com|date=21 June 2001|publisher=Extreme Tech|access-date=2 Feb 2007|url=http://www.extremetech.com/article2/|url-status=dead|archive-url=https://web.archive.org/web/20080517001222/http://www.extremetech.com/article2/|archive-date=17 May 2008}}
* {{cite thesis|author=Malhotra, Priya|type=Master's|title=Issues involved in Real-Time Rendering of Virtual Environments|date=July 2002|pages=20–31|publisher=Virginia Tech|location=Blacksburg, VA|hdl=10919/35382|access-date=31 January 2007|url=http://hdl.handle.net/10919/35382}}
* {{cite web|author=Haines, Eric|author-link=Eric Haines|title=Real-Time Rendering Resources|date=1 February 2007|access-date=12 Feb 2007|url=http://www.realtimerendering.com/}}

== External links ==
* [http://www.realtimerendering.com/portal/ RTR Portal] – a trimmed-down "best of" set of links to resources

{{Authority control}}

[[Category:Computer graphics]]
[[Category:Real-time computing|Computer graphics]]