Ray tracing (graphics)
{{Distinguish|Ray tracing (physics)}} {{Use mdy dates|date=October 2019}} {{short description|Rendering method}} [[File:Recursive raytrace of a sphere.png|thumb|This recursive ray tracing of reflective colored spheres on a white surface demonstrates the effects of shallow [[depth of field]], "area" light sources, and [[diffuse interreflection]]. ({{Circa|2008}})]] In [[3D computer graphics]], '''ray tracing''' is a technique for modeling [[Light transport theory|light transport]] for use in a wide variety of [[Rendering (computer graphics)|rendering]] algorithms for generating [[digital image|digital images]]. On a spectrum of [[Computation time|computational cost]] and visual fidelity, ray tracing-based rendering techniques, such as [[ray casting]], [[#Recursive ray tracing algorithm|recursive ray tracing]], [[Distributed ray tracing|distribution ray tracing]], [[photon mapping]] and [[path tracing]], are generally slower and higher fidelity than [[scanline rendering]] methods.<ref>{{cite book |last=Shirley |first=Peter |date=July 9, 2003 |title=Realistic Ray Tracing |publisher=A K Peters/CRC Press; 2nd edition |page= <!-- or pages= --> |isbn=978-1568814612 }}</ref> Thus, ray tracing was first deployed in applications where taking a relatively long time to render could be tolerated, such as still [[computer-generated imagery|CGI]] images, and film and television [[visual effects]] (VFX), but was less suited to [[real-time computer graphics|real-time]] applications such as [[video game]]s, where [[Frame rate|speed is critical]] in rendering each [[Film frame|frame]].<ref>{{Cite web|title=Sponsored Feature: Changing the Game - Experimental Cloud-Based Ray Tracing|url=https://www.gamasutra.com/view/feature/134692/sponsored_feature_changing_the_.php|archive-url=https://web.archive.org/web/20120509034427/http://www.gamasutra.com/view/feature/134692/sponsored_feature_changing_the_.php|url-status=dead|archive-date=May 9, 2012|access-date=2021-03-18|website=www.gamasutra.com|language=en}}</ref> Since 2018, however, [[Ray-tracing hardware|hardware acceleration for real-time ray tracing]] has become standard on new commercial graphics cards, and graphics APIs have followed suit, allowing developers to use hybrid ray tracing and [[rasterization]]-based rendering in games and other real-time applications with a lesser hit to frame render times. Ray tracing is capable of simulating a variety of [[Optics|optical]] effects,<ref>{{Cite web|title=Disney explains why its 2D animation looks so realistic|url=https://www.engadget.com/2016-08-02-disney-explains-hyperion-renderer.html|access-date=2021-03-18|website=Engadget|language=en-US}}{{dead link|date=February 2025}}</ref> such as [[Reflection (physics)|reflection]], [[refraction]], [[soft shadows]], [[Light scattering|scattering]], [[depth of field]], [[motion blur]], [[Caustic (optics)|caustics]], [[ambient occlusion]] and [[Dispersion (optics)|dispersion]] phenomena (such as [[chromatic aberration]]). 
It can also be used to trace the path of [[Sound|sound waves]] in a similar fashion to light waves, making it a viable option for more immersive sound design in video games by rendering realistic [[reverberation]] and [[echo]]es.<ref>{{Cite web|title=The Next Big Steps In Game Sound Design|url=https://www.gamedeveloper.com/audio/the-next-big-steps-in-game-sound-design|access-date=2021-03-18|last1=Kastbauer|first1=Damian|url-status=live|archive-date=9 May 2012|archive-url=https://web.archive.org/web/20120509055403/https://www.gamasutra.com/view/feature/132645/the_next_big_steps_in_game_sound_.php|website=Gamasutra|date=January 28, 2010|language=en}}</ref> In fact, any physical [[wave]] or [[particle]] phenomenon with approximately linear motion can be simulated with [[Ray tracing (physics)|ray tracing]]. Ray tracing-based rendering techniques that involve sampling light over a domain generate [[image noise]] artifacts that can be addressed by tracing a very large number of rays or using [[denoising]] techniques. ==History== [[File:Albrecht durer ray tracing enhanced.png|thumb|"Draughtsman Making a Perspective Drawing of a Reclining Woman" by Albrecht Dürer, possibly from 1532, shows a man using a grid layout to create an image. The German Renaissance artist is credited with first describing the technique.]] [[File:Perspective-projection-albrecht-drer-science-source.jpg|thumb|Dürer woodcut of Jacob de Keyser's invention. With de Keyser's device, the artist's viewpoint was fixed by an eye hook inserted in the wall. This was joined by a silk string to a gun-sight style instrument, with a pointed vertical element at the front and a peephole at the back. The artist aimed at the object and traced its outline on the glass, keeping the eyepiece aligned with the string to maintain the correct angle of vision.]] The idea of ray tracing comes from as early as the 16th century when it was described by [[Albrecht Dürer]], who is credited for its invention.<ref name="raytracing">{{Cite journal|title=Who invented ray tracing?|author=Georg Rainer Hofmann|journal=The Visual Computer|volume=6|issue=3|pages=120–124|year=1990|doi=10.1007/BF01911003|s2cid=26348610}}.</ref> Dürer described multiple techniques for projecting 3-D scenes onto an image plane. Some of these project chosen geometry onto the image plane, as is done with [[rasterization]] today. Others determine what geometry is visible along a given ray, as is done with ray tracing.<ref>{{Cite web|title=Dürer, drawing, and digital thinking - 2013 FATE Conference|url=http://www.brian-curtis.com/text/conferpape_steveluecking.html|author=Steve Luecking|date=2013|website=brian-curtis.com|access-date=2020-08-13}}</ref><ref>{{Cite web|title=Stephen J Luecking|url=https://www.academia.edu/34722794|author=Steve Luecking|access-date=2020-08-13}}</ref> Using a computer for ray tracing to generate shaded pictures was first accomplished by [[Arthur Appel]] in 1968.<ref>{{Cite book| last = Appel | first = Arthur | title = Proceedings of the April 30--May 2, 1968, spring joint computer conference on - AFIPS '68 (Spring) | chapter = Some techniques for shading machine renderings of solids | date = April 30, 1968 | pages = 37–45 | doi = 10.1145/1468075.1468082 | s2cid = 207171023 | url = http://graphics.stanford.edu/courses/Appel.pdf }}</ref> Appel used ray tracing for primary visibility (determining the closest surface to the camera at each image point) by tracing a ray through each point to be shaded into the scene to identify the visible surface. 
The closest surface intersected by the ray was the visible one. This non-recursive ray tracing-based rendering algorithm is today called "[[ray casting]]". His algorithm then traced secondary rays to the light source from each point being shaded to determine whether the point was in shadow or not. Later, in 1971, Goldstein and Nagel of [[Mathematical Applications Group|MAGI (Mathematical Applications Group, Inc.)]]<ref>{{Citation | last1 = Goldstein | first1 = Robert | last2 = Nagel | first2 = Roger | title = 3-D Visual simulation | journal = Simulation | volume = 16 | date = January 1971 | pages = 25–31 | issue = 1| doi = 10.1177/003754977101600104 | s2cid = 122824395 }}</ref> published "3-D Visual Simulation", wherein ray tracing was used to make shaded pictures of solids. At the ray-surface intersection point found, they computed the surface normal and, knowing the position of the light source, computed the brightness of the pixel on the screen. Their publication describes a short (30 second) film “made using the University of Maryland’s display hardware outfitted with a 16mm camera. The film showed the helicopter and a simple ground level gun emplacement. The helicopter was programmed to undergo a series of maneuvers including turns, take-offs, and landings, etc., until it eventually is shot down and crashed.” A ''[[CDC 6600]]'' computer was used. MAGI produced an animation video called ''MAGI/SynthaVision Sampler'' in 1974.<ref>{{cite AV media |url=https://archive.org/details/synthavisionsampler |title=Syntha Vision Sampler |date=1974 |via=[[Internet Archive]]}}</ref> [[File:Flip Book Movie v2.gif|thumb|right|Flip book created in 1976 at Caltech]]Another early instance of ray casting came in 1976, when Scott Roth created a flip book animation in [[Bob Sproull]]'s computer graphics course at [[California Institute of Technology|Caltech]]. The scanned pages are shown as a video in the accompanying image. Roth's computer program noted an edge point at a pixel location if the ray intersected a bounded plane different from that of its neighbors. Of course, a ray could intersect multiple planes in space, but only the surface point closest to the camera was noted as visible. The platform was a DEC [[PDP-10]], a [[Tektronix]] storage-tube display, and a printer which would create an image of the display on rolling thermal paper. Roth extended the framework, introduced the term ''[[ray casting]]'' in the context of [[computer graphics]] and [[solid modeling]], and in 1982 published his work while at GM Research Labs.<ref>{{Citation | last1 = Roth | first1 = Scott D. | title = Ray Casting for Modeling Solids | journal = Computer Graphics and Image Processing | volume = 18 | date = February 1982 | pages = 109–144 | doi = 10.1016/0146-664X(82)90169-1 | issue = 2}}</ref> [[J. Turner Whitted|Turner Whitted]] was the first to show recursive ray tracing for mirror reflection and for refraction through translucent objects, with an angle determined by the solid's index of refraction, and to use ray tracing for [[anti-aliasing]].<ref>Whitted T. (1979) ''[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.1534 An Improved Illumination Model for Shaded Display]''. Proceedings of the 6th annual conference on Computer graphics and interactive techniques</ref> Whitted also showed ray traced shadows. 
He produced a recursive ray-traced film called ''The Compleat Angler''<ref>{{cite AV media|url=https://archive.org/details/thecompleatangler1978 |title=The Compleat Angler |date=1978 |publisher=Bell Laboratories |via=[[Internet Archive]]}}</ref> in 1979 while an engineer at Bell Labs. Whitted's deeply recursive ray tracing algorithm reframed rendering from being primarily a matter of surface visibility determination to being a matter of light transport. His paper inspired a series of subsequent work by others that included [[Distributed ray tracing|distribution ray tracing]] and finally [[Unbiased rendering|unbiased]] [[path tracing]], which provides the ''[[rendering equation]]'' framework that has allowed computer generated imagery to be faithful to reality. For decades, [[global illumination]] in major films using [[computer-generated imagery]] was approximated with additional lights. Ray tracing-based rendering eventually changed that by enabling physically-based light transport. Early feature films rendered entirely using path tracing include ''[[Monster House (film)|Monster House]]'' (2006), ''[[Cloudy with a Chance of Meatballs (film)|Cloudy with a Chance of Meatballs]]'' (2009),<ref>{{Cite web |title=Food for Laughs |url=https://www.cgw.com/Publications/CGW/2009/Volume-32-Issue-9-Sep-2009-/Food-for-Laughs.aspx |website=Computer Graphics World }}</ref> and ''[[Monsters University]]'' (2013).<ref>{{Cite web|title=This Animated Life: Pixar's Lightspeed Brings New Light to Monsters University|url=https://thisanimatedlife.blogspot.com/2013/05/pixars-chris-horne-sheds-new-light-on.html|last=M.s|date=2013-05-28|website=This Animated Life|access-date=2020-05-26}}</ref> ==Algorithm overview== [[File:Ray trace diagram.svg|right|thumb|300px|The ray-tracing algorithm builds an image by extending rays into a scene and bouncing them off surfaces and towards sources of light to approximate the color value of pixels.]] [[File:Ray Tracing Illustration First Bounce.png|left|thumb|300px|Illustration of the ray-tracing algorithm for one pixel (up to the first bounce)]] Optical ray tracing describes a method for producing visual images constructed in [[3-D computer graphics]] environments, with more photorealism than either [[ray casting]] or [[scanline rendering]] techniques. It works by tracing a path from an imaginary eye through each [[pixel]] in a virtual screen, and calculating the color of the object visible through it. Scenes in ray tracing are described mathematically by a programmer or by a visual artist (normally using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography. Typically, each ray must be tested for [[intersection (Euclidean geometry)|intersection]] with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming [[Computer graphics lighting|light]] at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene. It may at first seem counterintuitive or "backward" to send rays ''away'' from the camera, rather than ''into'' it (as actual light does in reality), but doing so is many orders of magnitude more efficient. 
Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded. Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.

{{Clear}}

=== Calculate rays for rectangular viewport ===
On input we have (the calculation uses vector [[Euclidean vector#Length|normalization]] and the [[cross product]]):
* <math>E \in \mathbb{R^3}</math> eye position
* <math>T \in \mathbb{R^3}</math> target position
* <math>\theta \in [0,\pi] </math> [[field of view]] - for humans, we can assume <math>\approx \pi/2 \text{ rad}= 90^\circ</math>
* <math>m,k \in \mathbb{N}</math> numbers of square pixels in the viewport's vertical and horizontal directions, respectively
* <math>i,j \in \mathbb{N}, 1\leq i\leq k \land 1\leq j\leq m </math> indices of the current pixel
* <math>\vec v \in \mathbb{R^3}</math> vertical vector indicating which direction is up, usually <math>\vec v = [0,1,0]</math> - the [[:simple:Pitch, yaw, and roll|roll]] component, which determines the viewport's rotation around point C (the axis of rotation is the segment ET)

[[File:RaysViewportSchema.png|708px|Viewport schema with pixels, eye E and target T, viewport center C]]

The idea is to find the position of each viewport pixel center <math>P_{ij}</math>, which allows us to find the line going from eye <math>E</math> through that pixel and finally get the ray described by point <math>E</math> and vector <math>\vec R_{ij} = P_{ij} -E </math> (or its normalization <math>\vec r_{ij}</math>). First we need to find the coordinates of the bottom-left viewport pixel <math>P_{1m}</math>; each subsequent pixel is then found by shifting along the directions parallel to the viewport (vectors <math>\vec b_n</math>, <math>\vec v_n</math>) by the size of a pixel. The formulas below include the distance <math>d</math> between the eye and the viewport. However, this value cancels out during ray normalization <math>\vec r_{ij}</math>, so you may simply set <math>d=1</math> and drop it from the calculations.
Pre-calculations: let's find and normalise vector <math>\vec t</math> and vectors <math>\vec b, \vec v</math> which are parallel to the viewport (all depicted on above picture) :<math> \vec t = T-E, \qquad \vec b = \vec t\times \vec v </math> :<math> \vec t_n = \frac{\vec t}{||\vec t||}, \qquad \vec b_n = \frac{\vec b}{||\vec b||}, \qquad \vec v_n = \vec t_n\times \vec b_n </math> note that viewport center <math>C=E+\vec t_nd</math>, next we calculate viewport sizes <math>h_x, h_y</math> divided by 2 including inverse [[aspect ratio]] <math>\frac{m-1}{k-1}</math> :<math> g_x=\frac{h_x}{2} =d \tan \frac{\theta}{2}, \qquad g_y =\frac{h_y}{2} = g_x \frac{m-1}{k-1} </math> and then we calculate next-pixel shifting vectors <math>q_x, q_y</math> along directions parallel to viewport (<math>\vec b,\vec v</math>), and left bottom pixel center <math>p_{1m}</math> :<math> \vec q_x = \frac{2g_x}{k-1}\vec b_n, \qquad \vec q_y = \frac{2g_y}{m-1}\vec v_n, \qquad \vec p_{1m} = \vec t_n d - g_x\vec b_n - g_y\vec v_n </math> Calculations: note <math>P_{ij} = E + \vec p_{ij}</math> and ray <math>\vec R_{ij} = P_{ij} -E = \vec p_{ij}</math> so :<math> \vec p_{ij} = \vec p_{1m} + \vec q_x(i-1) + \vec q_y(j-1) </math> :<math> \vec r_{ij} = \frac{\vec R_{ij}}{||\vec R_{ij}||} = \frac{\vec p_{ij}}{||\vec p_{ij}||} </math> ==Detailed description of ray tracing computer algorithm and its genesis== ===What happens in nature (simplified)=== {{see also|Electromagnetism|Quantum electrodynamics}} In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of [[photon]]s traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring [[general relativity|relativistic effects]]). Any combination of four things might happen with this light ray: [[Absorption (electromagnetic radiation)|absorption]], [[Reflection (physics)|reflection]], [[refraction]] and [[fluorescence]]. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has any [[Transparency (optics)|transparent]] or [[Transparency (optics)|translucent]] properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the [[Visible spectrum|spectrum]] (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength color in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image. ===Ray casting algorithm=== {{Main|Ray casting}} The idea behind ray casting, the predecessor to recursive ray tracing, is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray. 
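The per-pixel loop just described, combined with the viewport formulas from the section above, can be sketched in a few lines of [[Python (programming language)|Python]]. This is only an illustrative outline, not an optimized renderer; the <code>scene</code> list and the <code>intersect</code> and <code>shade</code> methods are hypothetical placeholders for whatever object representation a particular ray caster uses.

<syntaxhighlight lang="python">
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def primary_ray(E, T, v_up, theta, k, m, i, j):
    """Ray from the eye E through the center of pixel (i, j) of a k-by-m viewport (d = 1)."""
    t_n = normalize(T - E)                     # viewing direction t_n
    b_n = normalize(np.cross(t_n, v_up))       # horizontal viewport direction b_n
    v_n = np.cross(t_n, b_n)                   # vertical viewport direction v_n
    g_x = np.tan(theta / 2)                    # half-width of the viewport
    g_y = g_x * (m - 1) / (k - 1)              # half-height, from the pixel counts
    q_x = (2 * g_x / (k - 1)) * b_n            # horizontal pixel-to-pixel shift
    q_y = (2 * g_y / (m - 1)) * v_n            # vertical pixel-to-pixel shift
    p_1m = t_n - g_x * b_n - g_y * v_n         # center of the corner pixel P_1m, relative to E
    p_ij = p_1m + q_x * (i - 1) + q_y * (j - 1)
    return E, normalize(p_ij)                  # ray origin and unit direction r_ij

def ray_cast(E, T, v_up, theta, k, m, scene):
    """One ray per pixel; keep only the closest hit (ray casting)."""
    image = np.zeros((m, k, 3))
    for j in range(1, m + 1):
        for i in range(1, k + 1):
            origin, direction = primary_ray(E, T, v_up, theta, k, m, i, j)
            closest_t, closest_obj = float("inf"), None
            for obj in scene:                          # hypothetical scene objects
                t = obj.intersect(origin, direction)   # distance to the hit, or None on a miss
                if t is not None and t < closest_t:
                    closest_t, closest_obj = t, obj
            if closest_obj is not None:
                hit_point = origin + closest_t * direction
                image[j - 1, i - 1] = closest_obj.shade(hit_point)  # hypothetical shading call
    return image
</syntaxhighlight>

In a full renderer the inner loop over scene objects would be replaced by an acceleration structure such as the [[Bounding volume hierarchy|bounding volume hierarchies]] described later in this article.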
Think of an image as a screen door, with each square in the screen being a pixel; the closest object blocking the ray is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the [[shading]] of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3-D computer graphics shading models. One important advantage ray casting offered over older [[scanline rendering|scanline algorithms]] was its ability to easily deal with non-planar surfaces and solids, such as [[cone (geometry)|cones]] and [[sphere]]s. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using [[solid modeling]] techniques and easily rendered.

===Volume ray casting algorithm===
{{Main|Volume ray casting}}
In the method of volume ray casting, each ray is traced so that color and/or density can be sampled along the ray and then be combined into a final pixel color. This is often used when objects cannot be easily represented by explicit surfaces (such as triangles), for example when rendering clouds or 3D medical scans.

[[File:Visualization of SDF ray marching algorithm.png|thumb|Visualization of SDF ray marching algorithm]]

===SDF ray marching algorithm===
{{main|Ray marching#Distance-aided ray marching}}
In SDF ray marching, or sphere tracing,<ref>{{Citation | last1 = Hart | first1 = John C. | title = Sphere Tracing: A Geometric Method for the Antialiased Ray Tracing of Implicit Surfaces | journal = The Visual Computer | date = June 1995 | url = http://graphics.stanford.edu/courses/cs348b-20-spring-content/uploads/hart.pdf}}</ref> each ray is traced in multiple steps to approximate an intersection point between the ray and a surface defined by a [[signed distance function]] (SDF). The SDF is evaluated at each iteration in order to take steps as large as possible without missing any part of the surface. A threshold is used to stop further iteration once a point is reached that is close enough to the surface. This method is often used for 3-D fractal rendering.<ref>{{Citation | last1 = Hart | first1 = John C. | last2 = Sandin | first2 = Daniel J. | last3 = Kauffman | first3 = Louis H. | title = Ray Tracing Deterministic 3-D Fractals | journal = Computer Graphics | date = July 1989 | volume = 23 | issue = 3 | pages = 289–296 | doi = 10.1145/74334.74363 | url = http://graphics.stanford.edu/courses/cs348b-20-spring-content/uploads/hart.pdf}}</ref>

===Recursive ray tracing algorithm===
[[File:Glasses 800 edit.png|right|thumb|300px|Ray tracing can create photorealistic images.]]
[[File:BallsRender.png|right|thumb|300px|In addition to the high degree of realism, ray tracing can simulate the [[Camera#Mechanics|effects of a camera]] due to [[depth of field]] and [[aperture]] shape (in this case a [[hexagon]]).]]
[[File:Ray-traced steel balls.jpg|right|thumb|300px|The number of reflections, or bounces, a "ray" can make, and how it is affected each time it encounters a surface, is controlled by settings in the software. In this image, each ray was allowed to reflect up to 16 times. Multiple "reflections of reflections" can thus be seen in these spheres. (Image created with [[Cobalt (CAD program)|Cobalt]].)]]
[[File:Glass ochem.png|right|thumb|300px|The number of [[refraction]]s a “ray” can make, and how it is affected each time it encounters a surface that permits the [[Transparency and translucency|transmission of light]], is controlled by settings in the software. Here, each ray was set to refract or reflect (the "depth") ''up to 9 times''. [[Fresnel reflection]]s were used and [[Caustic (optics)|caustics]] are visible. (Image created with [[V-Ray]].)]]
Earlier algorithms traced rays from the eye into the scene until they hit an object, but determined the ray color without recursively tracing more rays. Recursive ray tracing continues the process. When a ray hits a surface, additional rays may be cast because of reflection, refraction, and shadow:<ref>{{cite journal | url = https://dip.felk.cvut.cz/browse/pdfcache/nikodtom_2010bach.pdf | title = Ray Tracing Algorithm For Interactive Applications | author = Tomas Nikodym | journal = Czech Technical University, FEE | date = June 2010 | archive-url = https://web.archive.org/web/20160303180450/https://dip.felk.cvut.cz/browse/pdfcache/nikodtom_2010bach.pdf | archive-date = March 3, 2016 }}</ref>
* A reflection ray is traced in the mirror-reflection direction. The closest object it intersects is what will be seen in the reflection.
* A refraction ray traveling through transparent material works similarly, with the addition that a refractive ray could be entering or exiting a material. [[Turner Whitted]] extended the mathematical logic for rays passing through a transparent solid to include the effects of refraction.<ref>{{cite book |last=Whitted |first=T. |year=1979 |chapter=An Improved Illumination Model for Shaded Display |title=Proceedings of the 6th annual conference on Computer graphics and interactive techniques |publisher=Association for Computing Machinery |citeseerx=10.1.1.156.1534 |isbn=0-89791-004-4 |chapter-url=http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.1534 }}</ref>
* A shadow ray is traced toward each light. If any opaque object is found between the surface and the light, the surface is in shadow and the light does not illuminate it.
These recursive rays add more realism to ray traced images.

===Advantages over other rendering methods===
Ray tracing-based rendering's popularity stems from its basis in a realistic simulation of [[Computer graphics lighting|light transport]], as compared to other rendering methods, such as [[rasterisation|rasterization]], which focuses more on the realistic simulation of geometry. Effects such as reflections and [[shadow]]s, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. The computational independence of each ray makes ray tracing amenable to a basic level of [[parallelization]],<ref>{{cite book |first1=A. |last1=Chalmers |first2=T. |last2=Davis |first3=E.
|last3=Reinhard |title=Practical Parallel Rendering |isbn=1-56881-179-9 |publisher=AK Peters |year=2002 }}</ref> but the divergence of ray paths makes high utilization under parallelism quite difficult to achieve in practice.<ref>{{cite book |last1=Aila |first1=Timo |first2=Samuli |last2=Laine |year=2009 |chapter=Understanding the Efficiency of Ray Traversal on GPUs |title=HPG '09: Proceedings of the Conference on High Performance Graphics 2009 |pages=145–149 |doi=10.1145/1572769.1572792 |isbn=9781605586038 |s2cid=15392840 }}</ref>

=== Disadvantages ===
A serious disadvantage of ray tracing is performance (though it can in theory be faster than traditional scanline rendering depending on scene complexity vs. number of pixels on-screen). Until the late 2010s, ray tracing in real time was usually considered impossible on consumer hardware for nontrivial tasks. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform [[spatial anti-aliasing]] and improve image quality where needed. Whitted-style recursive ray tracing handles interreflection and optical effects such as refraction, but is not generally [[photorealistic rendering|photorealistic]]. Improved realism occurs when the [[rendering equation]] is fully evaluated, as the equation conceptually includes every physical effect of light flow. However, this is infeasible given the computing resources required, and the limitations on geometric and material modeling fidelity. [[Path tracing]] is an algorithm for evaluating the rendering equation and thus gives higher-fidelity simulations of real-world lighting.

===Reversed direction of traversal of scene by the rays===
The process of shooting rays from the eye to the light source to render an image is sometimes called ''backwards ray tracing'', since it is the opposite direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term ''backwards ray tracing'' to mean shooting rays from the lights and gathering the results. Therefore, it is clearer to distinguish ''eye-based'' versus ''light-based'' ray tracing. While direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights. [[Caustic (optics)|Caustics]] are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths subsequently joined by a connecting ray after some length.<ref>{{cite journal | url = http://www.graphics.cornell.edu/~eric/Portugal.html | title = Bi-Directional Path Tracing | author = Eric P. Lafortune and Yves D.
Willems | journal = Proceedings of Compugraphics '93 | date = December 1993 | pages = 145–153}}</ref><ref>{{cite web | url = https://old.cescg.org/CESCG98/PDornbach/paper.pdf | title = Implementation of bidirectional ray tracing algorithm | author = Péter Dornbach | access-date = 2008-06-11 |date=1998 }}</ref> [[Photon mapping]] is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points.<ref>[http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf Global Illumination using Photon Maps] {{webarchive|url=https://web.archive.org/web/20080808140048/http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf |date=2008-08-08 }}</ref><ref>{{cite web| url = http://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html| title = Photon Mapping - Zack Waters<!-- Bot generated title -->}}</ref> The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias. An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or [[torchiere]]s). In such cases, only a very small subset of paths will transport energy; [[Metropolis light transport]] is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.<ref>{{cite book |first1=Eric |last1=Veach |first2=Leonidas J. |last2=Guibas |chapter=Metropolis Light Transport |title=SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques |year=1997 |pages=65–76 |doi=10.1145/258734.258775 |isbn=0897918967 |s2cid=1832504 }}</ref> [[File:PathOfRays.svg|thumb|Image showing recursively generated rays from the "eye" (and through an image plane) to a light source after encountering two [[diffuse surface]]s]] To the right is an image showing a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions. First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source. For example, if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue. 
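The recursion illustrated by this figure can be written compactly. The following sketch, in the same illustrative Python style used earlier (the <code>closest_hit</code> query and the material and light fields are hypothetical), follows only the mirror-reflection direction at each bounce and multiplies the surface colors along the path, cutting the ray tree off at a fixed depth as discussed under [[#Adaptive depth control|adaptive depth control]] below.

<syntaxhighlight lang="python">
import numpy as np

def trace(origin, direction, scene, depth=0, max_depth=5):
    """Follow one ray recursively along its mirror-reflection direction."""
    if depth > max_depth:
        return np.zeros(3)                         # cut the ray tree off at a fixed depth
    hit = scene.closest_hit(origin, direction)     # hypothetical closest-intersection query
    if hit is None:
        return scene.background_color
    # Emission is non-zero only when the ray has reached a light source.
    color = np.array(hit.material.emission, dtype=float)
    # Mirror-reflection direction: r = d - 2 (n . d) n
    n = hit.normal
    reflected = direction - 2.0 * np.dot(n, direction) * n
    # Offset the new origin slightly along the normal to avoid re-hitting the same surface.
    bounced = trace(hit.point + 1e-4 * n, reflected, scene, depth + 1, max_depth)
    return color + np.array(hit.material.color) * bounced
</syntaxhighlight>

A Whitted-style renderer would additionally spawn refraction and shadow rays at each hit, and a path tracer would instead sample random directions over the hemisphere; the structure of the recursion stays the same.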
===Example===
[[File:Parametric surface illustration (trefoil knot).png|thumb|[[Trefoil knot]], created with a [[parametric equation]] and ray traced in [[Python (programming language)|Python]].]]
As a demonstration of the principles involved in ray tracing, consider how one would find the intersection between a ray and a sphere. This is merely the math behind the [[line–sphere intersection]] and the subsequent determination of the color of the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.

In [[vector notation]], the equation of a sphere with center <math>\mathbf c</math> and radius <math>r</math> is
:<math>\left\Vert \mathbf x - \mathbf c \right\Vert^2=r^2.</math>
Any point on a ray starting from point <math>\mathbf s</math> with direction <math>\mathbf d</math> (here <math>\mathbf d</math> is a [[unit vector]]) can be written as
:<math>\mathbf x=\mathbf s+t\mathbf d,</math>
where <math>t</math> is the distance between <math>\mathbf x</math> and <math>\mathbf s</math>. In our problem, we know <math>\mathbf c</math>, <math>r</math>, <math>\mathbf s</math> (e.g. the position of a light source) and <math>\mathbf d</math>, and we need to find <math>t</math>. Therefore, we substitute for <math>\mathbf x</math>:
:<math>\left\Vert\mathbf{s}+t\mathbf{d}-\mathbf{c}\right\Vert^{2}=r^2.</math>
Let <math>\mathbf{v}\ \stackrel{\mathrm{def}}{=}\ \mathbf{s}-\mathbf{c}</math> for simplicity; then
:<math>\left\Vert\mathbf{v}+t\mathbf{d}\right\Vert^{2}=r^{2}</math>
:<math>\mathbf{v}^2+t^2\mathbf{d}^2+2\mathbf{v}\cdot t\mathbf{d}=r^2</math>
:<math>(\mathbf{d}^2)t^2+(2\mathbf{v}\cdot\mathbf{d})t+(\mathbf{v}^2-r^2)=0.</math>
Knowing that <math>\mathbf d</math> is a unit vector allows this minor simplification:
:<math>t^2+(2\mathbf{v}\cdot\mathbf{d})t+(\mathbf{v}^2-r^2)=0.</math>
This [[quadratic equation]] has solutions
:<math>t=\frac{-(2\mathbf{v}\cdot\mathbf{d})\pm\sqrt{(2\mathbf{v}\cdot\mathbf{d})^2-4(\mathbf{v}^2-r^2)}}{2}=-(\mathbf{v}\cdot\mathbf{d})\pm\sqrt{(\mathbf{v}\cdot\mathbf{d})^2-(\mathbf{v}^2-r^2)}.</math>
The two values of <math>t</math> found by solving this equation are the two values such that <math>\mathbf s+t\mathbf d</math> are the points where the ray intersects the sphere. Any value of <math>t</math> which is negative does not lie on the ray, but rather on the opposite [[Line (mathematics)|half-line]] (i.e. the one starting from <math>\mathbf s</math> with opposite direction). If the quantity under the square root (the [[quadratic equation#Discriminant|discriminant]]) is negative, then the ray does not intersect the sphere.

Let us suppose now that there is at least one positive solution, and let <math>t</math> be the smallest one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of [[Reflection (physics)|reflection]] state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the [[surface normal|normal]] to the sphere. The normal to the sphere is simply
:<math>\mathbf n=\frac{\mathbf y- \mathbf c}{\left\Vert\mathbf y- \mathbf c\right\Vert},</math>
where <math>\mathbf y=\mathbf s+t\mathbf d</math> is the intersection point found before.
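The derivation above maps directly onto code. Below is a short, illustrative Python sketch (assuming NumPy arrays and a unit-length direction vector <code>d</code>) of the intersection test and the surface normal; it is a minimal example rather than a complete renderer.

<syntaxhighlight lang="python">
import numpy as np

def intersect_sphere(s, d, c, r):
    """Smallest positive t with ||s + t*d - c||^2 = r^2, or None if the ray misses."""
    v = s - c
    b = np.dot(v, d)                          # v . d in the derivation above
    disc = b * b - (np.dot(v, v) - r * r)     # (v.d)^2 - (v^2 - r^2), the discriminant
    if disc < 0:
        return None                           # negative discriminant: no intersection
    sqrt_disc = np.sqrt(disc)
    for t in (-b - sqrt_disc, -b + sqrt_disc):    # smaller root first
        if t > 0:
            return t                          # nearest intersection in front of s
    return None                               # both intersections lie behind the ray origin

def sphere_normal(s, d, t, c):
    """Unit normal at the intersection point y = s + t*d."""
    y = s + t * d
    return (y - c) / np.linalg.norm(y - c)
</syntaxhighlight>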
The reflection direction can be found by a [[Reflection (mathematics)|reflection]] of <math>\mathbf d</math> with respect to <math>\mathbf n</math>, that is
:<math>\mathbf r = \mathbf d - 2(\mathbf n \cdot \mathbf d ) \mathbf n.</math>
Thus the reflected ray has equation
:<math>\mathbf x = \mathbf y + u \mathbf r. \, </math>
Now we only need to compute the intersection of the latter ray with our [[field of view]], to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and that of the sphere are combined by the reflection.

==Adaptive depth control==
Adaptive depth control means that the renderer stops generating reflected/transmitted rays when the computed intensity becomes less than a certain threshold. There must always be a set maximum depth or else the program would generate an infinite number of rays. But it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this the ray tracer must compute and keep the product of the global and reflection coefficients as the rays are traced.

Example: let Kr = 0.5 for a set of surfaces. Then from the first surface the maximum contribution is 0.5, for the reflection from the second: 0.5 × 0.5 = 0.25, the third: 0.25 × 0.5 = 0.125, the fourth: 0.125 × 0.5 = 0.0625, the fifth: 0.0625 × 0.5 = 0.03125, etc. In addition we might implement a distance attenuation factor such as 1/D<sup>2</sup>, which would also decrease the intensity contribution. For a transmitted ray we could do something similar, but in that case the distance traveled through the object would cause an even faster intensity decrease. As an example of this, Hall & Greenberg found that even for a very reflective scene, using this with a maximum depth of 15 resulted in an average ray tree depth of 1.7.<ref>{{Cite journal|last1=Hall|first1=Roy A.|last2=Greenberg|first2=Donald P.|date=November 1983|title=A Testbed for Realistic Image Synthesis|journal=IEEE Computer Graphics and Applications|volume=3|issue=8|pages=10–20|doi=10.1109/MCG.1983.263292|citeseerx=10.1.1.131.1958|s2cid=9594422}}</ref>

==Bounding volumes==
Enclosing groups of objects in sets of [[Bounding volume hierarchy|bounding volume hierarchies]] (BVH) decreases the amount of computation required for ray tracing. A cast ray is first tested for an intersection with the [[bounding volume]], and then if there is an intersection, the volume is recursively divided until the ray hits the object. The best type of bounding volume will be determined by the shape of the underlying object or objects. For example, if the objects are long and thin, then a sphere will enclose mainly empty space compared to a box. Boxes are also easier to build hierarchical bounding volumes from.

Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computational time from a linear dependence on the number of objects to something between linear and a logarithmic dependence. This is because, for a perfect case, each intersection test would divide the possibilities by two, and result in a binary tree type structure. Spatial subdivision methods try to achieve this. Furthermore, this acceleration structure makes the ray-tracing computation [[Output-sensitive algorithm|output-sensitive]]; i.e.,
the complexity of the ray intersection calculations depends on the number of objects that actually intersect the rays and not (only) on the number of objects in the scene. Kay & Kajiya give a list of desired properties for hierarchical bounding volumes: * Subtrees should contain objects that are near each other and the further down the tree the closer should be the objects. * The volume of each node should be minimal. * The sum of the volumes of all bounding volumes should be minimal. * Greater attention should be placed on the nodes near the root since pruning a branch near the root will remove more potential objects than one farther down the tree. * The time spent constructing the hierarchy should be much less than the time saved by using it. ==Interactive ray tracing{{anchor|In real time}}== {{See also|Ray-tracing hardware}} The first implementation of an interactive ray tracer was the [[Supercomputing in Japan|LINKS-1 Computer Graphics System]] built in 1982 at [[Osaka University]]'s School of Engineering, by professors Ohmura Kouichi, Shirakawa Isao and Kawata Toru with 50 students.{{Citation needed|reason=There were other real-time ray tracing claims researched for and discussed at SIGGRAPH 2005, but none could be proven prior to 1986. This claim warrants proof of real-time update (e.g., over 1 frame/sec), not just high-speed as there were numerous fast parallel distributed systems in the early-mid 1980s. The provided LINKS-1 citation does not support a real-time claim.|date=January 2019}} It was a [[massively parallel]] processing [[computer]] system with 514 [[microprocessor]]s (257 [[Zilog Z8000|Zilog Z8001]]s and 257 [[iAPX 86]]s), used for [[3-D computer graphics]] with high-speed ray tracing. According to the [[Information Processing Society of Japan]]: "The core of 3-D image rendering is calculating the luminance of each pixel making up a rendered surface from the given viewpoint, [[Computer graphics lighting|light source]], and object position. The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using ray tracing. By developing a new software methodology specifically for high-speed image rendering, LINKS-1 was able to rapidly render highly realistic images." It was used to create an early 3-D [[planetarium]]-like video of the [[Universe|heavens]] made completely with computer graphics. The video was presented at the [[Fujitsu]] pavilion at the 1985 International Exposition in [[Tsukuba]]."<ref>{{cite web |title=【Osaka University 】 LINKS-1 Computer Graphics System |url=http://museum.ipsj.or.jp/en/computer/other/0013.html |website=IPSJ Computer Museum |publisher=[[Information Processing Society of Japan]] |access-date=November 15, 2018}}</ref> It was the second system to do so after the [[Evans & Sutherland]] [[Digistar]] in 1982. The LINKS-1 was claimed by the designers to be the world's most powerful computer in 1984.<ref>{{cite book |last1=Defanti |first1=Thomas A. |title=Advances in computers. Volume 23 |publisher=[[Academic Press]] |isbn=0-12-012123-9 |page=121 |url=http://www.vasulka.org/archive/Writings/VideogameImpact.pdf#page=29|year=1984 }}</ref> The next interactive ray tracer, and the first known to have been labeled "real-time" was credited at the 2005 [[SIGGRAPH]] computer graphics conference as being the REMRT/RT tools developed in 1986 by [[Mike Muuss]] for the [[BRL-CAD]] solid modeling system. 
Initially published in 1987 at [[USENIX]], the BRL-CAD ray tracer was an early implementation of a parallel network distributed ray tracing system that achieved several frames per second in rendering performance.<ref>See Proceedings of 4th Computer Graphics Workshop, Cambridge, MA, USA, October 1987. Usenix Association, 1987. pp 86–98.</ref> This performance was attained by means of the highly optimized yet platform independent LIBRT ray tracing engine in BRL-CAD and by using solid implicit [[Constructive solid geometry|CSG]] geometry on several shared memory parallel machines over a commodity network. BRL-CAD's ray tracer, including the REMRT/RT tools, continues to be available and developed today as [[Open-source software|open source]] software.<ref>{{cite web |url=http://brlcad.org/d/about |title=About BRL-CAD |access-date=2019-01-18 |archive-date=September 1, 2009 |archive-url=https://web.archive.org/web/20090901222818/http://brlcad.org/d/about |url-status=dead }}</ref>

Since then, there have been considerable efforts and research towards implementing ray tracing at real-time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3-D graphics applications such as [[Demo (computer programming)|demoscene productions]], [[Video game|computer and video games]], and image rendering. Some real-time software 3-D engines based on ray tracing have been developed by hobbyist [[demoscene|demo programmers]] since the late 1990s.<ref>{{cite web |url=http://www.acm.org/tog/resources/RTNews/demos/overview.htm |title=The Realtime Raytracing Realm |author=Piero Foscari |work=ACM Transactions on Graphics |access-date=2007-09-17}}</ref>

In 1999 a team from the [[University of Utah]], led by Steven Parker, demonstrated interactive ray tracing live at the 1999 Symposium on Interactive 3D Graphics. They rendered a 35 million sphere model at 512 by 512 pixel resolution, running at approximately 15 frames per second on 60 CPUs.<ref> {{cite book | last1 = Parker | first1 = Steven | last2 = Martin | first2 = William | title = Proceedings of the 1999 symposium on Interactive 3-D graphics | chapter = Interactive ray tracing | date = April 26, 1999 | chapter-url = https://dl.acm.org/citation.cfm?id=300537 | series = I3D '99 | volume = 5 | issue = April 1999 | pages = 119–126 | doi = 10.1145/300523.300537 | isbn = 1581130821 | citeseerx = 10.1.1.6.8426 | s2cid = 4522715 | access-date = October 30, 2019 }}</ref> The OpenRT project included a highly optimized software core for ray tracing along with an [[OpenGL]]-like API in order to offer an alternative to the current [[rasterization]]-based approach for interactive 3-D graphics. [[Ray-tracing hardware|Ray tracing hardware]], such as the experimental [[Ray Processing Unit]] developed by Sven Woop at [[Saarland University]], was designed to accelerate some of the computationally intensive operations of ray tracing.

[[File:Quake_Wars_Ray_Traced.ogv|thumb|''Quake Wars: Ray Traced'']]
The idea that video games could ray trace their graphics in real time received media attention in the late 2000s.
During that time, a researcher named Daniel Pohl, under the guidance of graphics professor Philipp Slusallek and in cooperation with the [[Erlangen University]] and [[Saarland University]] in Germany, equipped ''[[Quake III]]'' and ''[[Quake IV]]'' with an [[game engine|engine]] he programmed himself, which Saarland University then demonstrated at [[CeBIT]] 2007.<ref>{{cite news |url=http://news.bbc.co.uk/1/hi/technology/6457951.stm |title=Rays light up life-like graphics |author=Mark Ward |work=BBC News |date=March 16, 2007 |access-date=2007-09-17}}</ref> [[Intel]], a patron of Saarland, became impressed enough that it hired Pohl and embarked on a research program dedicated to ray traced graphics, which it saw as justifying increasing the number of its processors' cores.<ref name=Peddie>{{cite book|url=https://books.google.com/books?id=CS2oDwAAQBAJ|title=Ray Tracing: A Tool for All|last=Peddie|first=Jon|publisher=[[Springer Nature Switzerland]]|date=2019|access-date=2022-11-02|isbn=978-3-030-17490-3}}</ref>{{Rp|99–100}}<ref name=Abi-Chahla>{{cite web|url=https://www.tomshardware.com/reviews/ray-tracing-rasterization,2351.html|title=When Will Ray Tracing Replace Rasterization?|last=Abi-Chahla|first=Fedy|work=[[Tom's Hardware]]|date=July 22, 2009|access-date=2022-11-04|archive-url=https://archive.today/20221103235551/https://www.tomshardware.com/reviews/ray-tracing-rasterization,2351.html|archive-date=2022-11-03|url-status=live}}</ref> On June 12, 2008, Intel demonstrated a special version of ''[[Enemy Territory: Quake Wars]]'', titled ''Quake Wars: Ray Traced'', using ray tracing for rendering, running in basic HD (720p) resolution. ''ETQW'' operated at 14–29 frames per second on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz.<ref>{{cite web|url=http://www.tgdaily.com/html_tmp/content-view-37925-113.html|title=Intel converts ET: Quake Wars to ray tracing|last=Valich|first=Theo|publisher=TG Daily|date=2008-06-12|access-date=2008-06-16|archive-url=https://web.archive.org/web/20081002030022/http://www.tgdaily.com/html_tmp/content-view-37925-113.html|archive-date=2008-10-02|url-status=dead}}</ref> At SIGGRAPH 2009, Nvidia announced [[OptiX]], a free API for real-time ray tracing on Nvidia GPUs. The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion.<ref>{{cite web |url=http://www.nvidia.com/object/optix.html |title=Nvidia OptiX|author=Nvidia |publisher=Nvidia |date=October 18, 2009 |access-date=2009-11-06}}</ref> OptiX-based renderers are used in [[Autodesk]] Arnold, [[Adobe Systems|Adobe]] [[AfterEffects]], Bunkspeed Shot, [[Autodesk Maya]], [[3ds max]], and many other renderers. 
In 2014, a demo of the [[PlayStation 4]] video game ''[[The Tomorrow Children]]'', developed by [[Q-Games]] and [[Japan Studio]], demonstrated new [[Computer graphics|lighting]] techniques developed by Q-Games, notably cascaded [[voxel]] [[Cone tracing|cone]] ray tracing, which simulates lighting in real-time and uses more realistic [[Reflection (computer graphics)|reflections]] rather than [[Screen space ambient occlusion|screen space]] reflections.<ref name=ps10>{{cite web|url=http://blog.eu.playstation.com/2014/10/24/creating-striking-unusual-visuals-tomorrow-children-ps4-2/|title=Creating the beautiful, ground-breaking visuals of The Tomorrow Children on PS4|first=Dylan|last=Cuthbert|work=[[PlayStation Blog]]|date=October 24, 2015|access-date=December 7, 2015}}</ref>

Nvidia introduced its [[GeForce 20 series|GeForce RTX]] and Quadro RTX GPUs in September 2018, based on the [[Turing (microarchitecture)|Turing architecture]] that allows for hardware-accelerated ray tracing. The Nvidia hardware uses a separate functional block, publicly called an "RT core". This unit is somewhat comparable to a texture unit in size, latency, and interface to the processor core. The unit features [[Bounding volume hierarchy|BVH]] traversal, compressed BVH node decompression, ray-AABB intersection testing, and ray-triangle intersection testing.<ref>{{cite web|url=https://developer.nvidia.com/blog/nvidia-turing-architecture-in-depth|title=NVIDIA Turing Architecture In-Depth|last1=Kilgariff|first1=Emmett|last2=Moreton|first2=Henry|last3=Stam|first3=Nick|last4=Bell|first4=Brandon|website=Nvidia Developer|date=2018-09-14|access-date=2022-11-13|archive-url=https://web.archive.org/web/20221113010753/https://developer.nvidia.com/blog/nvidia-turing-architecture-in-depth/|archive-date=2022-11-13|url-status=live}}</ref> The GeForce RTX, in the form of models 2080 and 2080 Ti, became the first consumer-oriented brand of graphics card that can perform ray tracing in real time,<ref>{{cite web|url=https://venturebeat.com/games/nvidia-unveils-geforce-rtx-graphics-chips-for-real-time-ray-tracing-games|title=Nvidia unveils GeForce RTX graphics chips for real-time ray tracing games|last=Takahashi|first=Dean|work=[[VentureBeat]]|date=2018-08-20|access-date=2022-11-13|archive-url=https://web.archive.org/web/20221113013850/https://venturebeat.com/games/nvidia-unveils-geforce-rtx-graphics-chips-for-real-time-ray-tracing-games|archive-date=2022-11-13|url-status=live}}</ref> and, in November 2018, [[Electronic Arts]]' ''[[Battlefield V]]'' became the first game to take advantage of its ray tracing capabilities, which it achieves via Microsoft's new API, [[DirectX Raytracing]].<ref>{{cite web|url=https://www.pcworld.com/article/402902/battlefield-v-dxr-rtx-ray-tracing.html|title=RTX on!
Battlefield V becomes the first game to support DXR real-time ray tracing|last=Chacos|first=Brad|work=[[PCWorld]]|date=2018-11-14|access-date=2018-11-13|archive-url=https://web.archive.org/web/20221113014909/https://www.pcworld.com/article/402902/battlefield-v-dxr-rtx-ray-tracing.html|archive-date=2022-11-13|url-status=live}}</ref> AMD, which already offered interactive ray tracing on top of [[OpenCL]] through its [[Radeon Pro#ProRender|Radeon ProRender]],<ref>{{cite web|url=https://gpuopen.com/announcing-real-time-ray-tracing|title=Real-Time Ray Tracing with Radeon ProRender|website=GPUOpen|date=2018-03-20|access-date=2022-11-13|archive-url=https://web.archive.org/web/20221113030034/https://gpuopen.com/announcing-real-time-ray-tracing|archive-date=2022-11-13|url-status=live}}</ref><ref>{{cite web|url=https://gpuopen.com/learn/radeon-prorender-2-0|title=Hardware-Accelerated Ray Tracing in AMD Radeon™ ProRender 2.0|last=Harada|first=Takahiro|website=GPUOpen|date=2020-11-23|access-date=2022-11-13|archive-url=https://web.archive.org/web/20221113025101/https://gpuopen.com/learn/radeon-prorender-2-0|archive-date=2022-11-13|url-status=live}}</ref> unveiled in October 2020 the [[Radeon RX 6000 series]], its [[RDNA 2|second generation]] Navi GPUs with support for hardware-accelerated ray tracing at an online event.<ref>{{cite web|url=https://www.tweaktown.com/news/75066/amd-to-reveal-next-gen-big-navi-rdna-2-graphics-cards-on-october-28/index.html|title=AMD to reveal next-gen Big Navi RDNA 2 graphics cards on October 28|last=Garreffa|first=Anthony|work=TweakTown|date=September 9, 2020|access-date=September 9, 2020}}</ref><ref>{{cite web|url=https://www.theverge.com/2020/9/9/21429127/amd-zen-3-cpu-big-navi-gpu-events-october|title=AMD's next-generation Zen 3 CPUs and Radeon RX 6000 'Big Navi' GPU will be revealed next month|last=Lyles|first=Taylor|work=[[The Verge]]|date=September 9, 2020|access-date=September 10, 2020}}</ref><ref>{{cite web|url=https://www.anandtech.com/show/16150/amd-teases-radeon-rx-6000-card-performance-numbers-aiming-for-3080|title=AMD Teases Radeon RX 6000 Card Performance Numbers: Aiming For 3080?| website=anandtech.com| publisher=[[AnandTech]]|date=2020-10-08|access-date=2020-10-25}}</ref><ref>{{cite web|url=https://www.anandtech.com/show/16077/amd-announces-ryzen-zen-3-and-radeon-rdna2-presentations-for-october-a-new-journey-begins|title=AMD Announces Ryzen "Zen 3" and Radeon "RDNA2" Presentations for October: A New Journey Begins|website=anandtech.com|publisher=[[AnandTech]]|date=2020-09-09|access-date=2020-10-25}}</ref><ref>{{cite web|url=https://www.eurogamer.net/articles/digitalfoundry-2020-10-28-amd-unveils-three-radeon-6000-graphics-cards-with-ray-tracing-and-impressive-performance|title=AMD unveils three Radeon 6000 graphics cards with ray tracing and RTX-beating performance|last=Judd|first=Will|work=Eurogamer|date=October 28, 2020|access-date=October 28, 2020}}</ref> Subsequent games that render their graphics by such means appeared since, which has been credited to the improvements in hardware and efforts to make more APIs and game engines compatible with the technology.<ref>{{cite book|title=Ray Tracing Gems II: Next Generation Real-Time Rendering with DXR, Vulkan, and OptiX|last1=Marrs|first1=Adam|last2=Shirley|first2=Peter|author2-link=Peter Shirley|last3=Wald|first3=Ingo|publisher=[[Apress]]|date=2021|pages=213–214, 791–792|isbn=9781484271858|hdl=20.500.12657/50334}}</ref> Current home gaming consoles implement dedicated [[Ray-tracing hardware|ray tracing 
hardware components]] in their GPUs for real-time ray tracing effects, which began with the [[ninth generation of video game consoles|ninth-generation]] consoles [[PlayStation 5]], [[Xbox Series X and Series S]].<ref>{{Cite web|url=https://www.theverge.com/2019/6/8/18658147/microsoft-xbox-scarlet-teaser-e3-2019|title=Microsoft hints at next-generation Xbox 'Scarlet' in E3 teasers|last=Warren|first=Tom|date=June 8, 2019|website=[[The Verge]]|access-date=October 8, 2019}}</ref><ref>{{cite web|url=https://www.theverge.com/2019/10/8/20904351/sony-ps5-playstation-5-confirmed-haptic-feedback-features-release-date-2020|title=Sony confirms PlayStation 5 name, holiday 2020 release date|last=Chaim|first=Gartenberg|work=[[The Verge]]|date=October 8, 2019|access-date=October 8, 2019}}</ref><ref>{{cite web|url=https://www.theverge.com/2020/2/24/21150578/microsoft-xbox-series-x-specs-performance-12-teraflops-gpu-details-features|title=Microsoft reveals more Xbox Series X specs, confirms 12 teraflops GPU|last=Warren|first=Tom|work=The Verge|date=February 24, 2020|access-date=February 25, 2020}}</ref><ref>{{cite web|url=https://www.theverge.com/2020/9/9/21428792/microsoft-xbox-series-s-specs-cpu-teraflops-performance-gpu|title=Microsoft reveals Xbox Series S specs, promises four times the processing power of Xbox One|last=Warren|first=Tom|work=The Verge|date=September 9, 2020|access-date=September 9, 2020}}</ref><ref>{{cite magazine|url=https://www.wired.co.uk/article/xbox-series-x-release-date-uk-price-specs|title=Making sense of the rampant Xbox Series X rumour mill|last=Vandervell|first=Andy|magazine=[[Wired (magazine)|Wired]]|date=2020-01-04|access-date=2022-11-13|archive-url=https://web.archive.org/web/20221113010340/https://www.wired.co.uk/article/xbox-series-x-release-date-uk-price-specs|archive-date=2022-11-13|url-status=live}}</ref> On 4 November, 2021, [[Imagination Technologies]] announced their IMG CXT GPU with hardware-accelerated ray tracing.<ref>{{Cite web |last=93digital |date=2021-11-04 |title=Imagination launches the most advanced ray tracing GPU |url=https://www.imaginationtech.com/news/imagination-launches-the-most-advanced-ray-tracing-gpu/ |access-date=2023-09-17 |website=Imagination |language=en-GB}}</ref><ref>{{Cite web |title=Ray Tracing |url=https://www.imaginationtech.com/products/ray-tracing/ |access-date=2023-09-17 |website=Imagination |language=en-GB}}</ref> On January 18, 2022, Samsung announced their [[Exynos#Exynos 2000 series|Exynos 2200]] AP SoC with hardware-accelerated ray tracing.<ref>{{Cite web |title=Samsung Introduces Game Changing Exynos 2200 Processor With Xclipse GPU Powered by AMD RDNA 2 Architecture |url=https://news.samsung.com/global/samsung-introduces-game-changing-exynos-2200-processor-with-xclipse-gpu-powered-by-amd-rdna-2-architecture |access-date=2023-09-17 |website=news.samsung.com |language=en}}</ref> On June 28, 2022, [[Arm (company)|Arm]] announced their [[Mali (processor)#Valhall 4th Gen|Immortalis-G715]] with hardware-accelerated ray tracing.<ref>{{Cite web |date=2022-06-28 |title=Gaming Performance Unleashed with Arm's new GPUs - Announcements - Arm Community blogs - Arm Community |url=https://community.arm.com/arm-community-blogs/b/announcements/posts/gaming-performance-unleashed |access-date=2023-09-17 |website=community.arm.com |language=en}}</ref> On November 16, 2022, [[Qualcomm]] announced their [[List of Qualcomm Snapdragon systems on chips#Snapdragon 8 Gen 2 (2023)|Snapdragon 8 Gen 2]] with hardware-accelerated ray tracing.<ref>{{Cite web 
|title=Snapdragon 8 Gen 2 Defines a New Standard for Premium Smartphones |url=https://www.qualcomm.com/news/releases/2022/11/snapdragon-8-gen-2-defines-a-new-standard-for-premium-smartphone |access-date=2023-09-17 |website=www.qualcomm.com |language=en}}</ref><ref>{{Cite web |title=New, Snapdragon 8 Gen 2: 8 extraordinary mobile experiences, unveiled |url=https://www.qualcomm.com/news/onq/2022/11/new-snapdragon-8-gen-2-8-extraordinary-mobile-experiences-unveiled |access-date=2023-09-17 |website=www.qualcomm.com |language=en}}</ref> On September 12, 2023, Apple introduced hardware-accelerated ray tracing in its chip designs, beginning with the [[Apple A17|A17 Pro chip]] for iPhone 15 Pro models.<ref>{{Cite web |last1=Smith |first1=Ryan |last2=Bonshor |first2=Gavin |title=The Apple 2023 Fall iPhone Event Live Blog (Starts at 10am PT/17:00 UTC) |url=https://www.anandtech.com/show/20051/the-apple-2023-fall-iphone-event-live-blog |access-date=2023-09-17 |website=www.anandtech.com}}</ref><ref name=":apple0">{{Cite web |title=Apple unveils iPhone 15 Pro and iPhone 15 Pro Max |url=https://www.apple.com/newsroom/2023/09/apple-unveils-iphone-15-pro-and-iphone-15-pro-max/ |access-date=2024-10-27 |website=Apple Newsroom |language=en-US}}</ref> Later the same year, Apple released the M3 family of processors with hardware-accelerated ray tracing support.<ref name=":apple1">{{Cite web |title=Apple unveils M3, M3 Pro, and M3 Max, the most advanced chips for a personal computer |url=https://www.apple.com/newsroom/2023/10/apple-unveils-m3-m3-pro-and-m3-max-the-most-advanced-chips-for-a-personal-computer/ |access-date=2024-10-27 |website=Apple Newsroom |language=en-US}}</ref> The technology is accessible across iPhones, iPads, and Mac computers via the [[Metal (API)|Metal API]]. Apple reports up to a 4x performance increase over previous software-based ray tracing on the phone<ref name=":apple0" /> and up to a 2.5x increase comparing the M3 to the M1 chips.<ref name=":apple1" /> The hardware implementation includes acceleration structure traversal and dedicated ray-box intersection testing, and the API supports RayQuery (Inline Ray Tracing) as well as RayPipeline features.<ref>{{Cite web |title=Discover ray tracing with Metal - WWDC20 - Videos |url=https://developer.apple.com/videos/play/wwdc2020/10012/ |access-date=2024-10-27 |website=Apple Developer |language=en}}</ref>

== Computational complexity ==
Various complexity results have been proven for certain formulations of the ray tracing problem. In particular, if the decision version of the ray tracing problem is defined as follows<ref>{{cite web |title=Computability and Complexity of Ray Tracing |url=https://www.cs.duke.edu/~reif/paper/tygar/raytracing.pdf |website=CS.Duke.edu }}</ref> – given a light ray's initial position and direction and some fixed point, does the ray eventually reach that point? – then the referenced paper proves the following results:
* Ray tracing in 3-D optical systems with a finite set of reflective or refractive objects represented by a system of rational quadratic inequalities is [[Undecidable problem|undecidable]].
* Ray tracing in 3-D optical systems with a finite set of refractive objects represented by a system of rational linear inequalities is undecidable.
* Ray tracing in 3-D optical systems with a finite set of rectangular reflective or refractive objects is undecidable.
* Ray tracing in 3-D optical systems with a finite set of reflective or partially reflective objects represented by a system of linear inequalities, some of which can be irrational, is undecidable.
* Ray tracing in 3-D optical systems with a finite set of reflective or partially reflective objects represented by a system of rational linear inequalities is [[PSPACE]]-hard.
* For any dimension equal to or greater than 2, ray tracing with a finite set of parallel and perpendicular reflective surfaces represented by rational linear inequalities is in PSPACE.

== Software architecture ==

=== Middleware ===
* [[GPUOpen]]
* [[Nvidia GameWorks]]

=== API ===
* [[Metal (API)]]
* [[Vulkan]]
* [[DirectX]]

== See also ==
{{Div col|colwidth=20em}}
* [[Beam tracing]]
* [[Cone tracing]]
* [[Distributed ray tracing]]
* [[Global illumination]]
* [[Gouraud shading]]
* [[List of ray tracing software]]
* [[Parallel computing]]
* [[Path tracing]]
* [[Phong shading]]
* [[Progressive meshes]]
* [[Shading]]
* [[Specular reflection]]
* [[Tessellation]]
* [[Per-pixel lighting]]
{{Div col end}}

==References==
{{Reflist|30em}}

== External links ==
* [https://web.archive.org/web/20190520020303/http://www.few.vu.nl/~kielmann/theses/avdploeg.pdf Interactive Ray Tracing: The replacement of rasterization?]
* [https://www.youtube.com/watch?v=WV4qXzM641o The Compleat Angler (1978)]
* [http://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing Writing a Simple Ray Tracer (scratchapixel)] {{Webarchive|url=https://web.archive.org/web/20150228122303/http://scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing |date=February 28, 2015 }}
* [https://marcin-chwedczuk.github.io/ray-tracing-torus Ray tracing a torus]
* [https://raytracing.github.io/ Ray Tracing in One Weekend Book Series]
* [https://jacco.ompf2.com/2024/04/24/ray-tracing-with-voxels-in-c-series-part-1/ Ray Tracing with Voxels - Part 1]

{{Computer graphics}}

[[Category:Ray tracing (graphics)| ]]
[[Category:Geometrical optics]]
[[Category:Virtual reality]]
[[Category:Global illumination algorithms]]
[[Category:Computer graphics]]
[[Category:3D computer graphics]]
[[Category:Shading]]