== Techniques ==
Choosing how to render a 3D scene usually involves trade-offs between speed, memory usage, and realism (although realism is not always desired). The '''{{visible anchor|algorithms}}''' developed over the years follow a loose progression, with more advanced methods becoming practical as computing power and memory capacity increased. Multiple techniques may be used for a single final image.

An important distinction is between [[Image and object order rendering|image order]] algorithms, which iterate over pixels in the image, and [[Image and object order rendering|object order]] algorithms, which iterate over objects in the scene. For simple scenes, object order is usually more efficient, as there are fewer objects than pixels.{{r|n=Marschner2022|loc=Ch. 4}}

; [[vector graphics|2D vector graphics]] : The [[vector monitor|vector displays]] of the 1960s-1970s used deflection of an [[Cathode ray|electron beam]] to draw [[line segment]]s directly on the screen. Nowadays, [[vector graphics]] are rendered by [[rasterization]] algorithms that also support filled shapes. In principle, any 2D vector graphics renderer can be used to render 3D objects by first projecting them onto a 2D image plane.{{r|n=Foley82|p=93, 431, 505, 553}}
; [[Rasterisation#3D images|3D rasterization]] : Adapts 2D rasterization algorithms so they can be used more efficiently for 3D rendering, handling [[Hidden-surface determination|hidden surface removal]] via [[scanline rendering|scanline]] or [[Z-buffering|z-buffer]] techniques. Different realistic or stylized effects can be obtained by coloring the pixels covered by the objects in different ways. [[Computer representation of surfaces|Surfaces]] are typically divided into [[Polygon mesh|meshes]] of triangles before being rasterized. Rasterization is usually synonymous with "object order" rendering (as described above).{{r|n=Foley82|p=560-561, 575-590}}{{r|n=Raghavachary2005|loc=8.5}}{{r|n=Marschner2022|loc=Ch. 9}}
; [[Ray casting]] : Uses geometric formulas to compute the first object that a [[Line (geometry)#Ray|ray]] intersects.{{r|n=RayTracingGems_1|p=8}} It can be used to implement "image order" rendering by casting a ray for each pixel, and finding a corresponding point in the scene. Ray casting is a fundamental operation used for both graphical and non-graphical purposes,{{r|n=RealTimeRayTracing|p=6}} e.g. determining whether a point is in shadow, or checking what an enemy can see in a [[Artificial intelligence in video games|game]].
; [[Ray tracing (graphics)|Ray tracing]] : Simulates the bouncing paths of light caused by [[specular reflection]] and [[refraction]], requiring a varying number of ray casting operations for each path. Advanced forms use [[Monte Carlo method|Monte Carlo techniques]] to render effects such as area lights, [[depth of field]], blurry reflections, and [[Umbra, penumbra and antumbra|soft shadows]], but computing [[global illumination]] is usually in the domain of path tracing.{{r|n=RayTracingGems_1|p=9-13}}{{r|IntroToRTCh5}}
; [[Radiosity (computer graphics)|Radiosity]] : A [[Finite element method|finite element analysis]] approach that breaks surfaces in the scene into pieces, and estimates the amount of light that each piece receives from light sources, or indirectly from other surfaces. Once the [[irradiance]] of each surface is known, the scene can be rendered using rasterization or ray tracing.{{r|Glassner95|p=888-890, 1044-1045}}
; [[Path tracing]] : Uses [[Monte Carlo method|Monte Carlo integration]] with a simplified form of ray tracing, computing the average brightness of a [[Sampling (statistics)|sample]] of the possible paths that a photon could take when traveling from a light source to the camera (for some images, thousands of paths need to be sampled per pixel{{r|n=RealTimeRayTracing|p=8}}). It was introduced as a [[Unbiased rendering|statistically unbiased]] way to solve the [[rendering equation]], giving ray tracing a rigorous mathematical foundation.{{r|Kajiya1986}}{{r|n=RayTracingGems_1|p=11-13}}

Each of the above approaches has many variations, and there is some overlap. Path tracing may be considered either a distinct technique or a particular type of ray tracing.{{r|Glassner95|p=846, 1021}} Note that the [[usage (language)|usage]] of terminology related to ray tracing and path tracing has changed significantly over time.{{r|n=RayTracingGems_1|p=7}}

[[File:Real-time_Raymarched_Terrain.png|thumb|Rendering of a fractal terrain by [[ray marching]]]]
[[Ray marching]] is a family of algorithms, used by ray casting, for finding intersections between a ray and a complex object, such as a [[Volume ray casting|volumetric dataset]] or a surface defined by a [[signed distance function]]. It is not, by itself, a rendering method, but it can be incorporated into ray tracing and path tracing, and is used by rasterization to implement screen-space reflection and other effects.{{r|n=RayTracingGems_1|p=13}}

A technique called [[photon mapping]] traces paths of photons from a light source to an object, accumulating data about [[irradiance]] which is then used during conventional ray tracing or path tracing.{{r|Glassner95|p=1037-1039}} Rendering a scene using only rays traced from the light source to the camera is impractical, even though it corresponds more closely to reality, because a huge number of photons would need to be simulated, only a tiny fraction of which actually hit the camera.{{r|IntroToRTCh1|p=7-9}}{{r|n=Foley82|p=587}}

Some authors call conventional ray tracing "backward" ray tracing because it traces the paths of photons backwards from the camera to the light source, and call following paths from the light source (as in photon mapping) "forward" ray tracing.{{r|IntroToRTCh1|p=7-9}} However, sometimes the meaning of these terms is reversed.{{r|Arvo1986}} Tracing rays starting at the light source can also be called ''particle tracing'' or ''light tracing'', which avoids this ambiguity.{{r|Veach1997|p=92}}{{r|Dutré2015|loc=4.5.4}}

Real-time rendering, including video game graphics, typically uses rasterization, but increasingly combines it with ray tracing and path tracing.{{r|RealTimeRayTracing|page=2}} To enable realistic [[global illumination]], real-time rendering often relies on pre-rendered ("baked") lighting for stationary objects. For moving objects, it may use a technique called ''light probes'', in which lighting is recorded by rendering omnidirectional views of the scene at chosen points in space (often points on a grid to allow easier [[interpolation]]). These are similar to [[reflection mapping|environment maps]], but typically use a very low resolution or an approximation such as [[spherical harmonics]].{{r|UnityLightProbes}} (Note: [[Blender (software)|Blender]] uses the term 'light probes' for a more general class of pre-recorded lighting data, including reflection maps.{{r|BlenderSettingsLightProbes}})
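The distinction between image order and object order rendering, noted at the start of this section, can be illustrated with a minimal sketch (not drawn from the cited sources; <code>intersect_scene</code>, <code>project_to_pixels</code>, and <code>shade</code> are hypothetical helper functions):

<syntaxhighlight lang="python">
# Illustrative sketch only: contrasts the two loop structures.
# "intersect_scene", "project_to_pixels", and "shade" are hypothetical helpers.

def render_image_order(scene, camera, width, height):
    """Image order: iterate over pixels, asking the scene what is visible at each one."""
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            hit = intersect_scene(scene, camera.ray_through_pixel(x, y))
            if hit is not None:
                image[y][x] = shade(hit)
    return image

def render_object_order(scene, camera, width, height):
    """Object order: iterate over objects, marking the pixels each one covers."""
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for obj in scene.objects:
        for x, y, hit in project_to_pixels(obj, camera, width, height):
            image[y][x] = shade(hit)   # (hidden surface removal omitted for brevity)
    return image
</syntaxhighlight>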
{{Gallery
| title = Examples comparing different rendering techniques
| mode = nolines
| File:Rendering techniques example, rasterization, low quality, Blender EEVEE.png
| A low quality rasterized image, rendered by [[Blender (software)|Blender]]'s EEVEE renderer with low [[shadow mapping|shadow map]] resolution and a low-resolution mesh
| alt1 = 3D rendered image showing three copies of a cartoon cow. The one on the left has a mirror surface, and the one on the right uses a transparent glass material. The shadows of the cows are blocky (like blurry pixels) due to low quality settings in the renderer.
| File:Rendering techniques example, path tracing, low quality, Blender Cycles.png
| A low quality path traced image, rendered by Blender's Cycles renderer with only 16 sampled paths per pixel and a low-resolution mesh
| alt2 = 3D rendered image showing three copies of a cartoon cow. The one on the left has a mirror surface, and the one on the right uses a transparent glass material. The image is speckled with many white dots, especially in the shadowed areas, due to low quality settings in the renderer. The reflection, transparency, and lighting are realistic, but the speckles distract from this.
| File:Rendering techniques example, ray tracing, low quality, POV-Ray.png
| A ray traced image, using the [[POV-Ray]] program (using only its ray tracing features) with a low-resolution mesh
| alt3 = 3D rendered image showing three copies of a cartoon cow. The one on the left has a mirror surface, and the one on the right uses a transparent glass material. The outlines are angular and there are some defects (due to the low-resolution mesh of the models), and the transparent cow has no shadow.
| File:Rendering techniques example, rasterization, high quality, Blender EEVEE.png
| A higher quality rasterized image, using [[Blender (software)|Blender]]'s EEVEE renderer with light probes
| alt4 = 3D rendered image showing three copies of a cartoon cow. The one on the left has a mirror surface, and the one on the right uses a transparent glass material. The outlines of the cows and the shadows are smooth with no blockiness or angular defects, and the reflection looks quite realistic, but the transparency does not look convincing, and the lighting in the shadowed areas of the cows is not quite realistic.
| File:Rendering techniques example, path tracing, high quality, Blender Cycles.png
| A higher quality path traced image, using [[Blender (software)|Blender]]'s Cycles renderer with 2000 sampled paths per pixel
| alt5 = 3D rendered image showing three copies of a cartoon cow. The one on the left has a mirror surface, and the one on the right uses a transparent glass material. The outlines of the cows and the shadows are smooth with no blockiness or angular defects. There are a few speckles of white pixels, but far fewer than in the low-quality image. The reflection, transparency, and lighting look realistic.
| File:Rendering techniques example, ray tracing, radiosity, photon mapping, POV-Ray.png
| An image rendered using [[POV-Ray]]'s [[ray tracing (graphics)|ray tracing]], [[radiosity (computer graphics)|radiosity]] and [[photon mapping]] features
| alt6 = 3D rendered image showing three copies of a cartoon cow. The one on the left has a mirror surface, and the one on the right uses a transparent glass material. The outlines of the cows and the shadows are smooth with no blockiness or angular defects. The lighting is realistic, including in the shadowed areas. The base surface is illuminated by bright spots and lines ("caustics") caused by light being focused by the reflective and transparent cows.
| File:Rendering techniques example, path tracing, realistic, Blender Cycles.jpg
| A more realistic path traced image, using [[Blender (software)|Blender]]'s Cycles renderer with [[image-based lighting]]
| alt7 = 3D rendered image showing three copies of a cartoon cow. The one on the left has a metallic surface, and the one on the right uses a transparent glass material. The cow in the center appears made of glazed porcelain. The cows are standing on a wooden table. Lights and other background details from a cafe environment are reflected in the slightly glossy table and the cows.
| File:Rendering techniques example, spectral, photon mapping, POV-Ray.png
| A [[spectral rendering|spectrally rendered]] image, using [[POV-Ray]]'s [[ray tracing (graphics)|ray tracing]], [[radiosity (computer graphics)|radiosity]] and [[photon mapping]] features
| alt8 = 3D rendered image showing three copies of a cartoon cow. The one on the left has a mirror surface, and the one on the right uses a transparent glass material. The base surface is illuminated by finely detailed bright spots and lines ("caustics") caused by light being focused by the reflective and transparent cows. The caustics are colorful in some places, due to chromatic dispersion.
}}

===Rasterization<span class="anchor" id="Rasterization"></span><span class="anchor" id="Scanline rendering and rasterization"></span>===
{{main|Rasterization}}
[[File:Latest Rendering of the E-ELT.jpg|thumb|An architectural visualization of the [[Extremely Large Telescope]] from 2009, likely rendered using a combination of techniques]]
The term ''rasterization'' (in a broad sense) encompasses many techniques used for 2D rendering and [[Real-time computer graphics|real-time]] 3D rendering. 3D [[Computer animation|animated films]] were rendered by rasterization before [[Ray tracing (graphics)|ray tracing]] and [[path tracing]] became practical. A renderer combines rasterization with ''geometry processing'' (which is not specific to rasterization) and ''pixel processing'', which computes the [[RGB color model|RGB color values]] to be placed in the ''[[framebuffer]]'' for display.{{r|AkenineMöller2018|loc=2.1}}{{r|Marschner2022|loc=9}}

The main tasks of rasterization (including pixel processing) are:{{r|AkenineMöller2018|loc=2, 3.8, 23.1.1}}
* Determining which pixels are covered by each geometric shape in the 3D scene or 2D image (this is the actual rasterization step, in the strictest sense)
* Blending between colors and depths defined at the [[Vertex (computer graphics)|vertices]] of shapes, e.g. using [[Barycentric coordinate system|barycentric coordinates]] (''interpolation'')
* Determining if parts of shapes are hidden by other shapes, due to 2D layering or 3D depth (''[[Hidden-surface determination|hidden surface removal]]'')
* Evaluating a function for each pixel covered by a shape (''[[shading]]'')
* Smoothing edges of shapes so pixels are less visible (''[[Spatial anti-aliasing|anti-aliasing]]'')
* Blending overlapping transparent shapes (''[[compositing]]'')

3D rasterization is typically part of a ''[[graphics pipeline]]'' in which an application provides [[Triangle mesh|lists of triangles]] to be rendered, and the rendering system transforms and [[3D projection|projects]] their coordinates, determines which triangles are potentially visible in the ''[[viewport]]'', and performs the above rasterization and pixel processing tasks before displaying the final result on the screen.{{r|AkenineMöller2018|loc=2.1}}{{r|Marschner2022|loc=9}}

Historically, 3D rasterization used algorithms like the ''[[Warnock algorithm]]'' and ''[[scanline rendering]]'' (also called "scan-conversion"), which can handle arbitrary polygons and can rasterize many shapes simultaneously. Although such algorithms are still important for 2D rendering, 3D rendering now usually divides shapes into triangles and rasterizes them individually using simpler methods.{{r|Warnock1969}}{{r|Bouknight1970}}{{r|Foley82|pp=456, 561-569}}

[[Digital differential analyzer (graphics algorithm)|High-performance algorithms]] exist for rasterizing [[Bresenham's line algorithm|2D lines]], including [[Xiaolin Wu's line algorithm|anti-aliased lines]], as well as [[Midpoint circle algorithm|ellipses]] and filled triangles. An important special case of 2D rasterization is [[Font rasterization|text rendering]], which requires careful anti-aliasing and rounding of coordinates to avoid distorting the [[letterform]]s and preserve spacing, density, and sharpness.{{r|Marschner2022|loc=9.1.1}}{{r|RasterTragedy}}

After 3D coordinates have been [[3D projection|projected]] onto the [[image plane]], rasterization is primarily a 2D problem, but the 3rd dimension necessitates ''[[Hidden-surface determination|hidden surface removal]]''. Early computer graphics used [[Computational geometry|geometric algorithms]] or ray casting to remove the hidden portions of shapes, or used the ''[[painter's algorithm]]'', which sorts shapes by depth (distance from camera) and renders them from back to front. Depth sorting was later avoided by incorporating depth comparison into the [[scanline rendering]] algorithm. The ''[[Z-buffering|z-buffer]]'' algorithm performs the comparisons indirectly by including a depth or "z" value in the [[framebuffer]]. A pixel is only covered by a shape if that shape's z value is lower (indicating closer to the camera) than the z value currently in the buffer. The z-buffer requires additional memory (an expensive resource at the time it was invented) but simplifies the rasterization code and permits multiple passes. Memory is now faster and more plentiful, and a z-buffer is almost always used for real-time rendering.{{r|Watkins1970}}{{r|Catmull1974}}{{r|Foley82|pp=553-570}}{{r|AkenineMöller2018|loc=2.5.2}}
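The following simplified sketch (illustrative only, not taken from the cited sources) shows how several of the tasks listed above fit together for a single triangle: coverage testing and interpolation using barycentric coordinates, and hidden surface removal using a z-buffer. The vertex format and the <code>shade</code> helper are assumptions, not part of any real API.

<syntaxhighlight lang="python">
# Illustrative sketch of rasterizing one triangle with a z-buffer.
# Vertices are (x, y, z) tuples already projected to screen space; "shade" is hypothetical.

def edge(ax, ay, bx, by, px, py):
    """Signed area term for triangle (a, b, p); used for coverage and barycentric weights."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, framebuffer, zbuffer, shade):
    area = edge(v0[0], v0[1], v1[0], v1[1], v2[0], v2[1])
    if area == 0:
        return  # degenerate triangle covers no pixels
    height, width = len(framebuffer), len(framebuffer[0])
    for y in range(height):          # a real rasterizer would only visit a bounding box
        for x in range(width):
            # Barycentric weights of the pixel center relative to the triangle.
            w0 = edge(v1[0], v1[1], v2[0], v2[1], x + 0.5, y + 0.5) / area
            w1 = edge(v2[0], v2[1], v0[0], v0[1], x + 0.5, y + 0.5) / area
            w2 = edge(v0[0], v0[1], v1[0], v1[1], x + 0.5, y + 0.5) / area
            if w0 < 0 or w1 < 0 or w2 < 0:
                continue  # pixel is not covered by the triangle
            # Interpolate depth, then compare with the z-buffer (hidden surface removal).
            z = w0 * v0[2] + w1 * v1[2] + w2 * v2[2]
            if z < zbuffer[y][x]:     # lower z means closer to the camera
                zbuffer[y][x] = z
                framebuffer[y][x] = shade(v0, v1, v2, (w0, w1, w2))
</syntaxhighlight>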
A drawback of the basic [[Z-buffering|z-buffer algorithm]] is that each pixel ends up either entirely covered by a single object or filled with the background color, causing jagged edges in the final image. Early ''[[Spatial anti-aliasing|anti-aliasing]]'' approaches addressed this by detecting when a pixel is partially covered by a shape, and calculating the covered area. The [[A-buffer]] and other [[supersampling]] and [[Multisample anti-aliasing|multi-sampling]] techniques solve the problem less precisely but with higher performance. For real-time 3D graphics, it has become common to use [[Fast approximate anti-aliasing|complicated heuristics]] (and even [[Deep learning anti-aliasing|neural networks]]) to perform anti-aliasing.{{r|Catmull1974}}{{r|Carpenter1984}}{{r|Marschner2022|loc=9.3}}{{r|AkenineMöller2018|loc=5.4.2}}

In 3D rasterization, color is usually determined by a ''[[Shader#Pixel shaders|pixel shader]]'' or ''fragment shader'', a small program that is run for each pixel. The shader does not (or cannot) directly access 3D data for the entire scene (this would be very slow, and would result in an algorithm similar to ray tracing), and a variety of techniques have been developed to render effects like [[Shadow mapping|shadows]] and [[Reflection (computer graphics)|reflections]] using only [[texture mapping]] and multiple passes.{{r|Marschner2022|loc=17.8}}

Older and more basic 3D rasterization implementations did not support shaders, and used simple shading techniques such as ''[[Shading#Flat shading|flat shading]]'' (lighting is computed once for each triangle, which is then rendered entirely in one color), ''[[Gouraud shading]]'' (lighting is computed using [[Normal (geometry)|normal vectors]] defined at vertices and then colors are interpolated across each triangle), or ''[[Phong shading]]'' (normal vectors are interpolated across each triangle and lighting is computed for each pixel).{{r|Marschner2022|loc=9.2}}

Until relatively recently, [[Pixar]] used rasterization for rendering its [[Computer animation|animated films]]. Unlike the renderers commonly used for real-time graphics, the [[Reyes rendering|Reyes rendering system]] in Pixar's [[Pixar RenderMan|RenderMan]] software was optimized for rendering very small (pixel-sized) polygons, and incorporated [[stochastic]] sampling techniques more typically associated with [[Ray tracing (graphics)|ray tracing]].{{r|Raghavachary2005|loc=2, 6.3}}{{r|Cook1987}}

=== Ray casting ===
{{Main|Ray casting}}
One of the simplest ways to render a 3D scene is to test if a [[Line (geometry)#Ray|ray]] starting at the viewpoint (the "eye" or "camera") intersects any of the geometric shapes in the scene, repeating this test using a different ray direction for each pixel. This method, called ''ray casting'', was important in early computer graphics, and is a fundamental building block for more advanced algorithms. Ray casting can be used to render shapes defined by ''[[constructive solid geometry]]'' (CSG) operations.{{r|RayTracingGems_1|p=8-9}}{{r|IntroToRTCh6|pp=246-249}}

Early ray casting experiments include the work of Arthur Appel in the 1960s. Appel rendered shadows by casting an additional ray from each visible surface point towards a light source. He also tried rendering the density of illumination by casting random rays from the light source towards the object and [[Plotter|plotting]] the intersection points (similar to the later technique called ''[[photon mapping]]'').{{r|Appel1968}}
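As a minimal illustration of the per-pixel intersection test described above (not taken from the cited sources), the following sketch casts a ray through each pixel and tests it against a list of spheres, keeping the nearest hit. The pinhole camera model and the scene representation are simplifying assumptions.

<syntaxhighlight lang="python">
import math

# Illustrative ray casting sketch. The scene is a list of spheres given as
# (center, radius, color); the camera is a simple pinhole at the origin looking down -z.

def intersect_sphere(origin, direction, center, radius):
    """Return the smallest positive distance t at which the ray hits the sphere, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c  # "direction" is assumed to be normalized, so a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    if t <= 0:
        t = (-b + math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def cast_rays(spheres, width, height):
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Ray direction through the pixel center (simple pinhole camera).
            dx = (2 * (x + 0.5) / width - 1) * width / height
            dy = 1 - 2 * (y + 0.5) / height
            length = math.sqrt(dx * dx + dy * dy + 1)
            direction = (dx / length, dy / length, -1 / length)
            nearest, color = float("inf"), (0, 0, 0)
            for center, radius, sphere_color in spheres:
                t = intersect_sphere((0, 0, 0), direction, center, radius)
                if t is not None and t < nearest:
                    nearest, color = t, sphere_color  # keep the closest hit
            image[y][x] = color
    return image
</syntaxhighlight>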
[[File:Mandelbulb_p8a.jpg|thumb|[[Ray marching]] can be used to find the first intersection of a ray with an intricate shape such as this [[Mandelbulb]] fractal.]]
When rendering scenes containing many objects, testing the intersection of a ray with every object becomes very expensive. Special [[data structure]]s are used to speed up this process by allowing large numbers of objects to be excluded quickly (such as objects behind the camera). These structures are analogous to [[database index]]es for finding the relevant objects. The most common are the ''[[bounding volume hierarchy]]'' (BVH), which stores a pre-computed [[bounding volume|bounding box or sphere]] for each branch of a [[Tree (data structure)|tree]] of objects, and the ''[[k-d tree]]'' which [[Recursion (computer science)|recursively]] divides space into two parts. Recent [[GPU]]s include hardware acceleration for BVH intersection tests. K-d trees are a special case of ''[[binary space partitioning]]'', which was frequently used in early computer graphics (it can also generate a rasterization order for the [[painter's algorithm]]). ''[[Octree]]s'', another historically popular technique, are still often used for volumetric data.{{r|RealTimeRayTracing|pp=16-17}}{{r|RayTracingGems_Forword_Stich}}{{r|IntroToRTCh6}}{{r|Hughes2014|loc=36.2}}

Geometric formulas are sufficient for finding the intersection of a ray with shapes like [[sphere]]s, [[polygon]]s, and [[polyhedron|polyhedra]], but for most curved surfaces there is no [[Closed-form expression#Analytic expression|analytic solution]], or the intersection is difficult to compute accurately using limited precision [[Floating-point arithmetic|floating point numbers]]. [[Root-finding algorithm]]s such as [[Newton's method]] can sometimes be used. To avoid these complications, curved surfaces are often approximated as [[Triangle mesh|meshes of triangles]]. [[Volume rendering]] (e.g. rendering clouds and smoke), and some surfaces such as [[fractal]]s, may require [[ray marching]] instead of basic ray casting.{{r|IntroToRTCh2}}{{r|RayTracingGems_1|p=13}}{{r|AkenineMöller2018|loc=14, 17.3}}

=== Ray tracing ===
{{Main|Ray tracing (graphics)}}
[[Image:SpiralSphereAndJuliaDetail1.jpg|thumb|250px|''Spiral Sphere and Julia, Detail'', a computer-generated image created by visual artist Robert W. McGregor using only [[POV-Ray]] 3.6 and its built-in scene description language]]
Ray casting can be used to render an image by tracing [[Ray (optics)|light rays]] backwards from a simulated camera. After finding a point on a surface where a ray originated, another ray is traced towards the light source to determine if anything is casting a shadow on that point. If not, a ''[[Bidirectional reflectance distribution function|reflectance model]]'' (such as [[Lambertian reflectance]] for [[Paint sheen|matte]] surfaces, or the [[Phong reflection model]] for glossy surfaces) is used to compute the probability that a [[photon]] arriving from the light would be reflected towards the camera, and this is multiplied by the brightness of the light to determine the pixel brightness. If there are multiple light sources, brightness contributions of the lights are added together. For color images, calculations are repeated for multiple [[Visible spectrum|wavelengths]] of light (e.g. red, green, and blue).{{r|AkenineMöller2018|loc=11.2.2}}{{r|RayTracingGems_1|p=8}}
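The shading step just described can be sketched as follows (illustrative only, not from the cited sources). Here <code>intersect_scene(scene, origin, direction)</code> is a hypothetical helper that returns the distance to the nearest hit or <code>None</code>, the light is a simple point light, and brightness is treated as a single scalar rather than a full color.

<syntaxhighlight lang="python">
import math

# Illustrative sketch: Lambertian shading of a ray hit, with a shadow ray toward the light.
# "intersect_scene" is a hypothetical helper returning the distance to the nearest hit, or None.

def shade_hit(scene, hit_point, normal, albedo, light_position, light_brightness):
    # Direction and distance from the hit point to the light source.
    to_light = [light_position[i] - hit_point[i] for i in range(3)]
    distance = math.sqrt(sum(c * c for c in to_light))
    to_light = [c / distance for c in to_light]

    # Shadow ray: offset the origin slightly to avoid re-hitting the same surface.
    shadow_origin = [hit_point[i] + 1e-4 * normal[i] for i in range(3)]
    blocker_distance = intersect_scene(scene, shadow_origin, to_light)
    if blocker_distance is not None and blocker_distance < distance:
        return 0.0  # the point is in shadow

    # Lambertian reflectance: proportional to the cosine of the angle of incidence.
    cos_theta = max(0.0, sum(normal[i] * to_light[i] for i in range(3)))
    return albedo * cos_theta * light_brightness

def shade_with_lights(scene, hit_point, normal, albedo, lights):
    # Contributions from multiple light sources are added together.
    return sum(shade_hit(scene, hit_point, normal, albedo, position, brightness)
               for position, brightness in lights)
</syntaxhighlight>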
''Classical ray tracing'' (also called ''Whitted-style'' or ''recursive'' ray tracing) extends this method so it can render mirrors and transparent objects. If a ray traced backwards from the camera originates at a point on a mirror, the [[Specular reflection|reflection formula]] from [[geometric optics]] is used to calculate the direction the reflected ray came from, and another ray is cast backwards in that direction. If a ray originates at a transparent surface, rays are cast backwards for both [[Specular reflection|reflected]] and [[Refraction|refracted]] rays (using [[Snell's law]] to compute the refracted direction), and so ray tracing needs to support a branching "tree" of rays. In simple implementations, a [[Recursion (computer science)|recursive function]] is called to trace each ray.{{r|AkenineMöller2018|loc=11.2.2}}{{r|RayTracingGems_1|p=9}}

Ray tracing usually performs [[Spatial anti-aliasing|anti-aliasing]] by taking the average of multiple [[Sampling (statistics)|samples]] for each pixel. It may also use multiple samples for effects like [[depth of field]] and [[motion blur]]. If evenly-spaced ray directions or times are used for each of these features, many rays are required, and some aliasing will remain. ''Cook-style'', ''stochastic'', or ''Monte Carlo ray tracing'' avoids this problem by using [[Monte Carlo method|random sampling]] instead of evenly-spaced samples. This type of ray tracing is commonly called [[distributed ray tracing|''distributed ray tracing'', or ''distribution ray tracing'']] because it samples rays from [[probability distribution]]s. Distribution ray tracing can also render realistic "soft" shadows from large lights by using a random sample of points on the light when testing for shadowing, and it can simulate [[chromatic aberration]] by sampling multiple wavelengths from the [[Visible spectrum|spectrum of light]].{{r|RayTracingGems_1|p=10}}{{r|IntroToRTCh1|p=25}}

Real surface materials reflect small amounts of light in almost every direction because they have small (or microscopic) bumps and grooves. A distribution ray tracer can simulate this by sampling possible ray directions, which allows rendering blurry reflections from glossy and metallic surfaces. However, if this procedure is repeated [[Recursion|recursively]] to simulate realistic indirect lighting, and if more than one sample is taken at each surface point, the tree of rays quickly becomes huge. Another kind of ray tracing, called ''path tracing'', handles indirect light more efficiently, avoiding branching, and ensures that the distribution of all possible paths from a light source to the camera is sampled in an [[Unbiased rendering|unbiased]] way.{{r|IntroToRTCh1|pp=25-27}}{{r|Kajiya1986}}
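The recursive structure of Whitted-style ray tracing described above can be sketched as follows (illustrative only, not from the cited sources). The helpers <code>intersect_scene</code>, <code>direct_lighting</code>, and <code>reflect</code> are hypothetical, and brightness is again treated as a scalar.

<syntaxhighlight lang="python">
# Illustrative sketch of recursive (Whitted-style) ray tracing with mirror reflection.
# "intersect_scene" returns a hit with .point, .normal, and .material, or None;
# "direct_lighting" and "reflect" are hypothetical helpers.

MAX_DEPTH = 4  # limit on the depth of the tree of rays

def trace(scene, origin, direction, depth=0):
    if depth > MAX_DEPTH:
        return scene.background_brightness
    hit = intersect_scene(scene, origin, direction)
    if hit is None:
        return scene.background_brightness

    # Direct lighting: shadow rays plus a reflectance model, as in basic ray tracing.
    brightness = direct_lighting(scene, hit)

    # Mirror reflection: cast another ray backwards in the reflected direction.
    if hit.material.reflectivity > 0:
        reflected = reflect(direction, hit.normal)
        brightness += hit.material.reflectivity * trace(scene, hit.point, reflected, depth + 1)

    # Refraction through transparent surfaces (using Snell's law) would add a second
    # recursive call here, producing the branching "tree" of rays described above.
    return brightness
</syntaxhighlight>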
Ray tracing was often used for rendering reflections in animated films, until path tracing became standard for film rendering. Films such as ''[[Shrek 2]]'' and ''[[Monsters University]]'' also used distribution ray tracing or path tracing to precompute indirect illumination for a scene or frame prior to rendering it using rasterization.{{r|Christensen2016|pp=118-121}}

Advances in GPU technology have made real-time ray tracing possible in games, although it is currently almost always used in combination with rasterization.{{r|RealTimeRayTracing|page=2}} This enables visual effects that are difficult with only rasterization, including reflection from curved surfaces and interreflective objects,{{r|RayTracingGems_19|page=305}} and shadows that are accurate over a wide range of distances and surface orientations.{{r|n=RayTracingGems_13|p=159-160}} Ray tracing support is included in recent versions of the graphics APIs used by games, such as [[DirectX Raytracing|DirectX]], [[Metal (API)|Metal]], and [[Vulkan]].{{r|KhronosRTInVukan}}

Ray tracing has been used to render simulated [[black hole]]s, and the appearance of objects moving at close to the speed of light, by taking [[curved spacetime|spacetime curvature]] and [[Special relativity|relativistic effects]] into account during light ray simulation.{{r|Riazuelo2019}}{{r|Howard1995}}

=== Radiosity ===
{{Main|Radiosity (computer graphics)}}
[[File:Classical radiosity example, simple scene, no interpolation, direct only and full.png|thumb|Classical radiosity demonstration. Surfaces are divided into 16x16 or 16x32 meshes. Top: direct light only. Bottom: radiosity solution (for [[albedo]] 0.85).]]
[[File:Classical radiosity comparison with path tracing, simple scene, interpolated.png|thumb|Top: the same scene with a finer radiosity mesh, smoothing the patches during final rendering using [[bilinear interpolation]]. Bottom: the scene rendered with path tracing (using the PBRT renderer).]]
Radiosity (named after the [[Radiosity (radiometry)|radiometric quantity of the same name]]) is a method for rendering objects illuminated by light [[Diffuse reflection|bouncing off rough or matte surfaces]]. This type of illumination is called ''indirect light'', ''environment lighting'', ''diffuse lighting'', or ''diffuse interreflection'', and the problem of rendering it realistically is called ''global illumination''. Rasterization and basic forms of ray tracing (other than distribution ray tracing and path tracing) can only roughly approximate indirect light, e.g. by adding a uniform "ambient" lighting amount chosen by the artist. Radiosity techniques are also suited to rendering scenes with ''area lights'' such as rectangular fluorescent lighting panels, which are difficult for rasterization and traditional ray tracing. Radiosity is considered a [[Physically based rendering|physically-based method]], meaning that it aims to simulate the flow of light in an environment using equations and experimental data from physics; however, it often assumes that all surfaces are opaque and perfectly [[Lambertian reflectance|Lambertian]], which reduces realism and limits its applicability.{{r|AkenineMöller2018|loc=10, 11.2.1}}{{r|Glassner95|p=888, 893}}{{r|Goral1984}}{{r|Cohen1993|p=6}}
In the original radiosity method (first proposed in 1984), now called ''classical radiosity'', surfaces and lights in the scene are split into pieces called ''patches'', a process called ''[[Mesh generation|meshing]]'' (this step makes it a [[finite element method]]). The rendering code must then determine what fraction of the light being emitted or [[Diffuse reflection|diffusely reflected]] (scattered) by each patch is received by each other patch. These fractions are called ''form factors'' or ''[[view factor]]s'' (first used in engineering to model [[Thermal radiation|radiative heat transfer]]). The form factors are multiplied by the [[albedo]] of the receiving surface and put in a [[Matrix (mathematics)|matrix]]. The lighting in the scene can then be expressed as a matrix equation (or equivalently a [[system of linear equations]]) that can be solved by methods from [[linear algebra]].{{r|Goral1984}}{{r|Dutré2003|p=46}}{{r|Glassner95|p=888, 896}}

Solving the radiosity equation gives the total amount of light emitted and reflected by each patch, which is divided by area to get a value called ''[[Radiosity (radiometry)|radiosity]]'' that can be used when rasterizing or ray tracing to determine the color of pixels corresponding to visible parts of the patch. For real-time rendering, this value (or more commonly the [[irradiance]], which does not depend on local surface albedo) can be pre-computed and stored in a texture (called an ''irradiance map'') or stored as vertex data for 3D models. This feature was used in architectural visualization software to allow real-time walk-throughs of a building interior after computing the lighting.{{r|Glassner95|p=890}}{{r|AkenineMöller2018|loc=11.5.1}}{{r|Cohen1993|p=332}}

The large size of the matrices used in classical radiosity (the square of the number of patches) causes problems for realistic scenes. Practical implementations may use [[Jacobi method|Jacobi]] or [[Gauss–Seidel method|Gauss-Seidel]] iterations, which is equivalent (at least in the Jacobi case) to simulating the propagation of light one bounce at a time until the amount of light remaining (not yet absorbed by surfaces) is insignificant. The number of iterations (bounces) required is dependent on the scene, not the number of patches, so the total work is proportional to the square of the number of patches (in contrast, solving the matrix equation using [[Gaussian elimination]] requires work proportional to the cube of the number of patches). Form factors may be recomputed when they are needed, to avoid storing a complete matrix in memory.{{r|Glassner95|pp=901, 907}}

The quality of rendering is often determined by the size of the patches, e.g. very fine meshes are needed to depict the edges of shadows accurately. An important improvement is ''hierarchical radiosity'', which uses a coarser mesh (larger patches) for simulating the transfer of light between surfaces that are far away from one another, and adaptively sub-divides the patches as needed. This allows radiosity to be used for much larger and more complex scenes.{{r|Glassner95|pp=975, 939}}

Alternative and extended versions of the radiosity method support non-Lambertian surfaces, such as glossy surfaces and mirrors, and sometimes use volumes or "clusters" of objects as well as surface patches. Stochastic or [[Monte Carlo method|Monte Carlo]] radiosity uses [[Sampling (statistics)|random sampling]] in various ways, e.g. taking samples of incident light instead of integrating over all patches, which can improve performance but adds noise (this noise can be reduced by using deterministic iterations as a final step, unlike path tracing noise). Simplified and partially precomputed versions of radiosity are widely used for real-time rendering, combined with techniques such as ''[[octree]] radiosity'' that store approximations of the [[light field]].{{r|Glassner95|pp=979, 982}}{{r|Dutré2003|p=49}}{{r|Bekaert1999}}{{r|AkenineMöller2018|loc=11.5}}
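The bounce-at-a-time iteration described above can be sketched as follows (illustrative only, not from the cited sources). The form factors, albedos, and emitted light are assumed to have been computed already, and the form factor indexing convention is an assumption for this sketch.

<syntaxhighlight lang="python">
# Illustrative sketch of classical radiosity solved with Jacobi-style bounce iterations.
# form_factors[i][j] is the fraction of light leaving patch j that reaches patch i;
# emitted[i] is the light emitted by patch i; albedo[i] is its diffuse reflectivity.

def solve_radiosity(form_factors, albedo, emitted, bounces=8):
    n = len(emitted)
    radiosity = list(emitted)  # light leaving each patch, starting with direct emission
    unshot = list(emitted)     # light from the previous bounce, not yet propagated
    for _ in range(bounces):
        received = [
            sum(form_factors[i][j] * unshot[j] for j in range(n))
            for i in range(n)
        ]
        # Each patch reflects a fraction (its albedo) of the light received this bounce.
        unshot = [albedo[i] * received[i] for i in range(n)]
        for i in range(n):
            radiosity[i] += unshot[i]
    return radiosity  # can then be used when rasterizing or ray tracing the scene
</syntaxhighlight>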
=== Path tracing ===
{{Main|Path tracing}}
As part of the approach known as ''[[physically based rendering]]'', '''[[path tracing]]''' has become the dominant technique for rendering realistic scenes, including effects for movies.{{r|Pharr2023_1_6}} For example, the popular open source 3D software [[Blender (software)|Blender]] uses path tracing in its Cycles renderer.{{r|BlenderCyclesIntro}} Images produced using path tracing for [[global illumination]] are generally noisier than when using [[Radiosity (computer graphics)|radiosity]] (the main competing algorithm for realistic lighting), but radiosity can be difficult to apply to complex scenes and is prone to artifacts that arise from using a [[Tessellation (computer graphics)|tessellated]] representation of [[irradiance]].{{r|Pharr2023_1_6}}{{r|Glassner95|p=975-976, 1045}}

Like ''[[distributed ray tracing]]'', path tracing is a kind of ''[[stochastic]]'' or ''[[Randomized algorithm|randomized]]'' [[Ray tracing (graphics)|ray tracing]] that uses [[Monte Carlo integration|Monte Carlo]] or [[Quasi-Monte Carlo method|Quasi-Monte Carlo]] integration. It was proposed and named in 1986 by [[Jim Kajiya]] in the same paper as the [[rendering equation]]. Kajiya observed that much of the complexity of [[distributed ray tracing]] could be avoided by only tracing a single path from the camera at a time (in Kajiya's implementation, this "no branching" rule was broken by tracing additional rays from each surface intersection point to randomly chosen points on each light source). Kajiya suggested reducing the noise present in the output images by using ''[[stratified sampling]]'' and ''[[importance sampling]]'' for making random decisions such as choosing which ray to follow at each step of a path. Even with these techniques, path tracing would not have been practical for film rendering, using computers available at the time, because the computational cost of generating enough samples to reduce [[variance]] to an acceptable level was too high. ''[[Monster House (film)|Monster House]]'', the first feature film rendered entirely using path tracing, was not released until 20 years later.{{r|Kajiya1986}}{{r|Pharr2023_1_6}}{{r|Kulla2017}}

In its basic form, path tracing is inefficient (requiring too many samples) for rendering [[Caustic (optics)|caustics]] and scenes where light enters indirectly through narrow spaces. Attempts were made to address these weaknesses in the 1990s. ''[[Path tracing#Bidirectional path tracing|Bidirectional path tracing]]'' has similarities to [[photon mapping]], tracing rays from the light source and the camera separately, and then finding ways to connect these paths (but unlike photon mapping it usually samples new light paths for each pixel rather than using the same cached data for all pixels). ''[[Metropolis light transport]]'' samples paths by modifying paths that were previously traced, spending more time exploring paths that are similar to other "bright" paths, which increases the chance of discovering even brighter paths. ''Multiple importance sampling'' provides a way to reduce [[variance]] when combining samples from more than one sampling method, particularly when some samples are much noisier than the others.{{r|Pharr2023_1_6}}{{r|Veach1997}}
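A minimal sketch of the core loop of a path tracer is shown below (illustrative only, not from the cited sources). The helpers <code>intersect_scene</code>, <code>sample_diffuse_direction</code>, and <code>camera.generate_ray</code> are hypothetical, only diffuse surfaces are handled, and brightness is treated as a scalar.

<syntaxhighlight lang="python">
import random

# Illustrative sketch of single-path (non-branching) path tracing for diffuse surfaces.
# "intersect_scene" returns a hit with .point, .normal, .albedo, .emitted, or None;
# "sample_diffuse_direction" picks a random bounce direction about the surface normal.

MAX_BOUNCES = 8

def trace_path(scene, origin, direction):
    """Estimate the brightness carried along one random path traced from the camera."""
    brightness = 0.0
    throughput = 1.0  # fraction of brightness surviving the bounces so far
    for _ in range(MAX_BOUNCES):
        hit = intersect_scene(scene, origin, direction)
        if hit is None:
            return brightness
        brightness += throughput * hit.emitted  # light sources encountered along the path
        throughput *= hit.albedo
        # Continue the path in a single randomly sampled direction (no branching).
        origin, direction = hit.point, sample_diffuse_direction(hit.normal)
    return brightness

def render_pixel(scene, camera, x, y, samples_per_pixel=256):
    """Average many sampled paths for one pixel (Monte Carlo integration)."""
    total = 0.0
    for _ in range(samples_per_pixel):
        # Jitter the sample position within the pixel, which also provides anti-aliasing.
        ray_origin, ray_direction = camera.generate_ray(x + random.random(),
                                                        y + random.random())
        total += trace_path(scene, ray_origin, ray_direction)
    return total / samples_per_pixel
</syntaxhighlight>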
This later work was summarized and expanded upon in [[Eric Veach]]'s 1997 PhD thesis, which helped raise interest in path tracing in the computer graphics community. The [[Autodesk Arnold|Arnold renderer]], first released in 1998, proved that path tracing was practical for rendering frames for films, and that there was a demand for [[Unbiased rendering|unbiased]] and [[Physically based rendering|physically based]] rendering in the film industry; other commercial and open source path tracing renderers began appearing. Computational cost was addressed by rapid advances in [[CPU]] and [[Computer cluster|cluster]] performance.{{r|Pharr2023_1_6}}

Path tracing's relative simplicity and its nature as a [[Monte Carlo method]] (sampling hundreds or thousands of paths per pixel) have made it attractive to implement on a [[Graphics processing unit|GPU]], especially on recent GPUs that support ray tracing acceleration technology such as Nvidia's [[Nvidia RTX|RTX]] and [[OptiX]].{{r|Pharr2023_15}} However, bidirectional path tracing and Metropolis light transport are more difficult to implement efficiently on a GPU.{{r|Otte2015}}{{r|Schmidt2016}}

Research into improving path tracing continues. Many variations of bidirectional path tracing and Metropolis light transport have been explored, as have ways of combining path tracing with photon mapping.{{r|Pharr2016_Ch16}}{{r|Pharr2023_13fr}} Recent ''path guiding'' approaches construct approximations of the [[light field]] probability distribution in each volume of space, so paths can be sampled more effectively.{{r|Pharr2023_13fr}} Techniques have been developed to [[Noise reduction|denoise]] the output of path tracing, reducing the number of paths required to achieve acceptable quality, at the risk of losing some detail or introducing small-scale artifacts that are more objectionable than noise;{{r|Pharr2023_5fr}}{{r|BlenderCyclesReducingNoise}} [[Artificial neural network|neural networks]] are now widely used for this purpose.{{r|BlenderSettingsDenoising}}{{r|OpenImageDenoise}}{{r|NvidiaOptiXDenoiser}}

=== Neural rendering ===
{{Multiple issues|section=yes|
{{Expand section|small=no|date=February 2022}}
{{Update section|date=January 2025}}
}}
'''Neural rendering''' is a rendering method using [[artificial neural network]]s.<ref name="Tewari">{{Cite journal|url=https://onlinelibrary.wiley.com/doi/am-pdf/10.1111/cgf.14022|doi=10.1111/cgf.14022|title=State of the Art on Neural Rendering|year=2020|last1=Tewari|first1=A.|last2=Fried|first2=O.|last3=Thies|first3=J.|last4=Sitzmann|first4=V.|last5=Lombardi|first5=S.|last6=Sunkavalli|first6=K.|last7=Martin-Brualla|first7=R.|last8=Simon|first8=T.|last9=Saragih|first9=J.|last10=Nießner|first10=M.|last11=Pandey|first11=R.|last12=Fanello|first12=S.|last13=Wetzstein|first13=G.|last14=Zhu|first14=J.-Y.|last15=Theobalt|first15=C.|last16=Agrawala|first16=M.|last17=Shechtman|first17=E.|last18=Goldman|first18=D. B.|last19=Zollhöfer|first19=M.|journal=Computer Graphics Forum|volume=39|issue=2|pages=701–727|arxiv=2004.03805|s2cid=215416317}}</ref><ref>{{Cite magazine|last=Knight|first=Will|title=A New Trick Lets Artificial Intelligence See in 3D|language=en-US|magazine=Wired|url=https://www.wired.com/story/new-way-ai-see-3d/|access-date=2022-02-08|issn=1059-1028|archive-date=2022-02-07|archive-url=https://web.archive.org/web/20220207230740/https://www.wired.com/story/new-way-ai-see-3d/|url-status=live}}</ref> Neural rendering includes [[image-based rendering]] methods that are used to [[3D reconstruction|reconstruct 3D models]] from 2-dimensional images.<ref name="Tewari"/> One of these methods is [[photogrammetry]], in which a collection of images of an object, taken from multiple angles, is turned into a 3D model. There have also been recent developments in generating and rendering 3D models from text and coarse paintings, notably by [[Nvidia]], [[Google]], and various other companies.