===Rasterization<span class="anchor" id="Rasterization"></span><span class="anchor" id="Scanline rendering and rasterization"></span>===
{{main|Rasterization}}
[[File:Latest Rendering of the E-ELT.jpg|thumb|An architectural visualization of the [[Extremely Large Telescope]] from 2009, likely rendered using a combination of techniques]]
The term ''rasterization'' (in a broad sense) encompasses many techniques used for 2D rendering and [[Real-time computer graphics|real-time]] 3D rendering. 3D [[Computer animation|animated films]] were rendered by rasterization before [[Ray tracing (graphics)|ray tracing]] and [[path tracing]] became practical. A renderer combines rasterization with ''geometry processing'' (which is not specific to rasterization) and ''pixel processing'', which computes the [[RGB color model|RGB color values]] to be placed in the ''[[framebuffer]]'' for display.{{r|AkenineMöller2018|loc=2.1}}{{r|Marschner2022|loc=9}}

The main tasks of rasterization (including pixel processing) are:{{r|AkenineMöller2018|loc=2, 3.8, 23.1.1}}
* Determining which pixels are covered by each geometric shape in the 3D scene or 2D image (this is the actual rasterization step, in the strictest sense)
* Blending between colors and depths defined at the [[Vertex (computer graphics)|vertices]] of shapes, e.g. using [[Barycentric coordinate system|barycentric coordinates]] (''interpolation'')
* Determining if parts of shapes are hidden by other shapes, due to 2D layering or 3D depth (''[[Hidden-surface determination|hidden surface removal]]'')
* Evaluating a function for each pixel covered by a shape (''[[shading]]'')
* Smoothing edges of shapes so pixels are less visible (''[[Spatial anti-aliasing|anti-aliasing]]'')
* Blending overlapping transparent shapes (''[[compositing]]'')

3D rasterization is typically part of a ''[[graphics pipeline]]'' in which an application provides [[Triangle mesh|lists of triangles]] to be rendered, and the rendering system transforms and [[3D projection|projects]] their coordinates, determines which triangles are potentially visible in the ''[[viewport]]'', and performs the above rasterization and pixel processing tasks before displaying the final result on the screen.{{r|AkenineMöller2018|loc=2.1}}{{r|Marschner2022|loc=9}}

Historically, 3D rasterization used algorithms like the ''[[Warnock algorithm]]'' and ''[[scanline rendering]]'' (also called "scan-conversion"), which can handle arbitrary polygons and can rasterize many shapes simultaneously. Although such algorithms are still important for 2D rendering, 3D rendering now usually divides shapes into triangles and rasterizes them individually using simpler methods.{{r|Warnock1969}}{{r|Bouknight1970}}{{r|Foley82|pp=456, 561-569}}

[[Digital differential analyzer (graphics algorithm)|High-performance algorithms]] exist for rasterizing [[Bresenham's line algorithm|2D lines]], including [[Xiaolin Wu's line algorithm|anti-aliased lines]], as well as [[Midpoint circle algorithm|ellipses]] and filled triangles.
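The coverage and interpolation tasks listed above can be sketched in a few lines of Python. This is an illustrative example, not drawn from the article's sources: it tests each pixel center against a triangle's three edge functions and, where the pixel is covered, blends the vertex colors using barycentric coordinates.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed parallelogram area of (a, b, p); positive if p is left of a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, c0, c1, c2, width, height):
    """Rasterize one 2D triangle over a width x height pixel grid,
    blending the vertex colors c0, c1, c2 with barycentric coordinates.
    Returns a dict mapping covered (x, y) pixels to interpolated colors."""
    covered = {}
    area = edge(*v0, *v1, *v2)
    if area == 0:
        return covered  # degenerate triangle covers nothing
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5           # sample at the pixel center
            w0 = edge(*v1, *v2, px, py) / area  # barycentric weight of v0
            w1 = edge(*v2, *v0, px, py) / area  # barycentric weight of v1
            w2 = edge(*v0, *v1, px, py) / area  # barycentric weight of v2
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # coverage test
                covered[(x, y)] = tuple(w0 * a + w1 * b + w2 * c
                                        for a, b, c in zip(c0, c1, c2))
    return covered
```

Production rasterizers iterate only over the triangle's bounding box and evaluate the edge functions incrementally, but the coverage test and interpolation are the same in principle.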
An important special case of 2D rasterization is [[Font rasterization|text rendering]], which requires careful anti-aliasing and rounding of coordinates to avoid distorting the [[letterform]]s and preserve spacing, density, and sharpness.{{r|Marschner2022|loc=9.1.1}}{{r|RasterTragedy}}

After 3D coordinates have been [[3D projection|projected]] onto the [[image plane]], rasterization is primarily a 2D problem, but the 3rd dimension necessitates ''[[Hidden-surface determination|hidden surface removal]]''. Early computer graphics used [[Computational geometry|geometric algorithms]] or ray casting to remove the hidden portions of shapes, or used the ''[[painter's algorithm]]'', which sorts shapes by depth (distance from camera) and renders them from back to front. Depth sorting was later avoided by incorporating depth comparison into the [[scanline rendering]] algorithm. The ''[[Z-buffering|z-buffer]]'' algorithm performs the comparisons indirectly by including a depth or "z" value in the [[framebuffer]]. A pixel is only covered by a shape if that shape's z value is lower (indicating closer to the camera) than the z value currently in the buffer. The z-buffer requires additional memory (an expensive resource at the time it was invented) but simplifies the rasterization code and permits multiple passes. Memory is now faster and more plentiful, and a z-buffer is almost always used for real-time rendering.{{r|Watkins1970}}{{r|Catmull1974}}{{r|Foley82|pp=553-570}}{{r|AkenineMöller2018|loc=2.5.2}}

A drawback of the basic [[Z-buffering|z-buffer algorithm]] is that each pixel ends up either entirely covered by a single object or filled with the background color, causing jagged edges in the final image. Early ''[[Spatial anti-aliasing|anti-aliasing]]'' approaches addressed this by detecting when a pixel is partially covered by a shape, and calculating the covered area.
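The z-buffer depth comparison described above can be sketched as follows. This is an illustrative Python fragment (the function and parameter names are hypothetical, not from any real API); as in the text, a lower z value is taken to mean closer to the camera.

```python
def draw_with_zbuffer(fragments, width, height, background=(0, 0, 0)):
    """Resolve visibility with a z-buffer.  `fragments` is an iterable of
    (x, y, z, color) tuples produced by rasterizing all shapes in any
    order; a fragment wins a pixel only if its z is lower (closer) than
    the depth already recorded there."""
    framebuffer = [[background] * width for _ in range(height)]
    zbuffer = [[float("inf")] * width for _ in range(height)]
    for x, y, z, color in fragments:
        if z < zbuffer[y][x]:         # depth test: closer than what is stored?
            zbuffer[y][x] = z         # record the new nearest depth
            framebuffer[y][x] = color
    return framebuffer
```

Note that the result is independent of the order in which shapes are drawn, which is exactly what the painter's algorithm had to achieve by sorting.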
The [[A-buffer]] (and other [[supersampling]] and [[Multisample anti-aliasing|multi-sampling]] techniques) solve the problem less precisely but with higher performance. For real-time 3D graphics, it has become common to use [[Fast approximate anti-aliasing|complicated heuristics]] (and even [[Deep learning anti-aliasing|neural networks]]) to perform anti-aliasing.{{r|Catmull1974}}{{r|Carpenter1984}}{{r|Marschner2022|loc=9.3}}{{r|AkenineMöller2018|loc=5.4.2}}

In 3D rasterization, color is usually determined by a ''[[Shader#Pixel shaders|pixel shader]]'' or ''fragment shader'', a small program that is run for each pixel. The shader does not (or cannot) directly access 3D data for the entire scene (this would be very slow, and would result in an algorithm similar to ray tracing), and a variety of techniques have been developed to render effects like [[Shadow mapping|shadows]] and [[Reflection (computer graphics)|reflections]] using only [[texture mapping]] and multiple passes.{{r|Marschner2022|loc=17.8}}

Older and more basic 3D rasterization implementations did not support shaders, and used simple shading techniques such as ''[[Shading#Flat shading|flat shading]]'' (lighting is computed once for each triangle, which is then rendered entirely in one color), ''[[Gouraud shading]]'' (lighting is computed using [[Normal (geometry)|normal vectors]] defined at vertices and then colors are interpolated across each triangle), or ''[[Phong shading]]'' (normal vectors are interpolated across each triangle and lighting is computed for each pixel).{{r|Marschner2022|loc=9.2}}

Until relatively recently, [[Pixar]] used rasterization for rendering its [[Computer animation|animated films]].
Unlike the renderers commonly used for real-time graphics, the [[Reyes rendering|Reyes rendering system]] in Pixar's [[Pixar RenderMan|RenderMan]] software was optimized for rendering very small (pixel-sized) polygons, and incorporated [[stochastic]] sampling techniques more typically associated with [[Ray tracing (graphics)|ray tracing]].{{r|Raghavachary2005|loc=2, 6.3}}{{r|Cook1987}}
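The three classic shading techniques mentioned earlier differ only in where lighting is evaluated and what is interpolated. The schematic Python sketch below (illustrative only, using a simple Lambertian diffuse term rather than any particular renderer's lighting model) makes the distinction concrete: flat shading lights a face once, Gouraud shading lights the vertices and interpolates the resulting intensities, and Phong shading interpolates the normals and lights each pixel.

```python
def lambert(normal, light_dir):
    """Diffuse (Lambertian) intensity for a unit normal and unit light direction."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def shade_flat(face_normal, light_dir):
    """Flat shading: one lighting computation per triangle."""
    return lambert(face_normal, light_dir)

def shade_gouraud(vertex_normals, light_dir, weights):
    """Gouraud shading: light each vertex, then interpolate the
    *intensities* across the triangle using barycentric weights."""
    return sum(w * lambert(n, light_dir)
               for w, n in zip(weights, vertex_normals))

def shade_phong(vertex_normals, light_dir, weights):
    """Phong shading: interpolate the *normals*, renormalize, then
    compute lighting per pixel with the interpolated normal."""
    nx, ny, nz = (sum(w * n[i] for w, n in zip(weights, vertex_normals))
                  for i in range(3))
    length = (nx * nx + ny * ny + nz * nz) ** 0.5 or 1.0
    return lambert((nx / length, ny / length, nz / length), light_dir)
```

Because Phong shading renormalizes and relights per pixel, it captures highlights that fall inside a triangle, which per-vertex Gouraud shading can miss entirely.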