== Inputs ==
Before a 3D scene or 2D image can be rendered, it must be described in a way that the rendering software can understand. Historically, inputs for both 2D and 3D rendering were usually [[text file]]s, which are easier than binary files for humans to edit and debug. For 3D graphics, text formats have largely been supplanted by more efficient [[binary file|binary formats]], and by [[API]]s which allow interactive applications to communicate directly with a rendering component without generating a file on disk (although a scene description is usually still created in memory prior to rendering).{{r|n=Raghavachary2005|loc=1.2, 3.2.6, 3.3.1, 3.3.7}}

Traditional rendering algorithms use geometric descriptions of 3D scenes or 2D images. Applications and algorithms that render [[Visualization (graphics)|visualizations]] of data scanned from the real world, or scientific [[Computer simulation|simulations]], may require different types of input data.

The [[PostScript]] format (which is often credited with the rise of [[desktop publishing]]) provides a standardized, interoperable way to describe 2D graphics and [[page layout]]. The [[SVG|Scalable Vector Graphics (SVG)]] format is also text-based, and the [[PDF]] format uses the PostScript language internally. In contrast, although many 3D graphics file formats have been standardized (including text-based formats such as [[VRML]] and [[X3D]]), different rendering applications typically use formats tailored to their needs, and this has led to a proliferation of proprietary and open formats, with binary files being more common.{{r|n=Raghavachary2005|loc=3.2.3, 3.2.5, 3.3.7}}{{r|n=PSRef|p=vii}}{{r|n=MDN_SVG}}{{r|n=Hughes2014|loc=16.5.2.}}{{r|n=BlenderImportExport}}

=== 2D vector graphics ===
A [[vector graphics]] image description may include (see the sketch after this list):{{r|n=PSRef}}{{r|n=MDN_SVG}}
* [[Cartesian coordinate system|Coordinates]] and [[curvature]] information for [[line segments]], [[Circular arc|arcs]], and [[Bézier curve]]s (which may be used as boundaries of filled shapes)
* Center coordinates, width, and height (or [[Minimum bounding rectangle|bounding rectangle]] coordinates) of [[geometric primitive|basic]] shapes such as [[rectangle]]s, [[circle]]s and [[ellipse]]s
* Color, width and pattern (such as dashed or dotted) for rendering lines
* Colors, patterns, and [[Color gradient|gradients]] for filling shapes
* [[Bitmap]] image data (either embedded or in an external file) along with scale and position information
* [[Font rasterization|Text to be rendered]] (along with size, position, orientation, color, and font)
* [[Clipping (computer graphics)|Clipping]] information, if only part of a shape or bitmap image should be rendered
* Transparency and [[compositing]] information for rendering overlapping shapes
* [[Color space]] information, allowing the image to be rendered consistently on different displays and printers
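The sketch below, a hypothetical example rather than output from any particular application, builds such a description by hand: a short Python script that writes a tiny SVG file containing a filled rectangle, a dashed circle outline, a cubic Bézier curve, and a line of text. The SVG element and attribute names are standard; the coordinates, colors, and file name are invented for illustration.

<syntaxhighlight lang="python">
# Minimal sketch: writing a tiny SVG (2D vector graphics) description by hand.
# The shapes, coordinates, and colors below are invented for illustration.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">
  <!-- basic shape: corner coordinates plus width/height and a fill color -->
  <rect x="10" y="10" width="80" height="40" fill="steelblue"/>
  <!-- basic shape: center coordinates and radius, with a dashed outline -->
  <circle cx="150" cy="30" r="20" fill="none"
          stroke="black" stroke-width="2" stroke-dasharray="4 2"/>
  <!-- cubic Bezier curve defined by endpoint and control-point coordinates -->
  <path d="M 10 100 C 60 60, 140 140, 190 100" stroke="crimson" fill="none"/>
  <!-- text to be rendered, with position, font, and size information -->
  <text x="10" y="70" font-family="sans-serif" font-size="12">Hello, SVG</text>
</svg>
"""
with open("example.svg", "w") as f:
    f.write(svg)
</syntaxhighlight>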
=== 3D geometry ===
A geometric scene description may include:{{r|n=Raghavachary2005|loc=Ch. 4-7, 8.7}}{{r|n=pbrt4FF}}
* Size, position, and orientation of [[geometric primitive]]s such as spheres and cones (which may be [[Constructive solid geometry|combined in various ways]] to create more complex objects)
* [[Vertex (geometry)|Vertex]] [[Cartesian coordinate system|coordinates]] and [[Normal (geometry)|surface normal]] [[Euclidean vector|vectors]] for [[Polygon mesh|meshes]] of triangles or polygons (often rendered as smooth surfaces by [[Subdivision surface|subdividing]] the mesh)
* [[Geometric transformation|Transformations]] for positioning, rotating, and scaling objects within a scene (allowing parts of the scene to use different local coordinate systems)
* "Camera" information describing how the scene is being viewed (position, direction, [[focal length]], and [[field of view]])
* Light information (location, type, brightness, and color)
* Optical properties of surfaces, such as [[albedo]], [[Surface roughness|roughness]], and [[refractive index]]
* Optical properties of media through which light passes (transparent solids, liquids, clouds, smoke), e.g. [[Absorption cross section|absorption]] and [[Cross section (physics)#Scattering of light|scattering]] cross sections
* [[Bitmap]] image data used as [[Texture mapping|texture maps]] for surfaces
* Small scripts or programs for generating complex 3D shapes or scenes [[Procedural generation|procedurally]]
* Description of how object and camera locations and other information change over time, for rendering an animation

Many file formats exist for storing individual 3D objects or "[[3D modeling|models]]". These can be imported into a larger scene, or loaded on demand by rendering software or games. A realistic scene may require hundreds of items like household objects, vehicles, and trees, and [[Environment artist|3D artists]] often use large libraries of models. In game production, these models (along with other data such as textures, audio files, and animations) are referred to as "[[Digital asset|assets]]".{{r|n=BlenderImportExport}}{{r|n=Dunlop2014|loc=Ch. 4}}

=== Volumetric data ===
Scientific and engineering [[Visualization (graphics)|visualization]] often requires rendering [[voxel|volumetric data]] generated by 3D scans or [[Computer simulation|simulations]]. Perhaps the most common source of such data is medical [[CT scan|CT]] and [[Magnetic resonance imaging|MRI]] scans, which need to be rendered for diagnosis. Volumetric data can be extremely large, and requires [[OpenVDB|specialized data formats]] to store it efficiently, particularly if the volume is ''[[Sparse matrix|sparse]]'' (with empty regions that do not contain data).{{r|n=AkenineMöller2018|loc=14.3.1}}{{r|n=OpenVDBAbout}}{{r|n=Museth2013}}

Before rendering, [[level set]]s for volumetric data can be extracted and converted into a mesh of triangles, e.g. by using the [[marching cubes]] algorithm (a minimal sketch follows below). Algorithms have also been developed that work directly with volumetric data, for example to render realistic depictions of the way light is scattered and absorbed by clouds and smoke, and this type of volumetric rendering is used extensively in visual effects for movies. When rendering lower-resolution volumetric data without interpolation, the individual cubes or "[[voxel]]s" may be visible, an effect sometimes used deliberately for game graphics.{{r|n=Bridson2015|loc=4.6}}{{r|n=AkenineMöller2018|loc=13.10, Ch. 14, 16.1}}
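As a minimal sketch of that level-set conversion (assuming the third-party NumPy and scikit-image libraries; the grid resolution and sphere radius are arbitrary choices), the following samples a signed distance field on a voxel grid and extracts its zero level set as a triangle mesh with marching cubes:

<syntaxhighlight lang="python">
# Minimal sketch: converting volumetric data to a triangle mesh with
# marching cubes. Assumes numpy and scikit-image are installed; the
# grid resolution and sphere radius are arbitrary choices.
import numpy as np
from skimage import measure

n = 64  # voxel grid resolution
axis = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

# Signed distance to a sphere of radius 0.5 (negative inside the surface).
volume = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Extract the zero level set as a mesh of triangles.
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.0)
print(f"{len(verts)} vertices, {len(faces)} triangles")
</syntaxhighlight>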
=== Photogrammetry and scanning ===
Photographs of real-world objects can be incorporated into a rendered scene by using them as [[Texture mapping|textures]] for 3D objects. Photos of a scene can also be stitched together to create [[panorama|panoramic images]] or [[Reflection mapping|environment maps]], which allow the scene to be rendered very efficiently but only from a single viewpoint. Scanning of real objects and scenes using [[Structured-light 3D scanner|structured light]] or [[lidar]] produces [[point cloud]]s consisting of the coordinates of millions of individual points in space, sometimes along with color information. These point clouds may either be rendered directly or [[Point cloud#Conversion to 3D surfaces|converted into meshes]] before rendering. (Note: "point cloud" sometimes also refers to a minimalist rendering style that can be used for any 3D geometry, similar to wireframe rendering.){{r|n=AkenineMöller2018|loc=13.3, 13.9}}{{r|n=Raghavachary2005|loc=1.3}}

=== Neural approximations and light fields ===
A more recent, experimental approach describes scenes using [[neural radiance field|radiance fields]], which define the color, intensity, and direction of incoming light at each point in space. (This is conceptually similar to, but not identical to, the [[light field]] recorded by a [[Holography|hologram]].) For any useful resolution, the amount of data in a radiance field is so large that it is impractical to represent it directly as volumetric data, and an [[approximation]] function must be found. [[Deep learning|Neural networks]] are typically used to generate and evaluate these approximations, sometimes using video frames, or a collection of photographs of a scene taken at different angles, as "[[Training, validation, and test data sets#Training data set|training data]]".{{r|n=Schmid2023}}{{r|n=Mildenhall2020}}

Algorithms related to neural networks have recently been used to find approximations of a scene as [[Gaussian splatting|3D Gaussians]]. The resulting representation is similar to a [[point cloud]], except that it uses fuzzy, partially transparent blobs of varying dimensions and orientations instead of points. As with [[neural radiance field]]s, these approximations are often generated from photographs or video frames.{{r|n=Kerbl2023}}
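As a toy illustration of the radiance-field interface described above, the following sketch maps a 3D position and viewing direction to a color and volume density, the quantities a volume renderer integrates along each camera ray. The analytic "field" here (a glowing sphere with a fake view-dependent tint) is invented for illustration; in NeRF-style methods this function is instead a trained neural network.

<syntaxhighlight lang="python">
# Toy sketch of a radiance-field query: position + view direction
# -> (RGB color, volume density). The analytic "field" below is
# invented for illustration; in practice it is a trained network.
import numpy as np

def radiance_field(position, direction):
    """Return (rgb, density) for a point and viewing direction."""
    r = np.linalg.norm(position)
    density = 5.0 if r < 0.5 else 0.0  # opaque inside a sphere
    # View-dependent tint: brighter when viewed head-on (fake specularity).
    facing = abs(np.dot(position / max(r, 1e-9), direction))
    rgb = np.array([0.9, 0.4, 0.2]) * (0.5 + 0.5 * facing)
    return rgb, density

# Example query: a point inside the sphere, viewed along the -z axis.
color, sigma = radiance_field(np.array([0.2, 0.0, 0.1]),
                              np.array([0.0, 0.0, -1.0]))
print(color, sigma)
</syntaxhighlight>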