== Concept ==
[[File:Simple raycasting with fisheye correction.gif|thumb|400px|right|Demonstration of a ray casting sweep through a video game level]]
Ray casting is the most basic of many [[computer graphics]] [[rendering (computer graphics)|rendering]] algorithms that use the geometric algorithm of [[Ray tracing (graphics)|ray tracing]]. Ray tracing-based rendering algorithms operate in [[Image and object order rendering|image order]] to render three-dimensional scenes to two-dimensional images. Geometric [[Ray (optics)|rays]] are traced from the [[projective geometry|eye]] of the observer to sample the light ([[radiance]]) travelling toward the observer from the ray direction. The speed and simplicity of ray casting come from computing the color of the light without recursively tracing additional rays that sample the radiance incident on the point that the ray hit. This eliminates the possibility of accurately rendering [[Reflection (physics)|reflections]], [[refraction]]s, or the natural falloff of [[shadow]]s; however, all of these effects can be faked to a degree by creative use of [[Texture (computer graphics)|texture]] maps or other methods. The high speed of calculation made ray casting a handy [[#Ray_casting_in_early_computer_games|rendering method in early real-time 3D video games]].

The idea behind ray casting is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen door, with each square in the screen being a pixel; the closest object is then the one the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models.

One important advantage ray casting offered over older [[scanline rendering|scanline algorithms]] was its ability to easily deal with non-planar surfaces and solids, such as [[Cone (geometry)|cones]] and [[sphere]]s. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using [[solid modelling]] techniques and easily rendered.

From the abstract of the paper "Ray Casting for Modeling Solids":<ref>{{cite web |author1=Scott D Roth |title=Ray casting for modeling solids |url=https://www.sciencedirect.com/science/article/abs/pii/0146664X82901691 |website=Science Direct |publisher=Elsevier |access-date=20 January 2025 |ref=ref-ray-casting-for-modeling-solids-1982 |pages=109–144 |language=English |doi=10.1016/0146-664X(82)90169-1 |date=1982}}</ref>

<blockquote style="font-style:italic">
To visualize and analyze the composite solids modeled, virtual light rays are cast as probes. By virtue of its simplicity, ray casting is reliable and extensible. The most difficult mathematical problem is finding line-surface intersection points. So, surfaces as planes, quadrics, tori, and probably even parametric surface patches may bound the primitive solids. The adequacy and efficiency of ray casting are issues addressed here. A fast picture generation capability for interactive modeling is the biggest challenge.
</blockquote>

[[File:Camera models.jpg|left|500x200px|Camera models]]
Light rays and the camera geometry form the basis for all geometric reasoning here. This figure shows a pinhole camera model for the perspective effect in image processing and a parallel camera model for mass analysis. The simple pinhole camera model consists of a focal point (or eye point) and a square pixel array (or screen). Straight light rays pass through the pixel array to connect the focal point with the scene, one ray per pixel. To shade pictures, the rays' intensities are measured and stored as pixels. The reflecting surface responsible for a pixel's value intersects the pixel's ray. When the focal length, the distance between the focal point and the screen, is infinite, the view is called "parallel" because all light rays are then parallel to each other, perpendicular to the screen. Although the perspective view is natural for making pictures, some applications need rays that can be uniformly distributed in space.

=== Concept Model ===
[[File:Cameras local coordinate system.jpg|thumb|Camera local coordinate system with the "screen" in the Z=0 plane]]
For modeling convenience, a typical standard coordinate system for the camera has the screen in the X–Y plane, the scene in the +Z half space, and the focal point on the −Z axis.

A <dfn style="font-style:italic">ray</dfn> is simply a straight line in the 3D space of the camera model. It is best defined in parameterized form as a point vector (X<sub>0</sub>, Y<sub>0</sub>, Z<sub>0</sub>) and a direction vector (D<sub>x</sub>, D<sub>y</sub>, D<sub>z</sub>). In this form, points on the line are ordered and accessed via a single parameter <var>t</var>. For every value of <var>t</var>, a corresponding point (<var>X</var>, <var>Y</var>, <var>Z</var>) on the line is defined:

: X = X<sub>0</sub> + <var>t</var> · D<sub>x</sub>
: Y = Y<sub>0</sub> + <var>t</var> · D<sub>y</sub>
: Z = Z<sub>0</sub> + <var>t</var> · D<sub>z</sub>

If the direction vector is normalized, then the parameter <var>t</var> is the distance along the line. The vector can be normalized easily with the following computation:

: Dist = √(D<sub>x</sub><sup>2</sup> + D<sub>y</sub><sup>2</sup> + D<sub>z</sub><sup>2</sup>)
: D′<sub>x</sub> = D<sub>x</sub> / Dist
: D′<sub>y</sub> = D<sub>y</sub> / Dist
: D′<sub>z</sub> = D<sub>z</sub> / Dist
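As a concrete illustration, the parameterized form and the normalization step translate directly into a few lines of Python. This is only a sketch; the helper names are chosen here for illustration and do not come from the cited paper:

<syntaxhighlight lang="python">
import math

def normalize(d):
    """Scale the direction vector (Dx, Dy, Dz) to unit length so that
    the ray parameter t measures distance along the line."""
    dx, dy, dz = d
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / dist, dy / dist, dz / dist)

def point_on_ray(origin, direction, t):
    """Evaluate (X, Y, Z) = (X0 + t*Dx, Y0 + t*Dy, Z0 + t*Dz)."""
    x0, y0, z0 = origin
    dx, dy, dz = direction
    return (x0 + t * dx, y0 + t * dy, z0 + t * dz)

# Example: a unit-length ray starting at the focal point on the -Z axis.
d = normalize((0.5, 0.25, 1.0))
print(point_on_ray((0.0, 0.0, -1.0), d, 2.0))
</syntaxhighlight>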
Given geometric definitions of the objects, each bounded by one or more surfaces, the result of computing one ray's intersection with all bounded surfaces in the scene is defined by two arrays:

: '''Ray parameters:''' ''t''[1], ''t''[2], …, ''t''[n]
: '''Surface pointers:''' S[1], S[2], …, S[n]

where <var>n</var> is the number of ray–surface intersections. The ordered list of ray parameters, <var>t</var>[i], denotes the enter–exit points: the ray enters a solid at point <var>t</var>[1], exits at <var>t</var>[2], enters a solid again at <var>t</var>[3], etc. Point <var>t</var>[1] is closest to the camera and <var>t</var>[n] is farthest. In association with the ray parameters, the surface pointers contain a unique address for the intersected surface's information. The surface can have various properties such as color, specularity, transparency with/without refraction, translucency, etc. The solid associated with the surface may have its own physical properties, such as density. This could be useful, for instance, when an object consists of an assembly of different materials and the overall center of mass and moments of inertia are of interest.
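The following is a minimal sketch of how these two arrays can be built, assuming a scene bounded only by spheres (Roth's system handled planes, quadrics, and tori as well; the dictionary-based surface records here are hypothetical, chosen for illustration):

<syntaxhighlight lang="python">
import math

def ray_sphere_intersections(origin, direction, center, radius):
    """Return the ray parameters t at which a normalized ray enters and
    exits a sphere, by solving |O + t*D - C|^2 = r^2 (a quadratic in t)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = center
    lx, ly, lz = ox - cx, oy - cy, oz - cz   # vector from center to origin
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * c                   # a = 1 for a normalized direction
    if disc < 0.0:
        return []                            # the ray misses the sphere
    s = math.sqrt(disc)
    return [(-b - s) / 2.0, (-b + s) / 2.0]

def cast_ray(origin, direction, scene):
    """Build the two parallel arrays described above: ray parameters t[i]
    sorted front to back, paired with pointers to the surfaces S[i]."""
    hits = []
    for surface in scene:
        for t in ray_sphere_intersections(origin, direction,
                                          surface["center"], surface["radius"]):
            if t > 0.0:                      # keep intersections in front of the camera
                hits.append((t, surface))
    hits.sort(key=lambda h: h[0])            # t[1] closest, t[n] farthest
    return hits

# Example: one ray cast into a scene of two spheres; it enters and exits
# the near sphere (t = 4, 6), then the far one (t = 7, 11).
scene = [{"center": (0.0, 0.0, 5.0), "radius": 1.0, "color": "red"},
         {"center": (0.0, 0.0, 9.0), "radius": 2.0, "color": "blue"}]
for t, surface in cast_ray((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene):
    print(t, surface["color"])
</syntaxhighlight>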
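Putting the pieces together, a complete ray caster generates one ray per pixel and shades each pixel from the nearest intersection, as in the screen-door description above. The sketch below reuses the hypothetical <code>normalize</code> and <code>cast_ray</code> helpers from the previous examples; a real renderer would apply a shading model to the nearest surface rather than simply copying its color:

<syntaxhighlight lang="python">
def render(width, height, focal_point, scene, background="black"):
    """Cast one ray per pixel of a width x height screen in the Z=0 plane
    and shade each pixel from the nearest intersected surface."""
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            # Map the pixel to a point on the screen, centered on the Z axis.
            screen_point = (px - width / 2.0 + 0.5,
                            py - height / 2.0 + 0.5,
                            0.0)
            # The ray runs from the focal point through the pixel into +Z.
            direction = normalize((screen_point[0] - focal_point[0],
                                   screen_point[1] - focal_point[1],
                                   screen_point[2] - focal_point[2]))
            hits = cast_ray(screen_point, direction, scene)
            # t[1], the first entry point, is the surface the eye sees.
            row.append(hits[0][1]["color"] if hits else background)
        image.append(row)
    return image

# Example: a small image of the two-sphere scene above, viewed from
# a focal point on the -Z axis.
# image = render(8, 8, (0.0, 0.0, -4.0), scene)
</syntaxhighlight>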