==Direct volume rendering==
A direct volume renderer<ref name=_188>Marc Levoy, "Display of Surfaces from Volume Data", [[IEEE]] CG&A, May 1988. [http://graphics.stanford.edu/papers/volume-cga88/ Archive of Paper]</ref><ref name="dch88">{{Cite journal|doi = 10.1145/378456.378484|title = Volume rendering|year = 1988|last1 = Drebin|first1 = Robert A.|last2 = Carpenter|first2 = Loren|last3 = Hanrahan|first3 = Pat|journal = ACM SIGGRAPH Computer Graphics|volume = 22|issue = 4|pages = 65–74}} {{Cite book|doi = 10.1145/54852.378484|title = Proceedings of the 15th annual conference on Computer graphics and interactive techniques - SIGGRAPH '88|year = 1988|last1 = Drebin|first1 = Robert A.|last2 = Carpenter|first2 = Loren|last3 = Hanrahan|first3 = Pat|isbn = 978-0897912754|pages = 65| s2cid=17982419 }}</ref> requires every sample value to be mapped to an opacity and a color. This is done with a "[[transfer function]]", which can be a simple ramp, a [[piecewise linear function]] or an arbitrary table. Once converted to an [[RGBA color model]] (red, green, blue, alpha) value, the composed RGBA result is projected onto the corresponding pixel of the frame buffer. The way this is done depends on the rendering technique.

A combination of these techniques is possible. For instance, a shear warp implementation could use texturing hardware to draw the aligned slices in the [[off-screen buffer]].

===Volume ray casting===
{{Main|Volume ray casting}}
[[Image:Croc.5.3.10.a gb1.jpg|thumb|300px|Volume ray casting. Crocodile mummy provided by the Phoebe A. Hearst Museum of Anthropology, UC Berkeley. CT data was acquired by Rebecca Fahrig, Department of Radiology, Stanford University, using a Siemens SOMATOM Definition, Siemens Healthcare. The image was rendered by Fovia's High Definition Volume Rendering® engine.]]
The technique of volume ray casting can be derived directly from the [[rendering equation]].
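As a sketch of that derivation (the notation here is an assumption, not taken from this article): under the common emission–absorption model, with <math>c(s)</math> the emitted color and <math>\tau(s)</math> the extinction coefficient at the sample point <math>s(t)</math> along the ray, the color reaching the eye along a ray of length <math>D</math> is

```latex
C = \int_0^{D} c\bigl(s(t)\bigr)\,\tau\bigl(s(t)\bigr)\,
    \exp\!\left(-\int_0^{t} \tau\bigl(s(u)\bigr)\,du\right) dt
```

Discretizing this integral at a fixed sample spacing gives the familiar per-ray sampling-and-compositing loop.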
It yields results of very high quality and is usually considered to provide the best image quality. Volume ray casting is classified as an image-based volume rendering technique, as the computation emanates from the output image rather than from the input volume data, as is the case with object-based techniques. In this technique, a ray is generated for each desired image pixel. Using a simple camera model, the ray starts at the center of projection of the camera (usually the eye point) and passes through the image pixel on the imaginary image plane floating between the camera and the volume to be rendered. The ray is clipped by the boundaries of the volume in order to save time. The ray is then sampled at regular or adaptive intervals throughout the volume. At each sample point the data is interpolated, the transfer function is applied to form an RGBA sample, and the sample is composited onto the accumulated RGBA of the ray; the process repeats until the ray exits the volume. The RGBA color is then converted to an RGB color and deposited in the corresponding image pixel. The process is repeated for every pixel on the screen to form the completed image.

===Splatting===
{{main|Gaussian splatting}}
This is a technique which trades quality for speed. Here, every volume element is [[Texture splatting|splatted]], in Lee Westover's words, like a snowball, onto the viewing surface in back-to-front order. These splats are rendered as disks whose properties (color and transparency) vary diametrically in a normal ([[Gaussian distribution|Gaussian]]) manner.
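As a rough illustration of that idea (a hypothetical sketch, not Westover's implementation): each voxel, already projected and sorted back to front, is drawn as a disk-shaped footprint whose weight falls off radially in a Gaussian manner, blended with the "over" operator.

```python
import numpy as np

def gaussian_footprint(radius, sigma):
    """Precompute a splat footprint: a disk whose weight falls off
    radially in a Gaussian manner (Westover-style splatting)."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    w = np.exp(-r2 / (2.0 * sigma**2))
    w[r2 > radius**2] = 0.0          # clip the kernel to a disk
    return w

def splat(image, voxels, footprint):
    """Composite projected voxels (sorted back to front) onto the image.
    Each voxel is (x, y, rgb, alpha) in image coordinates."""
    r = footprint.shape[0] // 2
    for x, y, rgb, a in voxels:
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                w = footprint[dy + r, dx + r] * a
                px, py = x + dx, y + dy
                if 0 <= px < image.shape[1] and 0 <= py < image.shape[0]:
                    # back-to-front "over" compositing
                    image[py, px] = w * np.asarray(rgb) + (1.0 - w) * image[py, px]
    return image
```

Real implementations precompute the footprint once per view and rasterize it with the graphics hardware rather than in a Python loop; the structure, however, is the same.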
Flat disks, and disks with other kinds of property distribution, are also used depending on the application.<ref name=splatting>{{cite web|last=Westover|first=Lee Alan|title=SPLATTING: A Parallel, Feed-Forward Volume Rendering Algorithm|url=http://www.cs.unc.edu/techreports/91-029.pdf|archive-url=https://web.archive.org/web/20140222005427/http://www.cs.unc.edu/techreports/91-029.pdf|url-status=dead|archive-date=February 22, 2014|access-date=28 June 2012|date=July 1991}}</ref><ref name=fastsplat>{{cite web|last=Huang|first=Jian|title=Splatting|url=http://web.eecs.utk.edu/~huangj/CS594S02/splatting.ppt|access-date=5 August 2011|format=PPT|date=Spring 2002}}</ref>

===Shear warp===
[[Image:volRenderShearWarp.gif|thumb|250px|Example of a mouse skull (CT) rendering using the shear warp algorithm]]
The shear warp approach to volume rendering was developed by Cameron and Undrill, and popularized by Philippe Lacroute and [[Marc Levoy]].<ref>{{Cite book|url = http://graphics.stanford.edu/papers/shear/|publisher = ACM|date = 1994-01-01|location = New York, NY, USA|isbn = 978-0897916677|pages = 451–458|doi = 10.1145/192161.192283|first1 = Philippe|last1 = Lacroute|first2 = Marc|last2 = Levoy| title=Proceedings of the 21st annual conference on Computer graphics and interactive techniques - SIGGRAPH '94 | chapter=Fast volume rendering using a shear-warp factorization of the viewing transformation |citeseerx = 10.1.1.75.7117| s2cid=1266012 }}</ref> In this technique, the [[viewing transformation]] is factored such that the nearest face of the volume becomes axis-aligned with an off-screen image [[data buffer]] that has a fixed scale of voxels to pixels. The volume is then rendered into this buffer using the far more favorable memory alignment and fixed scaling and blending factors. Once all slices of the volume have been rendered, the buffer is warped into the desired orientation and scale in the displayed image.
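The shear step can be sketched as follows (a hypothetical illustration, not the Lacroute–Levoy code: maximum-intensity projection is substituted for the alpha blending described above, and the final 2D warp to screen space is left as a simple affine resample of the returned buffer).

```python
import numpy as np

def shear_warp_mip(volume, shear_x, shear_y):
    """Render axis-aligned slices into an intermediate buffer, shifting
    slice k by k * (shear_x, shear_y) pixels. Slices are traversed in
    storage order, giving the favorable memory access the method is
    known for. Maximum-intensity compositing stands in for alpha
    blending; the final 2D "warp" to the screen is omitted."""
    depth, h, w = volume.shape
    max_dx = int(abs(shear_x) * depth) + 1
    max_dy = int(abs(shear_y) * depth) + 1
    buf = np.zeros((h + 2 * max_dy, w + 2 * max_dx))
    for k in range(depth):
        dx = int(round(k * shear_x)) + max_dx
        dy = int(round(k * shear_y)) + max_dy
        buf[dy:dy + h, dx:dx + w] = np.maximum(buf[dy:dy + h, dx:dx + w],
                                               volume[k])
    return buf
```

Because every slice is blended at a fixed voxel-to-pixel scale, the inner loop is a plain 2D copy-and-blend; all perspective and rotation is deferred to the cheap final warp.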
This technique is relatively fast in software, at the cost of less accurate sampling and potentially worse image quality compared to ray casting. There is memory overhead for storing multiple copies of the volume, needed so that a nearly axis-aligned copy is available for any viewing direction. This overhead can be mitigated using [[run length encoding]].

===Texture-based volume rendering===
[[Image:CTSkullImage.png|250px|thumb|A volume rendered cadaver head using view-aligned [[texture mapping]] and [[diffuse reflection]]]]
Many 3D graphics systems use [[texture mapping]] to apply images, or textures, to geometric objects. Commodity PC [[graphics cards]] are fast at texturing and can efficiently render slices of a 3D volume with real-time interaction capabilities. [[Workstation]] [[GPU]]s are even faster, and are the basis for much of the production volume visualization used in [[medical imaging]], oil and gas, and other markets (as of 2007). In earlier years, dedicated 3D texture mapping systems were used on graphics systems such as the [[Silicon Graphics]] [[InfiniteReality]], the [[Hewlett-Packard|HP]] Visualize FX [[graphics accelerator]], and others. This technique was first described by [[Bill Hibbard]] and Dave Santek.<ref name=HS89>Hibbard W., Santek D., [http://www.ssec.wisc.edu/~billh/p39-hibbard.pdf "Interactivity is the key"], ''Chapel Hill Workshop on Volume Visualization'', University of North Carolina, Chapel Hill, 1989, pp. 39–43.</ref> The slices can either be aligned with the volume and rendered at an angle to the viewer, or aligned with the viewing plane and sampled from unaligned slices through the volume. Graphics hardware support for 3D textures is needed for the second technique. Volume-aligned texturing produces images of reasonable quality, though there is often a noticeable transition when the volume is rotated.
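The second (view-aligned) variant can be approximated in software to show what the hardware does (hypothetical names; `scipy.ndimage.map_coordinates` stands in for the GPU's trilinear 3D-texture fetch, and compositing of the returned slices is omitted):

```python
import numpy as np
from scipy.ndimage import map_coordinates  # software stand-in for a 3D texture fetch

def view_aligned_slices(volume, view_dir, n_slices, size):
    """Sample a volume on planes perpendicular to the view direction,
    emulating view-aligned 3D-texture slicing. Returns one 2D slice per
    plane, ordered back to front, ready for "over" compositing."""
    view_dir = np.asarray(view_dir, float)
    view_dir /= np.linalg.norm(view_dir)
    # build two axes spanning each slice plane
    up = np.array([0.0, 0.0, 1.0]) if abs(view_dir[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(view_dir, up); u /= np.linalg.norm(u)
    v = np.cross(view_dir, u)
    center = (np.array(volume.shape) - 1) / 2.0
    half = (min(volume.shape) - 1) / 2.0
    s = np.linspace(-half, half, size)
    uu, vv = np.meshgrid(s, s)
    slices = []
    for d in np.linspace(half, -half, n_slices):          # back to front
        pts = center + d * view_dir + uu[..., None] * u + vv[..., None] * v
        slices.append(map_coordinates(volume, pts.transpose(2, 0, 1),
                                      order=1, cval=0.0))  # trilinear, zero outside
    return slices
```

On real hardware the same planes are drawn as textured polygons and the blending unit performs the compositing, which is what makes the technique interactive on commodity GPUs.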