{{short description|Representing a 3D-modeled object or dataset as a 2D projection}}
{{for|[[Rendering (computer graphics)|rendering]] of 3D [[wire frame model]]s|3D rendering}}
{{3D computer graphics}}
[[File:Image of 3D volumetric QCT scan.jpg|thumb|Multiple [[CT scan|X-ray tomographs]] (with [[Quantitative computed tomography|quantitative mineral density calibration]]) stacked to form a 3D model]]
[[Image:CTWristImage.png|250px|thumb|Volume rendered [[Computed tomography|CT]] scan of a forearm with different color schemes for muscle, fat, bone, and blood]]
In [[scientific visualization]] and [[computer graphics]], '''volume rendering''' is a set of techniques used to display a 2D projection of a 3D discretely [[Sampling (signal processing)|sampled]] [[data set]], typically a 3D [[scalar field]].

A typical 3D data set is a group of 2D slice images acquired by a [[computed axial tomography|CT]], [[magnetic resonance imaging|MRI]], or [[Microtomography|MicroCT]] [[Image scanner|scanner]]. These slices are usually acquired at regular intervals (e.g., one slice per millimeter of depth) and usually have the same number of image [[pixel]]s in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or [[voxel]], represented by a single value that is obtained by sampling the immediate area surrounding the voxel.

To render a 2D projection of the 3D data set, one first needs to define a [[Virtual camera|camera]] in space relative to the volume. One also needs to define the [[opacity (optics)|opacity]] and color of every voxel. This is usually done using an [[RGBA color space|RGBA]] (red, green, blue, alpha) transfer function that defines the RGBA value for every possible voxel value.

For example, a volume may be viewed by extracting [[isosurface]]s (surfaces of equal values) from the volume and rendering them as [[Polygon mesh|polygonal meshes]], or by rendering the volume directly as a block of data.
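When a renderer needs the scalar field at a position between grid points, the value is commonly reconstructed by trilinear interpolation of the eight surrounding voxels. A minimal sketch in Python (the toy grid and query points are illustrative assumptions, not data from any particular scanner):

```python
import numpy as np

def trilinear_sample(volume, x, y, z):
    """Sample a regular voxel grid at a fractional position by
    trilinear interpolation of the eight surrounding voxels."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0
    # The eight corner voxels of the enclosing cell.
    c = volume[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2].astype(float)
    # Collapse one axis at a time: x, then y, then z.
    c = c[0] * (1 - fx) + c[1] * fx
    c = c[0] * (1 - fy) + c[1] * fy
    return c[0] * (1 - fz) + c[1] * fz

# Toy 2x2x2 scalar field whose value is x + y + z at each grid point.
vol = np.fromfunction(lambda i, j, k: i + j + k, (2, 2, 2))
print(trilinear_sample(vol, 0.5, 0.5, 0.5))  # 1.5: a linear field is reproduced exactly
```

Trilinear interpolation reproduces any linear field exactly, which is why it is the default reconstruction filter in most volume renderers.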
The [[marching cubes]] algorithm is a common technique for extracting an isosurface from volume data. Direct volume rendering is a computationally intensive task that may be performed in several ways. [[Ray marching]] is another method of volume rendering.

==Scope==
[[File:CT presentation as thin slice, projection and volume rendering.jpg|thumb|Types of presentations of [[CT scan]]s, with two examples of volume rendering]]
Volume rendering is distinguished from thin slice [[tomography]] presentations, and is also generally distinguished from projections of 3D models, including [[maximum intensity projection]].<ref name="FishmanNey2006">{{cite journal|author-link1=Elliot K. Fishman|last1=Fishman|first1=Elliot K.|last2=Ney|first2=Derek R.|last3=Heath|first3=David G.|last4=Corl|first4=Frank M.|last5=Horton|first5=Karen M.|last6=Johnson|first6=Pamela T.|title=Volume Rendering versus Maximum Intensity Projection in CT Angiography: What Works Best, When, and Why|journal=RadioGraphics|volume=26|issue=3|year=2006|pages=905–922|issn=0271-5333|doi=10.1148/rg.263055186|pmid=16702462|doi-access=free}}</ref> Still, technically, all volume renderings become projections when viewed on a [[Display device#Full-area 2-dimensional displays|2-dimensional display]], making the distinction between projections and volume renderings somewhat vague.
Nevertheless, models considered exemplary of volume rendering combine techniques such as coloring<ref name="SilversteinParsad2008">{{cite journal|last1=Silverstein|first1=Jonathan C.|last2=Parsad|first2=Nigel M.|last3=Tsirline|first3=Victor|title=Automatic perceptual color map generation for realistic volume visualization|journal=Journal of Biomedical Informatics|volume=41|issue=6|year=2008|pages=927–935|issn=1532-0464|doi=10.1016/j.jbi.2008.02.008|pmid=18430609|pmc=2651027}}</ref> and shading<ref>[https://books.google.com/books?id=zndnSzkfkXwC&pg=PA185 Page 185] in {{cite book|title=Vision, Modeling, and Visualization 2006: Proceedings, November 22-24|author=Leif Kobbelt|publisher=IOS Press|year=2006|isbn=9783898380812}}</ref> in order to create realistic and/or observable representations.

==Direct volume rendering==
A direct volume renderer<ref name=_188>Marc Levoy, "Display of Surfaces from Volume Data", [[IEEE]] CG&A, May 1988. [http://graphics.stanford.edu/papers/volume-cga88/ Archive of Paper]</ref><ref name="dch88">{{Cite journal|doi = 10.1145/378456.378484|title = Volume rendering|year = 1988|last1 = Drebin|first1 = Robert A.|last2 = Carpenter|first2 = Loren|last3 = Hanrahan|first3 = Pat|journal = ACM SIGGRAPH Computer Graphics|volume = 22|issue = 4|pages = 65–74}} {{Cite book|doi = 10.1145/54852.378484|title = Proceedings of the 15th annual conference on Computer graphics and interactive techniques - SIGGRAPH '88|year = 1988|last1 = Drebin|first1 = Robert A.|last2 = Carpenter|first2 = Loren|last3 = Hanrahan|first3 = Pat|isbn = 978-0897912754|pages = 65| s2cid=17982419 }}</ref> requires every sample value to be mapped to an opacity and a color. This is done with a "[[transfer function]]", which can be a simple ramp, a [[piecewise linear function]], or an arbitrary table. Once converted to an [[RGBA color model]] (red, green, blue, alpha) value, the composed RGBA result is projected onto the corresponding pixel of the frame buffer.
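As a sketch of the idea, a piecewise linear transfer function can be implemented as a lookup table that maps each scalar voxel value to an RGBA tuple. The control points below are illustrative assumptions (a rough "soft tissue translucent, bone opaque" mapping), not values from any particular system:

```python
import numpy as np

def build_transfer_function(control_points, n_values=256):
    """Build an RGBA lookup table from (value, r, g, b, a) control points
    by piecewise linear interpolation; one row per possible voxel value."""
    control_points = sorted(control_points)
    values = [p[0] for p in control_points]
    rgba = np.array([p[1:] for p in control_points], dtype=float)
    lut = np.empty((n_values, 4))
    for channel in range(4):
        lut[:, channel] = np.interp(np.arange(n_values), values, rgba[:, channel])
    return lut

# Hypothetical mapping for 8-bit data: low values fully transparent,
# mid-range reddish and translucent, high values opaque white.
tf = build_transfer_function([
    (0,   0.0, 0.0, 0.0, 0.0),
    (80,  0.8, 0.2, 0.2, 0.1),
    (255, 1.0, 1.0, 1.0, 1.0),
])
print(tf[0])    # fully transparent black
print(tf[255])  # opaque white
```

During rendering, each interpolated sample value indexes this table to obtain the RGBA contribution that is then composited into the pixel.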
The way this is done depends on the rendering technique. A combination of these techniques is possible. For instance, a shear warp implementation could use texturing hardware to draw the aligned slices in the [[off-screen buffer]].

===Volume ray casting===
{{Main|Volume ray casting}}
[[Image:Croc.5.3.10.a gb1.jpg|thumb|300px|Volume ray casting. Crocodile mummy provided by the Phoebe A. Hearst Museum of Anthropology, UC Berkeley. CT data was acquired by Rebecca Fahrig, Department of Radiology, Stanford University, using a Siemens SOMATOM Definition, Siemens Healthcare. The image was rendered by Fovia's High Definition Volume Rendering® engine.]]
The technique of volume ray casting can be derived directly from the [[rendering equation]]. It provides results of very high quality and is usually considered to give the best image quality. Volume ray casting is classified as an image-based volume rendering technique, as the computation emanates from the output image rather than from the input volume data, as is the case with object-based techniques. In this technique, a ray is generated for each desired image pixel. Using a simple camera model, the ray starts at the center of projection of the camera (usually the eye point) and passes through the image pixel on the imaginary image plane floating between the camera and the volume to be rendered. The ray is clipped by the boundaries of the volume in order to save time, and is then sampled at regular or adaptive intervals throughout the volume. The data is interpolated at each sample point, the transfer function is applied to form an RGBA sample, the sample is composited onto the accumulated RGBA of the ray, and the process is repeated until the ray exits the volume. The RGBA color is then converted to an RGB color and deposited in the corresponding image pixel. The process is repeated for every pixel on the screen to form the completed image.

===Splatting===
{{main|Gaussian splatting}}
This is a technique which trades quality for speed.
Here, every volume element is [[Texture splatting|splatted]], as Lee Westover put it, like a snowball, onto the viewing surface in back-to-front order. These splats are rendered as disks whose properties (color and transparency) vary diametrically in a normal ([[Gaussian distribution|Gaussian]]) manner. Flat disks and disks with other kinds of property distribution are also used, depending on the application.<ref name=splatting>{{cite web|last=Westover|first=Lee Alan|title=SPLATTING: A Parallel, Feed-Forward Volume Rendering Algorithm|url=http://www.cs.unc.edu/techreports/91-029.pdf|archive-url=https://web.archive.org/web/20140222005427/http://www.cs.unc.edu/techreports/91-029.pdf|url-status=dead|archive-date=February 22, 2014|access-date=28 June 2012|date=July 1991}}</ref><ref name=fastsplat>{{cite web|last=Huang|first=Jian|title=Splatting|url=http://web.eecs.utk.edu/~huangj/CS594S02/splatting.ppt|access-date=5 August 2011|format=PPT|date=Spring 2002}}</ref>

===Shear warp===
[[Image:volRenderShearWarp.gif|thumb|250px|Example of a mouse skull (CT) rendering using the shear warp algorithm]]
The shear warp approach to volume rendering was developed by Cameron and Undrill, and popularized by Philippe Lacroute and [[Marc Levoy]].<ref>{{Cite book|url = http://graphics.stanford.edu/papers/shear/|publisher = ACM|date = 1994-01-01|location = New York, NY, USA|isbn = 978-0897916677|pages = 451–458|doi = 10.1145/192161.192283|first1 = Philippe|last1 = Lacroute|first2 = Marc|last2 = Levoy| title=Proceedings of the 21st annual conference on Computer graphics and interactive techniques - SIGGRAPH '94 | chapter=Fast volume rendering using a shear-warp factorization of the viewing transformation |citeseerx = 10.1.1.75.7117| s2cid=1266012 }}</ref> In this technique, the [[viewing transformation]] is transformed such that the nearest face of the volume becomes axis-aligned with an off-screen image [[data buffer]] with a fixed scale of voxels to pixels.
The volume is then rendered into this buffer using the far more favorable memory alignment and fixed scaling and blending factors. Once all slices of the volume have been rendered, the buffer is warped into the desired orientation and scale in the displayed image.

This technique is relatively fast in software, at the cost of less accurate sampling and potentially worse image quality compared to ray casting. There is memory overhead for storing multiple copies of the volume, needed so that a near-axis-aligned copy is always available. This overhead can be mitigated using [[run length encoding]].

===Texture-based volume rendering===
[[Image:CTSkullImage.png|250px|thumb|A volume rendered cadaver head using view-aligned [[texture mapping]] and [[diffuse reflection]]]]
Many 3D graphics systems use [[texture mapping]] to apply images, or textures, to geometric objects. Commodity PC [[graphics cards]] are fast at texturing and can efficiently render slices of a 3D volume with real-time interaction capabilities. [[Workstation]] [[GPU]]s are even faster, and are the basis for much of the production volume visualization used in [[medical imaging]], oil and gas, and other markets (as of 2007). In earlier years, dedicated 3D texture mapping systems were used on graphics systems such as the [[Silicon Graphics]] [[InfiniteReality]], the [[Hewlett-Packard|HP]] Visualize FX [[graphics accelerator]], and others. This technique was first described by [[Bill Hibbard]] and Dave Santek.<ref name=HS89>Hibbard W., Santek D., [http://www.ssec.wisc.edu/~billh/p39-hibbard.pdf "Interactivity is the key"], ''Chapel Hill Workshop on Volume Visualization'', University of North Carolina, Chapel Hill, 1989, pp. 39–43.</ref>

These slices can either be aligned with the volume and rendered at an angle to the viewer, or aligned with the viewing plane and sampled from unaligned slices through the volume. Graphics hardware support for 3D textures is needed for the second technique.
Volume aligned texturing produces images of reasonable quality, though there is often a noticeable transition when the volume is rotated.

==Hardware-accelerated volume rendering==
Due to the extremely parallel nature of direct volume rendering, special purpose volume rendering hardware was a rich research topic before [[GPU]] volume rendering became fast enough. The most widely cited technology was the VolumePro real-time ray-casting system, developed by [[Hanspeter Pfister]] and scientists at [[Mitsubishi Electric Research Laboratories]],<ref name="phi99">{{Cite book|doi = 10.1145/311535.311563|last1 = Pfister|first1 = Hanspeter|last2 = Hardenbergh|first2 = Jan|last3 = Knittel|first3 = Jim|last4 = Lauer|first4 = Hugh|last5 = Seiler|first5 = Larry| title=Proceedings of the 26th annual conference on Computer graphics and interactive techniques - SIGGRAPH '99 | chapter=The VolumePro real-time ray-casting system | date=1999 |isbn = 978-0201485608|pages = 251–260|citeseerx = 10.1.1.471.9205| s2cid=7673547 }}</ref> which used high memory bandwidth and brute force to render using the ray casting algorithm. The technology was transferred to TeraRecon, Inc., and two generations of ASICs were produced and sold. The VP1000<ref name="VP1000">{{cite book |last1=Wu |first1=Yin |last2=Bhatia |first2=Vishal |last3=Lauer |first3=Hugh |last4=Seiler |first4=Larry |title=Proceedings of the 2003 symposium on Interactive 3D graphics |chapter=Shear-image order ray casting volume rendering |date=2003 |page=152 |doi=10.1145/641480.641510 |isbn=978-1581136456 |s2cid=14641432 }}</ref> was released in 2002 and the VP2000<ref>{{cite web |last1=TeraRecon |title=Product Announcement |url=https://www.healthimaging.com/topics/diagnostic-imaging/terarecon-its-beginning-end-dedicated-diagnostic-advanced-visualization |website=healthimaging.com |date=7 December 2006 |access-date=27 August 2018}}</ref> in 2007.
[[Pixel shader]]s are able to read and write randomly from video memory and perform some basic mathematical and logical calculations. These [[Single instruction, multiple data|SIMD]] processors were used to perform general calculations such as rendering polygons and signal processing. In recent [[GPU]] generations, the pixel shaders are able to function as [[Multiple instruction, multiple data|MIMD]] processors (able to branch independently), utilizing up to 1 GB of texture memory with floating point formats. With such power, virtually any algorithm with steps that can be performed in parallel, such as [[volume ray casting]] or [[tomographic reconstruction]], can be performed with tremendous acceleration. The programmable [[pixel shaders]] can be used to simulate variations in the characteristics of lighting, shadow, [[Reflection (computer graphics)|reflection]], emissive color and so forth. Such simulations can be written using high-level [[shading language]]s.

==Optimization techniques==
The primary goal of optimization is to skip as much of the volume as possible. A typical medical data set can be 1 GB in size. To render it at 30 frames per second requires an extremely fast memory bus. Skipping voxels means that less information needs to be processed.

===Empty space skipping===
Often, a volume rendering system will have a mechanism for identifying regions of the volume containing no visible material. This information can be used to avoid rendering these transparent regions.<ref name=shn03>Sherbondy A., Houston M., Napel S.: ''Fast volume segmentation with simultaneous visualization using programmable graphics hardware.'' In Proceedings of [[IEEE]] Visualization (2003), pp. 171–176.</ref>

===Early ray termination===
This is a technique used when the volume is rendered in front-to-back order. For a ray through a pixel, once sufficiently dense material has been encountered, further samples will make no significant contribution to the pixel and so may be neglected.
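Both optimizations can be sketched in one front-to-back compositing loop: transparent samples are skipped outright, and the loop terminates once the accumulated opacity makes further samples negligible. The sample values and the 0.99 threshold below are illustrative assumptions:

```python
def composite_ray(samples, alpha_threshold=0.99):
    """Front-to-back 'over' compositing of (r, g, b, a) samples along a ray,
    stopping early once the accumulated opacity is effectively 1."""
    acc_rgb = [0.0, 0.0, 0.0]
    acc_alpha = 0.0
    for r, g, b, a in samples:
        if a == 0.0:
            continue  # empty space skipping: a transparent sample contributes nothing
        weight = (1.0 - acc_alpha) * a
        acc_rgb = [c + weight * s for c, s in zip(acc_rgb, (r, g, b))]
        acc_alpha += weight
        if acc_alpha >= alpha_threshold:
            break  # early ray termination: the pixel is already effectively opaque
    return acc_rgb, acc_alpha

# A fully opaque red sample hides everything behind it.
rgb, alpha = composite_ray([(0, 0, 0, 0.0), (1, 0, 0, 1.0), (0, 1, 0, 1.0)])
print(rgb, alpha)  # [1.0, 0.0, 0.0] 1.0
```

In a production renderer the same skip and break decisions are made per ray on the GPU, often guided by precomputed min/max information per block of voxels rather than per sample.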
===Octree and BSP space subdivision===
The use of hierarchical structures such as the [[octree]] and the [[Binary space partitioning|BSP]] tree can be very helpful both for compression of the volume data and for speed optimization of the volumetric ray casting process.

===Volume segmentation===
{{Anchor|Bone removal}}[[File:CT angiography of the head without and with bone removal.jpg|thumb|Volume segmentation includes automatic bone removal, such as that used in the right image in this [[Computed tomography angiography|CT angiography]].]]
[[File:3D CT of thorax.jpg|thumb|Volume segmentation of a 3D-rendered [[CT scan]] of the [[thorax]]: The anterior thoracic wall, the airways and the pulmonary vessels anterior to the root of the lung have been digitally removed in order to visualize thoracic contents: <br>- {{color|blue|blue}}: [[pulmonary arteries]] <br>- {{color|red|red}}: [[pulmonary veins]] (and also the [[abdominal wall]])<br>- {{color|yellow|yellow}}: the [[mediastinum]] <br>- {{color|violet|violet}}: the [[Thoracic diaphragm|diaphragm]] ]]
[[File:Visible-human-leonardo-1.jpg|thumb|Visualization of the inner organs from the segmented [[Visible Human Project|Visible Human]] data set rendered by [[Voxel-Man]], alongside a drawing by [[Leonardo da Vinci]] (1998)]]
[[Image segmentation]] is a manual or automatic procedure that can be used to section out large portions of the volume that one considers uninteresting before rendering; this can significantly reduce the amount of calculation that has to be made by ray casting or texture blending. The reduction can be as much as from O(n) to O(log n) for n sequentially indexed voxels. Volume segmentation also has significant performance benefits for other [[ray tracing (graphics)|ray tracing]] algorithms. It can subsequently be used to highlight or expose<ref>Tiede U., Schiemann T., Hoehne K.: ''High quality rendering of attributed volume data.'' In Proceedings of [[IEEE]] Visualization (1998), pp.
255–262.</ref> structures of interest.

===Multiple and adaptive resolution representation===
By representing less interesting regions of the volume at a coarser resolution, the data input overhead can be reduced. On closer observation, the data in these regions can be populated either by reading from memory or disk, or by [[interpolation]]. The coarser-resolution volume is resampled to a smaller size in the same way as a 2D mipmap image is created from the original. These smaller volumes are also used by themselves while rotating the volume to a new orientation.

===Pre-integrated volume rendering===
Pre-integrated volume rendering<ref name=mhc90>Max N., Hanrahan P., Crawfis R.: ''[http://www.sci.utah.edu/~jmk/papers/VolViz90-MAX.pdf Area and volume coherence for efficient visualization of 3D scalar functions].'' In Computer Graphics (San Diego Workshop on Volume Visualization, 1990) vol. 24, pp. 27–33.</ref> is a method that can reduce sampling artifacts by pre-computing much of the required data. It is especially useful in hardware-accelerated applications<ref name="eke01">{{Cite book|doi = 10.1145/383507.383515|last1 = Engel|first1 = Klaus|last2 = Kraus|first2 = Martin|last3 = Ertl|first3 = Thomas| title=Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware | chapter=High-quality pre-integrated volume rendering using hardware-accelerated pixel shading | date=2001 |isbn = 978-1581134070|pages = 9–16|citeseerx = 10.1.1.458.1814| s2cid=14409951 }}</ref><ref name=lwm04>Lum E., Wilson B., Ma K.: ''[http://www.cg.tuwien.ac.at/courses/Seminar/WS2004/papers/%5BLum2004%5D%20High-Quality%20Lighting%20and%20Efficient%20Pre-Integration%20for%20Volume%20Rendering.pdf High-Quality Lighting and Efficient Pre-Integration for Volume Rendering].'' In Eurographics/[[IEEE]] Symposium on Visualization 2004.</ref> because it improves quality without a large performance impact. Unlike most other optimizations, this does not skip voxels.
Rather, it reduces the number of samples needed to accurately display a region of voxels. The idea is to render the intervals between the samples instead of the samples themselves. This technique captures rapidly changing material, for example the transition from muscle to bone, with much less computation.

===Image-based meshing===
[[Image-based meshing]] is the automated process of creating computer models from 3D image data (such as [[MRI]], [[X-ray computed tomography|CT]], [[industrial CT scanning|Industrial CT]] or [[microtomography]]) for computational analysis and design, e.g. CAD, CFD, and FEA.

===Temporal reuse of voxels===
For a complete display view, only one voxel per pixel (the front one) is required to be shown (although more can be used for smoothing the image). If animation is needed, the front voxels to be shown can be cached, and their location relative to the camera can be recalculated as it moves. Where display voxels become too far apart to cover all the pixels, new front voxels can be found by ray casting or similar, and where two voxels fall in one pixel, the front one can be kept.

== List of related software ==
; Open source
* [[3D Slicer]] – a software package for scientific visualization and image analysis
* [[ClearVolume]] – a GPU ray-casting based, live 3D visualization library designed for high-end volumetric light sheet microscopes.
* [[ParaView]] – a cross-platform, large data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques. ParaView is built on VTK (below).
* [[Studierfenster|Studierfenster (StudierFenster)]] – a free, non-commercial Open Science client/server-based Medical Imaging Processing (MIP) online framework.
* [[Vaa3D]] – a 3D, 4D and 5D volume rendering and image analysis platform for gigabytes and terabytes of large images (based on OpenGL), especially in the microscopy image field.
Also cross-platform, with Mac, Windows, and Linux versions; it includes a comprehensive plugin interface and 100 plugins for image analysis, and can also render multiple types of surface objects.
* [[VisIt]] – a cross-platform interactive parallel visualization and graphical analysis tool for viewing scientific data.
* [[Volume cartography]] – open source software used in recovering the [[En-Gedi Scroll]].
* [[Voreen]] – a cross-platform rapid application development framework for the interactive visualization and analysis of multi-modal volumetric data sets. It provides GPU-based volume rendering and data analysis techniques.
* [[VTK]] – a general-purpose C++ toolkit for data processing, visualization, 3D interaction, and computational geometry, with Python and Java bindings. VTK.js provides a JavaScript implementation.
; Commercial
* [[Amira (Software)|Amira]] – 3D visualization and analysis software for scientists and researchers (in life sciences and biomedicine)
* [[Imaris]] – a scientific software module that delivers all the necessary functionality for data management, visualization, analysis, segmentation and interpretation of 3D and 4D microscopy datasets
* [[MeVisLab]] – cross-platform software for medical image processing and visualization (based on OpenGL and Open Inventor)
* [[Open Inventor]] – a high-level 3D API for 3D graphics software development (C++, .NET, Java)
* [[ScanIP]] – an image processing and [[image-based meshing]] platform that can render scan data (MRI, CT, Micro-CT...) in 3D directly after import.
[[Image:V3d-display 01.png|thumb|250px|Example of a fly brain rendered with its compartments' surface models using Vaa3D]]
* [[tomviz]] – a 3D visualization platform for scientists and researchers that can utilize Python scripts for advanced 3D data processing.
* [[VoluMedic]] – volume slicing and rendering software

== See also ==
{{commons category}}
* [[Cinematic rendering]], an alternative visualization technique
* [[Isosurface]], a surface that represents points of a constant value (e.g. pressure, temperature, velocity, density) within a volume of space
* [[Flow visualization]], a technique for the visualization of [[vector field]]s
* [[Volume mesh]], a polygonal representation of the interior volume of an object

==References==
{{reflist|2}}

==Further reading==
* M. Ikits, J. Kniss, A. Lefohn and C. Hansen: [https://developer.nvidia.com/gpugems/gpugems/part-vi-beyond-triangles/chapter-39-volume-rendering-techniques ''Volume Rendering Techniques'']. In: ''GPU Gems'', Chapter 39 (online version in the developer zone of Nvidia).
* [http://www.byclb.com/TR/Tutorials/volume_rendering/Index.aspx Volume Rendering], a volume rendering basics tutorial by Ömer Cengiz ÇELEBİ.
* Barthold Lichtenbelt, Randy Crane, Shaz Naqvi, ''Introduction to Volume Rendering'' (Hewlett-Packard Professional Books), Hewlett-Packard Company 1998.
* Peng H., Ruan Z., Long F., Simpson J.H., Myers E.W.: ''V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets.'' Nature Biotechnology, 2010. {{doi|10.1038/nbt.1612}} [http://www.nature.com/nbt/journal/vaop/ncurrent/full/nbt.1612.html Volume rendering of large high-dimensional image data].
* {{cite book|author=Daniel Weiskopf|title=GPU-Based Interactive Visualization Techniques|year=2006|publisher=Springer Science & Business Media|isbn=978-3-540-33263-3}}<!-- the book's title is a little unfortunate, but it is primarily about the topic of this article -->

{{Visualization}}
{{Computer graphics}}

[[Category:3D rendering]]
[[Category:Implicit surface modeling]]