==Applications==
[[Image:Gershun-light-field-fig24.png|right|thumb|200px|A downward-facing light source (F-F') induces a light field whose irradiance vectors curve outwards. Using calculus, Gershun could compute the irradiance falling on points (P<sub>1</sub>, P<sub>2</sub>) on a surface.<ref>Gershun, fig 24</ref>]]

===Illumination engineering===
Gershun's reason for studying the light field was to derive (in closed form) the illumination patterns that would be observed on surfaces due to light sources of various shapes positioned above these surfaces.<ref>Ashdown 1993</ref> The branch of optics devoted to illumination engineering is [[nonimaging optics]].<ref>Chaves 2015; Winston 2005</ref> It extensively uses the concept of flow lines (Gershun's flux lines) and vector flux (Gershun's light vector). However, the light field (in this case the positions and directions defining the light rays) is commonly described in terms of [[phase space]] and [[Hamiltonian optics]].

===Light field rendering===
Extracting appropriate 2D slices from the 4D light field of a scene enables novel views of the scene.<ref>Levoy 1996; Gortler 1996</ref> Depending on the parameterization of the light field and slices, these views might be [[Perspective projection|perspective]], [[Orthographic projection (geometry)|orthographic]], crossed-slit,<ref>Zomet 2003</ref> general linear cameras,<ref>Yu and McMillan 2004</ref> multi-perspective,<ref>Rademacher 1998</ref> or another type of projection. Light field rendering is one form of [[Image-Based Modeling And Rendering|image-based rendering]].

===Synthetic aperture photography===
Integrating an appropriate 4D subset of the samples in a light field can approximate the view that would be captured by a camera having a finite (i.e., non-[[pinhole]]) aperture. Such a view has a finite [[depth of field]]. Shearing or warping the light field before performing this integration can focus on different fronto-parallel<ref>Isaksen 2000</ref> or oblique<ref>Vaish 2005</ref> planes. Images captured by digital cameras that capture the light field<ref name=ng/> can be refocused.
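This shear-and-integrate refocusing can be sketched in a few lines of code. The following is a minimal illustration, assuming a grayscale light field stored as a NumPy array indexed as <code>L[u, v, s, t]</code> in the two-plane parameterization; the function name and the focus parameter <code>alpha</code> are illustrative rather than taken from the cited sources.

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(L, alpha):
    """Shift-and-add refocusing of a 4D light field.

    L     : array of shape (U, V, S, T) -- sub-aperture images L[u, v]
            in the two-plane (u, v, s, t) parameterization.
    alpha : focus parameter; varying it sweeps the synthetic focal plane
            through the scene (the sign convention depends on the
            parameterization).
    """
    U, V, S, T = L.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Translate each sub-aperture image in proportion to its
            # offset from the aperture center, then accumulate.
            du = alpha * (u - (U - 1) / 2)
            dv = alpha * (v - (V - 1) / 2)
            out += nd_shift(L[u, v], (du, dv), order=1, mode='nearest')
    # Averaging the shifted images approximates a finite-aperture exposure
    # focused on one fronto-parallel plane.
    return out / (U * V)
</syntaxhighlight>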
===3D display===
Presenting a light field using technology that maps each sample to the appropriate ray in physical space produces an [[autostereoscopy|autostereoscopic]] visual effect akin to viewing the original scene. Non-digital technologies for doing this include [[integral photography]], [[Volumetric display|parallax panoramagrams]], and [[holography]]; digital technologies include placing an array of lenslets over a high-resolution display screen, or projecting the imagery onto an array of lenslets using an array of video projectors. An array of video cameras can capture and display a time-varying light field. This essentially constitutes a [[3D television]] system.<ref>Javidi 2002; Matusik 2004</ref> Modern approaches to light-field display explore co-designs of optical elements and compressive computation to achieve higher resolutions, increased contrast, wider fields of view, and other benefits.<ref>Wetzstein 2012, 2011; Lanman 2011, 2010</ref>

===Brain imaging===
Neural activity can be recorded optically by genetically encoding neurons with reversible fluorescent markers such as [[GCaMP]] that indicate the presence of [[calcium ions]] in real time. Since [[light field microscopy]] captures full volume information in a single frame, it is possible to monitor neural activity in individual neurons randomly distributed in a large volume at video framerate.<ref>Grosenick, 2009, 2017; Perez, 2015</ref> Quantitative measurement of neural activity can be performed despite optical aberrations in brain tissue and without reconstructing a volume image,<ref>Pegard, 2016</ref> and can be used to monitor activity in thousands of neurons.<ref>Grosenick, 2017</ref>

===Generalized scene reconstruction (GSR)===
Generalized scene reconstruction is a method of creating and/or refining a scene model representing a generalized light field and a relightable matter field.<ref name="auto">Leffingwell, 2018</ref> Data used in reconstruction includes images, video, object models, and/or scene models. The generalized light field represents light flowing in the scene. The relightable matter field represents the light-interaction properties and emissivity of the matter occupying the scene. Scene data structures can be implemented using neural networks<ref>Mildenhall, 2020</ref><ref>{{Cite journal |last1=Rudnev |first1=Viktor |last2=Elgharib |first2=Mohamed |last3=Smith |first3=William |last4=Liu |first4=Lingjie |last5=Golyanik |first5=Vladislav |last6=Theobalt |first6=Christian |date=21 Jul 2022 |title=NeRF for Outdoor Scene Relighting |journal=European Conference on Computer Vision (ECCV) 2022 |pages=1–22 |arxiv=2112.05140}}</ref><ref>{{Cite journal |last1=Srinivasan |first1=Pratul |last2=Deng |first2=Boyang |last3=Zhang |first3=Xiuming |last4=Tancik |first4=Matthew |last5=Mildenhall |first5=Ben |last6=Barron |first6=Jonathan |date=7 Dec 2020 |title=NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis |journal=CVPR |pages=1–12 |arxiv=2012.03927}}</ref> and physics-based structures,<ref>Yu & Fridovich-Keil, 2021</ref><ref>{{cite arXiv |last1=Kerbl |first1=Bernhard |last2=Kopanas |first2=Georgios |last3=Leimkühler |first3=Thomas |last4=Drettakis |first4=George |title=3D Gaussian Splatting for Real-Time Radiance Field Rendering |date=2023-08-08 |eprint=2308.04079 |class=cs.GR}}</ref> among others.<ref name="auto" /> The light and matter fields are at least partially disentangled.<ref name="auto" /><ref>{{Cite arXiv |last1=Zhang |first1=Jingyang |last2=Yao |first2=Yao |last3=Li |first3=Shiwei |last4=Liu |first4=Jingbo |last5=Fang |first5=Tian |last6=McKinnon |first6=David |last7=Tsin |first7=Yanghai |last8=Quan |first8=Long |date=30 Mar 2023 |title=NeILF++: Inter-Reflectable Light Fields for Geometry and Material Estimation |pages=1–5 |class=cs.CV |eprint=2303.17147}}</ref>
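The two-field scene model can be made concrete with a small sketch. The structure below is hypothetical and heavily simplified (the directional dependence of both fields is omitted, and none of the names are taken from the cited works); it illustrates only how storing the light and matter fields separately permits relighting.

<syntaxhighlight lang="python">
import numpy as np
from dataclasses import dataclass

@dataclass
class SceneModel:
    # Relightable matter field: per-voxel light-interaction properties.
    albedo: np.ndarray      # reflectance, shape (X, Y, Z, 3)
    emissivity: np.ndarray  # emitted light, shape (X, Y, Z, 3)
    # Generalized light field, reduced here to per-voxel incident light.
    irradiance: np.ndarray  # shape (X, Y, Z, 3)

    def outgoing_light(self):
        # Emitted plus reflected light at each voxel.  Because the matter
        # and light fields are stored separately (disentangled), the scene
        # can be relit by swapping the light field while keeping the matter:
        #   relit = SceneModel(m.albedo, m.emissivity, new_irradiance)
        return self.emissivity + self.albedo * self.irradiance
</syntaxhighlight>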
===Holographic stereograms===
Image generation and predistortion of synthetic imagery for holographic stereograms is one of the earliest examples of computed light fields.<ref>Halle 1991, 1994</ref>

===Glare reduction===
[[Glare (vision)|Glare]] arises from multiple scattering of light inside the camera body and lens optics, and it reduces image contrast. While glare has been analyzed in 2D image space,<ref>Talvala 2007</ref> it is useful to identify it as a 4D ray-space phenomenon.<ref name=Raskar>Raskar 2008</ref> Statistically analyzing the ray space inside a camera allows the classification and removal of glare artifacts. In ray space, glare behaves as high-frequency noise and can be reduced by outlier rejection. Such analysis can be performed by capturing the light field inside the camera, but doing so sacrifices spatial resolution. Uniform and non-uniform ray sampling can be used to reduce glare without significantly compromising image resolution.<ref name=Raskar/>
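This outlier rejection can be sketched as follows. Assuming a light field stored as an array <code>L[u, v, s, t]</code>, the angular samples converging on each pixel are treated as a statistical population, and samples far from the per-pixel median are discarded as glare before integration; the median/MAD statistic and the threshold <code>k</code> are illustrative assumptions, not the published algorithm.

<syntaxhighlight lang="python">
import numpy as np

def reduce_glare(L, k=3.0):
    """Suppress glare by per-pixel outlier rejection in ray space.

    L : array of shape (U, V, S, T); the U*V angular samples arriving at
        each spatial location (s, t) form the population tested.
    k : rejection threshold in robust standard deviations (assumed).
    """
    U, V, S, T = L.shape
    rays = L.reshape(U * V, S, T)
    med = np.median(rays, axis=0)
    # Median absolute deviation, scaled to approximate a standard deviation.
    mad = 1.4826 * np.median(np.abs(rays - med), axis=0) + 1e-12
    keep = np.abs(rays - med) <= k * mad  # inliers: rays not dominated by glare
    # Integrate (average) only the inlying rays at each pixel.
    return (rays * keep).sum(axis=0) / keep.sum(axis=0).clip(min=1)
</syntaxhighlight>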