{{Short description|Vector function in optics}}
A '''light field''', or '''lightfield''', is a [[vector-valued function|vector function]] that describes the amount of [[light]] flowing in every direction through every point in a space. The space of all possible ''[[light rays]]'' is given by the [[Five-dimensional space|five-dimensional]] '''plenoptic function''', and the magnitude of each ray is given by its [[radiance]]. [[Michael Faraday]] was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working.<ref>{{cite journal|last1=Faraday|first1=Michael|date=30 April 2009|title=LIV. Thoughts on ray-vibrations|url=https://www-spof.gsfc.nasa.gov/Education/wfarad1846.html|journal=Philosophical Magazine|series=Series 3|volume=28|issue=188|pages=345–350|doi=10.1080/14786444608645431|archiveurl=https://web.archive.org/web/20130218141803/https://www-spof.gsfc.nasa.gov/Education/wfarad1846.html|archivedate=2013-02-18|url-access=subscription}}</ref> The term ''light field'' was coined by [[Andrey Aleksandrovich Gershun|Andrey Gershun]] in a classic 1936 paper on the radiometric properties of light in three-dimensional space. The term "radiance field" may also be used to refer to similar or identical concepts,<ref>{{Cite journal |last=Mildenhall |first=Ben |last2=Srinivasan |first2=Pratul P |last3=Tancik |first3=Matthew |last4=Barron |first4=Jonathan T |last5=Ramamoorthi |first5=Ravi |last6=Ng |first6=Ren |date=2021-12-17 |title=NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis |url=https://doi.org/10.1145/3503250 |journal=Communications of the ACM |volume=65 |issue=1 |pages=99–106}}</ref> and appears in modern research such as [[neural radiance field]]s.

==The plenoptic function==
[[Image:Plenoptic-function-a.png|right|frame|Radiance ''L'' along a ray can be thought of as the amount of light traveling along all possible straight lines through a tube whose size is determined by its solid angle and cross-sectional area.]]
For geometric [[optics]]—i.e., for [[Coherence (physics)|incoherent]] light and for objects larger than the wavelength of light—the fundamental carrier of light is a [[ray (optics)|ray]]. The measure for the amount of light traveling along a ray is [[radiance]], denoted by ''L'' and measured in {{nowrap|W·sr<sup>−1</sup>·m<sup>−2</sup>}}; i.e., [[watt]]s (W) per [[steradian]] (sr) per square meter (m<sup>2</sup>). The steradian is a measure of [[solid angle]], and meters squared are used as a measure of cross-sectional area, as shown at right.

[[Image:Plenoptic function b.svg|left|frame|Parameterizing a ray in [[three-dimensional space|3D]] space by position (''x'', ''y'', ''z'') and direction (''θ'', ''ϕ'').]]
The radiance along all such rays in a region of three-dimensional space illuminated by an unchanging arrangement of lights is called the plenoptic function.<ref>Adelson 1991</ref> The plenoptic illumination function is an idealized function used in [[computer vision]] and [[computer graphics]] to express the image of a scene from any possible viewing position at any viewing angle at any point in time.
It is not used in practice computationally, but is conceptually useful in understanding other concepts in vision and graphics.<ref>Wong 2002</ref> Since rays in space can be parameterized by three coordinates, ''x'', ''y'', and ''z'', and two angles, ''θ'' and ''ϕ'', as shown at left, it is a five-dimensional function, that is, a function over a five-dimensional [[manifold]] equivalent to the product of 3D [[Euclidean space]] and the [[2-sphere]].

[[Image:Gershun-light-field-fig17.png|right|thumb|175px|Summing the irradiance vectors '''D'''<sub>1</sub> and '''D'''<sub>2</sub> arising from two light sources I<sub>1</sub> and I<sub>2</sub> produces a resultant vector '''D''' having the magnitude and direction shown.<ref>Gershun, fig 17</ref>]]
The light field at each point in space can be treated as an infinite collection of vectors, one per direction impinging on the point, with lengths proportional to their radiances. Integrating these vectors over any collection of lights, or over the entire sphere of directions, produces a single scalar value (the total irradiance at that point) and a resultant direction. The figure shows this calculation for the case of two light sources. In computer graphics, this vector-valued function of [[Three-dimensional space|3D space]] is called the vector irradiance field.<ref>Arvo, 1994</ref> The vector direction at each point in the field can be interpreted as the orientation of a flat surface placed at that point to most brightly illuminate it.

===Higher dimensionality===
Time, [[wavelength]], and [[Polarization (waves)|polarization]] angle can be treated as additional dimensions, yielding correspondingly higher-dimensional functions.

==The 4D light field==
[[Image:Plenoptic-function-c.png|left|frame|Radiance along a ray remains constant if there are no blockers.]]
In a plenoptic function, if the region of interest contains a [[concave polygon|concave]] object (e.g., a cupped hand), then light leaving one point on the object may travel only a short distance before another point on the object blocks it. No practical device could measure the function in such a region. However, for locations outside the object's [[convex hull]] (e.g., shrink-wrap), the plenoptic function can be measured by capturing multiple images. In this case the function contains redundant information, because the radiance along a ray remains constant throughout its length. The redundant information is exactly one dimension, leaving a four-dimensional function variously termed the photic field, the 4D light field<ref>Levoy 1996</ref> or the lumigraph.<ref>Gortler 1996</ref> Formally, the field is defined as radiance along rays in empty space.

The set of rays in a light field can be parameterized in a variety of ways. The most common is the two-plane parameterization. While this parameterization cannot represent all rays, for example rays parallel to the two planes if the planes are parallel to each other, it relates closely to the [[analytic geometry]] of perspective imaging. A simple way to think about a two-plane light field is as a collection of perspective images of the ''st'' plane (and any objects that may lie astride or beyond it), each taken from an observer position on the ''uv'' plane. A light field parameterized this way is sometimes called a light slab.

[[Image:Light-field-parameterizations.png|left|frame|Some alternative parameterizations of the 4D light field, which represents the flow of light through an empty region of three-dimensional space. Left: points on a plane or curved surface and directions leaving each point. Center: pairs of points on the surface of a sphere. Right: pairs of points on two planes in general (meaning any) position.]]
{{clear}}
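A discretized light slab can be illustrated with a short sketch. The array layout <code>L[u, v, s, t]</code>, the sizes, and the helper function below are illustrative assumptions rather than any standard interface; the sketch only shows that a single ray is one sample of the 4D function and that a view of the ''st'' plane from a (possibly novel) position on the ''uv'' plane is a 2-D slice, interpolated across recorded viewpoints when the position falls between samples.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative two-plane light field ("light slab"): L[u, v, s, t] holds the
# radiance of the ray that leaves sample position (u, v) on the observer (uv)
# plane and passes through pixel (s, t) on the st plane.  Sizes are arbitrary.
U, V, S, T = 17, 17, 256, 256
L = np.zeros((U, V, S, T), dtype=np.float32)

# A single light ray is one sample of the 4-D function.
ray_radiance = L[8, 8, 128, 128]

def view_from(L, u, v):
    """Perspective image of the st plane seen from a (possibly fractional)
    observer position (u, v) on the uv plane, by bilinear interpolation
    between the four nearest recorded viewpoints."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, L.shape[0] - 1), min(v0 + 1, L.shape[1] - 1)
    fu, fv = u - u0, v - v0
    return ((1 - fu) * (1 - fv) * L[u0, v0] + (1 - fu) * fv * L[u0, v1]
            + fu * (1 - fv) * L[u1, v0] + fu * fv * L[u1, v1])

novel_view = view_from(L, 8.5, 7.25)   # interpolated view between cameras
</syntaxhighlight>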
===Sound analog===
The analog of the 4D light field for sound is the sound field or wave field, as in [[wave field synthesis]], and the corresponding parametrization is the [[Kirchhoff–Helmholtz integral]], which states that, in the absence of obstacles, a sound field over time is given by the pressure on a plane. Thus this is two dimensions of information at any point in time, and over time, a 3D field.

This two-dimensionality, compared with the apparent four-dimensionality of light, is because light travels in rays (0D at a point in time, 1D over time), while by the [[Huygens–Fresnel principle]], a sound [[wave front]] can be modeled as spherical waves (2D at a point in time, 3D over time): light moves in a single direction (2D of information), while sound expands in every direction. However, light travelling in non-vacuous media may scatter in a similar fashion, and the irreversibility or information lost in the scattering is discernible in the apparent loss of a system dimension.

==Image refocusing==
Because a light field provides both spatial and angular information, the position of the focal plane can be altered after exposure, an operation often termed ''refocusing''. The principle of refocusing is to obtain conventional 2-D photographs from a light field through an integral transform. The transform takes a light field as its input and generates a photograph focused on a specific plane.

Assuming <math>L_{F}(s,t,u,v)</math> represents a 4-D light field that records light rays traveling from position <math>(u,v)</math> on the first plane to position <math>(s,t)</math> on the second plane, where <math>F</math> is the distance between the two planes, a 2-D photograph at any depth <math>\alpha F</math> can be obtained from the following integral transform:<ref name="renng">{{Cite book|last=Ng|first=Ren|title=ACM SIGGRAPH 2005 Papers |chapter=Fourier slice photography |date=2005|chapter-url=http://dx.doi.org/10.1145/1186822.1073256|pages=735–744 |location=New York, New York, USA|publisher=ACM Press|doi=10.1145/1186822.1073256|isbn=9781450378253 |s2cid=1806641 }}</ref>

:<math> \mathcal{P}_{\alpha}\left[L_{F}\right](s, t) = {1 \over \alpha^2 F^2}\iint L_F\left(u\left(1 - \frac{1}{\alpha}\right) + \frac{s}{\alpha}, v\left(1 - \frac{1}{\alpha}\right) + \frac{t}{\alpha}, u, v\right)~du\,dv,</math>

or more concisely,

:<math>\mathcal{P}_{\alpha}\left[L_{F}\right](\boldsymbol{s})=\frac{1}{\alpha^{2} F^{2}} \int L_{F}\left(\boldsymbol{u}\left(1-\frac{1}{\alpha}\right)+\frac{\boldsymbol{s}}{\alpha}, \boldsymbol{u}\right) d \boldsymbol{u},</math>

where <math>\boldsymbol{s}=(s,t)</math>, <math>\boldsymbol{u}=(u,v)</math>, and <math>\mathcal{P}_{\alpha}\left[\cdot\right]</math> is the photography operator.

In practice, this formula cannot be directly used because a plenoptic camera captures discrete samples of the light field <math>L_{F}(s,t,u,v)</math>, and hence resampling (or interpolation) is needed to compute <math display="inline"> L_{F}\left(\boldsymbol{u}\left(1-\frac{1}{\alpha}\right)+\frac{\boldsymbol{s}}{\alpha}, \boldsymbol{u}\right)</math>. Another problem is high computational complexity: to compute an <math>N\times N</math> 2-D photograph from an <math>N\times N\times N\times N</math> 4-D light field, the complexity of the formula is <math>O(N^4)</math>.<ref name="renng" />
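In discrete form, this photography operator amounts to resampling each view (sub-aperture image) according to its <math>(u,v)</math> position and averaging, often called shift-and-add refocusing. The following is a minimal numerical sketch only: it assumes a light field stored as a NumPy array <code>L[u, v, s, t]</code> with unit sample spacing on both planes, measures coordinates from the centre of each plane, and drops the constant factor <math>1/(\alpha^2 F^2)</math>, which only scales the brightness.

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import affine_transform

def refocus(L, alpha):
    """Discrete photography operator P_alpha by brute-force shift-and-add.

    L     : 4-D array indexed L[u, v, s, t]; (u, v) are samples on the first
            plane, (s, t) are pixel positions on the second plane.
    alpha : relative depth of the synthetic focal plane (alpha = 1 keeps the
            focus on the st plane itself).
    """
    U, V, S, T = L.shape
    q = 1.0 - 1.0 / alpha
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0      # centre of the uv aperture
    s0, t0 = (S - 1) / 2.0, (T - 1) / 2.0      # centre of the st image
    photo = np.zeros((S, T))
    for iu in range(U):
        for iv in range(V):
            # Resample this view so that output pixel (s, t) reads
            # L(q*u + s/alpha, q*v + t/alpha, u, v), with (u, v) and (s, t)
            # measured from the centres of their planes.
            offset = (q * (iu - u0 + s0), q * (iv - v0 + t0))
            photo += affine_transform(L[iu, iv],
                                      np.array([1 / alpha, 1 / alpha]),
                                      offset=offset, order=1, mode="nearest")
    return photo / (U * V)
</syntaxhighlight>

With <math>N</math> samples along each of the four axes, the loop touches all <math>N^4</math> samples, matching the <math>O(N^4)</math> cost noted above.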
===Fourier slice photography===
One way to reduce the computational cost is to apply the [[Projection-slice theorem|Fourier slice theorem]]:<ref name="renng" /> the photography operator <math>\mathcal{P}_{\alpha}\left[\cdot\right]</math> can be viewed as a shear followed by a projection, so the result is proportional to a dilated 2-D slice of the 4-D Fourier transform of the light field. More precisely, a refocused image can be generated from the [[Light field microscopy|4-D Fourier spectrum]] of a light field by extracting a 2-D slice, applying an inverse 2-D transform, and scaling. The asymptotic complexity of the algorithm is <math>O(N^2 \log N)</math>.
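A rough numerical sketch of this procedure is given below. It assumes the same illustrative <code>L[u, v, s, t]</code> layout and unit sample spacing as above, ignores constant factors, and uses simple linear interpolation to extract the slice; a careful implementation would handle sample-spacing conversions, windowing, and boundary effects more rigorously.

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import map_coordinates

def refocus_fourier(L, alpha):
    """Refocus by slicing the 4-D spectrum (Fourier slice approach).

    Up to constant factors, the 2-D Fourier transform of the refocused
    photograph is the 4-D spectrum of L sampled on the plane
    (k_u, k_v, k_s, k_t) = ((1-alpha)k_s, (1-alpha)k_t, alpha*k_s, alpha*k_t).
    """
    U, V, S, T = L.shape
    # Centred 4-D spectrum (spatial origin moved to the array centre so that
    # the phase varies slowly and interpolates reasonably).
    Lhat = np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(L)))

    # Frequency grid of the output photograph, in cycles per sample.
    ks = np.fft.fftshift(np.fft.fftfreq(S))
    kt = np.fft.fftshift(np.fft.fftfreq(T))
    KS, KT = np.meshgrid(ks, kt, indexing="ij")

    def to_index(freq, n):               # cycles/sample -> shifted-array index
        return freq * n + n // 2

    coords = np.stack([to_index((1 - alpha) * KS, U),
                       to_index((1 - alpha) * KT, V),
                       to_index(alpha * KS, S),
                       to_index(alpha * KT, T)])

    # Linearly interpolate the complex spectrum on the slice.
    sl = (map_coordinates(Lhat.real, coords, order=1)
          + 1j * map_coordinates(Lhat.imag, coords, order=1))

    # Inverse 2-D transform of the slice gives the refocused photograph.
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(sl))).real
</syntaxhighlight>

The 4-D transform is the expensive step; in practice it is computed once and reused, so each additional refocused photograph costs only the slice extraction and a 2-D inverse transform, which is where the <math>O(N^2 \log N)</math> figure quoted above comes from.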
===Discrete focal stack transform===
Another way to efficiently compute 2-D photographs is to adopt the discrete focal stack transform (DFST).<ref>{{Cite journal|last1=Nava|first1=F. Pérez|last2=Marichal-Hernández|first2=J.G.|last3=Rodríguez-Ramos|first3=J.M.|date=August 2008|title=The Discrete Focal Stack Transform|url=https://ieeexplore.ieee.org/document/7080334|journal=2008 16th European Signal Processing Conference|pages=1–5}}</ref> DFST is designed to generate a collection of refocused 2-D photographs, a so-called [[Focus stacking|focal stack]]. It can be implemented using the fast [[Fractional Fourier transform|fractional Fourier transform]] (FrFT).

The discrete photography operator <math>\mathcal{P}_{\alpha}\left[\cdot\right]</math> is defined as follows for a light field <math>L_{F}(\boldsymbol {s},\boldsymbol {u})</math> sampled on a 4-D grid <math>\boldsymbol {s} = \Delta s \tilde{\boldsymbol {s}},</math> <math>\tilde{\boldsymbol {s}} = -\boldsymbol {n}_{\boldsymbol {s}}, ..., \boldsymbol {n}_{\boldsymbol {s}}</math>, <math>\boldsymbol {u} = \Delta u \tilde{\boldsymbol {u}}, \tilde{\boldsymbol {u}}=-\boldsymbol {n}_{\boldsymbol {u}},...,\boldsymbol {n}_{\boldsymbol {u}}</math>:

:<math>\mathcal{P}_{q}[L](\boldsymbol{s})= \sum_{\tilde{\boldsymbol{u}}=-\boldsymbol{n}_{\boldsymbol{u}}}^{\boldsymbol{n}_{\boldsymbol{u}}} L(\boldsymbol{u} q+\boldsymbol{s}, \boldsymbol{u}) \Delta \boldsymbol{u}, \quad \Delta \boldsymbol{u}=\Delta u\Delta v, \quad q=1-\frac{1}{\alpha}.</math>

Because <math>(\boldsymbol{u} q+\boldsymbol{s}, \boldsymbol{u})</math> is usually not on the 4-D grid, DFST adopts [[trigonometric interpolation]] to compute the non-grid values. The algorithm consists of these steps:
* Sample the light field <math>L_{F}(\boldsymbol {s},\boldsymbol {u})</math> with the sampling periods <math>\Delta s</math> and <math>\Delta u</math> to get the discretized light field <math>L^d_{F}(\boldsymbol {s},\boldsymbol {u})</math>.
* Pad <math>L^d_{F}(\boldsymbol {s},\boldsymbol {u})</math> with zeros such that the signal length is sufficient for the FrFT without aliasing.
* For every <math>\boldsymbol {u}</math>, compute the [[discrete Fourier transform]] of <math>L^d_{F}(\boldsymbol {s},\boldsymbol {u})</math>, obtaining the result <math>R_1</math>.
* For every focal length <math>\alpha F</math>, compute the [[Fractional Fourier transform|fractional Fourier transform]] of <math>R_1</math>, where the order of the transform depends on <math>\alpha</math>, obtaining the result <math>R_2</math>.
* Compute the inverse discrete Fourier transform of <math>R_2</math>.
* Remove the marginal pixels of <math>R_2</math> so that each 2-D photograph has the size <math>(2{n}_{\boldsymbol {s}}+1)</math> by <math>(2{n}_{\boldsymbol {s}}+1)</math>.
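Whatever refocusing routine is used, a focal stack is simply the collection of photographs produced for a sequence of <math>\alpha</math> values. The sketch below is the brute-force baseline that DFST accelerates, not DFST itself; <code>refocus</code> stands for any single-plane refocusing function, such as the shift-and-add sketch above.

<syntaxhighlight lang="python">
import numpy as np

def focal_stack(L, alphas, refocus):
    """Stack of refocused photographs, one per focal plane alpha*F.

    `refocus(L, alpha)` may be any single-plane refocusing routine (for
    example the shift-and-add sketch above); DFST produces an equivalent
    stack more efficiently via fractional Fourier transforms.
    """
    return np.stack([refocus(L, a) for a in alphas])

# Example: nine planes straddling the st plane (alpha = 1).
# stack = focal_stack(L, np.linspace(0.8, 1.2, 9), refocus)
</syntaxhighlight>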
==Methods to create light fields==
In computer graphics, light fields are typically produced either by [[rendering (computer graphics)|rendering]] a [[3D model]] or by photographing a real scene. In either case, to produce a light field, views must be obtained for a large collection of viewpoints. Depending on the parameterization, this collection typically spans some portion of a line, circle, plane, sphere, or other shape, although unstructured collections are possible.<ref>Buehler 2001</ref>

Devices for capturing [[light-field photography|light fields photographically]] may include a moving handheld camera or a robotically controlled camera,<ref>Levoy 2002</ref> an arc of cameras (as in the [[bullet time]] effect used in ''[[The Matrix]]''), a dense array of cameras,<ref>Kanade 1998; Yang 2002; Wilburn 2005</ref> [[light-field camera|handheld camera]]s,<ref name=ng>[[Ren Ng|Ng]] 2005</ref><ref>Georgiev 2006; Marwah 2013</ref> microscopes,<ref>Levoy 2006</ref> or other optical systems.<ref>Bolles 1987</ref>

The number of images in a light field depends on the application. A light field capture of [[Michelangelo]]'s statue of ''[[Night (Michelangelo)|Night]]''<ref>{{Cite web|title=A light field of Michelangelo's statue of Night|url=https://accademia.stanford.edu/mich/lightfield-of-night/|access-date=2022-02-08|website=accademia.stanford.edu}}</ref> contains 24,000 1.3-megapixel images, which is considered large as of 2022. For light field rendering to completely capture an opaque object, images must be taken of at least the front and back. Less obviously, for an object that lies astride the ''st'' plane, finely spaced images must be taken on the ''uv'' plane (in the two-plane parameterization shown above).

The number and arrangement of images in a light field, and the resolution of each image, are together called the "sampling" of the 4D light field.<ref>Chai (2000)</ref> Also of interest are the effects of occlusion,<ref>Durand (2005)</ref> lighting, and reflection.<ref>Ramamoorthi (2006)</ref>

==Applications==
[[Image:Gershun-light-field-fig24.png|right|thumb|200px|A downward-facing light source (F-F') induces a light field whose irradiance vectors curve outwards. Using calculus, Gershun could compute the irradiance falling on points (P<sub>1</sub>, P<sub>2</sub>) on a surface.<ref>Gershun, fig 24</ref>]]

===Illumination engineering===
Gershun's reason for studying the light field was to derive (in closed form) the illumination patterns that would be observed on surfaces due to light sources of various shapes positioned above these surfaces.<ref>Ashdown 1993</ref> The branch of optics devoted to illumination engineering is [[nonimaging optics]].<ref>Chaves 2015; Winston 2005</ref> It extensively uses the concept of flow lines (Gershun's flux lines) and vector flux (Gershun's light vector). However, the light field (in this case the positions and directions defining the light rays) is commonly described in terms of [[phase space]] and [[Hamiltonian optics]].

===Light field rendering===
Extracting appropriate 2D slices from the 4D light field of a scene enables novel views of the scene.<ref>Levoy 1996; Gortler 1996</ref> Depending on the parameterization of the light field and slices, these views might be [[Perspective projection|perspective]], [[Orthographic projection (geometry)|orthographic]], crossed-slit,<ref>Zomet 2003</ref> general linear cameras,<ref>Yu and McMillan 2004</ref> multi-perspective,<ref>Rademacher 1998</ref> or another type of projection. Light field rendering is one form of [[Image-Based Modeling And Rendering|image-based rendering]].

===Synthetic aperture photography===
Integrating an appropriate 4D subset of the samples in a light field can approximate the view that would be captured by a camera having a finite (i.e., non-[[pinhole]]) aperture. Such a view has a finite [[depth of field]]. Shearing or warping the light field before performing this integration can focus on different fronto-parallel<ref>Isaksen 2000</ref> or oblique<ref>Vaish 2005</ref> planes. Images captured by digital cameras that capture the light field<ref name=ng/> can be refocused.

===3D display===
Presenting a light field using technology that maps each sample to the appropriate ray in physical space produces an [[autostereoscopy|autostereoscopic]] visual effect akin to viewing the original scene. Non-digital technologies for doing this include [[integral photography]], [[Volumetric display|parallax panoramagrams]], and [[holography]]; digital technologies include placing an array of lenslets over a high-resolution display screen, or projecting the imagery onto an array of lenslets using an array of video projectors. An array of video cameras can capture and display a time-varying light field. This essentially constitutes a [[3D television]] system.<ref>Javidi 2002; Matusik 2004</ref> Modern approaches to light-field display explore co-designs of optical elements and compressive computation to achieve higher resolutions, increased contrast, wider fields of view, and other benefits.<ref>Wetzstein 2012, 2011; Lanman 2011, 2010</ref>

===Brain imaging===
Neural activity can be recorded optically by genetically encoding neurons with reversible fluorescent markers such as [[GCaMP]] that indicate the presence of [[calcium ions]] in real time. Since [[light field microscopy]] captures full volume information in a single frame, it is possible to monitor neural activity in individual neurons randomly distributed in a large volume at video framerate.<ref>Grosenick, 2009, 2017; Perez, 2015</ref> Quantitative measurement of neural activity can be done despite optical aberrations in brain tissue and without reconstructing a volume image,<ref>Pegard, 2016</ref> and can be used to monitor activity in thousands of neurons.<ref>Grosenick, 2017</ref>

===Generalized scene reconstruction (GSR)===
This is a method of creating and/or refining a scene model representing a generalized light field and a relightable matter field.<ref name="auto">Leffingwell, 2018</ref> Data used in reconstruction includes images, video, object models, and/or scene models. The generalized light field represents light flowing in the scene. The relightable matter field represents the light interaction properties and emissivity of matter occupying the scene.
Scene data structures can be implemented using Neural Networks,<ref>Mildenhall, 2020</ref><ref>{{Cite journal |last1=Rudnev |first1=Viktor |last2=Elgharib |first2=Mohamed |last3=Smith |first3=William |last4=Liu |first4=Lingjie |last5=Golyanik |first5=Vladislav |last6=Theobalt |first6=Christian |date=21 Jul 2022 |title=NeRF for Outdoor Scene Relighting |journal=European Conference on Computer Vision (ECCV) 2022 |pages=1–22 |arxiv=2112.05140}}</ref><ref>{{Cite journal |last1=Srinivasan |first1=Pratual |last2=Deng |first2=Boyang |last3=Zhang |first3=Xiuming |last4=Tancik |first4=Matthew |last5=Mildenhall |first5=Ben |last6=Barron |first6=Jonathan |date=7 Dec 2020 |title=NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis |journal=CVPR |pages=1–12 |arxiv=2012.03927}}</ref> and Physics-based structures,<ref>Yu & Fridovich-Keil, 2021</ref><ref>{{cite arXiv |last1=Kerbl |first1=Bernhard |title=3D Gaussian Splatting for Real-Time Radiance Field Rendering |date=2023-08-08 |eprint=2308.04079 |last2=Kopanas |first2=Georgios |last3=Leimkühler |first3=Thomas |last4=Drettakis |first4=George|class=cs.GR }}</ref> among others.<ref name="auto" /> The light and matter fields are at least partially disentangled.<ref name="auto" /><ref>{{Cite arXiv |last1=Zhang |first1=Jingyang |last2=Yao |first2=Yao |last3=Li |first3=Shiwei |last4=Liu |first4=Jingbo |last5=Fang |first5=Tian |last6=McKinnon |first6=David |last7=Tsin |first7=Yanghai |last8=Quan |first8=Long |date=30 Mar 2023 |title=NeILF++: Inter-Reflectable Light Fields for Geometry and Material Estimation |pages=1–5 |class=cs.CV |eprint=2303.17147 }}</ref> ===Holographic stereograms=== Image generation and predistortion of synthetic imagery for holographic stereograms is one of the earliest examples of computed light fields.<ref>Halle 1991, 1994</ref> ===Glare reduction=== [[Glare (vision)|Glare]] arises due to multiple scattering of light inside the camera body and lens optics that reduces image contrast. While glare has been analyzed in 2D image space,<ref>Talvala 2007</ref> it is useful to identify it as a 4D ray-space phenomenon.<ref name=Raskar>Raskar 2008</ref> Statistically analyzing the ray-space inside a camera allows the classification and removal of glare artifacts. In ray-space, glare behaves as high frequency noise and can be reduced by outlier rejection. Such analysis can be performed by capturing the light field inside the camera, but it results in the loss of spatial resolution. Uniform and non-uniform ray sampling can be used to reduce glare without significantly compromising image resolution.<ref name=Raskar/> ==See also== * [[Angle–sensitive pixel]] * [[Dual photography]] * [[Light-field camera]] * [[Lytro]] * [[Raytrix]] * [[Reflectance paper]] ==Notes== {{reflist}} ==References== ===Theory=== * Adelson, E.H., Bergen, J.R. (1991). [http://persci.mit.edu/pub_pdfs/elements91.pdf#search=%22adelson%20plenoptic%20function%20elements%22 "The Plenoptic Function and the Elements of Early Vision"], In ''Computation Models of Visual Processing'', M. Landy and J.A. Movshon, eds., MIT Press, Cambridge, 1991, pp. 3–20. * Arvo, J. (1994). [http://portal.acm.org/citation.cfm?id=192250&coll=portal&dl=ACM&CFID=1089013&CFTOKEN=60772597 "The Irradiance Jacobian for Partially Occluded Polyhedral Sources"], ''Proc. ACM SIGGRAPH'', ACM Press, pp. 335–342. * Bolles, R.C., Baker, H. H., Marimont, D.H. (1987). 
[https://link.springer.com/article/10.1007/BF00128525# "Epipolar-Plane Image Analysis: An Approach to Determining Structure from Motion"], ''International Journal of Computer Vision'', Vol. 1, No. 1, 1987, Kluwer Academic Publishers, pp 7–55. * Faraday, M., [http://www-spof.gsfc.nasa.gov/Education/wfarad1846.html "Thoughts on Ray Vibrations"] {{Webarchive|url=https://web.archive.org/web/20130218141803/http://www-spof.gsfc.nasa.gov/Education/wfarad1846.html |date=2013-02-18 }}, ''Philosophical Magazine'', S.3, Vol XXVIII, N188, May 1846. * Gershun, A. (1936). "The Light Field", Moscow, 1936. Translated by P. Moon and G. Timoshenko in ''Journal of Mathematics and Physics'', Vol. XVIII, MIT, 1939, pp. 51–151. * Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M. (1996). [http://portal.acm.org/citation.cfm?id=237200 "The Lumigraph"], ''Proc. ACM SIGGRAPH'', ACM Press, pp. 43–54. * Levoy, M., Hanrahan, P. (1996). [http://graphics.stanford.edu/papers/light/ "Light Field Rendering"], ''Proc. ACM SIGGRAPH'', ACM Press, pp. 31–42. * Moon, P., Spencer, D.E. (1981). ''The Photic Field'', MIT Press. * Wong, T.T., Fu, C.W., Heng, P.A., Leung C.S. (2002). [https://ieeexplore.ieee.org/document/1040963 "The Plenoptic-Illumination Function"], ''IEEE Trans. Multimedia'', Vol. 4, No. 3, pp. 361–371. ===Analysis=== * G. Wetzstein, I. Ihrke, W. Heidrich (2013) [http://www.cs.ubc.ca/labs/imager/tr/2010/TheoryOfPlenopticMultiplexing/PlenopticMultiplexing-IJCV2012.pdf "On Plenoptic Multiplexing and Reconstruction"], ''International Journal of Computer Vision (IJCV)'', Volume 101, Issue 2, pp. 384–400. * Ramamoorthi, R., Mahajan, D., Belhumeur, P. (2006). [https://web.archive.org/web/20060828222733/http://www1.cs.columbia.edu/~ravir/papers/firstorder/index.html "A First Order Analysis of Lighting, Shading, and Shadows"], ''ACM TOG''. * Zwicker, M., Matusik, W., Durand, F., Pfister, H. (2006). [http://people.csail.mit.edu/wojciech/DispAntiAlias/ "Antialiasing for Automultiscopic 3D Displays"], ''Eurographics Symposium on Rendering, 2006''. * Ng, R. (2005). [http://graphics.stanford.edu/papers/fourierphoto/ "Fourier Slice Photography"], ''Proc. ACM SIGGRAPH'', ACM Press, pp. 735–744. * Durand, F., Holzschuch, N., Soler, C., Chan, E., Sillion, F. X. (2005). [http://people.csail.mit.edu/fredo/PUBLI/Fourier/ "A Frequency Analysis of Light Transport"], ''Proc. ACM SIGGRAPH'', ACM Press, pp. 1115–1126. * Chai, J.-X., Tong, X., Chan, S.-C., Shum, H. (2000). [http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm "Plenoptic Sampling"], ''Proc. ACM SIGGRAPH'', ACM Press, pp. 307–318. * Halle, M. (1994) [http://www.spl.harvard.edu/~halazar/pubs/discrete_spie94_preprint.pdf "Holographic Stereograms as Discrete imaging systems"]{{Dead link|date=February 2020 |bot=InternetArchiveBot |fix-attempted=yes }}, in ''SPIE Proc. Vol. #2176: Practical Holography VIII'', S.A. Benton, ed., pp. 73–84. * Yu, J., McMillan, L. (2004). [http://www.csbio.unc.edu/mcmillan/pubs/eccv2004_Yu.pdf "General Linear Cameras"], ''Proc. ECCV 2004'', Lecture Notes in Computer Science, pp. 14–27. ===Cameras=== * Marwah, K., Wetzstein, G., Bando, Y., Raskar, R. (2013). [http://web.media.mit.edu/~gordonw/CompressiveLightFieldPhotography/ "Compressive Light Field Photography using Overcomplete Dictionaries and Optimized Projections"], ''ACM Transactions on Graphics (SIGGRAPH)''. * Liang, C.K., Lin, T.H., Wong, B.Y., Liu, C., Chen, H. H. (2008). 
[https://web.archive.org/web/20080513175740/http://mpac.ee.ntu.edu.tw/~chiakai/pap/ "Programmable Aperture Photography:Multiplexed Light Field Acquisition"], ''Proc. ACM SIGGRAPH''. * Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., Tumblin, J. (2007). [http://web.media.mit.edu/~raskar/Mask/ "Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing"], ''Proc. ACM SIGGRAPH''. * Georgiev, T., Zheng, C., Nayar, S., Curless, B., Salesin, D., Intwala, C. (2006). [http://www.tgeorgiev.net/Spatioangular.pdf "Spatio-angular Resolution Trade-offs in Integral Photography"], ''Proc. EGSR 2006''. * Kanade, T., Saito, H., Vedula, S. (1998). [https://www.cs.cmu.edu/~virtualized-reality/3DRoom/TR.htm "The 3D Room: Digitizing Time-Varying 3D Events by Synchronized Multiple Video Streams"], Tech report CMU-RI-TR-98-34, December 1998. * Levoy, M. (2002). [http://graphics.stanford.edu/projects/gantry/ Stanford Spherical Gantry]. * Levoy, M., Ng, R., Adams, A., Footer, M., Horowitz, M. (2006). [http://graphics.stanford.edu/papers/lfmicroscope/ "Light Field Microscopy"], ''ACM Transactions on Graphics'' (Proc. SIGGRAPH), Vol. 25, No. 3. * Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., Hanrahan, P. (2005). [http://graphics.stanford.edu/papers/lfcamera/ "Light Field Photography with a Hand-Held Plenoptic Camera"], ''Stanford Tech Report'' CTSR 2005–02, April, 2005. * Wilburn, B., Joshi, N., Vaish, V., Talvala, E., Antunez, E., Barth, A., Adams, A., Levoy, M., Horowitz, M. (2005). [http://graphics.stanford.edu/papers/CameraArray/ "High Performance Imaging Using Large Camera Arrays"], ''ACM Transactions on Graphics'' (Proc. SIGGRAPH), Vol. 24, No. 3, pp. 765–776. * Yang, J.C., Everett, M., Buehler, C., McMillan, L. (2002). [http://portal.acm.org/citation.cfm?id=581907&dl=ACM&coll=&CFID=15151515&CFTOKEN=6184618 "A Real-Time Distributed Light Field Camera"], ''Proc. Eurographics Rendering Workshop 2002''. * [http://www.cafadis.ull.es "The CAFADIS camera"] ===Displays=== * Wetzstein, G., Lanman, D., Hirsch, M., Raskar, R. (2012). [http://web.media.mit.edu/%7Egordonw/TensorDisplays/TensorDisplays.pdf "Tensor Displays: Compressive Light Field Display using Multilayer Displays with Directional Backlighting"], ''ACM Transactions on Graphics (SIGGRAPH)'' * Wetzstein, G., Lanman, D., Heidrich, W., Raskar, R. (2011). [http://www.cs.ubc.ca/labs/imager/tr/2011/Wetzstein_SIG2011_Layered3D/TomographicLightFieldSynthesis.pdf "Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays"], ''ACM Transactions on Graphics (SIGGRAPH)'' * Lanman, D., Wetzstein, G., Hirsch, M., Heidrich, W., Raskar, R. (2011). [http://alumni.media.mit.edu/~dlanman/research/polarization-fields/polarization-fields.pdf "Polarization Fields: Dynamic Light Field Display using Multi-Layer LCDs"], ''ACM Transactions on Graphics (SIGGRAPH Asia)'' * Lanman, D., Hirsch, M. Kim, Y., Raskar, R. (2010). [http://web.media.mit.edu/~mhirsch/hr3d/content-adaptive.pdf "HR3D: Glasses-free 3D Display using Dual-stacked LCDs High-Rank 3D Display using Content-Adaptive Parallax Barriers"], ''ACM Transactions on Graphics (SIGGRAPH Asia)'' * Matusik, W., Pfister, H. (2004). [http://portal.acm.org/citation.cfm?id=1015805&dl=ACM&coll=&CFID=15151515&CFTOKEN=6184618 "3D TV: A Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes"], ''Proc. ACM SIGGRAPH'', ACM Press. * Javidi, B., Okano, F., eds. (2002). 
''[https://books.google.com/books?id=y-xS4MyF-TQC&q=%22light+field%22 Three-Dimensional Television, Video and Display Technologies]'', Springer-Verlag. * Klug, M., Burnett, T., Fancello, A., Heath, A., Gardner, K., O'Connell, S., Newswanger, C. (2013). [http://onlinelibrary.wiley.com/doi/10.1002/j.2168-0159.2013.tb06234.x/abstract "A Scalable, Collaborative, Interactive Light-field Display System"], ''SID Symposium Digest of Technical Papers'' * Fattal, D., Peng, Z., Tran, T., Vo, S., Fiorentino, M., Brug, J., Beausoleil, R. (2013). [https://www.nature.com/articles/nature11972 "A multi-directional backlight for a wide-angle, glasses-free three-dimensional display"], ''Nature 495, 348–351'' ===Archives=== * [http://lightfield.stanford.edu/ "The Stanford Light Field Archive"] * [https://web.archive.org/web/20080521094824/http://vision.ucsd.edu/datasets/lfarchive/ "UCSD/MERL Light Field Repository"] * [https://web.archive.org/web/20140111183716/http://hci.iwr.uni-heidelberg.de/HCI/Research/LightField/lf_benchmark.php "The HCI Light Field Benchmark"] * [http://web.media.mit.edu/~gordonw/SyntheticLightFields/index.php "Synthetic Light Field Archive"] ===Applications=== * Grosenick, L., Anderson, T., Smith S. J. (2009) [http://www.columbia.edu/~lmg2200/WWW/pubs/LightField_ISBI.pdf "Elastic Source Selection for in vivo imaging of neuronal ensembles."] From Nano to Macro, 6th IEEE International Symposium on Biomedical Imaging. (2009) 1263–1266. * Grosenick, L., Broxton, M., Kim, C. K., Liston, C., Poole, B., Yang, S., Andalman, A., Scharff, E., Cohen, N., Yizhar, O., Ramakrishnan, C., Ganguli, S., Suppes, P., Levoy, M., Deisseroth, K. (2017) [https://www.biorxiv.org/content/biorxiv/early/2017/05/01/132688.full.pdf "Identification of cellular-activity dynamics across large tissue volumes in the mammalian brain"] bioRxiv 132688; doi: [https://doi.org/10.1101/132688 Identification of cellular-activity dynamics across large tissue volumes in the mammalian brain]. * Heide, F., Wetzstein, G., Raskar, R., Heidrich, W. (2013) [https://web.archive.org/web/20130703184026/http://adaptiveimagesynthesis.com/ "Adaptive Image Synthesis for Compressive Displays"], ACM Transactions on Graphics (SIGGRAPH) * Wetzstein, G., Raskar, R., Heidrich, W. (2011) [http://www.cs.ubc.ca/labs/imager/tr/2011/LFBOS/index.html "Hand-Held Schlieren Photography with Light Field Probes"], IEEE International Conference on Computational Photography (ICCP) * Pérez, F., Marichal, J. G., Rodriguez, J.M. (2008). [http://www.eurasip.org/Proceedings/Eusipco/Eusipco2008/papers/1569101893.pdf "The Discrete Focal Stack Transform"], ''Proc. EUSIPCO'' * Raskar, R., Agrawal, A., Wilson, C., Veeraraghavan, A. (2008). [https://web.archive.org/web/20080829185928/http://www.merl.com/people/agrawal/sig08/index.html "Glare Aware Photography: 4D Ray Sampling for Reducing Glare Effects of Camera Lenses"], ''Proc. ACM SIGGRAPH.'' * Talvala, E-V., Adams, A., Horowitz, M., Levoy, M. (2007). [http://graphics.stanford.edu/papers/glare_removal/ "Veiling Glare in High Dynamic Range Imaging"], ''Proc. ACM SIGGRAPH.'' * Halle, M., Benton, S., Klug, M., Underkoffler, J. (1991). [http://www.spl.harvard.edu/~halazar/pubs/ultragram_spie91_preprint.pdf "The UltraGram: A Generalized Holographic Stereogram"]{{Dead link|date=February 2020 |bot=InternetArchiveBot |fix-attempted=yes }}, ''SPIE Vol. 1461, Practical Holography V'', S.A. Benton, ed., pp. 142–155. * Zomet, A., Feldman, D., Peleg, S., Weinshall, D. (2003). 
[http://www2.computer.org/portal/web/csdl/doi/10.1109/TPAMI.2003.1201823 "Mosaicing New Views: The Crossed-Slits Projection"], ''IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)'', Vol. 25, No. 6, June 2003, pp. 741–754. * Vaish, V., Garg, G., Talvala, E., Antunez, E., Wilburn, B., Horowitz, M., Levoy, M. (2005). [http://portal.acm.org/citation.cfm?id=1099539.1100041&coll=&dl=GUIDE&CFID=15151515&CFTOKEN=6184618 "Synthetic Aperture Focusing using a Shear-Warp Factorization of the Viewing Transform"], ''Proc. Workshop on Advanced 3D Imaging for Safety and Security'', in conjunction with CVPR 2005. *Bedard, N., Shope, T., Hoberman, A., Haralam, M. A., Shaikh, N., Kovačević, J., Balram, N., Tošić, I. (2016). [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5231297/ "Light field otoscope design for 3D in vivo imaging of the middle ear"]. ''Biomedical optics express'', ''8''(1), pp. 260–272. *Karygianni, S., Martinello, M., Spinoulas, L., Frossard, P., Tosic, I. (2018). "[https://ieeexplore.ieee.org/abstract/document/8451719/ Automated eardrum registration from light-field data]". IEEE International Conference on Image Processing (ICIP) * Rademacher, P., Bishop, G. (1998). [http://portal.acm.org/citation.cfm?id=280871&coll=portal&dl=ACM "Multiple-Center-of-Projection Images"], ''Proc. ACM SIGGRAPH'', ACM Press. * Isaksen, A., McMillan, L., Gortler, S.J. (2000). [http://portal.acm.org/citation.cfm?coll=GUIDE&dl=GUIDE&id=344929 "Dynamically Reparameterized Light Fields"], ''Proc. ACM SIGGRAPH'', ACM Press, pp. 297–306. * Buehler, C., Bosse, M., McMillan, L., Gortler, S., Cohen, M. (2001). [http://portal.acm.org/citation.cfm?id=383309&dl=ACM&coll=&CFID=15151515&CFTOKEN=6184618 "Unstructured Lumigraph Rendering"], ''Proc. ACM SIGGRAPH'', ACM Press. * Ashdown, I. (1993). [http://citeseer.ist.psu.edu/ashdown92nearfield.html "Near-Field Photometry: A New Approach"], ''Journal of the Illuminating Engineering Society'', Vol. 22, No. 1, Winter, 1993, pp. 163–180. * Chaves, J. (2015) [https://books.google.com/books?id=e11ECgAAQBAJ "Introduction to Nonimaging Optics, Second Edition"], CRC Press * Winston, R., Miñano, J.C., Benitez, P.G., Shatz, N., Bortz, J.C., (2005) [https://books.google.com/books?id=MliJHWwTnVQC "Nonimaging Optics"], Academic Press * Pégard, N. C., Liu H.Y., Antipa, N., Gerlock M., Adesnik, H., and Waller, L.. ''Compressive light-field microscopy for 3D neural activity recording.'' Optica 3, no. 5, pp. 517–524 (2016). * Leffingwell, J., Meagher, D., Mahmud, K., Ackerson, S. (2018). [https://arxiv.org/abs/1803.08496v3 "Generalized Scene Reconstruction."] arXiv:1803.08496v3 [cs.CV], pp. 1–13. * Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). [https://doi.org/10.1007/978-3-030-58452-8_24 “NeRF: Representing scenes as neural radiance fields for view synthesis.”] Computer Vision – ECCV 2020, 405–421. * Yu, A., Fridovich-Keil, S., Tancik, M., Chen, Q., Recht, B., Kanazawa, A. (2021). [https://arxiv.org/abs/2111.11215 "Plenoxels: Radiance Fields without Neural Networks."] arXiv:2111.11215, pp. 
1–25
* {{cite journal|last1=Perez|first1=CC|last2=Lauri|first2=A|last3=Symvoulidis|first3=P|last4=Cappetta|first4=M|last5=Erdmann|first5=A|last6=Westmeyer|first6=GG|display-authors=2|title=Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera.|journal=Journal of Biomedical Optics|date=September 2015|volume=20|issue=9|pages=096009|doi=10.1117/1.JBO.20.9.096009|pmid=26358822|bibcode=2015JBO....20i6009C|doi-access=free}}
* León, K., Galvis, L., and Arguello, H. (2016). [http://aprendeenlinea.udea.edu.co/revistas/index.php/ingenieria/article/view/24643 "Reconstruction of multispectral light field (5d plenoptic function) based on compressive sensing with colored coded apertures from 2D projections"] Revista Facultad de Ingeniería Universidad de Antioquia 80, pp. 131.

[[Category:Optics]]
[[Category:3D computer graphics]]
[[Category:3D display]]