== History ==

=== Mid-19th century ===
[[File:Gustave Le Gray - Brig upon the Water - Google Art Project.jpg|thumb|right|upright=1.2|An 1856 photo by [[Gustave Le Gray]]]]
The idea of using several exposures to adequately reproduce a too-extreme range of [[luminance]] was pioneered as early as the 1850s by [[Gustave Le Gray]] to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky and another with a longer exposure for the sea, and combined the two into a single positive print.<ref>{{cite web |url= http://www.getty.edu/art/exhibitions/le_gray |title=Gustave Le Gray, Photographer |date=July 9 – September 29, 2002 |access-date=September 14, 2008 |work=Getty.edu |publisher=[[J. Paul Getty Museum]]}}</ref>

=== Mid-20th century ===
{{external media |image1=[https://web.archive.org/web/20170315033441/http://www.cybergrain.com/tech/hdr/images1/eugene_smith.jpg Schweitzer at the Lamp], by [[W. Eugene Smith]]<ref>{{cite web |url=http://www.cybergrain.com/tech/hdr/ |title=The Future of Digital Imaging – High Dynamic Range Photography |first=Jon |last=Meyer |date=February 2004}}</ref><ref name="durand">{{cite web |url=http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/ |title=4.209: The Art and Science of Depiction |first1=Frédo |last1=Durand |first2=Julie |last2=Dorsey |author2-link=Julie Dorsey}} [http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/12_Contrast/contrast.html Limitations of the Medium: Compensation and accentuation – The Contrast is Limited], lecture of Monday, April 9, 2001, [http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/12_Contrast/contrast6.pdf slides 57–59]; image on slide 57, depiction of dodging and burning on slide 58</ref> }}
Manual tone mapping was accomplished by [[dodging and burning]] – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph ''Schweitzer at the Lamp'' by [[W. Eugene Smith]], from his 1954 [[photo essay]] ''A Man of Mercy'' on [[Albert Schweitzer]] and his humanitarian work in French Equatorial Africa. The print took five days to produce in order to reproduce the tonal range of the scene, which extends from a bright lamp (relative to the scene) to a dark shadow.<ref name="durand" />

[[Ansel Adams]] elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called ''The Print'', which prominently features dodging and burning in the context of his [[Zone System]].<ref>{{cite book |url=https://archive.org/details/The_Print |title=The Print |author=Adams, Ansel |author-link=Ansel Adams |date=1980 |isbn=0-8212-1526-4 |publisher=Little, Brown and Company |location=New York, New York |edition=3rd |series=The Ansel Adams photography series |volume=3}}</ref>

With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film.
Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.{{citation needed|date=November 2013}}

[[File:Wyckoff HDR Curve.tif|thumb|left|upright=2.4|Exposure/density characteristics of [[Charles Wyckoff|Wyckoff's]] extended exposure response film. Each curve has a [[sigmoid function|sigmoidal shape]], resembling a [[hyperbolic tangent]] or a [[logistic function]], characterized by an induction period (initiation), a quasi-linear propagation, and a saturation plateau ([[asymptote]]).]]
Color film capable of directly recording high-dynamic-range images was developed by [[Charles Wyckoff]] and [[EG&G]] "in the course of a contract with the [[United States Air Force|Department of the Air Force]]".<ref>{{cite patent |inventor1-last=Wyckoff |inventor1-first=Charles W. |inventor1-link=Charles Wyckoff |inventor2=EG&G Inc., assignee |inventor2-link=EG&G |fdate=1961-03-24 |pubdate=1969-09-17 |title=Silver Halide Photographic Film having Increased Exposure-response Characteristics |country=US |number=3450536 |url= http://www.google.com/patents?hl=en&lr=&vid=USPAT3450536&id=43RzAAAAEBAJ&oi=fnd&dq=%22Extended+exposure%22+Wyckoff&printsec=abstract#v=onepage&q=%22Extended%20exposure%22%20Wyckoff&f=false}}</ref> This XR film had three [[Photographic emulsion|emulsion]] layers: an upper layer with an [[Film speed#ASA|ASA]] speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to [[Color photography#"Modern" color film|color films]], and each layer produced a different color.<ref>{{cite journal |first1=Charles W. |last1=Wyckoff |author-link=Charles Wyckoff |title=Experimental extended exposure response film |journal=Society of Photographic Instrumentation Engineers Newsletter |date=June–July 1962 |pages=16–20}}</ref> The dynamic range of this extended-range film has been estimated as 1:10<sup>8</sup>.<ref>{{cite web |first1=Michael |last1=Goesele |display-authors=etal |title=High Dynamic Range Techniques in Graphics: from Acquisition to Display |url= http://www.mpi-inf.mpg.de/resources/tmo/EG05_HDRTutorial_Complete.pdf |work=Eurographics 2005 Tutorial T7 |publisher=Max Planck Institute for Informatics}}</ref> It has been used to photograph nuclear explosions,<ref>{{cite web |url= http://www.fas.org/irp/threat/mctl98-2/p2sec05.pdf |title=The Militarily Critical Technologies List |date=1998 |pages=II-5-100, II-5-107 |work=FAS.org |publisher=Intelligence Resource Program, [[Federation of American Scientists]] |access-date=June 12, 2020}}</ref> for astronomical photography,<ref>{{cite book |first1=Andrew T. |last1=Young |first2=Harold Jr. |last2=Boeschenstein |title=Isotherms in the Region of Proclus at a Phase Angle of 9.8 Degrees |series=Scientific Report series |volume=5 |publisher=College Observatory, Harvard University |location=Cambridge, Massachusetts |date=1964}}</ref> for spectrographic research,<ref>{{cite journal |first1=R. L. |last1=Bryant |first2=G. J. |last2=Troup |first3=R. G. |last3=Turner |title=The use of a high-intensity-range photographic film for recording extended diffraction patterns and for spectrographic work |journal=Journal of Scientific Instruments |volume=42 |issue=2 |date=1965 |page=116 |doi=10.1088/0950-7671/42/2/315 |bibcode=1965JScI...42..116B}}</ref> and for medical imaging.<ref>{{cite journal |first1=Leslie M. |last1=Eber |first2=Harvey M. |last2=Greenberg |first3=John M. |last3=Cooke |first4=Richard |last4=Gorlin |title=Dynamic Changes in Left Ventricular Free Wall Thickness in the Human Heart |journal=Circulation |volume=39 |date=1969 |issue=4 |pages=455–464 |doi=10.1161/01.CIR.39.4.455 |pmid=5778246 |doi-access=free}}</ref> Wyckoff's detailed pictures of nuclear explosions appeared on the cover of ''[[Life (magazine)|Life]]'' magazine in the mid-1950s.

=== Late 20th century ===
In 1986, Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image by interposing a matricial LCD screen in front of the camera's image sensor,<ref>{{Cite web |url= https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=19910924&CC=US&NR=5051770A&KC=A# |title=Image processing device for controlling the transfer function of an optical system |work=Worldwide.Espacenet.com |publisher=[[Espacenet]]}}</ref> increasing the sensor's dynamic range by five stops.

The concept of neighborhood tone mapping was applied to video cameras in 1988 by a group from the [[Technion]] in Israel, led by Oliver Hilsenrath and Yehoshua Y. Zeevi. Technion researchers filed for a patent on this concept in 1991,<ref>{{cite patent |country=US |status=patent |number=5144442 |title=Wide dynamic range camera |pubdate=1992-09-01 |fdate=1991-11-21 |inventor1-last=Ginosar |inventor1-first=Ran |inventor2-last=Hilsenrath |inventor2-first=Oliver |inventor3-last=Zeevi |inventor3-first=Yehoshua Y.}}</ref> and several related patents in 1992 and 1993.<ref name="Technion">{{cite web |last1=Ginosar |first1=Ran |last2=Zinaty |first2=Ofra |last3=Sorek |first3=Noam |last4=Genossar |first4=Tamar |last5=Zeevi |first5=Yehoshua Y. |last6=Kligler |first6=Daniel J. |last7=Hilsenrath |first7=Oliver |title=Adaptive Sensitivity |url= http://visl.technion.ac.il/research/isight/AS/ |date=1993 |work=VISL.Technion.ac.il |publisher=Vision and Image Sciences Laboratory, [[Technion]], [[Israel Institute of Technology]] |access-date=January 27, 2019 |archive-url= https://web.archive.org/web/20140907142738/http://visl.technion.ac.il/research/isight/AS/ |archive-date=September 7, 2014 |url-status=dead}}</ref>

In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured either successively by a single sensor<ref name="espacenet1">{{Cite web |url= https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=19970610&CC=US&NR=5638119A&KC=A# |title=Device for increasing the dynamic range of a camera |work=Worldwide.Espacenet.com |publisher=Espacenet}}</ref> or simultaneously<ref>{{Cite web |url= https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=19970610&CC=US&NR=5638119A&KC=A# |title=Camera with very wide dynamic range |work=Worldwide.Espacenet.com |publisher=Espacenet}}</ref> by two sensors of the camera. This process, known as [[bracketing]], was here applied to a video stream. In 1991, Hymatom, a licensee of Georges Cornuéjols, introduced the first commercial video camera that captured multiple differently exposed images in real time and produced an HDR video image.
Also in 1991, Georges Cornuéjols introduced the HDR+ image principle of non-linear accumulation of images to increase the sensitivity of the camera:<ref name="espacenet1" /> in low-light environments, several successive images are accumulated, thus increasing the [[Signal-to-noise ratio (imaging)|signal-to-noise ratio]]. In 1993, the Technion introduced another commercial medical camera producing an HDR video image.<ref name="Technion" />

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then [[tone mapping]] the result. Global HDR was first introduced in 1993,<ref name="mann1993">{{cite conference |title=Compositing Multiple Pictures of the Same Scene |first=Steve |last=Mann |publisher=Society for Imaging Science and Technology |isbn=0892081716 |conference=46th Annual Conference |location=Cambridge, Massachusetts |date=May 9–14, 1993}}</ref> resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by [[Steve Mann (inventor)|Steve Mann]] and [[Rosalind Picard]].<ref name="mann1995">{{cite web |url= http://wearcam.org/is_t95_myversion.pdf |title=On Being 'Undigital' with Digital Cameras: Extending Dynamic Range by Combining Differently Exposed Pictures |first1=S. |last1=Mann |first2=R. W. |last2=Picard}}</ref>

On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (high dynamic range + graphic) images, of [[STS-95]] on the launch pad at [[NASA]]'s [[Kennedy Space Center]]. It consisted of four film images of the [[Space Shuttle|space shuttle]] at night that were [[Digital compositing|digitally composited]] with additional digital graphic elements. The image was first exhibited at [[NASA Headquarters]] Great Hall, Washington, D.C., in 1999 and then published in ''Hasselblad Forum''.<ref>{{cite book |work=Hasselblad Forum |date=1999 |volume=35 |issue=3 |issn=0282-5449 |title=Ben Sarao, Trenton, NJ |first=Ben M. |last=Sarao |editor-first=S. |editor-last=Gunnarsson}}<!--Someone put a 1993 date on that, but that is not possible for a 1998 image.--></ref>

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. [[Steve Mann (inventor)|Steve Mann]] developed and patented the global-HDR method for producing digital images having extended dynamic range at the [[MIT Media Lab]].<ref name="MannPatent">{{cite patent |country=US |number=5828793 |status=application |title=Method and apparatus for producing digital images having extended dynamic ranges |pubdate=1998-10-27 |fdate=1996-05-06 |inventor-first=Steve |inventor-last=Mann |inventor-link=Steve Mann (inventor)}}</ref> Mann's method involved a two-step procedure: first, generate one floating-point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); second, convert this image array, using local neighborhood processing (tone remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a ''lightspace image'', ''lightspace picture'', or ''radiance map''.
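This two-step idea can be illustrated with a minimal Python sketch. The function names, the hat weighting, and the simple global tone curve (standing in for the local tone-remapping step) are illustrative assumptions rather than Mann's patented formulation; a linear sensor response and known exposure times are also assumed.

<syntaxhighlight lang="python">
import numpy as np

def merge_to_radiance_map(frames, exposure_times):
    """Global-only merge of differently exposed frames into a floating-point radiance map.

    frames: list of arrays with pixel values scaled to [0, 1].
    exposure_times: one exposure time (in seconds) per frame.
    Assumes a linear sensor response; a real camera's non-linear response
    curve would have to be recovered and inverted before such a merge.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times):
        f = frame.astype(np.float64)
        # Hat weighting: trust mid-tones, distrust clipped shadows and highlights.
        w = 1.0 - np.abs(2.0 * f - 1.0)
        acc += w * (f / t)            # exposure-normalized radiance estimate
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)

def tone_map(radiance, key=0.5):
    """Simple global (Reinhard-style) operator mapping the radiance map into [0, 1]."""
    scaled = key * radiance / (np.mean(radiance) + 1e-6)
    return scaled / (1.0 + scaled)

# Toy example: three synthetic exposures of the same scene, one stop apart.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 4.0, size=(64, 64))             # "true" scene radiance
times = [0.25, 0.5, 1.0]
frames = [np.clip(scene * t, 0.0, 1.0) for t in times]   # clipped low-dynamic-range captures
display_image = tone_map(merge_to_radiance_map(frames, times))
</syntaxhighlight>

The hat weighting simply downweights clipped shadows and highlights in each frame, so every pixel of the merged radiance map is dominated by the exposures that recorded it well.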
Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for [[computer vision]] and other [[Digital image processing|image processing]] operations.<ref name="MannPatent" />

=== 21st century ===
In February 2001, the Dynamic Ranger technique was demonstrated, using multiple photos with different exposure levels to achieve a dynamic range similar to that of the naked eye.<ref>{{Cite web |url=http://www.digitalsecrets.net/secrets/DynamicRanger.html |title=Dynamic Ranger}}</ref>

In the early 2000s, several scholarly research efforts used consumer-grade sensors and cameras.<ref>{{cite book|last1=Kang|first1=Sing Bing|author1-link=Sing Bing Kang|last2=Uyttendaele|first2=Matthew|last3=Winder|first3=Simon|last4=Szeliski|first4=Richard|title=ACM SIGGRAPH 2003 Papers |chapter=High dynamic range video |year=2003|isbn=978-1-58113-709-5|at=ch. High dynamic range video (pages 319–325)|doi=10.1145/1201775.882270|s2cid=13946222}}</ref> A few companies such as [[Red Digital Cinema|RED]] and [[Arri]] have been developing digital sensors capable of a higher dynamic range.<ref>{{Cite web|title=RED Digital Cinema | 8K & 5K Professional Cameras|url=https://www.red.com/|url-status=live|archive-url=https://web.archive.org/web/20160727044301/http://www.red.com/|archive-date=27 July 2016|access-date=27 July 2016}}</ref><ref>{{Cite web|title=ARRI | Inspiring your Vision|url=http://www.arridigital.com|url-status=live|archive-url=https://web.archive.org/web/20110908052342/http://www.arridigital.com/|archive-date=8 September 2011|access-date=23 January 2021}}</ref> The RED EPIC-X can capture time-sequential HDRx images<ref name=":0">{{Cite web|date=2016-10-12|title=Sony IMX378: Comprehensive Breakdown of the Google Pixel's Sensor and its Features|url=https://www.xda-developers.com/sony-imx378-comprehensive-breakdown-of-the-google-pixels-sensor-and-its-features/|url-status=live|archive-url=https://web.archive.org/web/20190401203436/https://www.xda-developers.com/sony-imx378-comprehensive-breakdown-of-the-google-pixels-sensor-and-its-features/|archive-date=2019-04-01|access-date=2016-10-17|website=xda-developers|language=en-US}}</ref> with a user-selectable 1–3 stops of additional highlight latitude in the "x" channel. The "x" channel can be merged with the normal channel in post-production software. The [[Arri Alexa]] camera uses a dual-gain architecture to generate an HDR image from two exposures captured at the same time.<ref name=":1">{{Cite web|title=ARRI Group: ALEXA's Sensor|url=https://www.arri.com/camera/alexa/technology/arri_imaging_technology/alexas_sensor/|url-status=live|archive-url=https://web.archive.org/web/20160801182433/http://www.arri.com/camera/alexa/technology/arri_imaging_technology/alexas_sensor/|archive-date=1 August 2016|access-date=2 July 2016|website=www.arri.com}}</ref>

With the advent of low-cost consumer digital cameras, many amateurs began posting tone-mapped HDR [[Time-lapse photography|time-lapse]] videos on the Internet, essentially a sequence of still photographs in quick succession.
In 2010, the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a [[beam splitter]] and consumer-grade HD video cameras.<ref>{{cite web|title=HDR video accomplished using dual 5D Mark IIs, is exactly what it sounds like|url=https://www.engadget.com/2010/09/09/hdr-video-accomplished-using-dual-5d-mark-iis-is-exactly-what-i/|url-status=live|archive-url=https://web.archive.org/web/20170614065214/https://www.engadget.com/2010/09/09/hdr-video-accomplished-using-dual-5d-mark-iis-is-exactly-what-i/|archive-date=14 June 2017|access-date=29 August 2017|work=Engadget|date=9 September 2010 }}</ref> Similar methods had been described in the academic literature in 2001 and 2007.<ref>{{cite web|title=A Real Time High Dynamic Range Light Probe|url=http://gl.ict.usc.edu/Research/rtlp/|url-status=live|archive-url=https://web.archive.org/web/20160617205759/http://gl.ict.usc.edu/Research/rtlp/|archive-date=17 June 2016|access-date=27 July 2016}}</ref><ref>{{cite journal|last1=McGuire|first1=Morgan|last2=Matusik|first2=Wojciech|last3=Pfister|first3=Hanspeter|last4=Chen|first4=Billy|last5=Hughes|first5=John|last6=Nayar|first6=Shree|year=2007|title=Optical Splitting Trees for High-Precision Monocular Imaging|url=http://nrs.harvard.edu/urn-3:HUL.InstRepos:4101892|url-status=live|journal=IEEE Computer Graphics and Applications|volume=27|issue=2|pages=32–42|doi=10.1109/MCG.2007.45|pmid=17388201|archive-url=https://web.archive.org/web/20210123115720/https://dash.harvard.edu/handle/1/4101892|archive-date=23 January 2021|access-date=14 July 2019|s2cid=3055332}}</ref>

In 2005, [[Adobe Systems]] introduced several new features in [[Photoshop CS2]], including ''Merge to HDR'', 32-bit floating point image support, and HDR tone mapping.<ref>{{cite web |url= http://luminous-landscape.com/tutorials/hdr.shtml |archive-url= https://web.archive.org/web/20100102063950/http://luminous-landscape.com/tutorials/hdr.shtml |url-status=dead |archive-date=January 2, 2010 |title=Merge to HDR in Photoshop CS2: A First Look |first=Michael |last=Reichmann |work=The Luminous Landscape |date=2005 |access-date=August 27, 2009}}</ref>

On June 30, 2016, [[Microsoft]] added support for the digital compositing of HDR images to [[Windows 10]] using the [[Universal Windows Platform]].<ref>{{cite news |title=Microsoft talks up the advantages of HDR photography and videography in Universal Windows Platform apps |first=Kareem |last=Anderson |work=OnMSFT.com |url= https://www.onmsft.com/news/microsoft-talks-advantages-hdr-photography-videography-universal-windows-platform-apps |date=June 30, 2016 |access-date=June 12, 2020}}</ref>