{{Short description|Technique to capture HDR images and videos}}
{{Multiple issues|
{{Technical|date=February 2020}}
{{Update|date=June 2020|reason=Most of the material and sourcing in this dates to around the 2008–2012 period.}}
}}
[[File:St Kentigerns Church HDR (8226826999).jpg|thumb|upright=1.2|right|Tone mapped high-dynamic-range (HDR) image of St. Kentigern's Church in [[Blackpool]], Lancashire, England]]
In [[photography]] and [[videography]], '''multi-exposure HDR capture''' is a technique that creates [[high dynamic range]] (HDR) images (or extended [[dynamic range]] images) by taking and combining multiple images of the same subject matter captured at different [[Exposure (photography)|exposures]]. Combining multiple images in this way results in an image with a greater dynamic range than would be possible with a single image. The technique can also be used to capture video by taking and combining multiple exposures for each frame of the video. The term "HDR" is frequently used to refer to the process of creating HDR images from multiple exposures. Many smartphones have an automated HDR feature that relies on [[computational imaging]] techniques to capture and combine multiple exposures.

A single image captured by a camera provides a finite range of [[Luminosity function|luminosity]] inherent to the medium, whether it is a digital sensor or film. Outside this range, tonal information is lost and no features are visible; tones that exceed the range are "burned out" and appear pure white in the brighter areas, while tones that fall below the range are "crushed" and appear pure black in the darker areas. The ratio between the maximum and the minimum tonal values that can be captured in a single image is known as the [[dynamic range]]. In photography, dynamic range is measured in [[exposure value]] (EV) differences, also known as ''stops''.

The human eye's response to light is non-linear: halving the light level does not halve the perceived brightness of a space; it makes it look only slightly dimmer. For most illumination levels, the response is [[Weber–Fechner law#Vision|approximately logarithmic]].<ref name=bhatia> {{cite book | author = V. B. Bhatia | title = Astronomy and astrophysics with elements of cosmology | publisher = CRC Press | year = 2001 | isbn = 978-0-8493-1013-3 | page = 20 | url = https://books.google.com/books?id=k4XRQpKV9kgC&pg=PA20 }}</ref><ref> {{cite journal |doi=10.1007/s00245-005-0850-1 |author1=Jianhong (Jackie) Shen |author2=Yoon-Mo Jung |title=Weberized Mumford-Shah model with Bose-Einstein photon noise |journal=Appl. Math. Optim. |volume=53 |issue=3 |pages=331–358 |year=2006 |citeseerx=10.1.1.129.1834 |s2cid=18794171 }} </ref> Human eyes [[Pupillary light reflex|adapt fairly rapidly to changes in light]] levels. HDR can thus produce images that look more like what a human sees when looking at the subject. This technique can be applied to produce images that preserve local contrast for a natural rendering, or exaggerate local contrast for artistic effect.
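Dynamic range expressed in stops and the corresponding contrast ratio are related by a power of two: a range of ''n'' stops corresponds to a contrast ratio of 2<sup>''n''</sup>:1, which is how the contrast ratios in the table below are derived. A short illustrative sketch of the conversion (the function names are arbitrary):

<syntaxhighlight lang="python">
import math

def stops_to_contrast_ratio(stops: float) -> float:
    # Each additional stop doubles the amount of light, so ratio = 2**stops.
    return 2.0 ** stops

def contrast_ratio_to_stops(ratio: float) -> float:
    # Inverse relationship: stops = log2(max luminance / min luminance).
    return math.log2(ratio)

print(stops_to_contrast_ratio(14))     # 16384.0, i.e. roughly a 16000:1 contrast ratio
print(contrast_ratio_to_stops(10000))  # about 13.3 stops
</syntaxhighlight>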
HDR is useful for recording many real-world scenes containing a wider range of brightness than can be captured directly, typically both bright, direct sunlight and deep shadows.<ref name="mann1993" /><ref name="mann1995" /><ref>{{cite book |last1=Reinhard |first1=Erik |last2=Ward |first2=Greg |last3=Pattanaik |first3=Sumanta |last4=Debevec |first4=Paul |title=High Dynamic Range Imaging: Acquisition, Display, and Image-based Lighting |date=2005 |quote=Images that store a depiction of the scene in a range of intensities commensurate with the scene are what we call HDR, or 'radiance maps'. On the other hand, we call images suitable for display with current display technology LDR. |publisher=[[Elsevier]] / Morgan Kaufmann |location=Amsterdam |page=7 |isbn=978-0-12-585263-0}}</ref><ref>{{cite book |last1=Banterle |first1=Francesco |last2=Artusi |first2=Alessandro |last3=Debattista |first3=Kurt |last4=Chalmers |first4=Alan |title=Advanced High Dynamic Range Imaging: Theory and Practice |date=2011 |publisher=AK Peters / CRC Press |isbn=978-156881-719-4}}</ref> Due to the limitations of printing and [[display contrast]], the extended dynamic range of HDR images must be compressed to the range that can be displayed. The method of rendering a high dynamic range image to a standard monitor or printing device is called [[tone mapping]]; it reduces the overall contrast of an HDR image to permit display on devices or prints with lower dynamic range.

== Benefits ==
One aim of HDR is to present a similar range of [[luminance]] to that experienced through the human [[visual system]]. The human eye, through non-linear response, [[Adaptation (eye)|adaptation]] of the [[Iris (anatomy)|iris]], and other methods, adjusts constantly to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

{| class="wikitable unsortable floatright"
|+ Dynamic ranges of common devices
|-
! Device
! Stops
! Contrast ratio
|- style="background-color:#DDF;"
| colspan="3" | Single exposure
|-
| Human eye: close objects
| {{0}}7.5
| {{0|00}}150...200
|-
| Human eye: 4° angular separation
| 13
| {{0}}8000...10000
|-
| Human eye (static)
| 10...14 <ref>{{cite web |url= http://www.cambridgeincolour.com/tutorials/dynamic-range.htm |title=Dynamic Range in Digital Photography |work=Cambridge in Colour |editor-first=Sean |editor-last=McHugh |date=2005 |access-date=December 30, 2010}}</ref>
| {{0}}1000...15000
|-
| Negative film ([[List of motion picture film stocks#VISION3 color negative (ECN-2 process 2007–present)|Kodak VISION3]])
| 13 <ref>{{cite web |url= http://motion.kodak.com/motion/About/The_Storyboard/17788/index.htm |title=Dynamic Range}}{{dead link|date=November 2017 |bot=InternetArchiveBot |fix-attempted=yes}}<!--Tried manually searching around for it, too. No dice.--></ref>
| {{0}}8000
|-
| 1/1.7" camera ([[Nikon Coolpix]] P340)
| 11.9 <ref name="DXOMark" />
| {{0}}3800
|-
| 1" camera ([[Canon PowerShot G7 X]])
| 12.7 <ref name="DXOMark" />
| {{0}}6600
|-
| Four-thirds DSLR camera ([[Panasonic Lumix DC-GH5]])
| 13.0 <ref name="DXOMark" />
| {{0}}8200
|-
| APS DSLR camera ([[Nikon D7200]])
| 14.6 <ref name="DXOMark">{{cite web |url= http://www.dxomark.com/Cameras/Camera-Sensor-Ratings/%28type%29/usecase_landscape |title=Camera Sensor Ratings |work=DxOMark |date=2015 |publisher=[[DxO Labs]] |access-date=February 2, 2015}}</ref>
| 24800
|-
| Full-frame DSLR camera ([[Nikon D810]])
| 14.8 <ref name="DXOMark" />
| 28500 <!-- Calculated from formula: Contrast Ratio = 2^(Dynamic Range) -->
|}

Most cameras are limited to a much narrower range of exposure values within a single image, due to the dynamic range of the capturing medium. With a limited dynamic range, tonal differences can be captured only within a certain range of brightness. Outside of this range, no details can be distinguished: when the tone being captured exceeds the range in bright areas, these tones appear as pure white, and when the tone being captured does not meet the minimum threshold, these tones appear as pure black. Images captured with non-HDR cameras that have a limited exposure range (low dynamic range, LDR) may lose detail in highlights or [[Shadow#Photography|shadows]]. Modern [[CMOS]] [[image sensor]]s have improved dynamic range and can often capture a wider range of tones in a single exposure,<ref name=":2" /> reducing the need to perform multi-exposure HDR.

Color film negatives and slides consist of multiple film layers that respond to light differently. Original film (especially negatives, as opposed to transparencies or slides) features a very high dynamic range (on the order of 8 for negatives and 4 to 4.5 for positive transparencies).

Multi-exposure HDR is used in photography and also in extreme dynamic range applications such as welding or automotive work. In security cameras, the term "wide dynamic range" is used instead of HDR.

=== {{anchor|Ghosting}}Limitations ===
[[File:Hdr capture golf swing ghost effect.jpg|thumb|upright=1.8|This composited multi-exposure HDR capture shows the correct exposure for both the shaded grass and the bright sky, but the fast-moving golf swing led to a "ghost" club.]]
[[File:HDR ghosting from motion - playground - HDR on.jpg|thumb|HDR ghosting from a spinning carousel]]
A fast-moving subject, or camera movement between the multiple exposures, will generate a "ghost" effect or a staggered-blur strobe effect due to the merged images not being identical. Unless the subject is static and the camera is mounted on a tripod, there may be a tradeoff between extended dynamic range and sharpness. Sudden changes in the lighting conditions (such as strobed LED light) can also interfere with the desired results, by producing one or more HDR layers that do not have the luminosity expected by an automated HDR system, though one might still be able to produce a reasonable HDR image manually in software by rearranging the image layers to merge in order of their actual luminosity. Because of the nonlinearity of some sensors, image artifacts can be common.
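One simple way to limit ghosting during merging is to scale each exposure to a common radiance scale, compare it against a chosen reference exposure, and exclude pixels that disagree with the reference. The sketch below is purely illustrative and not the method of any particular camera or software; the function name, reference choice, and threshold are assumptions:

<syntaxhighlight lang="python">
import numpy as np

def deghost_weights(frames, exposure_times, ref_index=1, threshold=0.15):
    """Binary merge weights that suppress "ghost" pixels.

    frames         : list of 2-D float arrays with linear (not gamma-encoded) values
    exposure_times : matching list of exposure times in seconds
    Each frame is scaled to relative radiance (value / exposure time) and compared
    with a reference exposure; pixels deviating from the reference by more than
    `threshold` (relative) are assumed to contain motion and are excluded.
    """
    ref = frames[ref_index] / exposure_times[ref_index]
    weights = []
    for frame, t in zip(frames, exposure_times):
        radiance = frame / t
        rel_diff = np.abs(radiance - ref) / (ref + 1e-6)
        weights.append((rel_diff < threshold).astype(float))
    return weights
</syntaxhighlight>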
Camera characteristics such as [[Gamma correction|gamma curves]], sensor resolution, noise, [[Photometry (optics)|photometric]] calibration and [[color calibration]] affect resulting high-dynamic-range images.<ref>{{cite book|last1=Sá|first1=Asla M.|url=https://books.google.com/books?id=mDsFgWPhWWYC|title=High Dynamic Range|last2=Carvalho|first2=Paulo Cezar|last3=Velho|first3=Luiz|date=2007|publisher=Focal Press|isbn=978-1-59829-562-7|page=11}}</ref>

== Process ==
High-dynamic-range photographs are generally composites of multiple standard dynamic range images, often captured using [[Bracketing#Exposure bracketing|exposure bracketing]]. Afterwards, [[photo manipulation]] software [[Exposure fusion|merges the input files]] into a single HDR image, which is then also [[tone mapping|tone mapped]] in accordance with the limitations of the planned output or display.

=== Capturing multiple images (exposure bracketing) ===
[[File:Einfluss der Zeit auf die Belichtung.jpg|thumb|right|Exposure bracketing by varying the [[shutter speed]] from {{frac|500}} to 30 seconds]]
{{Main|Bracketing#Exposure bracketing}}
Any camera that allows manual exposure control can perform multi-exposure HDR image capture, although one equipped with [[Autobracketing|automatic exposure bracketing (AEB)]] facilitates the process. Some cameras have an AEB feature that spans a far greater dynamic range than others, from ±0.6 EV in simpler cameras to ±18 EV in top professional cameras, {{as of|lc=y|2020|post=.}}<ref>{{cite web |title=Auto Exposure Bracketing Settings by Camera Model |url= http://hdr-photography.com/aeb.html |work=HDR Photography Resources |date=February 28, 2016<!--Last-updated date, bottom of page.--> |access-date=June 12, 2020}}</ref>

The exposure value (EV) refers to the amount of light applied to the light-sensitive detector, whether film or a digital sensor such as a [[Charge-coupled device|CCD]]. An increase or decrease of one stop is defined as a doubling or halving of the amount of light captured. Revealing detail in the darkest of shadows requires an increased EV, while preserving detail in very bright situations requires a very low EV. EV is controlled using one of two photographic controls: varying either the size of the [[aperture]] or the exposure time. A set of images with multiple EVs intended for HDR processing should be captured only by altering the exposure time; altering the aperture size would also affect the [[depth of field]], and so the resulting images would be quite different, preventing their final combination into a single HDR image.

Multi-exposure HDR photography generally is limited to still scenes because any movement between successive images will impede or prevent success in combining them afterward. Also, because the photographer must capture three or more images to obtain the desired [[luminance]] range, taking such a full set of images takes extra time. Photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is advised to minimize framing differences between exposures.
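Because each stop doubles or halves the captured light, a set of bracketed shutter speeds follows directly from the base exposure. An illustrative sketch (function and parameter names are arbitrary):

<syntaxhighlight lang="python">
def bracketed_shutter_speeds(base_time_s, ev_steps=(-2, 0, 2)):
    """Shutter speeds for an exposure bracket captured by varying time only.

    The aperture is left unchanged so the depth of field stays constant;
    each +1 EV step doubles the exposure time, each -1 EV step halves it.
    """
    return [base_time_s * (2.0 ** ev) for ev in ev_steps]

# A 1/125 s base exposure bracketed at -2, 0 and +2 EV:
print(bracketed_shutter_speeds(1 / 125))  # [0.002, 0.008, 0.032]
</syntaxhighlight>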
=== Merging the images into an HDR image ===
[[File:Dynamic Range Increase.jpg|thumb|Highlight areas from the window (upper right) are extracted from an underexposed image (upper left) and composited with a scene-averaged exposure (bottom left) to produce an HDR image (bottom right).]]
{{See also|Exposure fusion}}
Tonal information and details from shadow areas can be recovered from images that are deliberately overexposed (i.e., with positive EV compared to the correct scene exposure), while similar tonal information from highlight areas can be recovered from images that are deliberately underexposed (negative EV). The process of selecting and extracting shadow and highlight information from these over/underexposed images and then combining them with image(s) that are exposed correctly for the overall scene is known as [[exposure fusion]]. Exposure fusion can be performed manually, relying on the HDR operator's judgment, experience, and training, but usually fusion is performed automatically by software.

=== Storing ===
{{See also|High dynamic range#Storage}}
Information stored in high-dynamic-range images typically corresponds to the physical values of [[luminance]] or [[radiance]] that can be observed in the real world. This is different from traditional [[digital images]], which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called ''scene-referred'', in contrast to traditional digital images, which are ''device-referred'' or ''output-referred''. Furthermore, traditional images are usually encoded for the human [[visual system]] (maximizing the visual information stored in the fixed number of bits), which is usually called ''gamma encoding'' or ''[[gamma correction]]''. The values stored for HDR images are often gamma compressed using mathematical functions such as [[power law]]s, [[logarithm]]s, or [[floating point]] linear values, since [[Fixed-point arithmetic|fixed-point]] linear encodings are increasingly inefficient over higher dynamic ranges.<ref name="gregward">{{cite web|last=Ward|first=Greg|title=High Dynamic Range Image Encodings|url=http://www.anyhere.com/gward/hdrenc/hdr_encodings.html|work=Anyhere.com|publisher=Anyhere Software}}</ref><ref>{{cite web|title=The Radiance Picture File Format|url=http://radsite.lbl.gov/radiance/refer/Notes/picture_format.html|url-status=dead|archive-url=https://web.archive.org/web/20190128023610/http://radsite.lbl.gov/radiance/refer/Notes/picture_format.html|archive-date=January 28, 2019|access-date=June 12, 2020|work=RadSite.LBL.gov|publisher=[[Lawrence Berkeley National Laboratory]]}}</ref><ref>{{cite book|last=Fernando|first=Randima|url=http://http.developer.nvidia.com/GPUGems/gpugems_ch26.html|title=GPU Gems|date=2004|publisher=Addison-Wesley|isbn=0-321-22832-4|location=Boston|chapter=26.5 Linear Pixel Values|archive-url=https://web.archive.org/web/20100412001848/http://http.developer.nvidia.com/GPUGems/gpugems_ch26.html|archive-date=April 12, 2010|url-status=dead|via=Developer.Nvidia.com}}</ref>

Unlike traditional images, HDR images often do not use fixed ranges per color [[Channel (digital image)|channel]], in order to represent many more colors over a much wider dynamic range. For that purpose, they do not use integer values to represent the single color channels (e.g., 0–255 in an 8 bit per pixel interval for red, green and blue) but instead use a floating point representation.
Common values are 16-bit ([[half precision]]) or 32-bit [[Floating point|floating-point]] numbers to represent HDR pixels. However, when the appropriate [[transfer function]] is used, HDR pixels for some applications can be represented with a [[color depth]] that has as few as 10 to 12 bits ({{#expr:2^10}} to {{#expr:2^12}} values) for luminance and 8 bits ({{#expr:2^8}} values) for [[chrominance]] without introducing any visible quantization [[Visual artifact|artifact]]s.<ref name="gregward" /><ref>{{cite web|last1=Mantiuk|first1=Rafal|last2=Krawczyk|first2=Grzegorz|last3=Myszkowski|first3=Karol|last4=Seidel|first4=Hans-Peter|title=Perception-motivated High Dynamic Range Video Encoding|url=http://resources.mpi-inf.mpg.de/hdrvideo/|work=Resources.MPI-Inf.MPG.de|publisher=[[Max Planck Institute for Informatics]]}}</ref>

=== Tone mapping ===
{{Main article|Tone mapping}}
Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDR files by the same software package. Tone mapping is often needed because the dynamic range that can be displayed is often lower than the dynamic range of the captured or processed image.<ref name=":2">{{cite book|last=Darmont|first=Arnaud|url=http://spie.org/x648.html?product_id=903927|title=High Dynamic Range Imaging: Sensors and Architectures|date=2012|publisher=SPIE press|isbn=978-0-81948-830-5|edition=First}}</ref> [[High-dynamic-range video|HDR displays]] can display images at a higher dynamic range than [[Standard-dynamic-range video|SDR displays]], reducing the need for tone mapping.

=== Types of HDR ===
HDR can be done via several methods:
* DOL: Digital overlap<ref name=":0" />
* BME: Binned multiplexed exposure<ref name=":0" />
* SME: Spatially multiplexed exposure<ref name=":0" />
* QBC: Quad Bayer Coding<ref>{{Cite web |author=ccs_hello |date=25 October 2017 |title=SONY Exmor HDR sensor's terms: DOL, BME, SME, QBC – Electronically Assisted Astronomy (EAA) – Cloudy Nights |url=https://www.cloudynights.com/topic/596377-sony-exmor-hdr-sensors-terms-dol-bme-sme-qbc/ |access-date=6 April 2022 |work=Cloudy Nights}}</ref>{{Unreliable source?|date=April 2022}}

=== Examples ===
This is an example of four standard dynamic range images that are combined to produce three resulting [[Tone mapping|tone mapped]] images:
<gallery heights="120" widths="160" mode="nolines" perrow="6" caption="Exposed images:">
Image:StLouisArchMultExpEV-4.72.JPG|–4 stops
Image:StLouisArchMultExpEV-1.82.JPG|–2 stops
Image:StLouisArchMultExpEV+1.51.JPG|+2 stops
Image:StLouisArchMultExpEV+4.09.JPG|+4 stops
</gallery>
<gallery heights="120" widths="160" mode="nolines" perrow="6" caption="Results after processing:">
File:StLouisArchMultExpCDR.jpg|Simple contrast reduction
File:StLouisArchMultExpToneMapped.jpg|Local tone mapping
File:StLouisArchMultExpEV SNS-HDR.jpg|alt=Natural tone mapping|Natural tone mapping
</gallery>
This is an example of a scene with a very wide dynamic range:
<gallery heights="120" widths="160" mode="nolines" perrow="6" caption="Exposed images:">
Image:HDRI Sample Scene Window - 01.jpg|–6 stops
Image:HDRI Sample Scene Window - 02.jpg|–5 stops
Image:HDRI Sample Scene Window - 03.jpg|–4 stops
Image:HDRI Sample Scene Window - 04.jpg|–3 stops
Image:HDRI Sample Scene Window - 05.jpg|–2 stops
Image:HDRI Sample Scene Window - 06.jpg|–1 stop
Image:HDRI Sample Scene Window - 07.jpg|{{0}}0 stops
Image:HDRI Sample Scene Window - 08.jpg|+1 stop
Image:HDRI Sample Scene Window - 09.jpg|+2 stops
Image:HDRI Sample Scene Window - 10.jpg|+3 stops
Image:HDRI Sample Scene Window - 11.jpg|+4 stops
Image:HDRI Sample Scene Window - 12.jpg|+5 stops
</gallery>
<gallery heights="120" widths="160" mode="nolines" perrow="6" caption="Results after processing:">
Image:HDRI Sample Scene Window.jpg|Natural tone mapping
</gallery>

==Devices==

=== Post-capture software ===
Several software applications are available on the PC, Mac, and Linux platforms for producing HDR files and tone mapped images.<ref>{{cite news |url=https://www.digitalcameraworld.com/buying-guides/best-hdr-software |title=The best HDR software in 2022: produce super-realistic high dynamic range images |author=Parnell-Brookes, Jason |date=December 28, 2021 |work=Digital Camera World |access-date=15 December 2022}}</ref> Notable titles include:
{{Div col|colwidth=14.5em}}
* [[Adobe Photoshop]]
* [[Affinity Photo]]
* [[Aurora HDR]]
* [[Dynamic Photo HDR]]
* [[EasyHDR]]
* [[GIMP]]
* [[HDR PhotoStudio]]
* [[Luminance HDR]]
* [[Nik Collection]] HDR Efex Pro
* [[Oloneo PhotoEngine]]
* [[Photomatix Pro]]
* [[PTGui]]
* SNS-HDR
{{Div col end}}

=== Photography ===
{{See also|Computational photography}}
Several camera manufacturers offer built-in multi-exposure HDR features. For example, the [[Pentax K-7]] DSLR has an HDR mode that makes 3 or 5 exposures and outputs (only) a tone mapped HDR image in a JPEG file.<ref>{{cite web|last=Howard|first=Jack|date=May 20, 2009|title=The Pentax K-7: The Era of In-camera High Dynamic Range Imaging Has Arrived!|url=http://www.adorama.com/alc/0011608/blogarticle/The-Pentax-K-7-The-era-of-in-camera-High-Dynamic-Range-Imaging-has-arrived|url-status=dead|archive-url=https://web.archive.org/web/20141223124601/http://www.adorama.com/alc/0011608/blogarticle/The-Pentax-K-7-The-era-of-in-camera-High-Dynamic-Range-Imaging-has-arrived|archive-date=December 23, 2014|access-date=18 August 2009|work=Adorama Learning Center|publisher=[[Adorama]]}}</ref> The [[Canon PowerShot G12]], [[Canon PowerShot S95]], and [[Canon PowerShot S100]] offer similar features in a smaller format.<ref>{{cite web|last=Mokey|first=Nick|date=September 14, 2010|title=Canon PowerShot G12 picks up HD video recording, built-in HDR|url=http://www.digitaltrends.com/photography/cameras/canon-powershot-g12-picks-up-hd-video-recording-built-in-hdr/?news=123|access-date=June 12, 2020|work=[[Digital Trends]]}}</ref> Nikon's approach, called "Active D-Lighting", applies exposure compensation and tone mapping to the image as it comes from the sensor, with the emphasis on creating a realistic effect.<ref>{{cite web|last=Heiner|first=Steve|date=2017|title=Intermediate: Balancing Photo Exposures with Active D-lighting|url=https://www.nikonusa.com/en/learn-and-explore/a/ideas-and-inspiration/balancing-photo-exposures-with-nikons-active-d-lighting.html|access-date=August 2, 2017|work=Nikon Learn and Explore|publisher=[[Nikon]]|department="Ideas and Inspiration" section}}</ref>

Some [[smartphone]]s provide HDR modes for their cameras, and most [[mobile platform]]s have apps that provide multi-exposure HDR picture taking.<ref>[[Android (operating system)|Android]] examples: {{cite web|title=Apps: HDR mode|url=https://play.google.com/store/search?q=hdr%20mode&c=apps|access-date=June 12, 2020|work=Google Play}}</ref> Google released an HDR+ mode for the [[Nexus 5]] and [[Nexus 6]] smartphones in 2014, which automatically captures a series of images and combines them into a single still image, as detailed by [[Marc Levoy]]. Unlike traditional HDR, Levoy's implementation of HDR+ captures multiple images underexposed by using a short shutter speed, which are then aligned and averaged per pixel, improving dynamic range and reducing noise. Selecting the sharpest image as the baseline for alignment reduces the effect of camera shake.<ref>{{cite web |url=https://ai.googleblog.com/2014/10/hdr-low-light-and-high-dynamic-range.html |title=HDR+: Low Light and High Dynamic Range photography in the Google Camera App |author=Levoy, Marc |date=October 27, 2014 |website=Google Research |access-date=14 December 2022}}</ref> Some of the sensors on modern phones and cameras may combine two images on-chip so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.{{Citation needed|date=November 2017}}

=== Videography ===
{{distinguish|text = the capture of video inside an [[High-dynamic-range video|HDR format]] in order to view them on an [[High-dynamic-range video|HDR display]]}}
[[File:Hdr time lapse montage.ogv|thumb|Example of HDR [[Time-lapse photography|time-lapse]] video]]
Although not as established as for still photography, it is also possible to capture and combine multiple images for each frame of a video in order to increase the dynamic range captured by the camera.<ref>{{Cite web|title=RED.com|url=https://www.red.com/red-101/hdrx-high-dynamic-range-video|access-date=2021-11-05|website=www.red.com}}</ref> This can be done via multiple methods:
* Creating a [[Time-lapse photography|time-lapse]] of individual images created via the multi-exposure HDR technique.<ref>{{Cite web|date=2011-02-07|title=Create HDR time-lapse video with a digital camera|url=https://www.macworld.com/article/210116/hdr.html|access-date=2021-11-06|website=Macworld|language=en-US}}</ref>
* Taking two differently exposed images consecutively, cutting the frame rate in half.<ref name=":0" />
* Taking two differently exposed images simultaneously, cutting the resolution in half.<ref name=":0" />
* Taking two differently exposed images simultaneously at full resolution and frame rate via a sensor with dual gain architecture, for example the [[Arri Alexa]]'s sensor<ref name=":1" /> or [[Samsung]] sensors with Smart-ISO Pro.<ref>{{Cite web|title=[Video] Painting With Light: How Smart-ISO Pro Captures Lifelike HDR Images|url=https://news.samsung.com/global/video-painting-with-light-how-smart-iso-pro-captures-lifelike-hdr-images|access-date=2021-11-05|website=news.samsung.com|language=en}}</ref>

Some cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure.{{Citation needed|date=November 2017}} For example, a sensor for 30 fps video will give out 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time.
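For such alternating streams, each short/long pair can be merged by scaling both frames to a common radiance scale and taking highlight detail from the short exposure where the long exposure is clipped. The sketch below is illustrative only; the function name, clipping threshold, and exposure times are assumptions rather than any specific camera's method:

<syntaxhighlight lang="python">
import numpy as np

def merge_dual_exposure(short_frame, long_frame, t_short, t_long, clip=0.95):
    """Merge one short/long exposure pair into a single extended-range frame.

    Both frames are 2-D float arrays with linear values normalized to [0, 1].
    Dividing by the exposure time puts them on a common relative-radiance
    scale; where the long exposure is clipped (near sensor saturation), the
    short exposure supplies the highlight information instead.
    """
    radiance_short = short_frame / t_short
    radiance_long = long_frame / t_long
    saturated = long_frame >= clip
    return np.where(saturated, radiance_short, radiance_long)

# Example pairing for a 60 fps stream alternating 1/2000 s and 1/250 s exposures:
# hdr_frame = merge_dual_exposure(odd_frame, even_frame, 1 / 2000, 1 / 250)
</syntaxhighlight>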
In 2020, [[Qualcomm]] announced the [[Snapdragon 888]], a mobile [[System on a chip|SoC]] capable of computational multi-exposure HDR video capture in 4K and of recording it in a format compatible with [[High-dynamic-range video|HDR displays]].<ref>{{Cite web|date=2020-12-04|title=Qualcomm explains how the Snapdragon 888 is changing the camera game (Video!)|url=https://www.androidauthority.com/snapdragon-888-camera-qualcomm-interview-1180775/|access-date=2021-06-08|website=Android Authority|language=en-US}}</ref> The [[Xiaomi Mi 11 Ultra]] smartphone, released in 2021, can perform computational multi-exposure HDR for video capture.<ref>{{Cite web|last=Rehm|first=Lars|date=2021-04-02|title=Xiaomi Mi 11 Ultra Camera review: Large sensor power|url=https://www.dxomark.com/xiaomi-mi-11-ultra-camera-review-large-sensor-power/|access-date=2021-06-08|website=DXOMARK|language=en-US}}</ref>

===Surveillance cameras===
HDR capture can be implemented on surveillance cameras, even inexpensive models. This is usually termed a '''wide dynamic range''' (WDR) function.<ref name=axis>{{cite web|title=Wide Dynamic Range: Challenges and Solutions|url=https://www.axis.com/files/whitepaper/wp_wide_dynamic_range_58576_en_1406_lo.pdf|website=Axis (via Wayback Machine)|archive-url=https://web.archive.org/web/20140928133722/https://www.axis.com/files/whitepaper/wp_wide_dynamic_range_58576_en_1406_lo.pdf|accessdate=2016-01-16|archive-date=2014-09-28}}</ref> Examples include CarCam Tiny, Prestige DVR-390, and DVR-478.<ref>{{Cite web|url=https://dashcamcar.com/#why-you-need-a-dash-cam |title=What is a Dash Cam? {{!}} Why should you have it?|last=Brown|first=James|date=2019-01-02|website=Dashboard camera vehicle|language=en-US|access-date=2019-01-17}}</ref>

== History ==

=== Mid-19th century ===
[[File:Gustave Le Gray - Brig upon the Water - Google Art Project.jpg|thumb|right|upright=1.2|An 1856 photo by [[Gustave Le Gray]]]]
The idea of using several exposures to adequately reproduce a too-extreme range of [[luminance]] was pioneered as early as the 1850s by [[Gustave Le Gray]] to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky and another with a longer exposure for the sea, and combined the two into one positive picture.<ref>{{cite web |url= http://www.getty.edu/art/exhibitions/le_gray |title=Gustave Le Gray, Photographer |date=July 9 – September 29, 2002 |access-date=September 14, 2008 |work=Getty.edu |publisher=[[J. Paul Getty Museum]]}}</ref>

=== Mid-20th century ===
{{external media |image1=[https://web.archive.org/web/20170315033441/http://www.cybergrain.com/tech/hdr/images1/eugene_smith.jpg Schweitzer at the Lamp], by [[W. Eugene Smith]]<ref>{{cite web |url=http://www.cybergrain.com/tech/hdr/ |title=The Future of Digital Imaging – High Dynamic Range Photography |first=Jon |last=Meyer |date=February 2004}}</ref><ref name="durand">{{cite web |url=http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/ |title=4.209: The Art and Science of Depiction |first1=Frédo |last1=Durand |first2=Julie |last2=Dorsey |author2-link=Julie Dorsey}}[http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/12_Contrast/contrast.html Limitations of the Medium: Compensation and accentuation – The Contrast is Limited], lecture of Monday, April 9,
2001, [http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/12_Contrast/contrast6.pdf slide 57–59]; image on slide 57, depiction of dodging and burning on slide 58</ref> }}
Manual tone mapping was accomplished by [[dodging and burning]] – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph ''Schweitzer at the Lamp'' by [[W. Eugene Smith]], from his 1954 [[photo essay]] ''A Man of Mercy'' on [[Albert Schweitzer]] and his humanitarian work in French Equatorial Africa. Reproducing the tonal range of the scene, which ranged from a bright lamp (relative to the scene) to a dark shadow, took five days.<ref name="durand" />

[[Ansel Adams]] elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called ''The Print'', which prominently features dodging and burning, in the context of his [[Zone System]].<ref>{{cite book |url=https://archive.org/details/The_Print |title=The Print |author=Adams, Ansel |author-link=Ansel Adams |date=1980 |isbn=0-8212-1526-4 |publisher=Little, Brown and Company |location=New York, New York |edition=3rd |series=The Ansel Adams photography series |volume=3}}</ref>

With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.{{citation needed|date=November 2013}}

[[File:Wyckoff HDR Curve.tif|thumb|left|upright=2.4|Exposure/density characteristics of [[Charles Wyckoff|Wyckoff's]] extended exposure response film. One can note that each curve has a [[sigmoid function|sigmoidal shape]] and follows a [[hyperbolic tangent]], or a [[logistic function]] characterized by an induction period (initiation), a quasi-linear propagation, and a saturation plateau ([[asymptote]]).]]
Color film capable of directly recording high-dynamic-range images was developed by [[Charles Wyckoff]] and [[EG&G]] "in the course of a contract with the [[United States Air Force|Department of the Air Force]]".<ref>{{cite patent |inventor1-last=Wyckoff |inventor1-first=Charles W. |inventor1-link=Charles Wyckoff |inventor2=EG&G Inc., assignee |inventor2-link=EG&G |fdate=1961-03-24 |pubdate=1969-09-17 |title=Silver Halide Photographic Film having Increased Exposure-response Characteristics |country=US |number=3450536 |url= http://www.google.com/patents?hl=en&lr=&vid=USPAT3450536&id=43RzAAAAEBAJ&oi=fnd&dq=%22Extended+exposure%22+Wyckoff&printsec=abstract#v=onepage&q=%22Extended%20exposure%22%20Wyckoff&f=false}}</ref> This XR film had three [[Photographic emulsion|emulsion]] layers, an upper layer having an [[Film speed#ASA|ASA]] speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to [[Color photography#"Modern" color film|color films]], and each layer produced a different color.<ref>{{cite journal |first1=Charles W.
|last1=Wyckoff |author-link=Charles Wyckoff |title=Experimental extended exposure response film |journal=Society of Photographic Instrumentation Engineers Newsletter |date=June–July 1962 |pages=16–20}}</ref> The dynamic range of this extended range film has been estimated as 1:10<sup>8</sup>.<ref>{{cite web |first1=Michael |last1=Goesele |display-authors=etal |title=High Dynamic Range Techniques in Graphics: from Acquisition to Display |url= http://www.mpi-inf.mpg.de/resources/tmo/EG05_HDRTutorial_Complete.pdf |work=Eurographics 2005 Tutorial T7 |publisher=Max Planck Institute for Informatics}}</ref> It has been used to photograph nuclear explosions,<ref>{{cite web |url= http://www.fas.org/irp/threat/mctl98-2/p2sec05.pdf |title=The Militarily Critical Technologies List |date=1998 |pages=II-5-100, II-5-107 |work=FAS.org |publisher=Intelligence Resource Program, [[Federation of American Scientists]] |access-date=June 12, 2020}}</ref> for astronomical photography,<ref>{{cite book |first1=Andrew T. |last1=Young | first2=Harold Jr. |last2=Boeschenstein |title=Isotherms in the Region of Proclus at a Phase Angle of 9.8 Degrees |series=Scientific Report series |volume=5 |publisher=College Observatory, Harvard University |location=Cambridge, Massachusetts |date=1964}}</ref> for spectrographic research,<ref>{{cite journal |first1=R. L. |last1=Bryant |first2=G. J. |last2=Troup |first3=R. G. |last3=Turner |title=The use of a high-intensity-range photographic film for recording extended diffraction patterns and for spectrographic work |journal=Journal of Scientific Instruments |volume=42 |issue=2 |date=1965 |page=116 |doi=10.1088/0950-7671/42/2/315|bibcode=1965JScI...42..116B }}</ref> and for medical imaging.<ref>{{cite journal |first1=Leslie M. |last1=Eber |first2=Haervey M. |last2=Greenberg |first3=John M. |last3=Cooke |first4=Richard |last4=Gorlin |title=Dynamic Changes in Left Ventricular Free Wall Thickness in the Human Heart |journal=Circulation |volume=39 |date=1969 |issue=4 |pages=455–464 |doi=10.1161/01.CIR.39.4.455 |pmid=5778246 |doi-access=free}}</ref> Wyckoff's detailed pictures of nuclear explosions appeared on the cover of ''[[Life (magazine)|Life]]'' magazine in the mid-1950s.

=== Late 20th century ===
Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matrix LCD screen in front of the camera's image sensor,<ref>{{Cite web |url= https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=19910924&CC=US&NR=5051770A&KC=A# |title=Image processing device for controlling the transfer function of an optical system |work=Worldwide.Espacenet.com |publisher=[[Espacenet]] }}</ref> increasing the sensor's dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras in 1988 by a group from the [[Technion]] in Israel, led by Oliver Hilsenrath and Yehoshua Y. Zeevi. Technion researchers filed for a patent on this concept in 1991,<ref>{{cite patent |country=US |status=patent |number=5144442 |title=Wide dynamic range camera |pubdate=1992-09-01 |fdate=1991-11-21 |inventor1-last=Ginosar |inventor1-first=Ran |inventor2-last=Hilsenrath |inventor2-first=Oliver |inventor3-last=Zeevi |inventor3-first=Yehoshua Y.}}</ref> and several related patents in 1992 and 1993.<ref name="Technion">{{cite web |last1=Ginosar |first1=Ran |last2=Zinaty |first2=Ofra |last3=Sorek |first3=Noam |last4=Genossar |first4=Tamar |last5=Zeevi |first5=Yehoshua Y.
|last6=Kligler |first6=Daniel J. |last7=Hilsenrath |first7=Oliver |title=Adaptive Sensitivity |url= http://visl.technion.ac.il/research/isight/AS/ |date=1993 |work=VISL.Technion.ac.il |publisher=Vision and Image Sciences Laboratory, [[Technion]], [[Israel Institute of Technology]] |access-date=January 27, 2019 |archive-url= https://web.archive.org/web/20140907142738/http://visl.technion.ac.il/research/isight/AS/ |archive-date=September 7, 2014 |url-status=dead}}</ref>

In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera that combined two images captured successively by a sensor<ref name="espacenet1">{{Cite web |url= https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=19970610&CC=US&NR=5638119A&KC=A# |title=Device for increasing the dynamic range of a camera |work=Worldwide.Espacenet.com |publisher=Espacenet }}</ref> or simultaneously<ref>{{Cite web |url= https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=19970610&CC=US&NR=5638119A&KC=A# |title=Camera with very wide dynamic range |work=Worldwide.Espacenet.com |publisher=Espacenet }}</ref> by two sensors of the camera. This is the process of [[bracketing]] applied to a video stream. In 1991, Hymatom, licensee of Georges Cornuéjols, introduced the first commercial video camera that performed real-time capture of multiple images with different exposures and produced an HDR video image. Also in 1991, Georges Cornuéjols introduced the HDR+ image principle by non-linear accumulation of images to increase the sensitivity of the camera:<ref name="espacenet1" /> for low-light environments, several successive images are accumulated, thus increasing the [[Signal-to-noise ratio (imaging)|signal-to-noise ratio]]. In 1993, the Technion introduced another commercial medical camera that produced an HDR video image.<ref name="Technion" />

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then [[tone mapping]] the result. Global HDR was first introduced in 1993,<ref name="mann1993">{{cite conference |title=Compositing Multiple Pictures of the Same Scene |first=Steve |last=Mann |publisher=Society for Imaging Science and Technology |isbn=0892081716 |conference=46th Annual Conference |location=Cambridge, Massachusetts |date=May 9–14, 1993}}</ref> resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by [[Steve Mann (inventor)|Steve Mann]] and [[Rosalind Picard]].<ref name="mann1995">{{cite web |url= http://wearcam.org/is_t95_myversion.pdf |title=On Being 'Undigital' with Digital Cameras: Extending Dynamic Range by Combining Differently Exposed Pictures |first1=S. |last1=Mann |first2=R. W. |last2=Picard}}</ref>

On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (high dynamic range + graphic) images of [[STS-95]] on the launch pad at [[NASA]]'s [[Kennedy Space Center]]. It consisted of four film images of the [[Space Shuttle|space shuttle]] at night that were [[Digital compositing|digitally composited]] with additional digital graphic elements. The image was first exhibited at [[NASA Headquarters]] Great Hall, Washington DC, in 1999 and then published in ''Hasselblad Forum''.<ref>{{cite book |work=Hasselblad Forum |date=1999 |volume=35 |issue=3 |issn=0282-5449 |title=Ben Sarao, Trenton, NJ |first=Ben M.
|last=Sarao |editor-first=S. |editor-last=Gunnarsson}}<!--Someone put a 1993 date on that, but that is not possible for a 1998 image.--></ref>

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. [[Steve Mann (inventor)|Steve Mann]] developed and patented the global-HDR method for producing digital images having extended dynamic range at the [[MIT Media Lab]].<ref name="MannPatent">{{cite patent |country=US |number=5828793 |status=application |title=Method and apparatus for producing digital images having extended dynamic ranges |pubdate=1998-10-27 |fdate=1996-05-06 |inventor-first=Steve |inventor-last=Mann |inventor-link=Steve Mann (inventor)}}</ref> Mann's method involved a two-step procedure: first, generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); second, convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a ''lightspace image'', ''lightspace picture'', or ''radiance map''. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for [[computer vision]] and other [[Digital image processing|image processing]] operations.<ref name="MannPatent" />

=== 21st century ===
In February 2001, the Dynamic Ranger technique was demonstrated, using multiple photos with different exposure levels to achieve a high dynamic range similar to that of the naked eye.<ref>{{Cite web|url=http://www.digitalsecrets.net/secrets/DynamicRanger.html|title = Dynamic Ranger}}</ref> In the early 2000s, several scholarly research efforts used consumer-grade sensors and cameras.<ref>{{cite book|last1=Kang|first1=Sing Bing|author1-link=Sing Bing Kang|last2=Uyttendaele|first2=Matthew|last3=Winder|first3=Simon|last4=Szeliski|first4=Richard|title=ACM SIGGRAPH 2003 Papers |chapter=High dynamic range video |year=2003|isbn=978-1-58113-709-5|at=ch.
High dynamic range video (pages 319–325)|doi=10.1145/1201775.882270|s2cid=13946222}}</ref> A few companies such as [[Red Digital Cinema|RED]] and [[Arri]] have been developing digital sensors capable of a higher dynamic range.<ref>{{Cite web|title=RED Digital Cinema | 8K & 5K Professional Cameras|url=https://www.red.com/|url-status=live|archive-url=https://web.archive.org/web/20160727044301/http://www.red.com/|archive-date=27 July 2016|access-date=27 July 2016}}</ref><ref>{{Cite web|title=ARRI | Inspiring your Vision|url=http://www.arridigital.com|url-status=live|archive-url=https://web.archive.org/web/20110908052342/http://www.arridigital.com/|archive-date=8 September 2011|access-date=23 January 2021}}</ref> RED EPIC-X can capture time-sequential HDRx images<ref name=":0">{{Cite web|date=2016-10-12|title=Sony IMX378: Comprehensive Breakdown of the Google Pixel's Sensor and its Features|url=https://www.xda-developers.com/sony-imx378-comprehensive-breakdown-of-the-google-pixels-sensor-and-its-features/|url-status=live|archive-url=https://web.archive.org/web/20190401203436/https://www.xda-developers.com/sony-imx378-comprehensive-breakdown-of-the-google-pixels-sensor-and-its-features/|archive-date=2019-04-01|access-date=2016-10-17|website=xda-developers|language=en-US}}</ref> with a user-selectable 1–3 stops of additional highlight latitude in the "x" channel. The "x" channel can be merged with the normal channel in post production software. The [[Arri Alexa]] camera uses a dual-gain architecture to generate an HDR image from two exposures captured at the same time.<ref name=":1">{{Cite web|title=ARRI Group: ALEXA's Sensor|url=https://www.arri.com/camera/alexa/technology/arri_imaging_technology/alexas_sensor/|url-status=live|archive-url=https://web.archive.org/web/20160801182433/http://www.arri.com/camera/alexa/technology/arri_imaging_technology/alexas_sensor/|archive-date=1 August 2016|access-date=2 July 2016|website=www.arri.com}}</ref> With the advent of low-cost consumer digital cameras, many amateurs began posting tone-mapped HDR [[Time-lapse photography|time-lapse]] videos on the Internet, essentially a sequence of still photographs in quick succession. 
In 2010, the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a [[beam splitter]] and consumer-grade HD video cameras.<ref>{{cite web|title=HDR video accomplished using dual 5D Mark IIs, is exactly what it sounds like|url=https://www.engadget.com/2010/09/09/hdr-video-accomplished-using-dual-5d-mark-iis-is-exactly-what-i/|url-status=live|archive-url=https://web.archive.org/web/20170614065214/https://www.engadget.com/2010/09/09/hdr-video-accomplished-using-dual-5d-mark-iis-is-exactly-what-i/|archive-date=14 June 2017|access-date=29 August 2017|work=Engadget|date=9 September 2010 }}</ref> Similar methods were described in the academic literature in 2001 and 2007.<ref>{{cite web|title=A Real Time High Dynamic Range Light Probe|url=http://gl.ict.usc.edu/Research/rtlp/|url-status=live|archive-url=https://web.archive.org/web/20160617205759/http://gl.ict.usc.edu/Research/rtlp/|archive-date=17 June 2016|access-date=27 July 2016}}</ref><ref>{{cite journal|last1=McGuire|first1=Morgan|last2=Matusik|first2=Wojciech|last3=Pfister|first3=Hanspeter|last4=Chen|first4=Billy|last5=Hughes|first5=John|last6=Nayar|first6=Shree|year=2007|title=Optical Splitting Trees for High-Precision Monocular Imaging|url=http://nrs.harvard.edu/urn-3:HUL.InstRepos:4101892|url-status=live|journal=IEEE Computer Graphics and Applications|volume=27|issue=2|pages=32–42|doi=10.1109/MCG.2007.45|pmid=17388201|archive-url=https://web.archive.org/web/20210123115720/https://dash.harvard.edu/handle/1/4101892|archive-date=23 January 2021|access-date=14 July 2019|s2cid=3055332}}</ref>

In 2005, [[Adobe Systems]] introduced several new features in [[Photoshop CS2]], including ''Merge to HDR'', 32-bit floating point image support, and HDR tone mapping.<ref>{{cite web |url= http://luminous-landscape.com/tutorials/hdr.shtml |archive-url= https://web.archive.org/web/20100102063950/http://luminous-landscape.com/tutorials/hdr.shtml |url-status=dead |archive-date=January 2, 2010 |title=Merge to HDR in Photoshop CS2: A First Look |first=Michael |last=Reichmann |work=The Luminous Landscape |date=2005 |access-date=August 27, 2009}}</ref>

On June 30, 2016, [[Microsoft]] added support for the digital compositing of HDR images to [[Windows 10]] using the [[Universal Windows Platform]].<ref>{{cite news |title=Microsoft talks up the advantages of HDR photography and videography in Universal Windows Platform apps |first=Kareem |last=Anderson |work=OnMSFT.com |url= https://www.onmsft.com/news/microsoft-talks-advantages-hdr-photography-videography-universal-windows-platform-apps |date=June 30, 2016 |access-date=June 12, 2020}}</ref>

== See also ==
* [[Comparison of graphics file formats]]
* [[HDRi (data format)]]
* [[High-dynamic-range rendering]]
* [[High-dynamic-range television]]
* [[JPEG XT]]
* [[Logluv TIFF]]
* [[OpenEXR]]
* [[RGBE image format]]
* [[scRGB]]
* [[Wide dynamic range]]

== References ==
{{Reflist}}
* Benjamin Sarao (1999). Ben Sarao, Trenton, NJ, USA: ''Space Shuttle Discovery'', pages 16–17 (English ed.). Victor Hasselblad AB, Goteborg, Sweden. ISSN 0282-5449

== External links ==
* {{Commons category-inline|High-dynamic-range imaging}}

{{Photography}}
{{Display technology}}

[[Category:Articles containing video clips]]
[[Category:Computer graphics]]
[[Category:High dynamic range]]
[[Category:High-dynamic-range imaging]]
[[Category:Photographic techniques]]