==Characteristics of video streams==

===Number of frames per second===
''[[Frame rate]]'', the number of still pictures per unit of time of video, ranges from six or eight frames per second (''frame/s'') for old mechanical cameras to 120 or more for new professional cameras. [[PAL]] standards (Europe, Asia, Australia, etc.) and [[SECAM]] (France, Russia, parts of Africa, etc.) specify 25 frame/s, while [[NTSC]] standards (United States, Canada, Japan, etc.) specify 29.97 frame/s.<ref>{{cite web|last1=Soseman|first1=Ned|title=What's the difference between 59.94fps and 60fps?|url=http://www.tvtechnology.com/media-systems/0191/whats-the-difference-between-fps-and-fps/241737|access-date=July 12, 2017|url-status=dead|archive-url=https://web.archive.org/web/20170629190309/http://www.tvtechnology.com/media-systems/0191/whats-the-difference-between-fps-and-fps/241737|archive-date=June 29, 2017}}</ref> Film is shot at the slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a [[Persistence of vision|moving image]] is about sixteen frames per second.<ref>{{cite news | url=http://vision.arc.nasa.gov/publications/TemporalSensitivity.pdf | title=Temporal Sensitivity | first=Andrew B. | last=Watson | year=1986 | journal=Sensory Processes and Perception | url-status=dead | archive-url=https://web.archive.org/web/20160308001647/http://vision.arc.nasa.gov/publications/TemporalSensitivity.pdf | archive-date=March 8, 2016 }}</ref>

===Interlaced vs. progressive===
Video can be [[interlaced]] or [[Progressive scan|progressive]]. In progressive scan systems, each refresh period updates all scan lines of each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is the optimum spatial resolution of both the stationary and moving parts of the image. Interlacing was invented as a way to reduce flicker in early [[mechanical television|mechanical]] and [[cathode-ray tube|CRT]] video displays without increasing the number of complete [[frames per second]]. Interlacing retains detail while requiring lower [[Bandwidth (signal processing)|bandwidth]] compared to progressive scanning.<ref name=":0">{{Cite book |last=Bovik |first=Alan C. |url=https://www.worldcat.org/oclc/190789775 |title=Handbook of image and video processing |date=2005 |publisher=Elsevier Academic Press |isbn=978-0-08-053361-2 |edition=2nd |location=Amsterdam |pages=14–21 |oclc=190789775 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825123924/https://www.worldcat.org/title/190789775 |url-status=live }}</ref><ref name=":1">{{Cite book |last=Wright |first=Steve |url=https://www.worldcat.org/oclc/499054489 |title=Digital compositing for film and video |date=2002 |publisher=Focal Press |isbn=978-0-08-050436-0 |location=Boston |oclc=499054489 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825123924/https://www.worldcat.org/title/499054489 |url-status=live }}</ref> In interlaced video, the horizontal [[scan line]]s of each complete frame are treated as if numbered consecutively and captured as two ''fields'': an ''odd field'' (upper field) consisting of the odd-numbered lines and an ''even field'' (lower field) consisting of the even-numbered lines.
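As a minimal illustration of this field structure (the array layout and function name here are illustrative, not part of any broadcast standard), the following Python sketch splits a frame, stored as a NumPy array of scan lines, into its odd and even fields:

<syntaxhighlight lang="python">
import numpy as np

def split_fields(frame):
    """Split a frame (rows = scan lines) into its two interlaced fields.

    Counting scan lines from 1, the odd (upper) field holds lines 1, 3, 5, ...
    and the even (lower) field holds lines 2, 4, 6, ...; with 0-based array
    indexing these are rows 0, 2, 4, ... and 1, 3, 5, ... respectively.
    """
    odd_field = frame[0::2]   # scan lines 1, 3, 5, ... (upper field)
    even_field = frame[1::2]  # scan lines 2, 4, 6, ... (lower field)
    return odd_field, even_field

# A 576-line frame (PAL-sized luma plane) splits into two 288-line fields.
frame = np.zeros((576, 720), dtype=np.uint8)
odd, even = split_fields(frame)
assert odd.shape == (288, 720) and even.shape == (288, 720)
</syntaxhighlight>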
Analog display devices reproduce each frame as its two fields in sequence, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display.<ref name=":0" /><ref name=":1" />

NTSC, PAL, and SECAM are interlaced formats. Abbreviated video resolution specifications often include an ''i'' to indicate interlacing. For example, the PAL video format is often described as ''576i50'', where ''576'' indicates the total number of horizontal scan lines, ''i'' indicates interlacing, and ''50'' indicates 50 fields (half-frames) per second.<ref name=":1" /><ref name=":2">{{Cite book |last=Brown |first=Blain |title=Cinematography: Theory and Practice: Image Making for Cinematographers and Directors |publisher=[[Taylor & Francis]] |year=2013 |isbn=9781136047381 |pages=159–166}}</ref>

When displaying a natively interlaced signal on a progressive scan device, the overall spatial resolution is degraded by simple [[line doubling]], and artifacts such as flickering or comb effects in moving parts of the image appear unless special signal processing eliminates them.<ref name=":0" /><ref name=":3">{{Cite book |last=Parker |first=Michael |url=https://www.worldcat.org/oclc/815408915 |title=Digital Video Processing for Engineers : a Foundation for Embedded Systems Design |date=2013 |others=Suhel Dhanani |isbn=978-0-12-415761-3 |location=Amsterdam |oclc=815408915 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825123924/https://www.worldcat.org/title/815408915 |url-status=live }}</ref> A procedure known as [[deinterlacing]] can optimize the display of an interlaced video signal from an analog, DVD, or satellite source on a progressive scan device such as an [[LCD television]], digital [[video projector]], or plasma panel. Deinterlacing cannot, however, produce [[video quality]] equivalent to that of true progressive scan source material.<ref name=":1" /><ref name=":2" /><ref name=":3" />

===Aspect ratio===
[[Image:Aspect ratios.svg|thumbnail|250px|Comparison of common [[cinematography]] and traditional [[television]] (green) aspect ratios]]
[[Display aspect ratio|Aspect ratio]] describes the proportional relationship between the width and height of video screens and video picture elements. All popular video formats are [[rectangular]], and this can be described by a ratio between width and height. The ratio of width to height for a traditional television screen is 4:3, or about 1.33:1. High-definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the [[Academy ratio]]) is 1.375:1.<ref name=":6">{{Cite book |last=Bing |first=Benny |url=https://www.worldcat.org/oclc/672322796 |title=3D and HD broadband video networking |date=2010 |publisher=Artech House |isbn=978-1-60807-052-7 |location=Boston |pages=57–70 |oclc=672322796 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825123924/https://www.worldcat.org/title/672322796 |url-status=live }}</ref><ref name=":7">{{Cite book |last=Stump |first=David |url=https://www.worldcat.org/oclc/1233023513 |title=Digital cinematography : fundamentals, tools, techniques, and workflows |date=2022 |publisher=[[Routledge]] |isbn=978-0-429-46885-8 |edition=2nd |location=New York, NY |pages=125–139 |oclc=1233023513 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825123928/https://www.worldcat.org/title/1233023513 |url-status=live }}</ref> [[Pixel]]s on computer monitors are usually square, but pixels used in [[digital video]] often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the [[CCIR 601]] digital video standard and the corresponding anamorphic widescreen formats. The [[480p|720 by 480 pixel]] raster uses thin pixels on a 4:3 aspect ratio display and fat pixels on a 16:9 display.<ref name=":6" /><ref name=":7" />
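These thin and fat pixels can be quantified: the pixel aspect ratio follows from dividing the display aspect ratio by the storage aspect ratio of the raster. The following Python sketch is illustrative arithmetic only, using nominal values for the full 720-pixel line (the CCIR 601 active picture area yields slightly different figures):

<syntaxhighlight lang="python">
from fractions import Fraction

def pixel_aspect_ratio(display_ar, width_px, height_px):
    """Pixel aspect ratio = display aspect ratio / storage aspect ratio."""
    storage_ar = Fraction(width_px, height_px)
    return display_ar / storage_ar

# Stretching a 720x480 raster over a 4:3 display implies thin (tall) pixels,
print(pixel_aspect_ratio(Fraction(4, 3), 720, 480))   # 8/9, about 0.89
# while stretching it over a 16:9 display implies fat (wide) pixels.
print(pixel_aspect_ratio(Fraction(16, 9), 720, 480))  # 32/27, about 1.19
</syntaxhighlight>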
The popularity of viewing video on mobile phones has led to the growth of [[vertical video]]. [[Mary Meeker]], a partner at the Silicon Valley venture capital firm [[Kleiner Perkins Caufield & Byers]], highlighted the growth of vertical video viewing in her 2015 Internet Trends Report{{snd}}growing from 5% of video viewing in 2010 to 29% in 2015. Vertical video ads like [[Snapchat]]'s are watched in their entirety nine times more frequently than landscape video ads.<ref name=Constine>{{cite news|author=Constine, Josh|date=May 27, 2015|title=The Most Important Insights From Mary Meeker's 2015 Internet Trends Report|url=https://techcrunch.com/gallery/best-of-meeker/|work=TechCrunch|access-date=August 6, 2015|url-status=live|archive-url=https://web.archive.org/web/20150804234031/http://techcrunch.com/gallery/best-of-meeker/|archive-date=August 4, 2015}}</ref>

===Color model and depth===
[[Image:YUV UV plane.svg|thumb|200px|Example of U-V color plane, Y value=0.5]]
The [[color model]] describes the video color representation and maps encoded color values to the visible colors reproduced by the system. There are several such representations in common use: typically, [[YIQ]] is used in NTSC television, [[YUV]] is used in PAL television, [[YDbDr]] is used by SECAM television, and [[YCbCr]] is used for digital video.<ref name=":4">{{Cite book |last1=Li |first1=Ze-Nian |url=https://www.worldcat.org/oclc/1243420273 |title=Fundamentals of multimedia |last2=Drew |first2=Mark S. |last3=Liu |first3=Jiangchun |date=2021 |publisher=[[Springer Publishing|Springer]] |isbn=978-3-030-62124-7 |edition=3rd |location=Cham, Switzerland |pages=108–117 |oclc=1243420273 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825123925/https://www.worldcat.org/title/1243420273 |url-status=live }}</ref><ref name=":5">{{Cite book |last=Banerjee |first=Sreeparna |url=https://www.worldcat.org/oclc/1098279086 |title=Elements of multimedia |date=2019 |publisher=CRC Press |isbn=978-0-429-43320-7 |location=Boca Raton |chapter=Video in Multimedia |oclc=1098279086 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825123925/https://www.worldcat.org/title/1098279086 |url-status=live }}</ref> The number of distinct colors a pixel can represent depends on the [[color depth]], expressed as the number of bits per pixel.

A common way to reduce the amount of data required in digital video is [[chroma subsampling]] (e.g., 4:2:2 or 4:2:0). Because the human eye is less sensitive to detail in color than in brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block and the same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed, but it reduces the number of distinct points at which the color changes.<ref name=":2" /><ref name=":4" /><ref name=":5" />
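The data-reduction figures above can be reproduced with a short sketch. The block-averaging helper below is a simplified, illustrative stand-in for real chroma subsampling, which specifies particular sample sitings and filters:

<syntaxhighlight lang="python">
import numpy as np

def subsample_chroma(chroma, block_h, block_w):
    """Average a chrominance plane over block_h x block_w pixel blocks,
    keeping one value per block."""
    h, w = chroma.shape
    blocks = chroma.reshape(h // block_h, block_h, w // block_w, block_w)
    return blocks.mean(axis=(1, 3))

cb = np.random.rand(480, 720)        # one full-resolution chrominance plane

cb_422 = subsample_chroma(cb, 1, 2)  # 4:2:2 -> one value per 2-pixel block
cb_420 = subsample_chroma(cb, 2, 2)  # 4:2:0 -> one value per 4-pixel block

print(1 - cb_422.size / cb.size)     # 0.5  -> 50% less chrominance data
print(1 - cb_420.size / cb.size)     # 0.75 -> 75% less chrominance data
</syntaxhighlight>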
===Video quality===
[[Video quality]] can be measured with formal metrics like [[peak signal-to-noise ratio]] (PSNR) or through [[subjective video quality]] assessment using expert observation. Many subjective video quality methods are described in the [[ITU-T]] recommendation [[BT.500]]. One of the standardized methods is the ''Double Stimulus Impairment Scale'' (DSIS). In DSIS, each expert views an ''unimpaired'' reference video, followed by an ''impaired'' version of the same video. The expert then rates the ''impaired'' video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying".
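On the objective side, PSNR has a simple closed form: PSNR = 10 log<sub>10</sub>(''MAX''<sup>2</sup>/''MSE''), where ''MAX'' is the largest possible pixel value and ''MSE'' is the mean squared error between the reference and impaired frames. A minimal sketch, assuming 8-bit frames stored as NumPy arrays:

<syntaxhighlight lang="python">
import numpy as np

def psnr(reference, impaired, max_value=255.0):
    """Peak signal-to-noise ratio, in decibels, between two same-sized frames."""
    diff = reference.astype(np.float64) - impaired.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical frames: PSNR is unbounded
    return 10.0 * np.log10(max_value ** 2 / mse)

reference = np.random.randint(0, 256, (480, 720), dtype=np.uint8)
noisy = np.clip(reference + np.random.normal(0.0, 5.0, reference.shape),
                0, 255).astype(np.uint8)
print(f"{psnr(reference, noisy):.1f} dB")  # roughly 34 dB for sigma = 5 noise
</syntaxhighlight>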
===Video compression method (digital only)===
{{Main|Video compression}}
[[Uncompressed video]] delivers maximum quality, but at a very high [[Uncompressed video#Data rates|data rate]]. A variety of methods are used to compress video streams, with the most effective ones using a [[group of pictures]] (GOP) to reduce spatial and temporal [[Redundancy (information theory)|redundancy]]. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as ''[[intraframe]] compression'' and is closely related to [[image compression]]. Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as ''[[interframe]] compression'' and includes [[motion compensation]] and other techniques. The most common modern compression standards are [[MPEG-2]], used for [[DVD]], Blu-ray, and [[satellite television]], and [[MPEG-4]], used for [[AVCHD]], mobile phones (3GP), and the Internet.<ref>{{Cite book |last=Andy Beach |url=https://www.worldcat.org/oclc/1302274863 |title=Real World Video Compression |date=2008 |publisher=Peachpit Press |isbn=978-0-13-208951-7 |oclc=1302274863 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825123925/https://www.worldcat.org/title/1302274863 |url-status=live }}</ref><ref>{{Cite book |last=Sanz |first=Jorge L. C. |url=https://www.worldcat.org/oclc/840292528 |title=Image Technology : Advances in Image Processing, Multimedia and Machine Vision |date=1996 |publisher=Springer Berlin Heidelberg |isbn=978-3-642-58288-2 |location=Berlin, Heidelberg |oclc=840292528 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825124015/https://www.worldcat.org/title/840292528 |url-status=live }}</ref>

===Stereoscopic===
[[Stereoscopic video coding|Stereoscopic video]] for [[3D film]] and other applications can be displayed using several different methods:<ref>{{Cite book |last1=Ekmekcioglu |first1=Erhan |url=https://www.worldcat.org/oclc/844775006 |title=3DTV : processing and transmission of 3D video signals |last2=Fernando |first2=Anil |last3=Worrall |first3=Stewart |date=2013 |publisher=Wiley & Sons |isbn=978-1-118-70573-5 |location=Chichester, West Sussex, United Kingdom |oclc=844775006 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825123926/https://www.worldcat.org/title/844775006 |url-status=live }}</ref><ref>{{Cite book |last1=Block |first1=Bruce A. |url=https://www.worldcat.org/oclc/858027807 |title=3D storytelling : how stereoscopic 3D works and how to use it |last2=McNally |first2=Phillip |date=2013 |publisher=Taylor & Francis |isbn=978-1-136-03881-5 |location=Burlington, MA |oclc=858027807 |access-date=August 25, 2022 |archive-date=August 25, 2022 |archive-url=https://web.archive.org/web/20220825123926/https://www.worldcat.org/title/858027807 |url-status=live }}</ref>
* Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using [[light polarization|light-polarizing filters]] 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed through eyeglasses with matching polarization filters.
* [[Anaglyph 3D]], where one channel is overlaid with two color-coded layers (see the sketch after this list). This left-and-right layer technique is occasionally used for network broadcasts and for recent anaglyph releases of 3D movies on DVD. Simple red/cyan plastic glasses provide the means to view the images discretely, forming a stereoscopic view of the content.
* One channel with alternating left and right frames for the corresponding eye, using [[LCD shutter glasses]] that synchronize to the video to alternately block the image for each eye, so the appropriate eye sees the correct frame. This method is most common in computer [[virtual reality]] applications, such as in a [[Cave Automatic Virtual Environment]], but it reduces the effective video frame rate by a factor of two.
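As an illustration of the anaglyph method above (the channel assignment matches common red/cyan glasses; real anaglyph masters apply additional color processing to reduce ghosting), the following sketch combines two RGB views into a single frame:

<syntaxhighlight lang="python">
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine left/right RGB views (H x W x 3 arrays) into one anaglyph frame.

    The red lens passes only the left view's red channel; the cyan lens
    passes only the right view's green and blue channels.
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]     # red channel from the left eye
    anaglyph[..., 1:] = right[..., 1:]  # green and blue from the right eye
    return anaglyph

left = np.random.randint(0, 256, (480, 720, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (480, 720, 3), dtype=np.uint8)
frame = red_cyan_anaglyph(left, right)
</syntaxhighlight>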