==== Encoding theory ====
Video data may be represented as a series of still image frames. Such data usually contains abundant amounts of spatial and temporal [[redundancy (information theory)|redundancy]]. Video compression algorithms attempt to reduce this redundancy and store the information more compactly.

Most [[video compression formats]] and [[video codec|codecs]] exploit both spatial and temporal redundancy (e.g. through difference coding with [[motion compensation]]). Similarities can be encoded by storing only the differences between, for example, temporally adjacent frames (inter-frame coding) or spatially adjacent pixels (intra-frame coding). [[Inter-frame]] compression (a temporal [[delta encoding]]) (re)uses data from one or more earlier or later frames in a sequence to describe the current frame. [[Intra-frame coding]], on the other hand, uses only data from within the current frame, effectively being still-image compression.<ref name="faxin47"/>

The [[Video coding format#Intra-frame video coding formats|intra-frame video coding formats]] used in camcorders and video editing employ simpler compression that uses only intra-frame prediction. This simplifies video editing software, as it prevents a situation in which a compressed frame refers to data that the editor has deleted.

Usually, video compression additionally employs [[lossy compression]] techniques like [[quantization (image processing)|quantization]] that discard aspects of the source data which are (more or less) irrelevant to human visual perception. For example, small differences in color are more difficult to perceive than changes in brightness, so compression algorithms can average the color across nearby areas of similar color in a manner similar to that used in JPEG image compression.<ref name="TomLane"/> As in all lossy compression, there is a [[trade-off]] between [[video quality]] and [[bit rate]], the cost of processing the compression and decompression, and system requirements. Highly compressed video may present visible or distracting [[compression artifact|artifacts]].

Methods other than the prevalent DCT-based transform formats, such as [[fractal compression]], [[matching pursuit]] and the use of a [[discrete wavelet transform]] (DWT), have been the subject of some research, but are typically not used in practical products. [[Wavelet compression]] is used in still-image coders and in video coders without motion compensation. Interest in fractal compression seems to be waning, owing to recent theoretical analysis showing a comparative lack of effectiveness of such methods.<ref name="faxin47"/>

===== Inter-frame coding =====
{{main|Inter frame}}
{{further|Motion compensation}}

In inter-frame coding, individual frames of a video sequence are compared from one frame to the next, and the [[video codec|video compression codec]] records the [[residual frame|differences]] from the reference frame. If the frame contains areas where nothing has moved, the system can simply issue a short command that copies that part of the previous frame into the next one. If sections of the frame move in a simple manner, the compressor can emit a (slightly longer) command that tells the decompressor to shift, rotate, lighten, or darken the copy. This longer command still remains much shorter than the data generated by intra-frame compression. Usually, the encoder will also transmit a residue signal which describes the remaining, subtler differences from the reference imagery.
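The following is a minimal, illustrative sketch of this kind of temporal prediction in Python with NumPy; it is not taken from any particular codec, and the function name, block size, and search range are arbitrary choices for the example. Each block of the current frame is matched against a small search window in the reference frame, and only the motion vectors and the residual need to be stored.

<syntaxhighlight lang="python">
import numpy as np

def motion_compensated_residual(reference, current, block=16, search=4):
    # Illustrative only: exhaustive block matching within a small window.
    # Assumes both frames are 2-D arrays whose dimensions are multiples of `block`.
    h, w = current.shape
    prediction = np.zeros_like(current)
    vectors = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            cur_blk = current[y:y + block, x:x + block].astype(int)
            best_vec, best_err = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ry, rx = y + dy, x + dx
                    if ry < 0 or rx < 0 or ry + block > h or rx + block > w:
                        continue  # candidate block would fall outside the frame
                    ref_blk = reference[ry:ry + block, rx:rx + block].astype(int)
                    err = np.abs(ref_blk - cur_blk).sum()  # sum of absolute differences
                    if best_err is None or err < best_err:
                        best_vec, best_err = (dy, dx), err
            dy, dx = best_vec
            prediction[y:y + block, x:x + block] = \
                reference[y + dy:y + dy + block, x + dx:x + dx + block]
            vectors[(y, x)] = best_vec
    # The residual is what remains after motion-compensated prediction;
    # a decoder holding `reference` rebuilds `current` as prediction + residual.
    residual = current.astype(int) - prediction.astype(int)
    return vectors, residual
</syntaxhighlight>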
Using entropy coding, these residue signals have a more compact representation than the full signal. In areas of video with more motion, the compression must encode more data to keep up with the larger number of pixels that are changing. During explosions, flames, flocks of animals, and some panning shots, the high-frequency detail commonly leads to quality decreases or to increases in the [[variable bitrate]].
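As a rough illustration of why such residuals entropy-code well (a toy measurement, not part of any real codec), the zero-order entropy of a temporal difference between two synthetic frames is typically far below that of the raw frame, because most residual values cluster near zero. The frame contents and noise levels below are invented for the example.

<syntaxhighlight lang="python">
import numpy as np

def zero_order_entropy(values):
    # Shannon entropy in bits per pixel: a lower bound on what an ideal
    # entropy coder would need for this (memoryless) value distribution.
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy frames: a noisy horizontal gradient that shifts slightly between frames.
rng = np.random.default_rng(0)
base = np.tile(np.arange(256, dtype=np.int16), (256, 1))
reference = base + rng.integers(-2, 3, base.shape)
current = np.roll(reference, 1, axis=1) + rng.integers(-2, 3, base.shape)

residual = current - reference          # temporal delta (no motion search here)
print("bits/pixel, raw frame:", zero_order_entropy(current))
print("bits/pixel, residual: ", zero_order_entropy(residual))
</syntaxhighlight>

Real codecs apply transform coding and quantization to the residual before the entropy-coding stage.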