=== Features ===
[[File:H.264 block diagram with quality score.jpg|thumb|150px|Block diagram of H.264]]
H.264/AVC/MPEG-4 Part 10 contains a number of new features that allow it to compress video much more efficiently than older standards and to provide more flexibility for application to a wide variety of network environments. In particular, some such key features include:
* Multi-picture [[inter frame|inter-picture prediction]] including the following features:
** Using previously encoded pictures as references in a much more flexible way than in past standards, allowing up to 16 reference frames (or 32 reference fields, in the case of interlaced encoding) to be used in some cases. In profiles that support non-[[Network Abstraction Layer#Coded Video Sequences|IDR]] frames, most levels specify that sufficient buffering should be available to allow for at least 4 or 5 reference frames at maximum resolution. This is in contrast to prior standards, where the limit was typically one; or, in the case of conventional "[[Video compression picture types#Bi-directional predicted frames/slices (B-frames/slices)|B pictures]]" (B-frames), two.
** Variable block-size [[motion compensation]] (VBSMC) with block sizes as large as 16×16 and as small as 4×4, enabling precise segmentation of moving regions. The supported [[Luma (video)|luma]] prediction block sizes include 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, and 4×4, many of which can be used together in a single macroblock. Chroma prediction block sizes are correspondingly smaller when [[chroma subsampling]] is used.
** The ability to use multiple motion vectors per macroblock (one or two per partition) with a maximum of 32 in the case of a B macroblock constructed of 16 4×4 partitions. The motion vectors for each 8×8 or larger partition region can point to different reference pictures.
** The ability to use any macroblock type in [[Video compression picture types#Bi-directional predicted frames/slices (B-frames/slices)|B-frames]], including I-macroblocks, resulting in much more efficient encoding when using B-frames. This feature was notably left out from [[MPEG-4 ASP]].
** Six-tap filtering for derivation of half-pel luma sample predictions, for sharper subpixel motion compensation. Quarter-pixel motion is derived by linear interpolation of the half-pixel values, to save processing power (see the interpolation sketch after this list).
** [[Qpel|Quarter-pixel]] precision for motion compensation, enabling precise description of the displacements of moving areas. For [[Chrominance|chroma]] the resolution is typically halved both vertically and horizontally (see [[4:2:0]]), therefore the motion compensation of chroma uses one-eighth chroma pixel grid units.
** Weighted prediction, allowing an encoder to specify the use of a scaling and offset when performing motion compensation, and providing a significant benefit in performance in special cases such as fade-to-black, fade-in, and cross-fade transitions. This includes implicit weighted prediction for B-frames, and explicit weighted prediction for P-frames.
* Spatial prediction from the edges of neighboring blocks for [[Intra-frame|"intra"]] coding, rather than the "DC"-only prediction found in MPEG-2 Part 2 and the transform coefficient prediction found in H.263v2 and MPEG-4 Part 2. This includes luma prediction block sizes of 16×16, 8×8, and 4×4 (of which only one type can be used within each [[macroblock]]).
* Integer [[discrete cosine transform]] (integer DCT),<ref name="Wang">{{cite journal |last1=Wang |first1=Hanli |last2=Kwong |first2=S.
|last3=Kok |first3=C. |s2cid=2060937 |title=Efficient prediction algorithm of integer DCT coefficients for H.264/AVC optimization |journal=IEEE Transactions on Circuits and Systems for Video Technology |date=2006 |volume=16 |issue=4 |pages=547–552 |doi=10.1109/TCSVT.2006.871390}}</ref><ref name="Stankovic">{{cite journal |last1=Stanković |first1=Radomir S. |last2=Astola |first2=Jaakko T. |title=Reminiscences of the Early Work in DCT: Interview with K.R. Rao |journal=Reprints from the Early Days of Information Sciences |date=2012 |volume=60 |page=17 |url=http://ticsp.cs.tut.fi/reports/ticsp-report-60-reprint-rao-corrected.pdf#page=18 |access-date=13 October 2019}}</ref><ref>{{cite book |last1=Kwon |first1=Soon-young |last2=Lee |first2=Joo-kyong |last3=Chung |first3=Ki-dong |title=Image Analysis and Processing – ICIAP 2005 |chapter=Half-Pixel Correction for MPEG-2/H.264 Transcoding |series=Lecture Notes in Computer Science |date=2005 |volume=3617 |pages=576–583 |doi=10.1007/11553595_71 |publisher=Springer Berlin Heidelberg |isbn=978-3-540-28869-5 |doi-access=free }}</ref> a type of discrete cosine transform (DCT)<ref name="Stankovic"/> where the transform is an integer approximation of the standard DCT.<ref name="Britanak2010">{{cite book |last1=Britanak |first1=Vladimir |last2=Yip |first2=Patrick C. |last3=Rao |first3=K. R. |author3-link=K. R. Rao |title=Discrete Cosine and Sine Transforms: General Properties, Fast Algorithms and Integer Approximations |date=2010 |publisher=[[Elsevier]] |isbn=9780080464640 |pages=ix, xiii, 1, 141–304 |url=https://books.google.com/books?id=iRlQHcK-r_kC&pg=PA141}}</ref> It has selectable block sizes<ref name="apple">{{cite web |last1=Thomson |first1=Gavin |last2=Shah |first2=Athar |title=Introducing HEIF and HEVC |url=https://devstreaming-cdn.apple.com/videos/wwdc/2017/503i6plfvfi7o3222/503/503_introducing_heif_and_hevc.pdf |publisher=[[Apple Inc.]] |year=2017 |access-date=5 August 2019}}</ref> and exact-match integer computation to reduce complexity, including:
** An exact-match integer 4×4 spatial block transform, allowing precise placement of [[residual frame|residual]] signals with little of the "[[ringing artifact|ringing]]" often found with prior codec designs. It is similar to the standard DCT used in previous standards, but uses a smaller block size and simple integer processing. Unlike the cosine-based formulas and tolerances expressed in earlier standards (such as H.261 and MPEG-2), integer processing provides an exactly specified decoded result (see the transform sketch after this list).
** An exact-match integer 8×8 spatial block transform, allowing highly correlated regions to be compressed more efficiently than with the 4×4 transform. This design is based on the standard DCT, but simplified and made to provide exactly specified decoding.
** Adaptive encoder selection between the 4×4 and 8×8 transform block sizes for the integer transform operation.
** A secondary [[Hadamard transform]] performed on "DC" coefficients of the primary spatial transform applied to chroma DC coefficients (and also luma in one special case) to obtain even more compression in smooth regions.
* [[Lossless]] macroblock coding features including:
** A lossless "PCM macroblock" representation mode in which video data samples are represented directly,<ref>{{cite web|url=http://www.fastvdo.com/spie04/spie04-h264OverviewPaper.pdf |title=The H.264/AVC Advanced Video Coding Standard: Overview and Introduction to the Fidelity Range Extensions |access-date=2011-07-30}}</ref> allowing perfect representation of specific regions and allowing a strict limit to be placed on the quantity of coded data for each macroblock.
** An enhanced lossless macroblock representation mode allowing perfect representation of specific regions while ordinarily using substantially fewer bits than the PCM mode.
* Flexible [[Interlaced video|interlace]]d-scan video coding features, including:
** Macroblock-adaptive frame-field (MBAFF) coding, using a macroblock pair structure for pictures coded as frames, allowing 16×16 macroblocks in field mode (compared with MPEG-2, where field mode processing in a picture that is coded as a frame results in the processing of 16×8 half-macroblocks).
** Picture-adaptive frame-field coding (PAFF or PicAFF) allowing a freely selected mixture of pictures coded either as complete frames where both fields are combined for encoding or as individual single fields.
* A quantization design including:
** Logarithmic step size control for easier bit rate management by encoders and simplified inverse-quantization scaling
** Frequency-customized quantization scaling matrices selected by the encoder for perceptual-based quantization optimization
* An in-loop [[Deblocking filter (video)|deblocking filter]] that helps prevent the blocking artifacts common to other DCT-based image compression techniques, resulting in better visual appearance and compression efficiency
* An [[entropy encoding|entropy coding]] design including:
** [[Context-adaptive binary arithmetic coding]] (CABAC), an algorithm to losslessly compress syntax elements in the video stream knowing the probabilities of syntax elements in a given context. CABAC compresses data more efficiently than CAVLC but requires considerably more processing to decode.
** [[Context-adaptive variable-length coding]] (CAVLC), which is a lower-complexity alternative to CABAC for the coding of quantized transform coefficient values. Although lower complexity than CABAC, CAVLC is more elaborate and more efficient than the methods typically used to code coefficients in other prior designs.
** A common simple and highly structured [[Variable-length code|variable length coding]] (VLC) technique for many of the syntax elements not coded by CABAC or CAVLC, referred to as [[Exponential-Golomb coding]] (or Exp-Golomb) (see the coding sketch after this list).
* Loss resilience features including:
** A [[Network Abstraction Layer]] (NAL) definition allowing the same video syntax to be used in many network environments. One very fundamental design concept of H.264 is to generate self-contained packets, to remove the header duplication as in MPEG-4's Header Extension Code (HEC).<ref name="rfc3984_3"/> This was achieved by decoupling information relevant to more than one slice from the media stream. The combination of the higher-level parameters is called a parameter set.<ref name="rfc3984_3"/> The H.264 specification includes two types of parameter sets: Sequence Parameter Set (SPS) and Picture Parameter Set (PPS). An active sequence parameter set remains unchanged throughout a coded video sequence, and an active picture parameter set remains unchanged within a coded picture. The sequence and picture parameter set structures contain information such as picture size, optional coding modes employed, and macroblock to slice group map.<ref name="rfc3984_3">RFC 3984, p.3</ref>
** [[Flexible macroblock ordering]] (FMO), also known as slice groups, and arbitrary slice ordering (ASO), which are techniques for restructuring the ordering of the representation of the fundamental regions (''macroblocks'') in pictures. Typically considered an error/loss robustness feature, FMO and ASO can also be used for other purposes.
** Data partitioning (DP), a feature providing the ability to separate more important and less important syntax elements into different packets of data, enabling the application of unequal error protection (UEP) and other types of improvement of error/loss robustness.
** Redundant slices (RS), an error/loss robustness feature that lets an encoder send an extra representation of a picture region (typically at lower fidelity) that can be used if the primary representation is corrupted or lost.
** Frame numbering, a feature that allows the creation of "sub-sequences", enabling temporal scalability by optional inclusion of extra pictures between other pictures, and the detection and concealment of losses of entire pictures, which can occur due to network packet losses or channel errors.
* Switching slices, called SP and SI slices, allowing an encoder to direct a decoder to jump into an ongoing video stream for such purposes as video streaming bit rate switching and "trick mode" operation. When a decoder jumps into the middle of a video stream using the SP/SI feature, it can get an exact match to the decoded pictures at that location in the video stream despite using different pictures, or no pictures at all, as references prior to the switch.
* A simple automatic process for preventing the accidental emulation of [[start code]]s, which are special sequences of bits in the coded data that allow random access into the bitstream and recovery of byte alignment in systems that can lose byte synchronization (see the emulation-prevention sketch after this list).
* Supplemental enhancement information (SEI) and video usability information (VUI), which are extra information that can be inserted into the bitstream for various purposes such as indicating the color space used for the video content or various constraints that apply to the encoding. SEI messages can contain arbitrary user-defined metadata payloads or other messages with syntax and semantics defined in the standard.
* Auxiliary pictures, which can be used for such purposes as [[alpha compositing]].
* Support of monochrome (4:0:0), 4:2:0, 4:2:2, and 4:4:4 [[chroma sampling]] (depending on the selected profile).
* Support of sample bit depth precision ranging from 8 to 14 bits per sample (depending on the selected profile).
* The ability to encode individual color planes as distinct pictures with their own slice structures, macroblock modes, motion vectors, etc., allowing encoders to be designed with a simple parallelization structure (supported only in the three 4:4:4-capable profiles).
* Picture order count, a feature that serves to keep the ordering of the pictures and the values of samples in the decoded pictures isolated from timing information, allowing timing information to be carried and controlled/changed separately by a system without affecting decoded picture content.
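The six-tap interpolation mentioned in the feature list can be sketched in a few lines. The following Python fragment is illustrative only: it is simplified to one dimension and 8-bit samples, and the sample values are made up. It applies the (1, -5, 20, 20, -5, 1) filter with rounding to obtain a half-pel luma value, then averages it with a neighboring full-pel sample to approximate a quarter-pel position; the normative process in the standard additionally covers two-dimensional positions and other bit depths.

<syntaxhighlight lang="python">
def clip255(x: int) -> int:
    """Clip a value to the 8-bit sample range."""
    return max(0, min(255, x))

def half_pel(p0: int, p1: int, p2: int, p3: int, p4: int, p5: int) -> int:
    """Half-pel luma sample from six neighboring full-pel samples,
    using the 6-tap filter (1, -5, 20, 20, -5, 1), rounding, and a shift by 5."""
    acc = p0 - 5 * p1 + 20 * p2 + 20 * p3 - 5 * p4 + p5
    return clip255((acc + 16) >> 5)

def quarter_pel(a: int, b: int) -> int:
    """Quarter-pel sample as the rounded average (linear interpolation)
    of two neighboring full-pel/half-pel samples."""
    return (a + b + 1) >> 1

# Toy example: a horizontal run of full-pel luma samples around the target position.
row = [10, 20, 60, 200, 220, 230]
h = half_pel(*row)           # half-pel value between row[2] and row[3]
q = quarter_pel(row[2], h)   # quarter-pel value between row[2] and the half-pel sample
print(h, q)                  # 133 97
</syntaxhighlight>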
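As a sketch of the exact-match property of the 4×4 integer transform, the fragment below applies the widely documented forward core transform matrix to a toy residual block using only integer arithmetic. Quantization and the scaling factors that the standard folds into a separate step are deliberately omitted, and the input block is invented for illustration.

<syntaxhighlight lang="python">
# Forward core transform matrix for 4x4 residual blocks.
CF = [
    [1,  1,  1,  1],
    [2,  1, -1, -2],
    [1, -1, -1,  1],
    [1, -2,  2, -1],
]

def matmul(a, b):
    """Plain integer 4x4 matrix multiply."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def transpose(m):
    return [list(col) for col in zip(*m)]

def forward_core_transform(residual):
    """W = CF * X * CF^T; every step is exact integer arithmetic, so any
    conforming encoder and decoder compute bit-identical transform results."""
    return matmul(matmul(CF, residual), transpose(CF))

# A toy 4x4 residual (prediction error) block.
block = [
    [5, 11,  8, 10],
    [9,  8,  4, 12],
    [1, 10, 11,  4],
    [19, 6, 15,  7],
]
print(forward_core_transform(block))
</syntaxhighlight>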
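The Exp-Golomb coding used for many syntax elements can also be illustrated compactly. The sketch below covers only the unsigned ue(v) form (a run of leading zero bits followed by the binary representation of n + 1) and uses bit strings instead of a real bit writer purely for readability; the signed se(v) variant, which maps signed values onto non-negative ones before applying the same code, is not shown.

<syntaxhighlight lang="python">
def exp_golomb_encode(n: int) -> str:
    """Unsigned Exp-Golomb ue(v) code for n >= 0: leading zeros, then binary of n + 1."""
    binary = bin(n + 1)[2:]
    return "0" * (len(binary) - 1) + binary

def exp_golomb_decode(bits: str) -> tuple[int, str]:
    """Decode the first ue(v) code word; return (value, remaining bits)."""
    zeros = bits.index("1")          # number of leading zero bits
    length = 2 * zeros + 1
    value = int(bits[zeros:length], 2) - 1
    return value, bits[length:]

# Round-trip a few example syntax-element values.
stream = "".join(exp_golomb_encode(v) for v in (0, 1, 5, 12))
decoded = []
while stream:
    v, stream = exp_golomb_decode(stream)
    decoded.append(v)
print(decoded)   # [0, 1, 5, 12]
</syntaxhighlight>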
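Finally, the start-code emulation prevention mentioned in the list can be sketched as follows. In the byte-stream format an extra 0x03 byte is inserted inside a NAL unit payload whenever two zero bytes would otherwise be followed by a byte value of 0x03 or less, so the payload can never reproduce a start code. The fragment below is illustrative only, with an arbitrary example payload, and shows both the escaping and the matching removal step a decoder performs before parsing.

<syntaxhighlight lang="python">
def insert_emulation_prevention(rbsp: bytes) -> bytes:
    """Insert 0x03 after any two consecutive zero bytes that would otherwise be
    followed by 0x00, 0x01, 0x02 or 0x03, so the escaped payload can never
    imitate a start code (0x000001) or the prevention byte itself."""
    out = bytearray()
    zeros = 0
    for b in rbsp:
        if zeros >= 2 and b <= 0x03:
            out.append(0x03)
            zeros = 0
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

def remove_emulation_prevention(ebsp: bytes) -> bytes:
    """Reverse step performed by a decoder before parsing the payload."""
    out = bytearray()
    zeros = 0
    for b in ebsp:
        if zeros >= 2 and b == 0x03:
            zeros = 0
            continue                 # drop the emulation-prevention byte
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

payload = bytes([0x00, 0x00, 0x01, 0x42, 0x00, 0x00, 0x00])
escaped = insert_emulation_prevention(payload)
assert remove_emulation_prevention(escaped) == payload
print(escaped.hex())   # 000003014200000300
</syntaxhighlight>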
These techniques, along with several others, help H.264 to perform significantly better than any prior standard under a wide variety of circumstances in a wide variety of application environments. H.264 can often perform radically better than MPEG-2 video, typically obtaining the same quality at half of the bit rate or less, especially on high bit rate and high resolution video content.<ref>{{cite web|author=Apple Inc. |url=https://www.apple.com/quicktime/technologies/h264/faq.html |title=H.264 FAQ |publisher=Apple |date=1999-03-26 |access-date=2010-05-17 |url-status=dead |archive-url=https://web.archive.org/web/20100307022217/http://www.apple.com/quicktime/technologies/h264/faq.html |archive-date=March 7, 2010 }}</ref>

Like other ISO/IEC MPEG video standards, H.264/AVC has a reference software implementation that can be freely downloaded.<ref>{{cite web|author=Karsten Suehring |url=http://iphome.hhi.de/suehring/tml/download/ |title=H.264/AVC JM Reference Software Download |publisher=Iphome.hhi.de |access-date=2010-05-17}}</ref> Its main purpose is to give examples of H.264/AVC features, rather than being a useful application ''per se''. Some reference hardware design work has also been conducted in the [[Moving Picture Experts Group]].

The above-mentioned aspects include features in all profiles of H.264. A profile for a codec is a set of features of that codec identified to meet a certain set of specifications of intended applications. This means that many of the features listed are not supported in some profiles. Various profiles of H.264/AVC are discussed in the next section.