Motion estimation
{{Short description|Process used in video coding/compression}}
[[Image:Elephantsdream_vectorstill06.png|thumb|350px|Motion vectors that result from a movement into the <math>z</math>-plane of the image, combined with a lateral movement to the lower-right. This is a visualization of the motion estimation performed in order to compress an MPEG movie.]]

In [[computer vision]] and [[image processing]], '''motion estimation''' is the process of determining ''motion vectors'' that describe the transformation from one 2D image to another, usually from adjacent [[video frame|frames]] in a video sequence. It is an [[well-posed problem|ill-posed problem]], as the [[motion]] happens in three dimensions (3D) but the images are a projection of the 3D scene onto a 2D plane. The motion vectors may relate to the whole image (''global motion estimation'') or to specific parts, such as rectangular blocks, arbitrarily shaped patches or even individual [[pixel]]s. The motion vectors may be represented by a translational model or by one of many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom.

==Related terms==
More often than not, the terms ''motion estimation'' and ''[[optical flow]]'' are used interchangeably.{{citation needed|date=August 2019}} Motion estimation is also related in concept to ''[[image registration]]'' and ''[[stereo correspondence]]''.<ref name="Liu2006">{{cite book|author=John X. Liu|title=Computer Vision and Robotics|url=https://books.google.com/books?id=pmXzAbPUvnYC&q=%22motion+estimation%22+correspondence|year=2006|publisher=Nova Publishers|isbn=978-1-59454-357-9}}</ref> In fact, all of these terms refer to the process of [[correspondence problem|finding corresponding points]] between two images or video frames. The points that correspond to each other in two views (images or frames) of a real scene or object are usually the same point in that scene or on that object.

Before motion estimation can be performed, a measure of correspondence must be defined: the matching metric, which measures how similar two image points are. There is no right or wrong choice here; the matching metric is usually chosen according to what the final estimated motion will be used for, as well as the optimisation strategy in the estimation process.
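As an illustration, the following is a minimal sketch of one widely used matching metric, the sum of absolute differences (SAD), computed between two equally sized image blocks. The function name and the use of NumPy are illustrative assumptions, not part of any coding standard.

<syntaxhighlight lang="python">
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Sum of absolute differences between two equally sized image blocks.

    The blocks are cast to a signed integer type first so the subtraction
    cannot wrap around for unsigned 8-bit pixel values. Lower scores
    indicate a better match.
    """
    diff = block_a.astype(np.int32) - block_b.astype(np.int32)
    return float(np.abs(diff).sum())
</syntaxhighlight>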
Each ''motion vector'' is used to represent a ''[[macroblock]]'' in a picture based on the position of this macroblock (or a similar one) in another picture, called the reference picture.

The [[H.264/MPEG-4 AVC]] standard defines ''motion vector'' as:

<blockquote>motion vector: a two-dimensional vector used for inter prediction that provides an offset from the coordinates in the decoded picture to the coordinates in a reference picture.<ref>[http://www.stewe.org/itu-recs/h264.pdf Latest working draft of H.264/MPEG-4 AVC] {{webarchive|url=https://web.archive.org/web/20040723160536/http://www.stewe.org/itu-recs/h264.pdf |date=2004-07-23 }}. Retrieved on 2008-02-29.</ref><ref>{{Cite web|url=http://www.hhi.fraunhofer.de/fileadmin/hhi/downloads/IP/ip_ic_H.264-MPEG4-AVC-Version8-FinalDraft.pdf|title=Latest working draft of H.264/MPEG-4 AVC on hhi.fraunhofer.de.}}{{Dead link|date=October 2023 |bot=InternetArchiveBot |fix-attempted=yes }}</ref></blockquote>

==Algorithms==
The methods for finding motion vectors can be categorised into pixel-based methods ("direct") and feature-based methods ("indirect"). A famous debate resulted in two papers from the opposing factions being produced to try to establish a conclusion.<ref>Philip H. S. Torr and Andrew Zisserman: [https://www.robots.ox.ac.uk/~vgg/publications/2000/Torr00a/torr00a.pdf Feature Based Methods for Structure and Motion Estimation], ICCV Workshop on Vision Algorithms, pages 278–294, 1999.</ref><ref>Michal Irani and P. Anandan: [https://web.archive.org/web/20180102072903/https://pdfs.semanticscholar.org/3d18/95f35202c2f421491df10105ff83c851ebd1.pdf About Direct Methods], ICCV Workshop on Vision Algorithms, pages 267–277, 1999.</ref>

===Direct methods===
* [[Block-matching algorithm]]
* [[Phase correlation]] and frequency domain methods
* Pixel recursive algorithms
* [[Optical flow]]

===Indirect methods===
''Indirect methods'' use features, such as [[corner detection]], and match corresponding features between frames, usually with a statistical function applied over a local or global area. The purpose of the statistical function is to remove matches that do not correspond to the actual motion. Statistical functions that have been used successfully include [[RANSAC]].

===Additional note on the categorization===
It can be argued that almost all methods require some kind of definition of the matching criteria. The difference is only whether one summarises over a local image region first and then compares the summaries (as in feature-based methods), or compares each pixel first (for example, by squaring the difference) and then summarises over a local image region (block-based and filter-based motion estimation). An emerging type of matching criterion summarises a local image region first for every pixel location (through some feature transform such as a Laplacian transform), compares each summarised pixel, and then summarises over a local image region again.<ref>Rui Xu, David Taubman and Aous Thabit Naman, '[https://ieeexplore.ieee.org/abstract/document/7370941/ Motion Estimation Based on Mutual Information and Adaptive Multi-scale Thresholding]', IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1095–1108, March 2016.</ref> Some matching criteria can exclude points that do not actually correspond to each other despite producing a good matching score; others lack this ability, but they are still matching criteria.

==Affine motion estimation==
[[Affine motion estimation]] is a technique used in computer vision and image processing to estimate the motion between two images or frames. It assumes that the motion can be modeled as an affine transformation (translation, rotation and zooming), which is a linear transformation followed by a translation.
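The following is a minimal sketch of affine motion estimation between two grayscale frames, assuming the OpenCV library: corner features are tracked between frames (an indirect method) and the affine model is fitted robustly with RANSAC. The function name and parameter values are illustrative assumptions, not a definitive implementation.

<syntaxhighlight lang="python">
import cv2
import numpy as np

def estimate_affine_motion(prev_gray: np.ndarray, next_gray: np.ndarray):
    """Estimate a 2x3 affine motion model between two grayscale frames.

    Corners detected in the first frame are tracked into the second frame
    with pyramidal Lucas-Kanade optical flow; RANSAC then fits an affine
    transformation while rejecting matches that do not follow the
    dominant motion.
    """
    # Detect up to 200 corner features (parameter values are illustrative).
    corners = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 10)
    # Track the corners into the next frame.
    tracked, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, corners, None)
    good_prev = corners[status.flatten() == 1]
    good_next = tracked[status.flatten() == 1]
    # Robust affine fit; RANSAC discards outlier feature matches.
    matrix, _inliers = cv2.estimateAffine2D(good_prev, good_next, method=cv2.RANSAC)
    return matrix  # 2x3 matrix: a 2x2 linear part plus a translation column
</syntaxhighlight>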
==Applications==
[[File:Motion_interpolation_example.jpg|thumb|Video frames with [[motion interpolation]]]]

===Video coding===
Applying the motion vectors to an image to synthesize the transformation to the next image is called [[motion compensation]].<ref name="FurhtGreenberg2012">{{cite book|author1=Borko Furht|author2=Joshua Greenberg|author3=Raymond Westwater|title=Motion Estimation Algorithms for Video Compression|url=https://books.google.com/books?id=OaLhBwAAQBAJ&q=%22motion+compensation%22|date=6 December 2012|publisher=Springer Science & Business Media|isbn=978-1-4615-6241-2}}</ref> It is most easily applied to [[discrete cosine transform]] (DCT) based [[video coding standards]], because the coding is performed in blocks.<ref>{{cite book |last1=Swartz |first1=Charles S. |title=Understanding Digital Cinema: A Professional Handbook |date=2005 |publisher=[[Taylor & Francis]] |isbn=9780240806174 |page=143 |url=https://books.google.com/books?id=tYw3ehoBnjkC&pg=PA143}}</ref>

As a way of exploiting temporal redundancy, motion estimation and compensation are key parts of [[video compression]]. Almost all video coding standards use block-based motion estimation and compensation, including the [[MPEG]] series up to the most recent standard, [[HEVC]].

===3D reconstruction===
In [[simultaneous localization and mapping]], a 3D model of a scene is reconstructed using images from a moving camera.<ref>Kerl, Christian, Jürgen Sturm, and [[Daniel Cremers]]. "[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.402.5544&rep=rep1&type=pdf Dense visual SLAM for RGB-D cameras]." 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013.</ref>

==See also==
* [[Moving object detection]]
* [[Graphics processing unit]]
* [[Vision processing unit]]
* [[Scale-invariant feature transform]]

==References==
<references/>

{{Digital image processing}}

[[Category:Video processing]]
[[Category:Motion (physics)]]
[[Category:Motion in computer vision]]