=== 3D scene modeling, recognition and tracking ===

This application uses SIFT features for [[3D single-object recognition|3D object recognition]] and [[3D modeling]] in the context of [[augmented reality]], in which synthetic objects with accurate pose are superimposed on real images. SIFT matching is done for a number of 2D images of a scene or object taken from different angles. This is used with [[bundle adjustment]], initialized from an [[essential matrix]] or [[trifocal tensor]], to build a sparse 3D model of the viewed scene and to simultaneously recover camera poses and [[Geometric camera calibration|calibration]] parameters. The position, orientation and size of the virtual object are then defined relative to the coordinate frame of the recovered model. For online [[match moving]], SIFT features are again extracted from the current video frame and matched to the features already computed for the world model, resulting in a set of 2D-to-3D correspondences. These correspondences are then used to compute the current camera pose for the virtual projection and final rendering. A regularization technique is used to reduce the jitter in the virtual projection.<ref name="Gordon2006" /> SIFT feature directions have also been used to increase the robustness of this process.<ref name="SIFTOrientationTrifocal" /><ref name="SIFTOrientationPose" /> 3D extensions of SIFT have also been evaluated for [[true 3D]] object recognition and retrieval.<ref name=Flitton2010 /><ref name="flitton13interestpoint">{{cite journal| author=Flitton, G.T., Breckon, T.P., Megherbi, N.| title=A Comparison of 3D Interest Point Descriptors with Application to Airport Baggage Object Detection in Complex CT Imagery| journal=Pattern Recognition| volume=46| issue=9| pages=2420–2436| year=2013| doi=10.1016/j.patcog.2013.02.008| bibcode=2013PatRe..46.2420F| hdl=1826/15213| hdl-access=free}}</ref>
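The online match-moving step can be illustrated with a minimal sketch using the OpenCV SIFT implementation: features extracted from the current frame are matched against descriptors stored with the sparse 3D model, and the resulting 2D-to-3D correspondences are passed to a RANSAC-based PnP solver to recover the camera pose. The helper name and the precomputed model arrays (<code>model_descriptors</code>, <code>model_points_3d</code>) are assumptions for illustration, not part of the cited system.

<syntaxhighlight lang="python">
import numpy as np
import cv2


def estimate_camera_pose(frame_gray, model_descriptors, model_points_3d,
                         camera_matrix, dist_coeffs=None):
    """Illustrative sketch (not the cited implementation): match SIFT
    features of the current frame to a precomputed sparse 3D model and
    recover the camera pose from the 2D-to-3D correspondences.

    model_descriptors : (N, 128) float32 SIFT descriptors of model points
    model_points_3d   : (N, 3) coordinates in the model's coordinate frame
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(frame_gray, None)
    if descriptors is None:
        return None  # no features detected in this frame

    # Nearest-neighbour matching with Lowe's ratio test to discard
    # ambiguous frame-to-model matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(descriptors, model_descriptors, k=2)
    good = [m[0] for m in knn
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 6:
        return None  # too few correspondences for a reliable pose

    image_pts = np.float32([keypoints[m.queryIdx].pt for m in good])
    object_pts = np.float32([model_points_3d[m.trainIdx] for m in good])

    # RANSAC-based PnP: estimates the camera rotation and translation
    # relative to the model while tolerating remaining outlier matches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_pts, image_pts, camera_matrix, dist_coeffs)
    return (rvec, tvec, inliers) if ok else None
</syntaxhighlight>

In a complete system the pose returned per frame would additionally be regularized (smoothed) over time before rendering the virtual object, as noted above.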