==Feature-based approach==
[[File:Artificial Neural Network.jpg|thumb|The hidden layer outputs a vector that holds classification information about the image and is used in the template matching algorithm as the features of the image.]]
The feature-based approach to template matching relies on the extraction of [[Feature (computer vision)|image features]], such as shapes, textures, and colors, that match the target image or frame. This approach is usually achieved using [[Neural network|neural networks]] and [[Deep learning|deep-learning]] [[Statistical classification|classifiers]] such as VGG, [[AlexNet]], and [[Residual neural network|ResNet]].{{Citation needed|date=January 2023}} [[Convolutional neural network|Convolutional neural networks]] (CNNs), on which many modern classifiers are based, process an image by passing it through a series of hidden layers, each producing a [[Vector space|vector]] with classification information about the image. These vectors are extracted from the network and used as the features of the image. [[Feature extraction]] using [[deep neural networks]], such as CNNs, has proven extremely effective and has become the standard in state-of-the-art template matching algorithms.<ref>{{cite arXiv|last1=Zhang|first1=Richard|last2=Isola|first2=Phillip|last3=Efros|first3=Alexei A.|last4=Shechtman|first4=Eli|last5=Wang|first5=Oliver|date=2018-01-11|title=The Unreasonable Effectiveness of Deep Features as a Perceptual Metric|eprint=1801.03924|class=cs.CV}}</ref> The feature-based approach is often more robust than the template-based approach described below, as it can match templates under non-rigid and out-of-plane [[Rigid transformation|transformations]], as well as heavy background clutter and illumination changes.<ref>{{Cite arXiv|title=Template Matching with Deformable Diversity Similarity|last=Talmi, Mechrez, Zelnik-Manor|eprint=1612.02190|class=cs.CV|year=2016}}</ref><ref>Li, Yuhai; L. Jian; T. Jinwen; X. Honbo. "[https://www.spiedigitallibrary.org/conference-proceedings-of-spie/6043/60431P/A-fast-rotated-template-matching-based-on-point-feature/10.1117/12.654932.short A fast rotated template matching based on point feature]." Proceedings of the SPIE 6043 (2005): 453–459. MIPPR 2005: SAR and Multispectral Image Processing.</ref><ref>B. Sirmacek, C. Unsalan. "[https://www.iro.umontreal.ca/~mignotte/IFT6150/Articles/graphbuilding.pdf Urban Area and Building Detection Using SIFT Keypoints and Graph Theory]", IEEE Transactions on Geoscience and Remote Sensing, Vol. 47 (4), pp. 1156–1167, April 2009.</ref>
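A minimal sketch of this pipeline is shown below, assuming the PyTorch and torchvision libraries and a pretrained VGG-16 truncated at an intermediate layer as the feature extractor; the helper names <code>extract_features</code> and <code>match_template</code> are illustrative only and not part of any standard library. Feature maps of the scene and the template are compared with a normalized cross-correlation, which is one simple way to realize feature-based matching rather than a definitive implementation.

<syntaxhighlight lang="python">
# Sketch of feature-based template matching with CNN features
# (assumes PyTorch and torchvision; names are illustrative).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Truncate a pretrained VGG-16 after an intermediate convolutional block so
# that the hidden-layer activations serve as the image features.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image: Image.Image) -> torch.Tensor:
    """Return the CNN feature map (1 x C x H x W) for an RGB image."""
    with torch.no_grad():
        return vgg(preprocess(image).unsqueeze(0))

def match_template(scene: Image.Image, template: Image.Image) -> tuple[int, int]:
    """Slide the template's feature map over the scene's feature map and
    return the best-matching location in feature-map coordinates."""
    scene_feat = extract_features(scene)
    templ_feat = extract_features(template)
    # L2-normalize along the channel dimension so the correlation behaves
    # like a cosine similarity and is less sensitive to illumination changes.
    scene_feat = F.normalize(scene_feat, dim=1)
    templ_feat = F.normalize(templ_feat, dim=1)
    # Cross-correlate by using the template's features as a convolution kernel.
    response = F.conv2d(scene_feat, templ_feat)
    _, _, h, w = response.shape
    idx = response.view(-1).argmax().item()
    return idx // w, idx % w   # (row, column) of the strongest response
</syntaxhighlight>

Because the comparison runs on hidden-layer feature maps rather than raw pixels, the returned location is in the (downsampled) coordinates of the feature map and must be scaled back to image coordinates by the network's total stride.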