== Algorithm classification ==

=== Intensity-based vs feature-based ===
Image registration or image alignment algorithms can be classified into intensity-based and feature-based.<ref name="AG">A. Ardeshir Goshtasby: [http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471649546.html 2-D and 3-D Image Registration for Medical, Remote Sensing, and Industrial Applications], Wiley Press, 2005.</ref> One of the images is referred to as the ''target'', ''fixed'' or ''sensed'' image and the others are referred to as the ''moving'' or ''source'' images. Image registration involves spatially transforming the source/moving image(s) to align with the target image. The reference frame in the target image is stationary, while the other datasets are transformed to match the target.<ref name="AG"/> Intensity-based methods compare intensity patterns in images via correlation metrics, while feature-based methods find [[correspondence problem|correspondence]] between image features such as points, lines, and contours.<ref name="AG"/> Intensity-based methods register entire images or sub-images. If sub-images are registered, the centers of corresponding sub-images are treated as corresponding feature points. Feature-based methods establish a correspondence between a number of especially distinct points in images. Knowing the correspondence between a number of points in images, a geometrical transformation is then determined to map the target image to the reference image, thereby establishing point-by-point correspondence between the reference and target images.<ref name="AG"/> Methods combining intensity-based and feature-based information have also been developed.<ref name="PapademetrisJackowski2004">{{cite book|last1=Papademetris|first1=Xenophon|title=Medical Image Computing and Computer-Assisted Intervention – MICCAI 2004|last2=Jackowski|first2=Andrea P.|last3=Schultz|first3=Robert T.|last4=Staib|first4=Lawrence H.|last5=Duncan|first5=James S.|chapter=Integrated Intensity and Point-Feature Nonrigid Registration|volume=3216|year=2004|pages=763–770|issn=0302-9743|doi=10.1007/978-3-540-30135-6_93|series=Lecture Notes in Computer Science|isbn=978-3-540-22976-6}}</ref>

=== Transformation models ===
Image registration algorithms can also be classified according to the transformation models they use to relate the target image space to the reference image space. The first broad category of transformation models includes [[linear transformation]]s, which include rotation, scaling, translation, and other affine transforms.<ref>http://www.comp.nus.edu.sg/~cs4243/lecture/register.pdf {{Bare URL PDF|date=March 2022}}</ref> Linear transformations are global in nature, so they cannot model local geometric differences between images.<ref name="AG"/>

The second category of transformations allows 'elastic' or 'nonrigid' transformations, which are capable of locally warping the target image to align with the reference image. Nonrigid transformations include [[radial basis functions]] ([[Thin plate spline|thin-plate]] or surface splines, [[Radial basis functions#Types|multiquadrics]], and [[support (mathematics)#Compact support|compactly supported transformations]]<ref name="AG"/>), physical continuum models (viscous fluids), and large deformation models ([[diffeomorphism]]s).

Transformations are commonly described by a parametrization, where the model dictates the number of parameters. For instance, the translation of a full image can be described by a single parameter, a translation vector. These models are called parametric models. Non-parametric models, on the other hand, do not follow any parametrization, allowing each image element to be displaced arbitrarily.<ref>{{cite journal|last1=Sotiras|first1=A.|last2=Davatzikos|first2=C.|last3=Paragios|first3=N.|title=Deformable Medical Image Registration: A Survey|journal=IEEE Transactions on Medical Imaging|date=July 2013|volume=32|issue=7|pages=1153–1190|doi=10.1109/TMI.2013.2265603|pmid=23739795|pmc=3745275|url=https://hal.inria.fr/hal-00858737/document}}</ref> A number of programs, such as [[Statistical parametric mapping|SPM]] and [[AIR (program)|AIR]], implement both estimation and application of such warp fields.
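The role of the parameters can be illustrated with a short sketch. The following example is illustrative only, not taken from the cited sources; it assumes NumPy and SciPy, and the helper name <code>rigid_transform</code> is hypothetical. It resamples an image under a three-parameter rigid model: one rotation angle plus a two-component translation vector.

<syntaxhighlight lang="python">
# Minimal sketch of a parametric (rigid) transformation model:
# one rotation angle plus a two-component translation vector.
import numpy as np
from scipy import ndimage

def rigid_transform(image, angle_rad, translation):
    """Resample `image` under a rotation about its center followed by
    a translation. `angle_rad` and `translation` are the 3 parameters."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotation = np.array([[c, -s],
                         [s,  c]])
    # affine_transform maps *output* coordinates to *input* coordinates,
    # so pass the inverse rotation (its transpose) and compensate the offset.
    center = (np.array(image.shape) - 1) / 2.0
    inv = rotation.T
    offset = center - inv @ (center + np.asarray(translation, dtype=float))
    return ndimage.affine_transform(image, inv, offset=offset, order=1)

# Usage: a toy moving image, shifted and slightly rotated.
moving = np.zeros((64, 64))
moving[20:40, 25:35] = 1.0
aligned = rigid_transform(moving, angle_rad=0.1, translation=(2.0, -3.0))
</syntaxhighlight>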
=== Transformations of coordinates via the law of function composition rather than addition ===
Alternatively, many advanced methods for spatial normalization build on structure-preserving transformations, [[homeomorphism]]s and [[diffeomorphism]]s, since they carry smooth submanifolds smoothly during transformation. Diffeomorphisms are generated in the modern field of [[Computational anatomy|Computational Anatomy]] based on flows, since diffeomorphisms are not additive: although they form a group, it is a group under the law of function composition. For this reason, flows, which generalize the ideas of additive groups, allow for generating large deformations that preserve topology, providing one-to-one and onto transformations. Computational methods for generating such transformations are often called [[Large deformation diffeomorphic metric mapping|LDDMM]],<ref>{{Cite book|url=https://books.google.com/books?id=8WdlWJepgWMC|title=Brain Warping|last=Toga|first=Arthur W.|date=1998-11-17|publisher=Academic Press|isbn=9780080525549|language=en}}</ref><ref>{{Cite web|url=https://utah.pure.elsevier.com/en/publications/landmark-matching-on-brain-surfaces-via-large-deformation-diffeom-2|title=Landmark matching on brain surfaces via large deformation diffeomorphisms on the sphere – University of Utah|website=utah.pure.elsevier.com|access-date=2016-03-21|archive-url=https://web.archive.org/web/20180629235930/https://utah.pure.elsevier.com/en/publications/landmark-matching-on-brain-surfaces-via-large-deformation-diffeom-2|archive-date=2018-06-29|url-status=dead}}</ref><ref>{{Cite journal|url=https://www.researchgate.net/publication/220660081|title=Computing Large Deformation Metric Mappings via Geodesic Flows of Diffeomorphisms|journal=International Journal of Computer Vision|volume=61|issue=2|pages=139–157|doi=10.1023/B:VISI.0000043755.93987.aa|access-date=2016-03-21|year=2005|last1=Beg|first1=M. Faisal|last2=Miller|first2=Michael I.|last3=Trouvé|first3=Alain|last4=Younes|first4=Laurent|s2cid=17772076}}</ref><ref>{{Cite journal|last1=Joshi|first1=S. C.|last2=Miller|first2=M. I.|date=2000-01-01|title=Landmark matching via large deformation diffeomorphisms|journal=IEEE Transactions on Image Processing|volume=9|issue=8|pages=1357–1370|doi=10.1109/83.855431|issn=1057-7149|pmid=18262973|bibcode=2000ITIP....9.1357J}}</ref> which provide flows of diffeomorphisms as the main computational tool for connecting coordinate systems corresponding to [[Computational anatomy#The metric on geodesic flows of landmarks, surfaces, and volumes within the orbit|the geodesic flows of Computational Anatomy]].
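The composition law can be illustrated with a simplified sketch (assuming NumPy and SciPy; this is not the LDDMM algorithm of the references, only a toy scaling-and-squaring construction): a large deformation is built by repeatedly composing a small, smooth displacement with itself, where simple addition of the same displacements would not, in general, keep the map invertible.

<syntaxhighlight lang="python">
# Toy sketch: large deformations via function composition, not addition.
import numpy as np
from scipy.ndimage import map_coordinates

def compose(disp_a, disp_b):
    """Displacement of the composite map (id + disp_a) o (id + disp_b).
    Each displacement field has shape (2, ny, nx)."""
    ny, nx = disp_a.shape[1:]
    grid = np.mgrid[0:ny, 0:nx].astype(float)
    warped = grid + disp_b
    # Sample disp_a at the points already moved by disp_b, then add disp_b.
    sampled = np.stack([map_coordinates(disp_a[i], warped,
                                        order=1, mode='nearest')
                        for i in range(2)])
    return sampled + disp_b

# A small, smooth step; composing it with itself k times ("squaring")
# accumulates 2**k steps while preserving invertibility far better than
# multiplying the displacement by 2**k would.
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
step = 0.05 * np.stack([np.sin(2 * np.pi * x / nx),
                        np.zeros((ny, nx))])
flow = step
for _ in range(4):          # 2**4 = 16 composed small steps
    flow = compose(flow, flow)
</syntaxhighlight>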
There are a number of programs which generate diffeomorphic transformations of coordinates via [[Diffeomorphic Mapping in Computational Anatomy|diffeomorphic mapping]], including MRI Studio<ref>{{Cite web|url=https://www.mristudio.org|title=MRI Studio}}</ref> and MRICloud.org.<ref>{{Cite web|url=https://mricloud.org/|title=MRICloud Brain Mapping}}</ref>

=== Spatial vs frequency domain methods ===
Spatial methods operate in the image domain, matching intensity patterns or features in images. Some of the feature matching algorithms are outgrowths of traditional techniques for performing manual image registration, in which an operator chooses corresponding [[Feature (computer vision)|control points]] (CP) in images. When the number of control points exceeds the minimum required to define the appropriate transformation model, iterative algorithms like [[RANSAC]] can be used to robustly estimate the parameters of a particular transformation type (e.g. affine) for registration of the images.

Frequency-domain methods find the transformation parameters for registration of the images while working in the transform domain. Such methods work for simple transformations, such as translation, rotation, and scaling. Applying the [[phase correlation]] method to a pair of images produces a third image which contains a single peak. The location of this peak corresponds to the relative translation between the images. Unlike many spatial-domain algorithms, the phase correlation method is resilient to noise, occlusions, and other defects typical of medical or satellite images. Additionally, phase correlation uses the [[fast Fourier transform]] to compute the cross-correlation between the two images, generally resulting in large performance gains. The method can be extended to determine rotation and scaling differences between two images by first converting the images to [[log-polar coordinates]].<ref>{{cite journal|author=B. Srinivasa Reddy|author2=B. N. Chatterji|title=An FFT-Based Technique for Translation, Rotation and Scale-Invariant Image Registration|journal=IEEE Transactions on Image Processing|volume=5|issue=8|pages=1266–1271|date=Aug 1996|doi=10.1109/83.506761|pmid=18285214|bibcode=1996ITIP....5.1266R|s2cid=6562358}}</ref><ref>Zokai, S., Wolberg, G., [https://www-cs.ccny.cuny.edu/~wolberg/pub/tip05.pdf "Image Registration Using Log-Polar Mappings for Recovery of Large-Scale Similarity and Projective Transformations"]. ''IEEE Transactions on Image Processing'', vol. 14, no. 10, October 2005.</ref> Due to properties of the [[Fourier transform]], the rotation and scaling parameters can be determined in a manner invariant to translation.
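A minimal sketch of phase correlation for pure translation (assuming NumPy; the helper name <code>phase_correlation</code> is illustrative) follows the steps described above: normalize the cross-power spectrum so that only phase remains, invert the FFT, and read the translation off the location of the single peak.

<syntaxhighlight lang="python">
# Minimal phase-correlation sketch for estimating a pure translation.
import numpy as np

def phase_correlation(fixed, moving):
    """Estimate the integer translation that maps `fixed` onto `moving`."""
    F = np.fft.fft2(fixed)
    G = np.fft.fft2(moving)
    cross_power = F * np.conj(G)
    cross_power /= np.maximum(np.abs(cross_power), 1e-12)  # keep phase only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    shape = np.array(fixed.shape)
    shifts = np.array(peak, dtype=float)
    wrap = shifts > shape / 2
    shifts[wrap] -= shape[wrap]
    return shifts

# Usage: recover a known circular shift of a random image.
fixed = np.random.rand(128, 128)
moving = np.roll(fixed, shift=(5, -8), axis=(0, 1))
print(phase_correlation(fixed, moving))   # approximately [ 5. -8.]
</syntaxhighlight>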
=== Single- vs multi-modality methods ===
Another classification can be made between single-modality and multi-modality methods. Single-modality methods register images acquired by the same scanner or sensor type, while multi-modality registration methods register images acquired by different scanner or sensor types. Multi-modality registration methods are often used in [[medical imaging]], as images of a subject are frequently obtained from different scanners. Examples include registration of brain [[Computed tomography|CT]]/[[MRI]] images or whole-body [[Positron emission tomography|PET]]/[[Computed tomography|CT]] images for tumor localization, registration of contrast-enhanced [[Computed tomography|CT]] images against non-contrast-enhanced [[Computed tomography|CT]] images<ref>{{Cite journal |last1=Ristea |first1=Nicolae-Catalin |last2=Miron |first2=Andreea-Iuliana |last3=Savencu |first3=Olivian |last4=Georgescu |first4=Mariana-Iuliana |last5=Verga |first5=Nicolae |last6=Khan |first6=Fahad Shahbaz |last7=Ionescu |first7=Radu Tudor |title=CyTran: A cycle-consistent transformer with multi-level consistency for non-contrast to contrast CT translation |journal=Neurocomputing |year=2023 |volume=538 |page=126211 |doi=10.1016/j.neucom.2023.03.072 |arxiv=2110.06400 |s2cid=257952429}}</ref> for segmentation of specific parts of the anatomy, and registration of [[ultrasound]] and [[Computed tomography|CT]] images for [[prostate]] localization in [[radiotherapy]].

=== Automatic vs interactive methods ===
Registration methods may be classified based on the level of automation they provide. Manual, interactive, semi-automatic, and automatic methods have been developed. Manual methods provide tools to align the images manually. Interactive methods reduce user bias by performing certain key operations automatically while still relying on the user to guide the registration. Semi-automatic methods perform more of the registration steps automatically but depend on the user to verify the correctness of a registration. Automatic methods do not allow any user interaction and perform all registration steps automatically.

=== Similarity measures for image registration ===
Image similarities are broadly used in [[medical imaging]]. An image [[similarity measure]] quantifies the degree of similarity between intensity patterns in two images.<ref name="AG"/> The choice of an image similarity measure depends on the modality of the images to be registered. Common examples of image similarity measures include [[cross-correlation]], [[mutual information]], sum of squared intensity differences, and ratio image uniformity. Mutual information and normalized mutual information are the most popular image similarity measures for registration of multimodality images. Cross-correlation, sum of squared intensity differences, and ratio image uniformity are commonly used for registration of images in the same modality.

Many new features for cost functions, based on matching methods via [[Computational anatomy|large deformations]], have emerged in the field of [[Computational anatomy|Computational Anatomy]], including [[Computational anatomy#Measure matching: unregistered landmarks|measure matching]] for point sets or landmarks without correspondence, and [[Computational anatomy#Curve matching|curve matching]] and [[Computational anatomy#Surface matching|surface matching]] via mathematical [[Current (mathematics)|currents]] and varifolds.
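Simple versions of three of the classical measures above can be sketched as follows (assuming NumPy; the histogram-based mutual information estimator shown is one common choice among several, not a prescribed implementation):

<syntaxhighlight lang="python">
# Sketches of common image similarity measures.
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences (lower is more similar)."""
    return np.sum((a - b) ** 2)

def ncc(a, b):
    """Normalized cross-correlation (higher is more similar)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

def mutual_information(a, b, bins=32):
    """Joint-histogram estimate of mutual information
    (higher is more similar; suitable for multimodality images)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint probability
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0                              # avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))
</syntaxhighlight>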