== Techniques ==
{{Update|section|date=January 2023|reason=We should update this to include progress in improving superresolution with machine learning and neural networks.}}

===Optical or diffractive super-resolution===
Substituting spatial-frequency bands: though the bandwidth allowed by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum. [[Dark-field microscopy|Dark-field illumination]] in microscopy is an example. See also [[aperture synthesis]].

[[File:Structured Illumination Superresolution.png|thumb|left|220px|The "structured illumination" technique of super-resolution is related to [[moiré pattern]]s. The target, a band of fine fringes (top row), is beyond the diffraction limit. When a band of somewhat coarser resolvable fringes (second row) is artificially superimposed, the combination (third row) features [[Moiré pattern|moiré]] components that are within the diffraction limit and hence contained in the image (bottom row), allowing the presence of the fine fringes to be inferred even though they are not themselves represented in the image.]]

====Multiplexing spatial-frequency bands====
An image is formed using the normal passband of the optical device. Then some known light structure, for example a set of light fringes that need not even be within the passband, is superimposed on the target.<ref name="Guerra 3555–3557"/><ref name="Gustaffson"/> The image now contains components resulting from the combination of the target and the superimposed light structure, e.g. [[Moiré pattern|moiré fringes]], and carries information about target detail which simple unstructured illumination does not. The "superresolved" components, however, need disentangling to be revealed. For an example, see structured illumination (figure to the left).

====Multiple parameter use within traditional diffraction limit====
If a target has no special polarization or wavelength properties, two polarization states or non-overlapping wavelength regions can be used to encode target details, one in a spatial-frequency band inside the cut-off limit, the other beyond it. Both would use normal passband transmission, but the two channels are then separately decoded to reconstitute the target structure with extended resolution.

====Probing near-field electromagnetic disturbance====
The usual discussion of super-resolution involves conventional imaging of an object by an optical system. Modern technology, however, allows probing the electromagnetic disturbance within molecular distances of the source,<ref name="near-field"/> which has superior resolution properties; see also [[evanescent waves]] and the development of the [[super lens]].

===Geometrical or image-processing super-resolution===
[[File:Super-resolution example closeup.png|thumb|right|220px|Compared to a single image marred by noise during its acquisition or transmission (left), the [[Signal-to-noise ratio (imaging)|signal-to-noise ratio]] is improved by suitable combination of several separately obtained images (right). This can be achieved only within the intrinsic resolution capability of the imaging process for revealing such detail.]]

====Multi-exposure image noise reduction====
When an image is degraded by noise, there can be more detail in the average of many exposures, even within the diffraction limit. See the example on the right.
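As an informal illustration of multi-exposure noise reduction, the following minimal sketch (Python with NumPy; the test image, noise level and frame count are arbitrary assumptions, not taken from the sources above) shows how averaging many independently noisy exposures of the same scene lowers the noise while leaving the diffraction-limited content of each frame unchanged:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical diffraction-limited "true" scene (any fixed image would do).
true_image = rng.random((64, 64))

def noisy_exposure(image, sigma=0.2):
    """Simulate one exposure corrupted by additive Gaussian noise."""
    return image + rng.normal(0.0, sigma, image.shape)

# Averaging N independent exposures reduces the noise standard deviation
# by roughly sqrt(N), revealing detail buried in any single frame.
single = noisy_exposure(true_image)
average = np.mean([noisy_exposure(true_image) for _ in range(64)], axis=0)

def rms_error(a):
    return np.sqrt(np.mean((a - true_image) ** 2))

print(f"RMS noise, single exposure : {rms_error(single):.4f}")   # about 0.20
print(f"RMS noise, 64-frame average: {rms_error(average):.4f}")  # about 0.025
</syntaxhighlight>

Because every frame is itself diffraction limited, the average reveals nothing beyond the passband; it only improves the signal-to-noise ratio, as noted in the figure caption above.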
====Single-frame deblurring====
{{main|Deblurring}}
Known defects in a given imaging situation, such as [[defocus]] or [[optical aberration|aberration]]s, can sometimes be mitigated in whole or in part by suitable spatial-frequency filtering of even a single image. Such procedures all stay within the diffraction-mandated passband and do not extend it. (A minimal numerical sketch of such filtering appears at the end of this section.)

[[File:Localization Resolution.png|thumb|left|220px|Both features extend over 3 pixels, but in different amounts, enabling them to be localized with precision superior to the pixel dimension.]]

====Sub-pixel image localization====
The location of a single source can be determined by computing the "center of gravity" ([[centroid]]) of the light distribution extending over several adjacent pixels (see figure on the left). Provided that there is enough light, this can be achieved with arbitrary precision, very much better than the pixel width of the detecting apparatus and than the resolution limit for deciding whether the source is single or double. This technique, which requires the presupposition that all the light comes from a single source, is the basis of what has become known as [[super-resolution microscopy]], e.g. [[stochastic optical reconstruction microscopy]] (STORM), where fluorescent probes attached to molecules give [[Nanoscopic scale|nanoscale]] distance information. It is also the mechanism underlying visual [[hyperacuity]].<ref>{{cite journal | last1 = Westheimer | first1 = G | year = 2012 | title = Optical superresolution and visual hyperacuity | journal = Prog Retin Eye Res | volume = 31 | issue = 5 | pages = 467–80 | doi = 10.1016/j.preteyeres.2012.05.001 | pmid = 22634484 | doi-access = free }}</ref> (A numerical sketch of centroid localization likewise appears at the end of this section.)

====Bayesian induction beyond traditional diffraction limit====
{{Main|Bayesian inference}}
Some object features, though beyond the diffraction limit, may be known to be associated with other object features that are within the limit and hence contained in the image. Conclusions about the presence of the full object can then be drawn, using statistical methods, from the available image data.<ref>Harris, J. L., 1964. Resolving power and decision making. J. Opt. Soc. Am. 54, 606–611.</ref> The classical example is Toraldo di Francia's proposition<ref>Toraldo di Francia, G., 1955. Resolving power and information. J. Opt. Soc. Am. 45, 497–501.</ref> of judging whether an image is that of a single or a double star by determining whether its width exceeds the spread from a single star. This can be achieved at separations well below the classical resolution bound, but it requires the prior restriction of the choice to "single or double?"

The approach can take the form of [[extrapolation|extrapolating]] the image in the frequency domain, by assuming that the object is an [[analytic function]] and that its [[Function (mathematics)|function]] values are known exactly in some [[Interval (mathematics)|interval]]. This method is severely limited by the ever-present noise in digital imaging systems, but it can work for [[radar]], [[astronomy]], [[microscope|microscopy]] or [[magnetic resonance imaging]].<ref>[[#refPoot12|D. Poot, B. Jeurissen, Y. Bastiaensen, J. Veraart, W. Van Hecke, P. M. Parizel, and J. Sijbers, "Super-Resolution for Multislice Diffusion Tensor Imaging", Magnetic Resonance in Medicine, (2012)]]</ref> More recently, a fast single-image super-resolution algorithm based on a closed-form solution to <math>\ell_2-\ell_2</math> problems has been proposed and demonstrated to significantly accelerate most existing Bayesian super-resolution methods.<ref>N. Zhao, Q. Wei, A. Basarab, N. Dobigeon, D. Kouamé and J.-Y. Tourneret, [https://arxiv.org/abs/1510.00143 "Fast single image super-resolution using a new analytical solution for <math>\ell_2-\ell_2</math> problems"], IEEE Trans. Image Process., 2016.</ref>
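As an informal illustration of the single-frame deblurring described above, the following sketch (Python with NumPy; the Gaussian blur model, test image and regularisation constant are arbitrary assumptions, and the Wiener-style filter is one common choice of "suitable spatial-frequency filtering", not the only one) partially undoes a blur whose point-spread function is known. It recovers only frequencies the blur attenuated but did not destroy, and so stays within the original passband:

<syntaxhighlight lang="python">
import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian point-spread function peaked at (shape//2), normalised to unit sum."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=1e-2):
    """Spatial-frequency inverse filter with regularisation constant k."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # transfer function of the blur
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)   # k prevents division by ~0
    return np.real(np.fft.ifft2(F_hat))

# Blur a test image with a known PSF, then partially undo the blur.
rng = np.random.default_rng(1)
image = rng.random((64, 64))
psf = gaussian_psf(image.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deblur(blurred, psf)

def rms_error(a):
    return np.sqrt(np.mean((a - image) ** 2))

print(f"RMS error, blurred image : {rms_error(blurred):.4f}")
print(f"RMS error, after filter  : {rms_error(restored):.4f}")   # smaller than above
</syntaxhighlight>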
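Similarly, as an informal illustration of the sub-pixel image localization described above, this sketch (Python with NumPy; the spot width and source position are arbitrary assumptions, and photon noise is omitted) computes the centroid of a pixelated blur spot and recovers the source position to a small fraction of a pixel:

<syntaxhighlight lang="python">
import numpy as np

# A hypothetical point source imaged as a diffraction-limited blur spot,
# modelled here as a Gaussian a few pixels wide.
size, sigma = 15, 1.8
true_x, true_y = 7.30, 6.85                  # source position in pixel units
yy, xx = np.mgrid[0:size, 0:size]
spot = np.exp(-((xx - true_x) ** 2 + (yy - true_y) ** 2) / (2 * sigma ** 2))

def centroid(image):
    """Centre of gravity of the light distribution over the pixel grid."""
    total = image.sum()
    y, x = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    return (x * image).sum() / total, (y * image).sum() / total

est_x, est_y = centroid(spot)
print(f"true source position: ({true_x:.3f}, {true_y:.3f})")
print(f"centroid estimate   : ({est_x:.3f}, {est_y:.3f})")   # agrees to well below one pixel
</syntaxhighlight>

In practice the attainable precision is limited by the number of detected photons and by background noise, and the method presupposes that all of the light comes from a single source, as noted above.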