==Inverse halftoning==
{{Multiple image |align= right |total_width= 250 |direction= vertical |image1= Grayscale Cat.jpg |alt1= Original image |caption1= Original image |image2= Dithered Cat.jpg |alt2= Dithered image |caption2= Dithered image |image3= Descreened Cat.jpg |alt3= Descreened image |caption3= Descreened image }}
Inverse halftoning, or descreening, is the process of reconstructing a high-quality continuous-tone image from its halftone version. Inverse halftoning is an ill-posed problem because different source images can produce the same halftone image; consequently, one halftone image has multiple plausible reconstructions. Additionally, information such as tones and fine details is discarded during halftoning and thus irrecoverably lost. Due to the variety of halftone patterns, it is not always obvious which algorithm gives the best quality.

[[File:Risingstar_test_crop.png|thumb|300px|Dots in the sky due to spatial [[aliasing]] caused by a halftone resized to a lower resolution]]
There are many situations where reconstruction is desired. For artists, editing halftone images is a challenging task. Even simple modifications such as altering the brightness usually work by changing the color tones; in halftone images, this additionally requires preserving the regular pattern. The same applies to more complex tools such as retouching. Many other image processing techniques are designed to operate on continuous-tone images.
For example, image compression algorithms are more efficient for those images.<ref>{{cite journal|last1=Ming Yuan Ting|last2=Riskin|first2=E.A.|author2-link=Eve Riskin|date=1994|title=Error-diffused image compression using a binary-to-gray-scale decoder and predictive pruned tree-structured vector quantization |journal=IEEE Transactions on Image Processing|volume=3|issue=6|pages=854–858|doi=10.1109/83.336256|pmid=18296253|bibcode=1994ITIP....3..854T|issn=1057-7149}}</ref> Another reason is visual quality, since halftoning degrades an image. Sudden tone changes of the original image are removed due to the limited tone variation in halftoned images. Halftoning can also introduce distortions and visual effects such as [[moiré pattern]]s. Especially when printed on newspaper, the halftone pattern becomes more visible due to the paper properties, and scanning and reprinting such images emphasizes the moiré patterns. Reconstructing them before reprinting is therefore important for reasonable quality.

===Spatial and frequency filtering===
The main steps of the procedure are the removal of the halftone pattern and the reconstruction of tone changes. Afterwards, it may be necessary to recover details to improve image quality. There are many halftoning algorithms, which can mostly be classified into the categories [[ordered dithering]], [[error diffusion]], and optimization-based methods. It is important to choose a proper descreening strategy, since these categories generate different patterns and most inverse halftoning algorithms are designed for a particular type of pattern. Time is another selection criterion, because many algorithms are iterative and therefore rather slow.

The most straightforward way to remove the halftone pattern is to apply a [[low-pass filter]], either in the spatial or in the frequency domain. A simple example is a [[Gaussian filter]]: it discards the high-frequency information, which blurs the image and simultaneously reduces the halftone pattern.
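As an illustration, the low-pass approach can be sketched in a few lines of pure Python. The sketch below (function names are illustrative, not from any particular library) applies a separable Gaussian blur to a 2-D list of 0/1 halftone pixels:

```python
import math

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized so the weights sum to 1."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(row, kernel, radius):
    """Convolve one row with the kernel, clamping indices at the borders."""
    n = len(row)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * row[idx]
        out.append(acc)
    return out

def descreen(image, sigma=1.5):
    """Separable Gaussian low-pass filter over a 2-D list of halftone pixels."""
    radius = int(3 * sigma)
    kernel = gaussian_kernel(sigma, radius)
    rows = [blur_1d(r, kernel, radius) for r in image]          # horizontal pass
    cols = [blur_1d(list(c), kernel, radius) for c in zip(*rows)]  # vertical pass
    return [list(r) for r in zip(*cols)]
```

Blurring a 50% checkerboard halftone with this filter yields values close to 0.5 away from the borders: the average tone is recovered while the dot pattern is erased, at the cost of also blurring any edges.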
This is similar to the blurring effect of the human eye when viewing a halftone image. In any case, it is important to pick a proper [[bandwidth (signal processing)|bandwidth]]: a too-limited bandwidth blurs out edges, while a high bandwidth produces a noisy image because it does not remove the pattern completely. Due to this trade-off, a plain low-pass filter cannot reconstruct reasonable edge information.

Further improvements can be achieved with edge enhancement. Decomposing the halftone image into its [[wavelet transform|wavelet representation]] makes it possible to pick information from different frequency bands.<ref>{{cite book|last1=Zixiang Xiong|last2=Orchard|first2=M.T.|last3=Ramchandran|first3=K.|title=Proceedings of 3rd IEEE International Conference on Image Processing |chapter=Inverse halftoning using wavelets |volume=1|pages=569–572|publisher=IEEE|doi=10.1109/icip.1996.559560|isbn=0-7803-3259-8|year=1996|s2cid=35950695}}</ref> Edges mostly consist of high-pass energy, so the extracted high-pass information can be used to treat areas around edges differently, emphasizing them while keeping low-pass information in smooth regions.

===Machine learning===
Another possibility for inverse halftoning is the use of [[machine learning]] algorithms based on [[artificial neural network]]s.<ref>{{Citation|last1=Li|first1=Yijun|title=Deep Joint Image Filtering|date=2016|work=Computer Vision – ECCV 2016|pages=154–169|publisher=Springer International Publishing|isbn=978-3-319-46492-3|last2=Huang|first2=Jia-Bin|last3=Ahuja|first3=Narendra|last4=Yang|first4=Ming-Hsuan|series=Lecture Notes in Computer Science |volume=9908 |doi=10.1007/978-3-319-46493-0_10}}</ref> These learning-based approaches can find a descreening technique that gets as close as possible to the ideal one. The idea is to use different strategies depending on the actual halftone image; even for different content within the same image, the strategy should vary.
[[Convolutional neural network]]s are well-suited for tasks like [[object detection]], which allows category-based descreening. They can also perform edge detection to enhance the details around edge areas. The results can be further improved by [[generative adversarial network]]s.<ref>{{cite journal|last1=Kim|first1=Tae-Hoon|last2=Park|first2=Sang Il|date=2018-07-30|title=Deep context-aware descreening and rescreening of halftone images|journal=ACM Transactions on Graphics|volume=37|issue=4|pages=1–12|doi=10.1145/3197517.3201377|s2cid=51881126|issn=0730-0301}}</ref> This type of network can artificially generate content and recover lost details. However, these methods are limited by the quality and completeness of the training data: halftone patterns that were not represented in the training data are rather hard to remove. Additionally, the learning process can take some time. By contrast, computing the inverse halftone image is fast compared to other iterative methods because it requires only a single computational step.

===Lookup table===
Unlike the other approaches, the [[lookup table]] method does not involve any filtering.<ref>{{cite book|title=Look-up table (LUT) method for inverse halftoning.|last=Mese|first=Murat|date=2001-10-01|publisher=The Institute of Electrical and Electronics Engineers, Inc-IEEE|oclc=926171988}}</ref> It works by computing a distribution of the neighborhood for every pixel in the halftone image. The lookup table provides a continuous-tone value for a given pixel and its neighborhood distribution. The table is obtained beforehand using histograms of halftone images and their corresponding originals: the histograms provide the distribution before and after halftoning and make it possible to approximate the continuous-tone value for a specific distribution in the halftone image. For this approach, the halftoning strategy has to be known in advance so that a proper lookup table can be chosen.
Additionally, the table needs to be recomputed for every new halftoning pattern. Generating the descreened image is fast compared to iterative methods because it requires only one lookup per pixel.
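The lookup-table idea can be sketched as follows. This is a deliberate simplification, assuming the raw 3×3 binary neighborhood as the table key rather than a full histogram; all function names are illustrative:

```python
from collections import defaultdict

def neighborhood(img, y, x):
    """3x3 binary neighborhood as a tuple key, clamped at the image borders."""
    h, w = len(img), len(img[0])
    return tuple(
        img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    )

def build_lut(pairs):
    """Build the table from (continuous_tone_image, halftone_image) training pairs:
    each neighborhood pattern maps to the mean original value observed under it."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for orig, half in pairs:
        for y in range(len(half)):
            for x in range(len(half[0])):
                key = neighborhood(half, y, x)
                sums[key] += orig[y][x]
                counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

def descreen_lut(half, lut):
    """One table lookup per pixel; unseen patterns fall back to the neighborhood mean."""
    return [
        [
            lut.get(key, sum(key) / len(key))
            for x in range(len(half[0]))
            if (key := neighborhood(half, y, x))
        ]
        for y in range(len(half))
    ]
```

Training on a flat mid-gray original and its checkerboard halftone, for example, maps every checkerboard neighborhood back to 0.5, so descreening that halftone recovers the flat gray exactly; the per-pixel cost is a single dictionary lookup, which is what makes the method fast.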