== Compression-based methods ==

Compression-based methods postulate that the optimal segmentation is the one that minimizes, over all possible segmentations, the coding length of the data.<ref>{{cite journal |author1=Hossein Mobahi |author2=Shankar Rao |author3=Allen Yang |author4=Shankar Sastry |author5=Yi Ma |url=http://perception.csl.illinois.edu/coding/papers/MobahiH2011-IJCV.pdf |title=Segmentation of Natural Images by Texture and Boundary Compression |journal=International Journal of Computer Vision |volume=95 |pages=86–98 |year=2011 |doi=10.1007/s11263-011-0444-0 |arxiv=1006.3679 |citeseerx=10.1.1.180.3579 |s2cid=11070572 |access-date=8 May 2011 |archive-url=https://web.archive.org/web/20170808173212/http://perception.csl.illinois.edu/coding//papers/MobahiH2011-IJCV.pdf |archive-date=8 August 2017 }}</ref><ref>Shankar Rao, Hossein Mobahi, Allen Yang, Shankar Sastry and Yi Ma [http://perception.csl.illinois.edu/coding/papers/RaoS2009-ACCV.pdf Natural Image Segmentation with Adaptive Texture and Boundary Encoding] {{Webarchive|url=https://web.archive.org/web/20160519101956/http://perception.csl.illinois.edu/coding/papers/RaoS2009-ACCV.pdf |date=19 May 2016 }}, Proceedings of the Asian Conference on Computer Vision (ACCV) 2009, H. Zha, R.-i. Taniguchi, and S. Maybank (Eds.), Part I, LNCS 5994, pp. 135–146, Springer.</ref> The connection between these two concepts is that segmentation tries to find patterns in an image, and any regularity in the image can be used to compress it. The method describes each segment by its texture and boundary shape. Each of these components is modeled by a probability distribution function, and its coding length is computed as follows:

# The boundary encoding leverages the fact that regions in natural images tend to have a smooth contour. This prior is used by [[Huffman coding]] to encode the difference [[chain code]] of the contours in an image. Thus, the smoother a boundary is, the shorter the coding length it attains.
# Texture is encoded by [[lossy compression]] in a way similar to the [[minimum description length]] (MDL) principle, but here the length of the data given the model is approximated by the number of samples times the [[Entropy (information theory)|entropy]] of the model. The texture in each region is modeled by a [[multivariate normal distribution]] whose entropy has a closed-form expression. An interesting property of this model is that the estimated entropy bounds the true entropy of the data from above, because among all distributions with a given mean and covariance, the normal distribution has the largest entropy. Thus, the true coding length cannot be more than what the algorithm tries to minimize.

For any given segmentation of an image, this scheme yields the number of bits required to encode that image based on that segmentation. Thus, among all possible segmentations of an image, the goal is to find the segmentation which produces the shortest coding length. This can be achieved by a simple agglomerative clustering method. The distortion in the lossy compression determines the coarseness of the segmentation, and its optimal value may differ for each image. This parameter can be estimated heuristically from the contrast of textures in an image. For example, when the textures in an image are similar, such as in camouflage images, stronger sensitivity and thus lower quantization is required.
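The following is a minimal sketch, not the authors' reference implementation, of how the coding-length objective above can be evaluated for one candidate segmentation. The boundary term approximates the Huffman code length of the difference chain code by its empirical entropy, and the texture term uses the closed-form entropy of a fitted multivariate Gaussian. All function names and the region data structure are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

def boundary_bits(chain_code):
    """Approximate coding length of a region contour.

    chain_code: sequence of 8-connected directions (0..7) along the contour.
    The difference chain code of a smooth contour concentrates near zero,
    so its entropy (a lower bound on the Huffman code length, used here as
    an approximation) is small.
    """
    diffs = np.mod(np.diff(np.asarray(chain_code)), 8)
    _, counts = np.unique(diffs, return_counts=True)
    p = counts / counts.sum()
    bits_per_symbol = -np.sum(p * np.log2(p))
    return len(diffs) * bits_per_symbol

def texture_bits(features, eps=1e-6):
    """Coding length of a region's texture: number of samples times the
    entropy of a multivariate Gaussian fitted to the feature vectors.

    features: (N, d) array of texture feature vectors sampled in the region.
    Because the Gaussian has maximal entropy for a given mean and covariance,
    this term upper-bounds the true coding length of the texture.
    """
    n, d = features.shape
    cov = np.cov(features, rowvar=False) + eps * np.eye(d)  # regularized covariance
    entropy = 0.5 * np.log2((2 * np.pi * np.e) ** d * np.linalg.det(cov))
    return n * entropy

def coding_length(regions):
    """Total bits needed to encode the image under a given segmentation.

    regions: list of dicts with keys "features" (texture samples) and
    "chain_code" (contour directions), one dict per segment.
    """
    return sum(texture_bits(r["features"]) + boundary_bits(r["chain_code"])
               for r in regions)
</syntaxhighlight>

Under this sketch, the agglomerative clustering step would repeatedly merge the pair of adjacent regions whose merge most reduces the total returned by <code>coding_length</code>, stopping when no merge lowers it further.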