=== Initialization options ===
Selection of initial weights as good approximations of the final weights is a well-known problem for all iterative methods of artificial neural networks, including self-organizing maps. Kohonen originally proposed random initialization of the weights.<ref>{{cite book |first=T. |last=Kohonen |title=Self-Organization and Associative Memory |publisher=Springer |orig-year=1988 |edition=2nd |isbn=978-3-662-00784-6 |year=2012}}</ref> (This approach is reflected by the algorithms described above.) More recently, principal component initialization, in which the initial map weights are chosen from the space of the first principal components, has become popular because it makes the results exactly reproducible.<ref>{{cite conference |first1=A. |last1=Ciampi |first2=Y. |last2=Lechevallier |title=Clustering large, multi-level data sets: An approach based on Kohonen self organizing maps |editor-first=D.A. |editor-last=Zighed |editor2-first=J. |editor2-last=Komorowski |editor3-first=J. |editor3-last=Zytkow |date=2000 |publisher=Springer |volume=1910 |pages=353–358 |doi=10.1007/3-540-45372-5_36 |book-title=Principles of Data Mining and Knowledge Discovery: 4th European Conference, PKDD 2000 Lyon, France, September 13–16, 2000 Proceedings |series=Lecture notes in computer science |isbn=3-540-45372-5 |doi-access=free}}</ref>

[[File:Self oraganizing map cartography.jpg|thumb|Cartographical representation of a self-organizing map ([[U-Matrix]]) based on Wikipedia featured article data (word frequency). Distance is inversely proportional to similarity. The "mountains" are edges between clusters. The red lines are links between articles.]]

A careful comparison of random initialization to principal component initialization for a one-dimensional map, however, found that the advantages of principal component initialization are not universal; the best initialization method depends on the geometry of the specific dataset. Principal component initialization was preferable (for a one-dimensional map) when the principal curve approximating the dataset could be univalently and linearly projected on the first principal component (quasilinear sets). For nonlinear datasets, however, random initialization performed better.<ref>{{cite journal |last1=Akinduko |first1=A.A. |last2=Mirkes |first2=E.M. |last3=Gorban |first3=A.N. |year=2016 |title=SOM: Stochastic initialization versus principal components |url=https://www.researchgate.net/publication/283768202 |journal=Information Sciences |volume=364–365 |pages=213–221 |doi=10.1016/j.ins.2015.10.013}}</ref>
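The two strategies can be illustrated with a minimal NumPy sketch (an illustrative example only, not taken from the cited works; the function names <code>random_init</code> and <code>pca_init</code> are hypothetical): random initialization samples each weight vector uniformly within the range of the data, while principal component initialization lays the map grid out along the first two principal components.

<syntaxhighlight lang="python">
import numpy as np

def random_init(data, m, n, seed=None):
    """Random initialization: sample each weight uniformly within the data's range."""
    rng = np.random.default_rng(seed)
    lo, hi = data.min(axis=0), data.max(axis=0)
    return rng.uniform(lo, hi, size=(m, n, data.shape[1]))

def pca_init(data, m, n):
    """Principal component initialization: span the m-by-n grid by the first two
    principal components of the centered data (deterministic)."""
    mean = data.mean(axis=0)
    _, s, vt = np.linalg.svd(data - mean, full_matrices=False)
    # Spread the grid over roughly +/- one standard deviation along each component.
    c1 = np.linspace(-1, 1, m)[:, None, None] * (s[0] / np.sqrt(len(data))) * vt[0]
    c2 = np.linspace(-1, 1, n)[None, :, None] * (s[1] / np.sqrt(len(data))) * vt[1]
    return mean + c1 + c2

# Example: 500 three-dimensional points, a 10 x 12 map of weight vectors.
data = np.random.default_rng(0).normal(size=(500, 3))
w_random = random_init(data, 10, 12, seed=0)
w_pca = pca_init(data, 10, 12)
print(w_random.shape, w_pca.shape)  # (10, 12, 3) (10, 12, 3)
</syntaxhighlight>

Note that the principal component variant involves no random numbers, which is the reproducibility advantage mentioned above, whereas the random variant depends on the seed.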