== Algorithms ==
{{main category|Cluster analysis algorithms}}
As listed above, clustering algorithms can be categorized based on their cluster model. The following overview will only list the most prominent examples of clustering algorithms, as there are possibly over 100 published clustering algorithms. Not all of them provide models for their clusters, and such algorithms cannot easily be categorized. An overview of algorithms explained in Wikipedia can be found in the [[List of algorithms#Statistics|list of statistics algorithms]].

There is no objectively "correct" clustering algorithm; as has been noted, "clustering is in the eye of the beholder."<ref name="estivill" /> In fact, an axiomatic approach to clustering demonstrates that it is impossible for any clustering method to meet three fundamental properties simultaneously: '''scale invariance''' (results remain unchanged under proportional scaling of distances), '''richness''' (all possible partitions of the data can be achieved), and '''consistency''' between distances and the clustering structure.<ref>{{Cite conference |last=Kleinberg |first=Jon |year=2002 |title=An Impossibility Theorem for Clustering |url=https://proceedings.neurips.cc/paper_files/paper/2002/file/43e4e6a6f341e00671e123714de019a8-Paper.pdf |conference=Advances in Neural Information Processing Systems |publisher=MIT Press |volume=15}}</ref>
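In Kleinberg's formulation, a clustering function <math>f</math> maps a distance function <math>d</math> on a finite point set <math>S</math> to a partition of <math>S</math>, and the three properties can be stated as:
* '''Scale invariance''': <math>f(\alpha \cdot d) = f(d)</math> for every distance function <math>d</math> and every scaling factor <math>\alpha > 0</math>.
* '''Richness''': every possible partition of <math>S</math> is equal to <math>f(d)</math> for some distance function <math>d</math>.
* '''Consistency''': if <math>d'</math> is obtained from <math>d</math> by shrinking distances within clusters of <math>f(d)</math> and stretching distances between clusters, then <math>f(d') = f(d)</math>.
The impossibility result states that, for a set of two or more points, no clustering function satisfies all three properties simultaneously.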
The most appropriate clustering algorithm for a particular problem often needs to be chosen experimentally, unless there is a mathematical reason to prefer one cluster model over another. An algorithm that is designed for one kind of model will generally fail on a data set that contains a radically different kind of model.<ref name="estivill" /> For example, k-means cannot find non-convex clusters.<ref name="estivill" /> Most traditional clustering methods assume the clusters exhibit a spherical, elliptical or convex shape.<ref>{{Cite journal |last1=Gao |first1=Caroline X. |last2=Dwyer |first2=Dominic |last3=Zhu |first3=Ye |last4=Smith |first4=Catherine L. |last5=Du |first5=Lan |last6=Filia |first6=Kate M. |last7=Bayer |first7=Johanna |last8=Menssink |first8=Jana M. |last9=Wang |first9=Teresa |last10=Bergmeir |first10=Christoph |last11=Wood |first11=Stephen |last12=Cotton |first12=Sue M. |date=2023-09-01 |title=An overview of clustering methods with guidelines for application in mental health research |journal=Psychiatry Research |volume=327 |pages=115265 |doi=10.1016/j.psychres.2023.115265 |issn=0165-1781|doi-access=free |pmid=37348404 |hdl=10481/84538 |hdl-access=free }}</ref>

=== Connectivity-based clustering (hierarchical clustering) ===
{{Main|Hierarchical clustering}}
Connectivity-based clustering, also known as ''[[hierarchical clustering]]'', is based on the core idea of objects being more related to nearby objects than to objects farther away. These algorithms connect "objects" to form "clusters" based on their distance. A cluster can be described largely by the maximum distance needed to connect parts of the cluster. At different distances, different clusters will form, which can be represented using a [[dendrogram]]; this is where the common name "[[hierarchical clustering]]" comes from: these algorithms do not provide a single partitioning of the data set, but instead provide an extensive hierarchy of clusters that merge with each other at certain distances. In a dendrogram, the y-axis marks the distance at which the clusters merge, while the objects are placed along the x-axis such that the clusters don't mix.

Connectivity-based clustering is a whole family of methods that differ by the way distances are computed. Apart from the usual choice of [[distance function]]s, the user also needs to decide on the linkage criterion to use (since a cluster consists of multiple objects, there are multiple candidates for computing the distance between clusters). Popular choices are known as [[single-linkage clustering]] (the minimum of object distances), [[complete linkage clustering]] (the maximum of object distances), and [[UPGMA]] or [[WPGMA]] ("Unweighted or Weighted Pair Group Method with Arithmetic Mean", also known as average linkage clustering). Furthermore, hierarchical clustering can be agglomerative (starting with single elements and aggregating them into clusters) or divisive (starting with the complete data set and dividing it into partitions). These methods will not produce a unique partitioning of the data set, but a hierarchy from which the user still needs to choose appropriate clusters. They are not very robust towards outliers, which will either show up as additional clusters or even cause other clusters to merge (known as the "chaining phenomenon", in particular with [[single-linkage clustering]]). In the general case, the complexity is <math>\mathcal{O}(n^3)</math> for agglomerative clustering and <math>\mathcal{O}(2^{n-1})</math> for [[divisive clustering]],<ref>{{cite book | last = Everitt | first = Brian | title = Cluster analysis | publisher = Wiley | location = Chichester, West Sussex, U.K | year = 2011 | isbn = 9780470749913 }}</ref> which makes them too slow for large data sets. For some special cases, optimal efficient methods (of complexity <math>\mathcal{O}(n^2)</math>) are known: SLINK<ref>{{cite journal | first = R. | last = Sibson | title=SLINK: an optimally efficient algorithm for the single-link cluster method | journal=The Computer Journal | volume=16 | issue=1 | pages=30–34 | year=1973 | publisher=British Computer Society | url=http://www.cs.gsu.edu/~wkim/index_files/papers/sibson.pdf | doi=10.1093/comjnl/16.1.30 }}</ref> for single-linkage and CLINK<ref>{{cite journal | first = D. | last = Defays | title=An efficient algorithm for a complete link method | journal=The Computer Journal | volume=20 | issue=4 | pages=364–366 | year=1977 | publisher=British Computer Society | doi=10.1093/comjnl/20.4.364}}</ref> for complete-linkage clustering.

<gallery caption="Linkage clustering examples" widths="200px" heights="200px">
File:SLINK-Gaussian-data.svg|Single-linkage on Gaussian data. At 35 clusters, the biggest cluster starts fragmenting into smaller parts, while before it was still connected to the second largest due to the single-link effect.
File:SLINK-density-data.svg|Single-linkage on density-based clusters. 20 clusters extracted, most of which contain single elements, since linkage clustering does not have a notion of "noise".
</gallery>
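A short sketch of agglomerative clustering with the SciPy library is shown below; the toy two-blob data set, the Euclidean metric, the single-linkage criterion and the cut into two flat clusters are illustrative assumptions rather than part of the method itself.
<syntaxhighlight lang="python">
# Illustrative sketch: agglomerative (connectivity-based) clustering with SciPy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Two loose Gaussian blobs as toy data (assumption for illustration).
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])

# Build the full merge hierarchy; "single", "complete" and "average"
# correspond to single-linkage, complete-linkage and UPGMA respectively.
Z = linkage(X, method="single", metric="euclidean")

# The hierarchy must still be cut to obtain a flat partition,
# here by requesting two clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
</syntaxhighlight>
Plotting <code>Z</code> with <code>scipy.cluster.hierarchy.dendrogram</code> visualizes the merge distances described above.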
=== Centroid-based clustering ===
{{Main|k-means clustering}}
In centroid-based clustering, each cluster is represented by a central vector, which is not necessarily a member of the data set. When the number of clusters is fixed to ''k'', [[k-means clustering|''k''-means clustering]] gives a formal definition as an optimization problem: find the ''k'' cluster centers and assign the objects to the nearest cluster center, such that the squared distances from the cluster centers are minimized.

The optimization problem itself is known to be [[NP-hard]], and thus the common approach is to search only for approximate solutions. A particularly well-known approximate method is [[Lloyd's algorithm]],<ref name="lloyd">{{Cite journal | last1 = Lloyd | first1 = S. | title = Least squares quantization in PCM | doi = 10.1109/TIT.1982.1056489 | journal = IEEE Transactions on Information Theory | volume = 28 | issue = 2 | pages = 129–137 | year = 1982 | s2cid = 10833328 }}</ref> often just referred to as "''k-means algorithm''" (although [[k-means clustering#History|another algorithm introduced this name]]). It does, however, only find a [[local optimum]], and is commonly run multiple times with different random initializations. Variations of ''k''-means often include such optimizations as choosing the best of multiple runs, but also restricting the centroids to members of the data set ([[K-medoids|''k''-medoids]]), choosing [[median]]s ([[k-medians clustering|''k''-medians clustering]]), choosing the initial centers less randomly ([[k-means++|''k''-means++]]) or allowing a fuzzy cluster assignment ([[Fuzzy clustering|fuzzy c-means]]).

Most ''k''-means-type algorithms require the [[Determining the number of clusters in a data set|number of clusters]] – ''k'' – to be specified in advance, which is considered to be one of the biggest drawbacks of these algorithms. Furthermore, the algorithms prefer clusters of approximately similar size, as they will always assign an object to the nearest centroid, often yielding improperly cut borders of clusters. This happens primarily because the algorithm optimizes cluster centers, not cluster borders.

The steps involved in the centroid-based clustering algorithm are (see the sketch following the gallery below):
# Choose ''k'' '''distinct''' observations at random. These are the initial centroids to be improved upon.
# Given a set of observations {{math|('''x'''<sub>1</sub>, '''x'''<sub>2</sub>, ..., '''x'''<sub>''n''</sub>)}}, assign each observation to the centroid to which it has the smallest squared [[Euclidean distance]]. This results in ''k'' distinct groups, each containing unique observations.
# Recalculate the centroids (see [[k-means clustering|''k''-means clustering]]).
# Exit if the new centroids are equivalent to the previous iteration's centroids. Otherwise, repeat from step 2, as the centroids have not yet converged.

K-means has a number of interesting theoretical properties. First, it partitions the data space into a structure known as a [[Voronoi diagram]]. Second, it is conceptually close to nearest neighbor classification, and as such is popular in [[machine learning]]. Third, it can be seen as a variation of model-based clustering, and Lloyd's algorithm as a variation of the [[Expectation-maximization algorithm]] for this model discussed below.

<gallery widths="200" heights="200" caption="''k''-means clustering examples">
File:KMeans-Gaussian-data.svg|''k''-means separates data into Voronoi cells, which assumes equal-sized clusters (not adequate here).
File:KMeans-density-data.svg|''k''-means cannot represent density-based clusters.
</gallery>
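A plain-NumPy sketch of these steps is given below; the two-blob toy data, the choice <code>k = 2</code> and the fixed random seed are illustrative assumptions, and practical implementations add better initialization (e.g. ''k''-means++) and many optimizations.
<syntaxhighlight lang="python">
# Illustrative sketch of Lloyd's algorithm, following the steps listed above.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
k = 2  # number of clusters (assumed known in advance)

# Step 1: choose k distinct observations at random as initial centroids.
centroids = X[rng.choice(len(X), size=k, replace=False)]

while True:
    # Step 2: assign each observation to the centroid with the smallest
    # squared Euclidean distance.
    dist_sq = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = dist_sq.argmin(axis=1)

    # Step 3: recalculate each centroid as the mean of its assigned points
    # (keeping the old centroid if a cluster happens to become empty).
    new_centroids = np.array([
        X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(k)
    ])

    # Step 4: exit once the centroids no longer change; otherwise repeat.
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids

print(centroids)
</syntaxhighlight>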
Centroid-based clustering problems such as ''k''-means and ''k''-medoids are special cases of the uncapacitated, metric [[Optimal facility location|facility location problem]], a canonical problem in the operations research and computational geometry communities. In a basic facility location problem (of which there are numerous variants that model more elaborate settings), the task is to find the best warehouse locations to optimally service a given set of consumers. One may view "warehouses" as cluster centroids and "consumer locations" as the data to be clustered. This makes it possible to apply the well-developed algorithmic solutions from the facility location literature to the presently considered centroid-based clustering problem.

=== Model-based clustering ===
The clustering framework most closely related to statistics is [[model-based clustering]], which is based on [[Probability distribution|distribution models]]. This approach models the data as arising from a mixture of probability distributions. It has the advantages of providing principled statistical answers to questions such as how many clusters there are, what clustering method or model to use, and how to detect and deal with outliers.

While the theoretical foundation of these methods is excellent, they suffer from [[overfitting]] unless constraints are put on the model complexity. A more complex model will usually be able to explain the data better, which makes choosing the appropriate model complexity inherently difficult. Standard [[model-based clustering]] methods include more parsimonious models based on the [[eigenvalue decomposition]] of the covariance matrices, which provide a balance between overfitting and fidelity to the data.

One prominent method is known as Gaussian mixture models (using the [[expectation-maximization algorithm]]). Here, the data set is usually modeled with a fixed (to avoid overfitting) number of [[Gaussian distribution]]s that are initialized randomly and whose parameters are iteratively optimized to better fit the data set. This will converge to a [[local optimum]], so multiple runs may produce different results. In order to obtain a hard clustering, objects are often then assigned to the Gaussian distribution they most likely belong to; for soft clusterings, this is not necessary.

Distribution-based clustering produces complex models for clusters that can capture [[correlation and dependence]] between attributes. However, these algorithms put an extra burden on the user: for many real data sets, there may be no concisely defined mathematical model (e.g. assuming Gaussian distributions is a rather strong assumption on the data).

<gallery widths="200" heights="200" caption="Gaussian mixture model clustering examples">
File:EM-Gaussian-data.svg|On Gaussian-distributed data, <abbr title="expectation–maximization">EM</abbr> works well, since it uses Gaussians for modelling clusters.
File:EM-density-data.svg|Density-based clusters cannot be modeled using Gaussian distributions.
</gallery>
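An illustrative sketch of Gaussian mixture model clustering fitted by expectation-maximization is given below, using the scikit-learn library; the toy data, the fixed number of two components and the number of random restarts are assumptions made for the example.
<syntaxhighlight lang="python">
# Illustrative sketch: Gaussian mixture model clustering via EM (scikit-learn).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1.0, (100, 2)), rng.normal(4, 1.5, (100, 2))])

# The number of components is fixed in advance to avoid overfitting;
# n_init restarts EM from several random initializations and keeps the best.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      n_init=5, random_state=0).fit(X)

hard_labels = gmm.predict(X)        # hard clustering: most likely component
soft_labels = gmm.predict_proba(X)  # soft clustering: membership probabilities
print(hard_labels[:5])
print(soft_labels[:5].round(3))
</syntaxhighlight>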
=== Density-based clustering ===
In density-based clustering,<ref>{{Cite journal | author1-link = Hans-Peter Kriegel | first1 = Hans-Peter | last1 = Kriegel | first2 = Peer | last2 = Kröger | first3 = Jörg | last3 = Sander | first4 = Arthur | last4 = Zimek|author-link4=Arthur Zimek | title = Density-based Clustering | journal = WIREs Data Mining and Knowledge Discovery | volume = 1 | issue = 3 | year = 2011 | pages = 231–240 | url = http://wires.wiley.com/WileyCDA/WiresArticle/wisId-WIDM30.html | doi = 10.1002/widm.30 | s2cid = 36920706 }}</ref> clusters are defined as areas of higher density than the remainder of the data set. Objects in sparse areas – which are required to separate clusters – are usually considered to be noise and border points.

The most popular<ref>[http://academic.research.microsoft.com/CSDirectory/paper_category_7.htm Microsoft academic search: most cited data mining articles] {{webarchive|url=https://web.archive.org/web/20100421170848/http://academic.research.microsoft.com/CSDirectory/Paper_category_7.htm |date=2010-04-21 }}: DBSCAN is on rank 24, when accessed on: 4/18/2010</ref> density-based clustering method is [[DBSCAN]].<ref>{{Cite conference | author1-first = Martin | author1-last = Ester | author2-link = Hans-Peter Kriegel | author2-first = Hans-Peter | author2-last = Kriegel | author3-first = Jörg | author3-last = Sander | author4-first = Xiaowei | author4-last = Xu | title = A density-based algorithm for discovering clusters in large spatial databases with noise | pages = 226–231 | editor1-first = Evangelos | editor1-last = Simoudis | editor2-first = Jiawei | editor2-last = Han | editor3-first = Usama M. | editor3-last = Fayyad | book-title = Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96) | publisher = [[AAAI Press]] | year = 1996 | isbn = 1-57735-004-9 }}</ref> In contrast to many newer methods, it features a well-defined cluster model called "density-reachability". Similar to linkage-based clustering, it is based on connecting points within certain distance thresholds. However, it only connects points that satisfy a density criterion, defined in the original variant as a minimum number of other objects within this radius. A cluster consists of all density-connected objects (which can form a cluster of an arbitrary shape, in contrast to many other methods) plus all objects that are within these objects' range. Another interesting property of DBSCAN is that its complexity is fairly low – it requires a linear number of range queries on the database – and that it will discover essentially the same results (it is [[deterministic algorithm|deterministic]] for core and noise points, but not for border points) in each run, so there is no need to run it multiple times.

[[OPTICS algorithm|OPTICS]]<ref>{{Cite conference | first1 = Mihael | last1 = Ankerst | first2 = Markus M. | last2 = Breunig | author3-link = Hans-Peter Kriegel | first3 = Hans-Peter | last3 = Kriegel | first4 = Jörg | last4 = Sander | title = OPTICS: Ordering Points To Identify the Clustering Structure | year = 1999 | pages = 49–60 | book-title = ACM SIGMOD international conference on Management of data | publisher = [[ACM Press]] | citeseerx = 10.1.1.129.6542 }}</ref> is a generalization of DBSCAN that removes the need to choose an appropriate value for the range parameter <math>\varepsilon</math>, and produces a hierarchical result related to that of [[hierarchical clustering|linkage clustering]]. DeLi-Clu<ref name="ReferenceA">{{Cite conference| doi = 10.1007/11731139_16| isbn = 978-3-540-33206-0| contribution = DeLi-Clu: Boosting Robustness, Completeness, Usability, and Efficiency of Hierarchical Clustering by a Closest Pair Ranking| year = 2006| last1 = Achtert | first1 = E.| last2 = Böhm| series = Lecture Notes in Computer Science | first2 = C.| last3 = Kröger | first3 = P.| pages = 119–128|title= Advances in Knowledge Discovery and Data Mining| volume = 3918| citeseerx = 10.1.1.64.1161}}</ref> (Density-Link-Clustering) combines ideas from [[single-linkage clustering]] and OPTICS, eliminating the <math>\varepsilon</math> parameter entirely and offering performance improvements over OPTICS by using an [[R-tree]] index.
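A short sketch of DBSCAN using the scikit-learn implementation follows; the two-moons toy data and the chosen <code>eps</code> (the radius <math>\varepsilon</math>) and <code>min_samples</code> (the density threshold) values are illustrative assumptions.
<syntaxhighlight lang="python">
# Illustrative sketch: density-based clustering with DBSCAN (scikit-learn).
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two interleaved half-moons: non-convex, density-based clusters.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# eps is the neighbourhood radius; min_samples is the minimum number of
# neighbours required for a point to be a core (dense) point.
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

# Points labelled -1 are reported as noise rather than forced into a cluster.
print(sorted(set(labels)))
</syntaxhighlight>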
The key drawback of [[DBSCAN]] and [[OPTICS]] is that they expect some kind of density drop to detect cluster borders. On data sets with, for example, overlapping Gaussian distributions – a common use case in artificial data – the cluster borders produced by these algorithms will often look arbitrary, because the cluster density decreases continuously. On a data set consisting of mixtures of Gaussians, these algorithms are nearly always outperformed by methods such as [[Expectation–maximization algorithm|EM clustering]] that are able to precisely model this kind of data.

[[Mean-shift]] is a clustering approach where each object is moved to the densest area in its vicinity, based on [[kernel density estimation]]. Eventually, objects converge to local maxima of density. Similar to k-means clustering, these "density attractors" can serve as representatives for the data set, but mean-shift can detect arbitrary-shaped clusters similar to DBSCAN. Due to the expensive iterative procedure and density estimation, mean-shift is usually slower than DBSCAN or k-means. Besides that, the applicability of the mean-shift algorithm to multidimensional data is hindered by the unsmooth behaviour of the kernel density estimate, which results in over-fragmentation of cluster tails.<ref name="ReferenceA"/>

<gallery widths="200" heights="200" caption="Density-based clustering examples">
File:DBSCAN-density-data.svg|Density-based clustering with [[DBSCAN]]
File:DBSCAN-Gaussian-data.svg|[[DBSCAN]] assumes clusters of similar density, and may have problems separating nearby clusters.
File:OPTICS-Gaussian-data.svg|[[OPTICS algorithm|OPTICS]] is a DBSCAN variant that improves the handling of clusters of different densities.
</gallery>

=== Grid-based clustering ===
The grid-based technique is used for [[Multidimensional scaling|multi-dimensional]] data sets.<ref>{{Cite book|editor-last1=Aggarwal|editor-first1=Charu C.|editor-last2=Reddy|editor-first2=Chandan K.|title=Data Clustering : Algorithms and Applications|isbn=978-1-315-37351-5|oclc=1110589522}}</ref> In this technique, the data space is divided into a grid structure, and comparisons are performed on the grid cells. The grid-based technique is fast and has low computational complexity. Two well-known grid-based clustering methods are STING and CLIQUE. The steps involved in a generic grid-based clustering [[algorithm]] are:
# Divide the data space into a finite number of cells.
# Randomly select a cell ‘c’ that has not been traversed before.
# Calculate the density of ‘c’.
# If the density of ‘c’ is greater than the threshold density:
## Mark cell ‘c’ as a new cluster.
## Calculate the density of all the neighbors of ‘c’.
## If the density of a neighboring cell is greater than the threshold density, add the cell to the cluster and repeat steps 4.2 and 4.3 until there is no neighbor with a density greater than the threshold density.
# Repeat steps 2–4 until all the cells have been traversed.
# Stop.
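A sketch of this generic procedure in plain NumPy is given below; the toy data, the grid resolution and the density threshold are illustrative assumptions, and practical methods such as STING and CLIQUE add further statistics and subspace handling on top of this idea.
<syntaxhighlight lang="python">
# Illustrative sketch of the generic grid-based clustering steps above.
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (200, 2)), rng.normal(4, 0.5, (200, 2))])

grid_size = 10   # number of cells per dimension (assumption)
threshold = 5    # minimum number of points for a "dense" cell (assumption)

# Step 1: divide the data space into a finite number of cells.
mins, maxs = X.min(axis=0), X.max(axis=0)
cell_idx = np.floor((X - mins) / (maxs - mins + 1e-9) * grid_size).astype(int)

# Density of each occupied cell = number of points falling into it.
density = {}
for idx in map(tuple, cell_idx):
    density[idx] = density.get(idx, 0) + 1

# Steps 2-5: visit each cell once; grow a cluster from every unvisited
# dense cell by absorbing dense neighbouring cells (4-neighbourhood here).
visited, clusters = set(), []
for cell in density:
    if cell in visited or density[cell] <= threshold:
        continue
    cluster, queue = set(), deque([cell])
    while queue:
        c = queue.popleft()
        if c in visited or density.get(c, 0) <= threshold:
            continue
        visited.add(c)
        cluster.add(c)
        x, y = c
        queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    clusters.append(cluster)

print(len(clusters), "clusters of dense cells found")
</syntaxhighlight>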
=== Recent developments ===
In recent years, considerable effort has been put into improving the performance of existing algorithms.<ref>{{cite conference | first = D. | last = Sculley |title=Web-scale k-means clustering |conference=Proc. 19th WWW |year=2010}}</ref><ref>{{cite journal | last1 = Huang | first1 = Z. | s2cid = 11323096 | year = 1998 | title = Extensions to the ''k''-means algorithm for clustering large data sets with categorical values | journal = Data Mining and Knowledge Discovery | volume = 2 | issue = 3| pages = 283–304 | doi = 10.1023/A:1009769707641 }}</ref> Among them are ''CLARANS''<ref>R. Ng and J. Han. "Efficient and effective clustering method for spatial data mining". In: Proceedings of the 20th VLDB Conference, pages 144–155, Santiago, Chile, 1994.</ref> and ''[[Birch (data clustering)|BIRCH]]''.<ref>Tian Zhang, Raghu Ramakrishnan, Miron Livny. "[http://www.cs.du.edu/~leut/4423/papers/zhang96birch.pdf An Efficient Data Clustering Method for Very Large Databases]." In: Proc. Int'l Conf. on Management of Data, ACM SIGMOD, pp. 103–114.</ref> With the recent need to process larger and larger data sets (also known as [[big data]]), the willingness to trade semantic meaning of the generated clusters for performance has been increasing. This led to the development of pre-clustering methods such as [[canopy clustering algorithm|canopy clustering]], which can process huge data sets efficiently, but the resulting "clusters" are merely a rough pre-partitioning of the data set, to be analyzed further with existing slower methods such as [[k-means clustering]].

For [[high-dimensional data]], many of the existing methods fail due to the [[curse of dimensionality]], which renders particular distance functions problematic in high-dimensional spaces. This led to new [[clustering high-dimensional data|clustering algorithms for high-dimensional data]] that focus on [[subspace clustering]] (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and [[correlation clustering]], which also looks for arbitrarily rotated ("correlated") subspace clusters that can be modeled by giving a [[correlation]] of their attributes.<ref>{{cite journal|last1=Kriegel|first1=Hans-Peter|author-link1=Hans-Peter Kriegel|last2=Kröger|first2=Peer|last3=Zimek|first3=Arthur|author-link3=Arthur Zimek|title=Subspace clustering|journal=Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery|date=July 2012|volume=2|issue=4|pages=351–364|doi=10.1002/widm.1057|s2cid=7241355}}</ref> Examples of such clustering algorithms are CLIQUE<ref>{{Cite journal | last1 = Agrawal | first1 = R. | last2 = Gehrke | first2 = J. | last3 = Gunopulos | first3 = D. | last4 = Raghavan | first4 = P. | title = Automatic Subspace Clustering of High Dimensional Data | doi = 10.1007/s10618-005-1396-1 | journal = Data Mining and Knowledge Discovery | volume = 11 | pages = 5–33 | year = 2005 | citeseerx = 10.1.1.131.5152 | s2cid = 9289572 }}</ref> and [[SUBCLU]].<ref>Karin Kailing, [[Hans-Peter Kriegel]] and Peer Kröger. ''Density-Connected Subspace Clustering for High-Dimensional Data''. In: ''Proc. SIAM Int. Conf. on Data Mining (SDM'04)'', pp. 246–257, 2004.</ref>
Ideas from density-based clustering methods (in particular the [[DBSCAN]]/[[OPTICS]] family of algorithms) have been adapted to subspace clustering (HiSC,<ref>{{Cite conference| doi = 10.1007/11871637_42| contribution = Finding Hierarchies of Subspace Clusters| isbn = 978-3-540-45374-1| year = 2006| last1 = Achtert | first1 = E.| last2 = Böhm | first2 = C.| last3 = Kriegel | first3 = H.-P.| series = Lecture Notes in Computer Science | author-link3 =Hans-Peter Kriegel| last4 = Kröger | first4 = P.| last5 = Müller-Gorman | first5 = I.| last6 = Zimek | first6 = A.|author-link6=Arthur Zimek| pages = 446–453| title= Knowledge Discovery in Databases: PKDD 2006| volume = 4213| citeseerx = 10.1.1.705.2956}}</ref> hierarchical subspace clustering, and DiSH<ref>{{Cite conference| doi = 10.1007/978-3-540-71703-4_15| contribution = Detection and Visualization of Subspace Cluster Hierarchies| isbn = 978-3-540-71702-7| year = 2007| last1 = Achtert | first1 = E.| last2 = Böhm | first2 = C.| last3 = Kriegel | first3 = H. P. | author-link3 =Hans-Peter Kriegel| series = Lecture Notes in Computer Science| last4 = Kröger | first4 = P.| last5 = Müller-Gorman | first5 = I.| last6 = Zimek | first6 = A.|author-link6=Arthur Zimek| volume = 4443| pages = 152–163| title= Advances in Databases: Concepts, Systems and Applications| citeseerx = 10.1.1.70.7843}}</ref>) and correlation clustering (HiCO,<ref>{{Cite book| doi = 10.1109/SSDBM.2006.35| isbn = 978-0-7695-2590-7| year = 2006| last1 = Achtert | first1 = E.| last2 = Böhm | first2 = C.| last3 = Kröger | first3 = P.| last4 = Zimek | first4 = A.| title = 18th International Conference on Scientific and Statistical Database Management (SSDBM'06)| chapter = Mining Hierarchies of Correlation Clusters|author-link4=Arthur Zimek| pages = 119–128| citeseerx = 10.1.1.707.7872| s2cid = 2679909}}</ref> hierarchical correlation clustering; 4C,<ref>{{Cite book | last1 = Böhm | first1 = C. | last2 = Kailing | first2 = K. | last3 = Kröger | first3 = P. | last4 = Zimek | first4 = A. |author-link4=Arthur Zimek| chapter = Computing Clusters of Correlation Connected objects | doi = 10.1145/1007568.1007620 | title = Proceedings of the 2004 ACM SIGMOD international conference on Management of data - SIGMOD '04 | pages = 455 | year = 2004 | isbn = 978-1581138597 | citeseerx = 10.1.1.5.1279 | s2cid = 6411037 }}</ref> using "correlation connectivity"; and ERiC,<ref>{{Cite book | last1 = Achtert | first1 = E. | last2 = Bohm | first2 = C. | last3 = Kriegel | first3 = H. P. | author-link3=Hans-Peter Kriegel| last4 = Kröger | first4 = P. | last5 = Zimek | first5 = A. |author-link5=Arthur Zimek| doi = 10.1109/SSDBM.2007.21 | chapter = On Exploring Complex Relationships of Correlation Clusters | title = 19th International Conference on Scientific and Statistical Database Management (SSDBM 2007) | pages = 7 | year = 2007 | isbn = 978-0-7695-2868-7 | citeseerx = 10.1.1.71.5021 | s2cid = 1554722 }}</ref> exploring hierarchical density-based correlation clusters).

Several different clustering systems based on [[mutual information]] have been proposed.
One is Marina Meilă's ''[[variation of information]]'' metric;<ref>{{Cite conference|last=Meilă|first=Marina | contribution=Comparing Clusterings by the Variation of Information |title=Learning Theory and Kernel Machines|year=2003|pages=173–187|doi=10.1007/978-3-540-45167-9_14|series=Lecture Notes in Computer Science|isbn=978-3-540-40720-1|volume=2777}}</ref> another provides hierarchical clustering.<ref>{{cite arXiv| first1 = Alexander | last1 = Kraskov | first2 = Harald | last2 = Stögbauer | first3 = Ralph G. | last3 = Andrzejak | first4 = Peter | last4 = Grassberger | title = Hierarchical Clustering Based on Mutual Information | date = 1 December 2003 | arxiv= q-bio/0311039}}</ref> Using genetic algorithms, a wide range of different fit functions can be optimized, including mutual information.<ref>{{cite journal | last = Auffarth | first = B. | title = Clustering by a Genetic Algorithm with Biased Mutation Operator | journal = Wcci Cec | publisher = IEEE | date = July 18–23, 2010 |url=http://www.diva-portal.org/smash/get/diva2:456742/FULLTEXT02}}</ref> [[Belief propagation]], a recent development in [[computer science]] and [[statistical physics]], has also led to the creation of new types of clustering algorithms.<ref>{{Cite journal| first1 = B. J. | last1 = Frey | first2 = D. | last2 = Dueck| title=Clustering by Passing Messages Between Data Points|journal=Science| volume=315| pages=972–976|doi= 10.1126/science.1136800|year= 2007|issue= 5814|pmid= 17218491 | bibcode = 2007Sci...315..972F | citeseerx = 10.1.1.121.3145 | s2cid = 6502291 }}</ref>
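As an example of a mutual-information-based comparison, the variation of information between two flat clusterings <math>U</math> and <math>V</math> can be computed as <math>\mathrm{VI}(U;V) = H(U) + H(V) - 2 I(U;V)</math>, where <math>H</math> denotes entropy and <math>I</math> mutual information. A small sketch using SciPy and scikit-learn follows; the two label vectors are made-up illustrative data.
<syntaxhighlight lang="python">
# Illustrative sketch: variation of information between two clusterings.
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def variation_of_information(labels_u, labels_v):
    """VI = H(U) + H(V) - 2 I(U; V), measured in nats."""
    h_u = entropy(np.bincount(labels_u))   # entropy of clustering U
    h_v = entropy(np.bincount(labels_v))   # entropy of clustering V
    mi = mutual_info_score(labels_u, labels_v)
    return h_u + h_v - 2.0 * mi

u = np.array([0, 0, 0, 1, 1, 1])  # toy clustering U
v = np.array([0, 0, 1, 1, 2, 2])  # toy clustering V
print(variation_of_information(u, v))
</syntaxhighlight>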