== Agglomerative clustering example ==
[[Image:Clusters.svg|frame|none|Raw data]]
For example, suppose this data is to be clustered, and the [[Euclidean distance]] is the [[Metric (mathematics)|distance metric]]. The hierarchical clustering [[dendrogram]] would be:

[[Image:Hierarchical clustering simple diagram.svg|frame|none|Traditional representation]]

Cutting the tree at a given height will give a partitioning clustering at a selected precision. In this example, cutting after the second row (from the top) of the [[dendrogram]] will yield clusters {a} {b c} {d e} {f}. Cutting after the third row will yield clusters {a} {b c} {d e f}, which is a coarser clustering with fewer but larger clusters.

This method builds the hierarchy from the individual elements by progressively merging clusters. In our example, we have six elements {a} {b} {c} {d} {e} and {f}. The first step is to determine which elements to merge into a cluster. Usually, we want to take the two closest elements, according to the chosen distance.

Optionally, one can also construct a [[distance matrix]] at this stage, where the number in the ''i''-th row and ''j''-th column is the distance between the ''i''-th and ''j''-th elements. Then, as clustering progresses, rows and columns are merged as the clusters are merged and the distances updated. This is a common way to implement this type of clustering, and has the benefit of caching distances between clusters. A simple agglomerative clustering algorithm is described in the [[single-linkage clustering]] page; it can easily be adapted to different types of linkage (see below).

Suppose we have merged the two closest elements ''b'' and ''c''; we now have the clusters {''a''}, {''b'', ''c''}, {''d''}, {''e''} and {''f''}, and want to merge them further. To do that, we need the distance between {a} and {b c}, and therefore must define a distance between two clusters.
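The merge loop described above can be sketched in Python. The 2-D coordinates for the six elements a–f are made up for illustration (the article's figure gives no numeric values), and single linkage is used as the cluster distance; this is a minimal sketch, not an efficient implementation.

```python
import math
from itertools import combinations

# Hypothetical 2-D coordinates for the six elements; the article's figure
# gives no numeric values, so these are illustrative assumptions only.
points = {
    "a": (0.0, 0.0),
    "b": (3.0, 0.0),
    "c": (3.2, 0.0),
    "d": (6.0, 4.0),
    "e": (6.2, 4.2),
    "f": (7.5, 4.5),
}

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def single_linkage(A, B):
    """Cluster distance: minimum over all cross-cluster element pairs."""
    return min(dist(points[x], points[y]) for x in A for y in B)

# Start with each element as its own cluster, then repeatedly merge the
# two closest clusters until only one remains, recording each merge.
clusters = [frozenset([k]) for k in points]
merges = []
while len(clusters) > 1:
    i, j = min(combinations(range(len(clusters)), 2),
               key=lambda ij: single_linkage(clusters[ij[0]], clusters[ij[1]]))
    merges.append((set(clusters[i]), set(clusters[j])))
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] \
        + [clusters[i] | clusters[j]]

for left, right in merges:
    print(sorted(left), "+", sorted(right))
```

With these made-up coordinates the first two merges are {b}+{c} and {d}+{e}, matching the article's example; a full distance-matrix implementation would cache and update the pairwise distances instead of recomputing them on every iteration.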
Usually the distance between two clusters <math>\mathcal{A}</math> and <math>\mathcal{B}</math> is one of the following:
* The maximum distance between elements of each cluster (also called [[complete-linkage clustering]]):
::<math> \max \{\, d(x,y) : x \in \mathcal{A},\, y \in \mathcal{B}\,\}. </math>
* The minimum distance between elements of each cluster (also called [[single-linkage clustering]]):
::<math> \min \{\, d(x,y) : x \in \mathcal{A},\, y \in \mathcal{B} \,\}. </math>
* The mean distance between elements of each cluster (also called average linkage clustering, used e.g. in [[UPGMA]]):
::<math> {1 \over {|\mathcal{A}|\cdot|\mathcal{B}|}}\sum_{x \in \mathcal{A}}\sum_{ y \in \mathcal{B}} d(x,y). </math>
* The sum of all intra-cluster variance.
* The increase in variance for the cluster being merged ([[Ward's method]]<ref name="wards method"/>).
* The probability that candidate clusters spawn from the same distribution function (V-linkage).

In case of tied minimum distances, a pair can be chosen at random, which may generate several structurally different dendrograms. Alternatively, all tied pairs may be joined at the same time, generating a unique dendrogram.<ref>{{cite journal | doi=10.1007/s00357-008-9004-x | last1=Fernández | first1=Alberto | last2=Gómez | first2=Sergio | title=Solving Non-uniqueness in Agglomerative Hierarchical Clustering Using Multidendrograms | journal=Journal of Classification | volume=25 | year=2008 | issue=1 | pages=43–65 | arxiv=cs/0608049 | s2cid=434036 }}</ref>

One can always decide to stop clustering when there is a sufficiently small number of clusters (number criterion). Some linkages also guarantee that each agglomeration occurs at a greater distance than the previous one, so one can stop clustering when the clusters are too far apart to be merged (distance criterion). However, this is not the case for, e.g., centroid linkage, where so-called reversals<ref>{{cite book |first1=P. |last1=Legendre |first2=L.F.J. |last2=Legendre |chapter=Cluster Analysis §8.6 Reversals |title=Numerical Ecology |chapter-url=https://books.google.com/books?id=DKlUIQcHhOsC&pg=PA376 |date=2012 |publisher=Elsevier |isbn=978-0-444-53868-0 |pages=376–7 |edition=3rd |series=Developments in Environmental Modelling |volume=24}}</ref> (inversions, departures from ultrametricity) may occur.
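The first three linkage definitions above (complete, single, and average) can be computed directly from the cross-cluster pairwise distances. In this sketch the two clusters' coordinates are made-up values chosen for illustration only:

```python
import math

def euclidean(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Two small clusters with made-up coordinates (illustration only).
A = [(0.0, 0.0), (1.0, 0.0)]
B = [(4.0, 0.0), (5.0, 0.0)]

# All cross-cluster pairwise distances d(x, y), x in A, y in B:
# here 4.0, 5.0, 3.0, 4.0.
pairs = [euclidean(x, y) for x in A for y in B]

complete = max(pairs)                     # complete linkage: max over pairs
single = min(pairs)                       # single linkage: min over pairs
average = sum(pairs) / (len(A) * len(B))  # average linkage (UPGMA)

print(complete, single, average)  # → 5.0 3.0 4.0
```

Ward's method and V-linkage depend on cluster variances and distributional assumptions rather than pairwise distances alone, so they are omitted from this sketch.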