agnes(x, diss=F, metric="euclidean", stand=F, method="average", save.x=T, save.diss=T)
x: data matrix or data frame, or dissimilarity matrix, depending on the
value of the diss argument. In the case of a dissimilarity matrix, x is
typically the output of daisy or dist. Also a vector with length
n*(n-1)/2 is allowed (where n is the number of observations), and will
be interpreted in the same way as the output of the above-mentioned
functions. Missing values (NAs) are not allowed.
diss: logical flag: if TRUE, then x will be considered as a
dissimilarity matrix. If FALSE, then x will be considered as a matrix
of observations by variables.
metric: character string specifying the metric to be used for
calculating dissimilarities between observations; the currently
available options are "euclidean" and "manhattan". If x is already a
dissimilarity matrix, then this argument will be ignored.
stand: logical flag: if TRUE, then the measurements in x are
standardized before calculating the dissimilarities.
Measurements are standardized for each variable (column),
by subtracting the variable's mean value and dividing by
the variable's mean absolute deviation.
If
x
is already a dissimilarity matrix,
then this argument will be ignored.
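For example, the following minimal sketch (with a small hypothetical
data matrix) shows the two ways of supplying x: raw measurements with
diss=F, or a precomputed dissimilarity object with diss=T. The two
calls should yield the same hierarchy, since daisy standardizes by the
mean absolute deviation as well.
dat <- matrix(rnorm(40), ncol=2)           # hypothetical 20 x 2 data matrix
ag1 <- agnes(dat, diss=F, stand=T)         # dissimilarities computed internally
ag2 <- agnes(daisy(dat, stand=T), diss=T)  # same clustering from precomputed dissimilarities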
"agnes"
representing the clustering.
See agnes.object for details.
agnes is fully described in chapter 5 of Kaufman and Rousseeuw (1990).
Compared to other agglomerative clustering methods such as hclust,
agnes has the following features: (a) it yields the agglomerative
coefficient (see agnes.object), which measures the amount of
clustering structure found; and (b) apart from the usual tree it also
provides the banner, a novel graphical display (see plot.agnes).
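As a rough illustration (assuming the agglomerative coefficient is
stored in the ac component of the result, and that the which.plots
argument of plot.agnes selects the banner), both features could be
inspected as follows:
ag <- agnes(votes.repub, metric="manhattan", stand=T)
ag$ac                      # agglomerative coefficient (component name "ac" assumed)
plot(ag, which.plots=1)    # banner (which.plots assumed; see plot.agnes)
plot(ag, which.plots=2)    # clustering tree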
The agnes algorithm constructs a hierarchy of clusterings. At first,
each observation is a small cluster by itself. Clusters are merged
until only one large cluster remains, which contains all the
observations.
At each stage the two "nearest" clusters are combined to form one
larger cluster. For method="average", the distance between two
clusters is the average of the dissimilarities between the points in
one cluster and the points in the other cluster. In method="single",
we use the smallest dissimilarity between a point in the first cluster
and a point in the second cluster (nearest neighbor method). When
method="complete", we use the largest dissimilarity between a point in
the first cluster and a point in the second cluster (furthest neighbor
method).
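A small sketch of these three definitions, using an ordinary
dissimilarity matrix d built with dist and two hypothetical clusters
A and B:
x <- matrix(rnorm(20), ncol=2)    # hypothetical data, 10 observations
d <- as.matrix(dist(x))           # pairwise dissimilarities
A <- 1:4                          # hypothetical cluster memberships
B <- 5:10
mean(d[A, B])                     # method="average": average between-cluster dissimilarity
min(d[A, B])                      # method="single": smallest between-cluster dissimilarity
max(d[A, B])                      # method="complete": largest between-cluster dissimilarity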
Cluster analysis divides a dataset into groups (clusters) of observations
that are similar to each other.
Hierarchical methods like agnes, diana, and mona construct a hierarchy
of clusterings, with the number of clusters ranging from one to the
number of observations. Partitioning methods like pam, clara, and
fanny require that the number of clusters be given by the user.
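The practical difference can be sketched as follows (the data matrix
and the choice k=3 are arbitrary assumptions, and as.hclust and cutree
are assumed to apply to the agnes result):
x <- matrix(rnorm(60), ncol=2)      # hypothetical data, 30 observations
ag <- agnes(x)                      # hierarchical: the full range of clusterings
cl3 <- cutree(as.hclust(ag), k=3)   # a 3-cluster partition can be cut out afterwards
pm3 <- pam(x, k=3)                  # partitioning: the number of clusters is fixed in advance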
Kaufman, L. and Rousseeuw, P. J. (1990).
Finding Groups in Data: An Introduction to Cluster Analysis.
Wiley, New York.
Struyf, A., Hubert, M. and Rousseeuw, P. J. (1997).
Integrating robust clustering techniques in S-PLUS.
Computational Statistics and Data Analysis,
26, 17-37.
agn1 <- agnes(votes.repub, metric="manhattan", stand=T)
print(agn1)
plot(agn1)

agn2 <- agnes(daisy(votes.repub), diss=T, method="complete")
plot(agn2)

agnes(dist(votes.repub), diss=T)