For people who are unfamiliar, k-means is a partitioning algorithm that groups observations into a chosen number (k) of clusters so that each observation ends up in the cluster with the “nearest” mean. So if you want 5 groups, it makes five clusters and assigns every observation to the one whose mean it is closest to.
And that raises the question of what “nearest” means, and this is where you can replace Euclidean distance with something like Kullback-Leibler divergence (that’s the KL below), which makes more sense than Euclidean distance if you’re trying to measure how close two probability distributions are to each other.
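To make the contrast concrete, here is a small sketch comparing Euclidean distance with KL divergence on two made-up probability vectors. It uses `scipy.stats.entropy`, which computes KL(p‖q) when given two arguments; note that unlike Euclidean distance, KL is not symmetric.

```python
# Compare Euclidean distance and KL divergence on two (made-up)
# probability distributions over three outcomes.
import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q) in nats

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

euclidean = np.linalg.norm(p - q)   # symmetric: same for (p, q) and (q, p)
kl_pq = entropy(p, q)               # sum(p * log(p / q))
kl_qp = entropy(q, p)               # generally different: KL is not a metric

print(euclidean)
print(kl_pq, kl_qp)
```

The asymmetry is not a bug: KL(p‖q) penalizes q for putting low probability where p puts high probability, which is often exactly the notion of “nearness” you want between distributions.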
For the data I work with at $dayjob I've found the silhouette method to perform best, but I assume this is extremely field-specific. Clustering your data and taking a representative sample of each cluster is such a powerful trick for making big data small, but finding an appropriate K is more art than science.
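For anyone who hasn't used it, a minimal sketch of picking K via the silhouette score with scikit-learn; the blob data here is synthetic, and the k range is arbitrary:

```python
# Pick k by maximizing the silhouette score over a candidate range.
# Synthetic blob data stands in for real data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # in [-1, 1]; higher is better

best_k = max(scores, key=scores.get)
print(best_k)  # for well-separated blobs this usually recovers the true k
```

The silhouette score rewards clusters that are internally tight and well separated from each other, so it tends to break down in exactly the messy, overlapping cases where picking K is hardest, hence the "art more than science."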
At a previous $dayjob at a very large financial institution, K was however many clusters were present in the strategy the exec team and their highly paid consultants agreed to.
You find that many clusters and shoehorn the consultant-provided categories onto the k clusters you obtain.
To be fair, finding K is highly domain-dependent, and I would argue it should not be for the analyst (solely) to decide, but settled with feedback from domain experts.
K is whatever you want it to be. You want 5 clusters? k=5. If you don’t know the right number of clusters, try a few different values of k and see which partitions your sample in a way that’s good for your problem.
I agree it is a profound question. My thesis is fairly boring.
For any given clustering task of interest, there is no single value of K.
Clustering & unsupervised machine learning is as much about creating meaning and structure as it is about discovering or revealing it.
Take the case of biological taxonomy: what K will best segment the animal kingdom?
There is no true value of K. If your answer is for a child, maybe it’s 6, corresponding to what we’re taught in school - mammals, birds, reptiles, amphibians, fish, and invertebrates.
If your answer is for a zoologist, obviously this won’t do.
Every clustering task of interest is like this. And I say of interest because clustering things like digits in the classic MNIST dataset is better posed as a classification problem - the categories are defined analytically.
Can folks comment on what applications they use k-means for? It was a basic technique I learned in school, but honestly I am not really familiar with a single use case that is very clearly motivated besides "pretty pictures".
So I do a bit of work in geospatial analysis, and hotspots are better represented by DBSCAN (it does not need to assign every point to a cluster). I do not even use clustering very often in my gig (supervised ML and anomaly detection are much more prevalent in the rest of my work).
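The not-every-point-gets-a-cluster property is the whole appeal for hotspots. A rough sketch with scikit-learn's DBSCAN on made-up coordinates; `eps` and `min_samples` are arbitrary choices here:

```python
# DBSCAN on toy point data: dense clumps become clusters, the
# isolated point gets label -1 (noise) instead of being forced
# into a cluster, which is what you want for hotspot detection.
import numpy as np
from sklearn.cluster import DBSCAN

pts = np.array([
    [0.0, 0.0], [0.1, 0.1], [0.0, 0.1],   # dense clump A
    [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],   # dense clump B
    [9.0, 0.5],                           # isolated point
])

labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(pts)
print(labels)  # e.g. [0 0 0 1 1 1 -1]: two hotspots plus one noise point
```

K-means, by contrast, would split that stray point into whichever cluster it happens to be least far from, inflating the hotspot.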
# Generalized K-Means Clustering for Apache Spark with Bregman Divergences
I've built a production-ready K-Means library for Apache Spark that supports multiple distance functions beyond Euclidean.
*Why use this instead of Spark MLlib?*
MLlib's KMeans is hard-coded to Euclidean distance, which is mathematically wrong for many data types:
- *Probability distributions* (topic models, histograms): KL divergence is the natural measure. Euclidean weights a 0.01 shift in a 0.5-probability component the same as a 0.01 shift in a 0.01-probability component, even though the latter is a far larger relative change in the distribution.
- *Audio/spectral data*: Itakura-Saito respects multiplicative power spectra. Euclidean incorrectly treats -20dB and -10dB as closer than -10dB and 0dB.
- *Count data* (traffic, sales): Generalized-I divergence for Poisson-distributed data.
- *Outlier robustness*: L1/Manhattan gives median-based clustering vs mean-based (L2).
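The L1/median point is easy to see in one dimension. This is a generic illustration (not this library's API) of why L1 centers resist outliers:

```python
# Why L1 gives outlier-robust centers: the mean (the L2-optimal
# center) chases the outlier, the median (the L1-optimal center)
# does not. Toy one-dimensional cluster with one gross outlier.
import numpy as np

cluster = np.array([1.0, 1.1, 0.9, 1.0, 100.0])

print(cluster.mean())       # 20.8 -- dragged far from the bulk of the data
print(np.median(cluster))   # 1.0  -- unaffected by the outlier
```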
Using the wrong divergence yields mathematically valid but semantically meaningless clusters.
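A key property that makes Bregman k-means work: for any Bregman divergence (KL included), the ordinary arithmetic mean still minimizes the total divergence from a cluster's members, so only the assignment step changes. A hand-rolled sketch of Lloyd's algorithm with KL (illustrative, not this library's API; empty clusters keep their old center in this toy version):

```python
# One hand-rolled KL k-means loop over probability vectors.
# Only the assignment step differs from ordinary k-means: the
# update step still uses the plain mean, which is optimal for
# every Bregman divergence, not just squared Euclidean.
import numpy as np

def kl(p, q):
    """KL(p || q) for strictly positive probability vectors."""
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(4), size=12)                 # 12 probability vectors
centers = X[rng.choice(len(X), size=2, replace=False)]  # k = 2 initial centers

for _ in range(10):
    # assignment: nearest center under KL, not Euclidean
    labels = np.array([np.argmin([kl(x, c) for c in centers]) for x in X])
    # update: plain mean; keep the old center if a cluster went empty
    centers = np.array([
        X[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
        for k in range(2)
    ])

print(labels)
```

Note the means of probability vectors are themselves probability vectors, so the centers stay in the domain where KL is defined.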
This started as an experiment to understand Bregman divergences. Surprisingly, KL divergence is often faster than Euclidean for probability data. Open to feedback!