= K-means Clustering =

K-means is a clustering method; clustering is an unsupervised learning problem.


 * In this method the number of clusters, $$k$$, is an input to the algorithm (a hyperparameter).
 * K-means is a greedy algorithm. It can be viewed as a special case of the expectation-maximization (EM) algorithm, in which we seek a maximum likelihood estimate (MLE).
 * The K-means algorithm is not guaranteed to converge to the global minimum of the loss function (the sum of squared Euclidean distances from each point to its cluster center). Finding the global minimum is an NP-hard problem, so in practice the algorithm is restarted from several random initializations and the best run is kept, as sketched below.
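
Here is a minimal sketch of the restart strategy using scikit-learn's KMeans (the toy dataset and the parameter choices are illustrative assumptions, not part of the algorithm itself):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three well-separated Gaussian blobs (illustrative data).
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2))
               for c in [(0, 0), (5, 5), (0, 5)]])

# Run k-means from several random initializations; different starts
# can converge to different local minima of the loss (sklearn's inertia_).
losses = []
for seed in range(5):
    km = KMeans(n_clusters=3, init="random", n_init=1, random_state=seed).fit(X)
    losses.append(km.inertia_)

print(losses)       # the runs may end at different loss values
print(min(losses))  # keep the best (lowest-loss) run
```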

Relation to Gaussian Mixture Model (GMM):

 * GMM is a more probabilistic approach to clustering: each point receives a soft (probabilistic) assignment to every cluster, rather than a hard assignment to a single cluster.
 * The expectation-maximization (EM) algorithm is used to fit a Gaussian mixture model to the data.
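
For comparison, here is a minimal sketch of EM-based GMM clustering using scikit-learn's GaussianMixture (the toy dataset and parameters are assumptions for illustration); unlike k-means' hard assignments, predict_proba returns soft responsibilities:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two overlapping Gaussian blobs (illustrative data).
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(3.0, 1.0, size=(200, 2))])

# EM fits the mixture: the E-step computes responsibilities, the M-step
# updates the means, covariances, and mixing weights.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

hard_labels = gmm.predict(X)        # hard assignments, as in k-means
soft_labels = gmm.predict_proba(X)  # per-point cluster probabilities
print(soft_labels[:3])
```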

= Intuition =

Data points in each cluster are closer to their own cluster's center than to the centers of the other clusters.

= Algorithm =

Assume that the number of clusters is given by $$k$$ and the cluster centers are denoted by $$\mu_1, \mu_2, \cdots, \mu_k \in \mathbb{R}^d$$.

We define a loss function for clustering and try to minimize it through the following greedy algorithm.

The loss function is defined as

$$L = \sum_{j=1}^k \sum_{i} a_{ij} \|x_i - \mu_j\|^2 \quad \text{where} \quad a_{ij} = \begin{cases} 1 & \text{if } x_i \text{ is assigned to cluster } j \\ 0 & \text{otherwise} \end{cases}$$
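
To make the notation concrete, here is a small NumPy sketch that evaluates this loss (the function name kmeans_loss and the toy data are my own illustration):

```python
import numpy as np

def kmeans_loss(X, mu, labels):
    """Sum of squared distances from each point to its assigned center.

    X      : (n, d) data points
    mu     : (k, d) cluster centers
    labels : (n,) index of the assigned cluster for each point,
             i.e. labels[i] = j  <=>  a_ij = 1
    """
    diffs = X - mu[labels]            # x_i - mu_{j(i)} for every point
    return float(np.sum(diffs ** 2))  # sum over points and dimensions

# Tiny example: two 1-D clusters.
X = np.array([[0.0], [1.0], [9.0], [10.0]])
mu = np.array([[0.5], [9.5]])
labels = np.array([0, 0, 1, 1])
print(kmeans_loss(X, mu, labels))  # 4 * 0.25 = 1.0
```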

Minimize $$L$$ with respect to $$a$$ and $$\mu$$ by alternating the following two steps until convergence:

 * 1) Choose the optimal $$a$$ for fixed $$\mu$$ by assigning each $$x_i$$ to the nearest $$\mu_j$$: $$a_{ij} = \begin{cases} 1 & \text{if } j = {\arg\min}_l \|x_i - \mu_l\|^2 \\ 0 & \text{otherwise} \end{cases}$$
 * 2) Choose the optimal $$\mu$$ for fixed $$a$$ by updating each $$\mu_j$$ to be the empirical mean of the points assigned to cluster $$j$$: $$\mu_j = \frac{1}{n_j} \sum_{i:~x_i\text{ in } j} x_i \text{ where } n_j = \sum_{i=1}^n a_{ij} = \text{number of data points assigned to cluster } j$$ (a sketch of the full loop follows this list).
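
Putting the two steps together, here is a minimal NumPy sketch of the full loop (the function name, the convergence test, and the toy data are illustrative choices; it also assumes no cluster ever becomes empty):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Lloyd's algorithm: alternate the assignment and mean-update steps."""
    rng = np.random.default_rng(seed)
    # Initialize centers with k randomly chosen data points.
    mu = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Step 1: assign each x_i to the nearest center (optimal a for fixed mu).
        dists = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)  # (n, k)
        labels = dists.argmin(axis=1)
        # Step 2: move each center to the mean of its points (optimal mu for fixed a).
        new_mu = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_mu, mu):  # converged: centers no longer move
            break
        mu = new_mu
    return mu, labels

# Usage on toy data: two blobs, two clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
mu, labels = kmeans(X, k=2)
print(mu)
```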

= Justification =

In this section, we show that choosing the cluster center $$\mu_j$$ according to step 2 of the algorithm minimizes the loss function for a fixed set of assignment factors ($$a_{ij}$$).

In order to find the minimum value of the loss function $$L$$ as a function of $$\mu_j$$, we find the point at which its gradient is zero:

$$\nabla_{\mu_j} L = 0$$

Therefore, we have

$$\begin{aligned} \nabla_{\mu_j} L = & \sum_i a_{ij} \nabla_{\mu_j} (x_i-\mu_j)^T (x_i-\mu_j) \\ = & \sum_i a_{ij} \nabla_{\mu_j} (x_i^T x_i -2\mu_j^T x_i +\mu_j^T \mu_j) \\ = & \sum_i a_{ij} (-2 x_i + 2 \mu_j) = 0 \end{aligned}$$

Getting rid of the factor of 2 in the last expression, we have

$$\mu_j \sum_{i} a_{ij} = \sum_{i} a_{ij} x_i \Rightarrow \mu_j = \frac{\sum_{i} a_{ij} x_i}{\sum_{i} a_{ij}}$$

We also have $$n_j = \sum_{i=1}^n a_{ij} = \# \{i: x_i \text{ is assigned to } j \}$$, which simplifies $$\mu_j$$ to

$$\mu_j = \frac{1}{n_j} \sum_{i:~x_i\text{ in } j} x_i$$
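
As a quick numerical sanity check (with arbitrary illustrative data), we can verify that the gradient above vanishes at the empirical mean and that perturbing $$\mu_j$$ away from it never decreases the loss:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))  # points assigned to one cluster j
mu = X.mean(axis=0)           # the claimed minimizer

# Gradient of L_j = sum_i ||x_i - mu||^2 with respect to mu:
grad = np.sum(-2 * (X - mu), axis=0)
print(np.allclose(grad, 0.0))  # True: the gradient vanishes at the mean

loss = lambda m: np.sum((X - m) ** 2)
for _ in range(5):
    perturbed = mu + rng.normal(scale=0.1, size=3)
    assert loss(perturbed) >= loss(mu)  # any perturbation increases the loss
```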