What is Clustering?

Clustering is a type of unsupervised learning, which means the model works on unlabeled data with no prior training. In clustering, the model divides the data points into groups such that similar data points fall into the same group while dissimilar ones fall into different groups.

Example

Suppose we have a data set of images of humans and we want to group them by age, using appearance as a feature (let's say). Say the model checks the size of the human in the image and then groups the image. The first image is grouped as a baby due to its small size, and it's put in category 1. The next image is read and compared for similarity with the previous image(s). If the similarity is low, it is put in another class, say 2, which stands for an adult man. The third image is of an adolescent. An adolescent looks neither like a baby nor like an adult man, but has some similarities to both. Initially the model would place this image in 1 or 2, depending on which it is more similar to. However, as more and more adolescent images come in, the model "realizes" that there are many of these ambiguous images and puts them in a new class 3. These groups can later be matched against the ground truth to measure accuracy.

K – Means Clustering (KMC)

Algorithm:

  • First we initialize K points, which can be chosen randomly or in a particular way. These K points are called our means.
  • Then, for each data point, we assign it to its closest mean. The distance can be computed with any of several distance formulas. Once we assign the point, we update that mean's position: the mean's position is the average of the items assigned to it so far.
  • This process can be repeated for a number of iterations (epochs) for better accuracy, and we have our clusters at the end.
  • Note that the number of clusters in KMC is predefined. The number of clusters (groups) is K.
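The steps above can be sketched in a few lines of NumPy. This is an illustrative toy implementation of my own (the function name and parameters are not from any library), not the optimized version OpenCV provides:

```python
import numpy as np

def kmeans(X, K, iterations=10, seed=0):
    """Toy K-means: X is an (N, d) array of points, K the number of clusters."""
    rng = np.random.default_rng(seed)
    # Initialize the K means by picking K distinct random data points.
    means = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    for _ in range(iterations):
        # Assign every point to its closest mean (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update each mean to the average of the points assigned to it.
        for k in range(K):
            if np.any(labels == k):
                means[k] = X[labels == k].mean(axis=0)
    return means, labels
```

On a small data set with two well-separated blobs, two iterations are already enough for the means to settle on the blob centers.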

KMC in Color Quantization:

What is Color Quantization?

Color quantization is a process applied to images to reduce the number of distinct colors in the image. This technique is used in computer graphics to support machines that aren't capable of holding many colors. In short, it is a technique used to represent an image in exactly K distinct colors.
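The core idea can be shown with a toy example: map each pixel to the nearest color in a K-color palette. Here the palette is picked by hand for illustration; KMC's job, shown below, is to find a good palette automatically:

```python
import numpy as np

# A hand-picked palette of K = 2 colors (black and white) for illustration.
palette = np.array([[0, 0, 0], [255, 255, 255]], dtype=float)
# Three example pixels (values made up for this demo).
pixels = np.array([[10, 20, 30], [250, 240, 245], [100, 120, 90]], dtype=float)

# For each pixel, pick the palette color at minimum Euclidean distance.
dists = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
quantized = palette[dists.argmin(axis=1)]
# The dark and mid-tone pixels snap to black; the bright one snaps to white.
```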

Implementation in Python:

Note: For this implementation, Python modules such as OpenCV (cv2) and NumPy are needed, which may not come with your default installation of Python. Please ensure they are installed before implementing KMC.

We first import the needed modules and read the image through OpenCV. We then reshape the image into a 2D NumPy array of pixels: one row per pixel, with three columns for the color channels.

import numpy as np
import cv2

img = cv2.imread(r"C:\Users\shriya-student\Documents\machinelearning\messi.jpg")
#Insert your own address above.
Z = img.reshape((-1,3))
#Flatten the image into a 2D array of pixels, 3 channel values per row.
#-1 means that NumPy infers the number of rows from the image size.

#Convert to float32, which cv2.kmeans requires.
Z = np.float32(Z)

The image I have used can be seen over here: https://github.com/PyProjectsIsFun/Machine-Learning/blob/master/messi.jpg

Now that we have read our image, we can apply our K-Means algorithm on it. We first define the criteria we need for KMC and then apply it on a fixed K and a fixed number of iterations.

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
#cv2.TERM_CRITERIA_EPS - stop the iteration if the specified accuracy, epsilon, is reached.
#cv2.TERM_CRITERIA_MAX_ITER - stop after the specified number of iterations, max_iter.
#cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER - stop when either condition is met.
#Here max_iter is 10 and epsilon is 1.0.

K = 2   #our value for K
ret, label, center = cv2.kmeans(Z, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
#cv2.KMEANS_RANDOM_CENTERS - initial centers are chosen at random,
#since we don't know in advance which colors the image contains.
#ret is the compactness: the sum of squared distances from each pixel to its center.
#label has one entry per pixel and records which center that pixel was assigned to.
#center is the array of cluster centers.

center = np.uint8(center)
#Convert the centers back to 8-bit integers so they can be used as pixel values.

res = center[label.flatten()]
#Creates an array with one row per pixel where the ith row is center[label[i]], i.e.
#each pixel is replaced by the center it was assigned to.

res2 = res.reshape(img.shape)
#Reshaping res back into an image. You can print out the arrays for better clarity.

cv2.imshow("Color Quantized Image", res2) #Showing the image.
cv2.waitKey(0)   #Wait indefinitely for a key press before closing the window.
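As a standalone aside, the `center[label.flatten()]` step above is just NumPy fancy indexing, which can be illustrated with a tiny made-up example (the colors and labels here are invented for the demo):

```python
import numpy as np

# Two cluster centers (made-up colors) and four per-pixel labels.
center = np.array([[0, 0, 255], [0, 255, 0]], dtype=np.uint8)
label = np.array([[0], [1], [1], [0]])

# Indexing center with the flattened labels replaces every label
# with the color of its cluster, yielding one color row per pixel.
res = center[label.flatten()]
# res is a (4, 3) array: rows 0 and 3 get center[0], rows 1 and 2 get center[1].
```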

You can play around with the value of K and observe the various results. I have included some of the results below.

This is the output when K = 2

This is the output when K = 8

This is the output when K = 25

This is the output when K = 100. You can see that even at K = 100, there are still minor disparities between this and the original image.

It is important to realize that each center here is not a random pixel location of the form (i, j); rather, each center is a color triple of the form (x, y, z) (in OpenCV's BGR channel order) that is chosen randomly and then updated over the course of the process.

The complete code can also be viewed here : https://github.com/PyProjectsIsFun/Machine-Learning/blob/master/color_quantization_kmeans.py
