The 5 Clustering Algorithms Data Scientists Need to Know

Haridha P | 02-Nov-2022

Clustering is a machine learning technique for grouping data points. Given a set of data points, a clustering algorithm assigns each point to a specific group. In theory, points in the same group should have similar characteristics, while points in different groups should have very different ones. Clustering is a form of unsupervised learning and a common method for statistical data analysis used in many fields.

In data science, cluster analysis helps us see which categories the data points fall into once a clustering algorithm has been applied, which can reveal useful insights in our data.

Agglomerative Hierarchical Clustering

Hierarchical clustering algorithms come in two flavors: top-down and bottom-up. Bottom-up algorithms begin by treating each data point as its own cluster and then merge (or agglomerate) pairs of clusters one at a time until everything has been combined into a single cluster containing all the data points. Bottom-up hierarchical clustering is therefore called hierarchical agglomerative clustering, or HAC. The hierarchy of clusters is represented as a tree (or dendrogram): the root of the tree is the single cluster that holds all the samples, while each leaf is a cluster containing a single sample.
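
A minimal sketch of building such a dendrogram, assuming SciPy and Matplotlib are available and using a small illustrative dataset (the values below are hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Toy 2-D data: two loose groups of points (illustrative values only).
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [5.0, 5.2], [5.1, 4.9], [4.8, 5.1]])

# Bottom-up (agglomerative) clustering with average linkage:
# each point starts as its own cluster and pairs are merged step by step.
Z = linkage(X, method="average")

# Z encodes the merge tree; dendrogram() draws the tree whose leaves are
# single samples and whose root contains all samples.
dendrogram(Z)
plt.show()
```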

  • We start by treating each data point as a separate cluster; if our dataset has X data points, we begin with X clusters. Next, we choose a distance metric that measures the separation between two clusters. As an example we will use average linkage, which defines the distance between two clusters as the average distance between data points in the first cluster and data points in the second.
  • In each iteration we merge two clusters into one. The two clusters chosen are the pair with the smallest average linkage; according to the distance measure we selected, these two clusters are the most similar, so they should be combined.
  • The previous step is repeated until we reach the root of the tree, the single cluster containing all of the data points. By deciding when to stop merging clusters, i.e. when to stop building the tree, we can choose how many clusters we end up with.
  • Since we are building a tree, hierarchical clustering lets us select whichever number of clusters best represents the data. The method is also not sensitive to the choice of distance metric; all of them tend to work equally well, whereas for other clustering algorithms the choice of distance metric is critical. A minimal code sketch follows below.
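
The sketch below assumes scikit-learn is available; it runs agglomerative clustering with average linkage and stops the tree at a chosen number of clusters (the data is illustrative):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [5.0, 5.2], [5.1, 4.9], [4.8, 5.1]])

# n_clusters decides where we "stop building the tree";
# linkage="average" matches the average-linkage distance described above.
model = AgglomerativeClustering(n_clusters=2, linkage="average")
labels = model.fit_predict(X)
print(labels)  # e.g. [0 0 0 1 1 1]
```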

Expectation–Maximization (EM) Clustering Using Gaussian Mixture Models (GMMs)

One of the main flaws of K-Means is its naive use of the mean as the cluster center. To see why this is not the best approach, picture two circular clusters with different radii centered on the same mean: because the two mean values are so close together, K-Means cannot tell the clusters apart. K-Means also fails when the clusters are not circular, again because the mean is used as the cluster center. A hypothetical sketch of this failure mode appears below.
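
A minimal sketch of the failure, assuming scikit-learn is available and using synthetic concentric rings generated with make_circles (an illustrative dataset, not from the article):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_circles

# Two rings with different radii but the same center.
X, true_labels = make_circles(n_samples=300, factor=0.3, noise=0.05,
                              random_state=0)

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# K-Means splits the data with a straight boundary through the shared center
# instead of recovering the inner and outer rings.
print(kmeans_labels[:10], true_labels[:10])
```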

Gaussian Mixture Models assume each cluster is described by a Gaussian distribution, and we use the optimization technique known as Expectation–Maximization (EM) to determine each cluster's Gaussian parameters (such as the mean and standard deviation). With the Gaussians fitted to the clusters, Expectation–Maximization clustering with GMMs proceeds as follows.

  • As with K-Means, we start by choosing the number of clusters and randomly initializing the Gaussian distribution parameters for each cluster. A quick scan of the data can help produce a reasonable estimate for the starting parameters.
  • Given these Gaussian distributions, compute the probability that each data point belongs to each cluster. The closer a point is to a Gaussian's center, the more likely it is to belong to that cluster. This should be intuitive: with a Gaussian distribution, we are assuming that most of the data lies near the cluster's center.
  • Based on these probabilities, we compute a new set of Gaussian distribution parameters that maximizes the probability of the data points falling within the clusters. The new parameters are computed from a weighted sum of the data point positions, where each weight is the probability that the data point belongs to that particular cluster. The last two steps are repeated until convergence, i.e. until the distributions change very little from one iteration to the next. A minimal sketch follows below.
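
A minimal sketch of EM clustering with a Gaussian Mixture Model, assuming scikit-learn is available; the elongated, non-circular groups below are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative data: two elongated, non-circular groups.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([0, 0], [3.0, 0.5], size=(100, 2)),   # wide, flat cluster
    rng.normal([0, 5], [0.5, 1.5], size=(100, 2)),   # narrow, tall cluster
])

# EM alternates the two steps above: assign soft probabilities,
# then re-estimate each Gaussian's parameters, until convergence.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)

probs = gmm.predict_proba(X)   # soft assignments (one probability per cluster)
print(gmm.means_)              # fitted cluster centers
```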

Density-Based Spatial Clustering of Applications with Noise (DBSCAN)

DBSCAN is a density-based clustering algorithm similar to mean-shift, but it has a few notable advantages.

  • DBSCAN starts at an arbitrary starting point that has not been visited. The neighborhood of this point is extracted using a distance epsilon (all points within that distance are neighborhood points).
  • If there are enough points in this neighborhood (according to minPoints), the clustering process starts and the current data point becomes the first point of a new cluster. Otherwise, the point is labeled as noise (this noisy point may later become part of a cluster). In both cases the point is marked as 'visited'.
  • The points in the neighborhood of this first point also become part of the same cluster. This procedure of assigning all points in the neighborhood to the cluster is then repeated for every new point that has just been added to the cluster group.
  • Steps 2 and 3 are repeated until all of the cluster's points have been determined, i.e. until every point in the cluster's neighborhood has been visited and labeled.
  • Once we are done with the current cluster, a new unvisited point is retrieved and processed, leading to the discovery of a further cluster or of more noise. This process repeats until every point has been marked as visited. A minimal sketch follows below.
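
A minimal sketch, assuming scikit-learn is available, with the two parameters described above: the neighborhood radius eps (epsilon) and min_samples (corresponding to minPoints); the crescent-shaped data is illustrative:

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two crescent-shaped groups plus a few noisy stragglers.
X, _ = make_moons(n_samples=300, noise=0.08, random_state=0)

labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

# Points labeled -1 were never absorbed into any cluster: DBSCAN's noise.
print(set(labels))
```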

Clustering using Mean-Shift

Mean-shift clustering is a sliding-window-based algorithm that looks for dense areas of data points. It is a centroid-based method: its goal is to locate the center of each group or class, and it works by updating candidate center points to be the mean of the points within the sliding window. These candidate windows are then filtered in a post-processing stage to remove near-duplicates, forming the final set of center points and their corresponding groups.

  • To describe mean-shift, consider a collection of points in two-dimensional space. We begin with a circular sliding window of radius r (the kernel) centered at a randomly chosen point C. Mean shift is a hill-climbing algorithm that iteratively shifts this kernel to a higher-density region at each step until convergence.
  • At every iteration, the sliding window is shifted toward regions of higher density by moving its center point to the mean of the points within the window (hence the name). The density within the sliding window is proportional to the number of points inside it.

The sliding window keeps shifting according to the mean until there is no direction in which a shift would take in more kernel points, i.e. until the density (the number of points inside the window) no longer increases.

This process of steps 1 through 3 is carried out with many sliding windows until every point lies within a window. When multiple sliding windows overlap, the window containing the most points is kept. The data points are then clustered according to the sliding window in which they reside, as in the sketch below.
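
A minimal sketch, assuming scikit-learn is available; the kernel bandwidth plays the role of the sliding-window radius r described above, and the blob data is illustrative:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([0, 0], 0.6, size=(100, 2)),
    rng.normal([4, 4], 0.6, size=(100, 2)),
    rng.normal([0, 4], 0.6, size=(100, 2)),
])

# estimate_bandwidth picks a window radius from the data;
# a fixed value could be passed instead (bandwidth=r).
bandwidth = estimate_bandwidth(X, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth)
labels = ms.fit_predict(X)

print(ms.cluster_centers_)  # the final, de-duplicated window centers
```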

Clustering with K-Means

The most well-known clustering algorithm is undoubtedly K-Means. Many beginning data science and machine learning courses cover it. It's simple to comprehend and use in code! 

  • To get started, we choose the number of classes or groups to use and randomly initialize their respective center points. It is worth taking a quick look at the data to see whether any obvious groupings stand out before deciding how many classes to use. The center points are vectors with the same length as each data point vector.
  • Each data point is classified by computing the distance between it and each group's center; the point is then assigned to the group whose center is closest.
  • Based on these classified points, we recompute each group's center by averaging all of the vectors in the group.
  • Repeat these steps for a set number of iterations or until the group centers change very little between iterations. You can also initialize the group centers randomly several times and then pick the run that appears to have produced the best results. A minimal sketch follows below.
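
A minimal sketch, assuming scikit-learn is available; n_clusters is the number of groups chosen up front and n_init reruns the random center initialization and keeps the best result, as described above (the data is illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([0, 0], 0.5, size=(100, 2)),
    rng.normal([5, 5], 0.5, size=(100, 2)),
    rng.normal([0, 5], 0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print(kmeans.cluster_centers_)  # the final group centers
```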
