How do machine learning professionals make data readable and accessible? What techniques do they use to dissect raw information?
One of these techniques is clustering. Data clustering is the process of grouping related items in a data set together. Because the items in each group share common traits, key stakeholders can use the resulting insights to make critical strategic decisions.
After data preparation, which takes up 50% to 80% of a specialist’s time, clustering takes center stage. It organizes data into structures other members of the company can understand more easily, even if they lack advanced technical knowledge.
Clustering in machine learning involves many techniques to help accomplish this goal. Here is a detailed overview of those techniques.
Clustering Techniques
Data science is an ever-changing field with lots of variables and fluctuations. However, one thing’s for sure – whether you want to practice clustering in data mining or clustering in machine learning, you can use a wide array of tools to automate your efforts.
Partitioning Methods
The first group of techniques is the so-called partitioning methods. There are three main sub-types of this approach.
K-Means Clustering
K-means clustering is an effective yet straightforward clustering technique. To execute it, you first define your number K, which tells the program how many centroids (“coordinates” representing the centers of your clusters) to place. The algorithm then assigns each data point to its nearest centroid and recomputes the centroids, repeating the process until the clusters stabilize.
You can look at K-means clustering like finding the center of a triangle. Zeroing in on the center lets you divide the triangle into several areas, allowing you to make additional calculations.
And the name K-means clustering is pretty self-explanatory. It refers to finding the mean value of each cluster, which becomes its centroid.
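To make the procedure concrete, here is a minimal sketch of K-means in Python with scikit-learn; the toy data set and the choice of K = 3 are illustrative assumptions rather than anything prescribed by the article.

    # Minimal K-means sketch (assumes scikit-learn is installed; values are illustrative)
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=300, centers=3, random_state=42)  # toy data with 3 groups

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)  # K = 3 centroids
    labels = kmeans.fit_predict(X)      # cluster index assigned to each data point
    print(kmeans.cluster_centers_)      # the final centroids, i.e., the cluster means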
K-Medoids Clustering
K-means clustering is useful but is prone to so-called “outlier data.” These are data points that differ markedly from the rest and can drag centroids away from where the bulk of the data lies. Data miners need a reliable way to deal with this issue.
Enter K-medoids clustering.
It’s similar to K-means clustering, but just as planes overcome gravity, K-medoids clustering overcomes outliers. It uses “medoids” as reference points: actual data points with the maximum similarity to (that is, the minimum total distance from) the other points in their cluster. Because each center must be a real observation, outliers have far less pull on the result, making this one of the more dependable clustering techniques in data mining.
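scikit-learn itself does not ship a K-medoids estimator, so the following is only a rough NumPy sketch of the alternating “assign points, then re-pick each medoid” loop; a real project would more likely rely on a dedicated implementation such as the one in the optional scikit-learn-extra package.

    import numpy as np

    def k_medoids(X, k, n_iter=50, seed=0):
        """Naive alternating K-medoids: every cluster center is an actual data point."""
        rng = np.random.default_rng(seed)
        medoids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # Step 1: assign every point to its nearest medoid
            dists = np.linalg.norm(X[:, None, :] - medoids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Step 2: re-pick each medoid as the member with the smallest
            # total distance to the other members of its cluster
            new_medoids = medoids.copy()
            for j in range(k):
                members = X[labels == j]
                if len(members):
                    pairwise = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2)
                    new_medoids[j] = members[pairwise.sum(axis=1).argmin()]
            if np.allclose(new_medoids, medoids):  # medoids stopped moving
                break
            medoids = new_medoids
        return labels, medoids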
Fuzzy C-Means Clustering
Fuzzy C-means clustering is all about calculating the distance from each cluster centroid to the individual data points. Instead of forcing every point into exactly one cluster, it assigns each point a degree of membership in every cluster: the closer a point is to a centroid, the stronger its membership in that cluster, and the farther away it lies, the weaker that membership becomes.
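The membership idea can be sketched in a few lines of NumPy; the fuzziness exponent m = 2 and the fixed number of iterations below are illustrative assumptions, and packages such as scikit-fuzzy offer full implementations.

    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
        """Each point gets a membership degree in every cluster (each row of U sums to 1)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            Um = U ** m
            centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted means
            # Distance of every point to every centroid (small epsilon avoids division by zero)
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-10
            # Closer points receive higher membership, farther points lower
            U = 1.0 / (d ** (2 / (m - 1)))
            U /= U.sum(axis=1, keepdims=True)
        return U, centroids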
Hierarchical Methods
Some forms of clustering in machine learning are like textbooks – similar topics are grouped in a chapter and are different from topics in other chapters. That’s precisely what hierarchical clustering aims to accomplish. You can use the following methods to create data hierarchies.
Agglomerative Clustering
Agglomerative clustering is one of the simplest forms of hierarchical clustering. It starts by treating every data point as its own cluster and then repeatedly merges the most similar clusters, making sure the points within each merged cluster remain alike. By grouping them step by step, you can see the differences between individual clusters.
Because each data point begins as a full-fledged cluster and larger clusters are built up through merging, this is a bottom-up strategy.
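As a short sketch, scikit-learn’s AgglomerativeClustering performs exactly this bottom-up merging; the four clusters and the Ward linkage are illustrative choices.

    from sklearn.cluster import AgglomerativeClustering
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=200, centers=4, random_state=0)

    # Every point starts as its own cluster; Ward linkage merges the pair of clusters
    # that increases within-cluster variance the least, until 4 clusters remain
    agg = AgglomerativeClustering(n_clusters=4, linkage="ward")
    labels = agg.fit_predict(X)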
Divisive Clustering
Divisive clustering lies on the other end of the hierarchical spectrum. Here, you start with a single cluster containing the whole data set and split it into smaller ones as you move through your data. This top-down approach keeps splitting until you reach the requested number of partitions.
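Mainstream libraries mostly implement the agglomerative direction, so a common practical stand-in for divisive clustering is bisecting K-means, which repeatedly splits the largest cluster in two; the sketch below assumes scikit-learn 1.1 or newer, where BisectingKMeans is available.

    from sklearn.cluster import BisectingKMeans   # requires scikit-learn >= 1.1
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=200, centers=4, random_state=0)

    # Start from one all-encompassing cluster and keep splitting (top-down)
    # until the requested number of partitions is reached
    divisive = BisectingKMeans(n_clusters=4, random_state=0)
    labels = divisive.fit_predict(X)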
Density-Based Methods
Birds of a feather flock together. That’s the basic premise of density-based methods. Data points that are close to each other form high-density clusters, indicating their cohesiveness. The two primary density-based methods of clustering in data mining are DBSCAN and OPTICS.
DBSCAN (Density-Based Spatial Clustering of Applications With Noise)
Related data groups are close to each other, forming high-density areas in your data sets. The DBSCAN method picks up on these areas and groups information accordingly, while points left in low-density regions are treated as noise rather than forced into a cluster.
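A minimal DBSCAN sketch with scikit-learn might look like the following; the eps radius and min_samples threshold are illustrative assumptions that would normally be tuned to the data.

    from sklearn.cluster import DBSCAN
    from sklearn.datasets import make_moons

    # Two interleaved half-moons: dense shapes that centroid-based methods handle poorly
    X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

    # eps = neighborhood radius, min_samples = points needed to form a dense region
    db = DBSCAN(eps=0.2, min_samples=5)
    labels = db.fit_predict(X)   # low-density points are labeled -1 (noise)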
OPTICS (Ordering Points to Identify the Clustering Structure)
The OPTICS technique is like DBSCAN, grouping data points according to their density. The major difference is that OPTICS orders the points by reachability, which lets it identify clusters of varying density within the same data set.
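OPTICS has a nearly identical interface in scikit-learn, as the hedged sketch below shows; the min_samples value and the toy data with different spreads are illustrative assumptions.

    from sklearn.cluster import OPTICS
    from sklearn.datasets import make_blobs

    # Blobs with different spreads, i.e., clusters of varying density
    X, _ = make_blobs(n_samples=300, centers=3,
                      cluster_std=[0.3, 1.0, 2.0], random_state=0)

    optics = OPTICS(min_samples=10)
    labels = optics.fit_predict(X)   # -1 again marks noise
    ordering = optics.ordering_      # reachability ordering of the points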
Grid-Based Methods
You can see grids on practically every corner – in your house, in your car, on city streets. They’re also prevalent in clustering: grid-based methods divide the data space into a finite number of cells and work with those cells instead of individual data points.
STING (Statistical Information Grid)
The STING method divides the data space into rectangular cells arranged in a hierarchical grid. Afterward, you compute statistical parameters (such as counts and means) for each cell and use them to categorize the information.
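Mainstream Python libraries do not ship a STING implementation, so the following is only a loose illustration of the grid idea, with a hypothetical grid_summary helper and arbitrary thresholds: bin the data space into rectangular cells, compute a statistic per cell, and keep the cells that pass a threshold.

    import numpy as np

    def grid_summary(X, bins=10, min_count=5):
        """Bin 2-D data into a rectangular grid and flag the dense cells."""
        counts, x_edges, y_edges = np.histogram2d(X[:, 0], X[:, 1], bins=bins)
        dense_cells = np.argwhere(counts >= min_count)   # (x-bin, y-bin) indices of dense cells
        return counts, dense_cells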
CLIQUE (Clustering in QUEst)
Agglomerative clustering isn’t the only bottom-up clustering method on our list. There’s also the CLIQUE technique. It divides the data space into grid cells, detects the dense cells, and combines adjacent dense ones into clusters according to your parameters, which also lets it find clusters in subspaces of high-dimensional data.
Model-Based Methods
Different clustering techniques rest on different assumptions. Model-based methods assume that an underlying statistical model generated the data points, and the goal is to recover that model. Several such models are used here.
Gaussian Mixture Models (GMM)
Gaussian mixture models assume the data were generated by a combination of several Gaussian (normal) distributions. Each distribution corresponds to a cluster, and the data points most likely to come from the same distribution are treated as related.
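With scikit-learn, fitting a Gaussian mixture and reading off cluster assignments can be sketched as follows; the three components are an illustrative assumption.

    from sklearn.mixture import GaussianMixture
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

    # Each mixture component is one Gaussian "cluster"
    gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
    hard_labels = gmm.predict(X)        # most likely component per point
    soft_labels = gmm.predict_proba(X)  # probability of belonging to each component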
Hidden Markov Models (HMM)
Hidden Markov models describe sequences of observations driven by hidden states, and most people use them to determine the probability of certain outcomes. For clustering purposes, observations that share the same inferred hidden state (or have similar state sequences) end up grouped together.
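As a rough, hedged sketch, the third-party hmmlearn package can fit a Gaussian HMM to a sequence and return the most likely hidden state for each observation, which can then serve as a cluster label; the two-state toy sequence below is an illustrative assumption.

    import numpy as np
    from hmmlearn import hmm   # third-party package, installed separately

    # A toy 1-D sequence that alternates between a low and a high regime
    rng = np.random.default_rng(0)
    seq = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)]).reshape(-1, 1)

    model = hmm.GaussianHMM(n_components=2, n_iter=100, random_state=0)
    model.fit(seq)
    states = model.predict(seq)   # hidden state per observation, used as a cluster label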
Spectral Clustering
If you often deal with information organized as graphs, spectral clustering can be your best friend. It finds related groups of nodes according to the edges linking them, using the eigenvalues (the spectrum) of a similarity matrix to partition the data.
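A short scikit-learn sketch shows the graph flavor of the method; the two concentric circles and the nearest-neighbors affinity are illustrative choices.

    from sklearn.cluster import SpectralClustering
    from sklearn.datasets import make_circles

    # Two concentric circles: related through graph structure, not compactness
    X, _ = make_circles(n_samples=300, factor=0.5, noise=0.05, random_state=0)

    spectral = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                                  random_state=0)
    labels = spectral.fit_predict(X)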
Comparison of Clustering Techniques
It’s hard to say that one algorithm is superior to another because each has a specific purpose. Nevertheless, some clustering techniques might be especially useful in particular contexts:
- OPTICS beats DBSCAN when clustering data points with different densities.
- K-means outperforms divisive clustering when you wish to minimize the distance between data points and their cluster centers.
- Spectral clustering is easier to implement than the STING and CLIQUE methods.
Cluster Analysis
You can’t put your feet up after clustering information. The next step is to analyze the groups to extract meaningful information.
Importance of Cluster Analysis in Data Mining
The importance of cluster analysis in data mining can be compared to the importance of sunlight in tree growth. You can’t get valuable insights without analyzing your clusters, and without those insights, stakeholders can’t make critical decisions about improving their marketing efforts, target audience, and other key aspects of the business.
Steps in Cluster Analysis
Just like the production of cars consists of many steps (e.g., assembling the engine, making the chassis, painting, etc.), cluster analysis is a multi-stage process:
Data Preprocessing
Noise, missing values, and inconsistent scales plague raw information. Data preprocessing solves these issues by cleaning and standardizing the data, making it more understandable to clustering algorithms.
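A minimal preprocessing sketch, assuming numeric features with a few missing values, might impute and rescale the data before any clustering algorithm sees it; the tiny array below is purely illustrative.

    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import StandardScaler

    X = np.array([[1.0, 200.0], [2.0, np.nan], [3.0, 180.0], [400.0, 210.0]])

    X_clean = SimpleImputer(strategy="median").fit_transform(X)  # fill missing values
    X_scaled = StandardScaler().fit_transform(X_clean)           # put features on one scale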
Feature Selection
You zero in on the features that best distinguish the clusters, making those clusters easier to identify. Plus, feature selection reduces the number of dimensions, allowing you to store information in a smaller space.
Clustering Algorithm Selection
Choosing the right clustering algorithm is critical. You need to ensure your algorithm is compatible with the end result you wish to achieve. The best way to do so is to determine how you want to establish the relatedness of the information (e.g., by distance to a cluster center, by density, or by an underlying model).
Cluster Validation
In addition to making your data points easily digestible, you also need to verify whether your clustering process is legit. That’s where cluster validation comes in.
Cluster Validation Techniques
There are three main cluster validation techniques when performing clustering in machine learning:
Internal Validation
Internal validation evaluates your clustering using only the clustered data itself, through measures of compactness and separation such as the silhouette coefficient.
External Validation
External validation assesses a clustering process by comparing the clusters against external information, such as known ground-truth labels.
Relative Validation
You can vary your number of clusters or other parameters to evaluate your clustering. This procedure is known as relative validation.
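These three flavors map naturally onto common scikit-learn metrics, as in the hedged sketch below: the silhouette score stands in for internal validation, the adjusted Rand index against known labels for external validation, and a sweep over the number of clusters for relative validation; the toy data and the K-means choice are illustrative.

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score, adjusted_rand_score

    X, y_true = make_blobs(n_samples=300, centers=3, random_state=0)

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Internal validation: uses only the data and the cluster labels themselves
    print("silhouette:", silhouette_score(X, labels))

    # External validation: compares the clustering to known ground-truth labels
    print("adjusted Rand index:", adjusted_rand_score(y_true, labels))

    # Relative validation: vary the number of clusters and compare the scores
    for k in range(2, 7):
        k_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        print(k, silhouette_score(X, k_labels))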
Applications of Clustering in Data Mining
Clustering may sound a bit abstract, but it has numerous applications in data mining.
- Customer Segmentation – This is the most obvious application of clustering. You can group customers according to different factors, like age and interests, for better targeting.
- Anomaly Detection – Detecting anomalies or outliers is essential for many industries, such as healthcare.
- Image Segmentation – You use data clustering if you want to recognize a certain object in an image.
- Document Clustering – Organizing documents is effortless with document clustering.
- Bioinformatics and Gene Expression Analysis – Grouping related genes together is relatively simple with data clustering.
Challenges and Future Directions
- Scalability – One of the biggest challenges of data clustering is applying the process to ever-larger datasets. Addressing this problem is essential in a world with constantly increasing amounts of information.
- Handling High-Dimensional Data – Future systems may be able to cluster data with thousands of dimensions.
- Dealing with Noise and Outliers – Specialists hope to enhance the ability of their clustering systems to reduce noise and lessen the influence of outliers.
- Dynamic Data and Evolving Clusters – Updates can change entire clusters. Professionals will need to adapt to this environment to retain efficiency.
Elevate Your Data Mining Knowledge
There are a vast number of techniques for clustering in machine learning. From centroid-based solutions to density-focused approaches, you can take many directions when grouping data.
Mastering them is essential for any data miner, as they provide insights into crucial information. On top of that, the data science industry is expected to hit nearly $26 billion by 2026, which is why clustering will become even more prevalent.
You’ve probably seen two of the most recent popular social media trends. The first is creating and posting your personalized action figure version of yourself, complete with personalized accessories, from a yoga mat to your favorite musical instrument. There is also the Studio Ghibli trend, which creates an image of you in the style of a character from one of the animation studio’s popular films.
Both of these are possible thanks to OpenAI’s GPT-4o-powered image generator. But what are you risking when you upload a picture to generate this kind of content? More than you might imagine, according to Tom Vazdar, chair of cybersecurity at the Open Institute of Technology (OPIT), in a recent interview with Wired. Let’s take a closer look at the risks and how they tie into the broader issue of responsible artificial intelligence.
Uploading Your Image
To get a personalized image of yourself back from ChatGPT, you need to upload an actual photo, or potentially multiple images, and tell ChatGPT what you want. But in addition to using your image to generate content for you, OpenAI could also be using your willingly submitted image to help train its AI model. Vazdar, who is also CEO and AI & Cybersecurity Strategist at Riskoria and a board member for the Croatian AI Association, says that this kind of content is “a gold mine for training generative models,” but you have limited power over how that image is integrated into their training strategy.
Plus, you are uploading much more than just an image of yourself. Vazdar reminds us that we are handing over “an entire bundle of metadata.” This includes the EXIF data attached to the image, such as exactly when and where the photo was taken. And your photo may have more content in it than you imagine, with the background – including people, landmarks, and objects – also able to be tied to that time and place.
In addition to this, OpenAI also collects data about the device that you are using to engage with the platform, and, according to Vazdar, “There’s also behavioral data, such as what you typed, what kind of image you asked for, how you interacted with the interface and the frequency of those actions.”
After all that, OpenAI knows a lot about you, and soon, so could their AI model, because it is studying you.
How OpenAI Uses Your Data
OpenAI claims that they did not orchestrate these social media trends simply to get training data for their AI, and that’s almost certainly true. But they also aren’t denying that access to that freely uploaded data is a bonus. As Vazdar points out, “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”
OpenAI isn’t the only company using your data to train its AI. Meta recently updated its privacy policy to allow the company to use your personal information on Meta-related services, such as Facebook, Instagram, and WhatsApp, to train its AI. While it is possible to opt out, Meta isn’t advertising that fact or making it easy, which means that most users are sharing their data by default.
You can also control what happens with your data when using ChatGPT. Again, while not well publicized, you can use ChatGPT’s self-service tools to access, export, and delete your personal information, and opt out of having your content used to improve OpenAI’s model. Nevertheless, even if you choose these options, it is still worth it to strip data like location and time from images before uploading them and to consider the privacy of any images, including people and objects in the background, before sharing.
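As a small, hedged illustration of that advice, one way to drop EXIF metadata (including GPS location and timestamps) before sharing a photo is to copy only the pixel data into a fresh image with the Pillow library; the file names are placeholders.

    from PIL import Image   # Pillow

    original = Image.open("photo.jpg")

    # Copy only the pixel data into a new image, leaving the EXIF metadata behind
    clean = Image.new(original.mode, original.size)
    clean.putdata(list(original.getdata()))
    clean.save("photo_no_exif.jpg")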
Are Data Protection Laws Keeping Up?
OpenAI and Meta need to provide these kinds of opt-outs due to data protection laws, such as GDPR in the EU and the UK. GDPR gives you the right to access or delete your data, and the use of biometric data requires your explicit consent. However, your photo only becomes biometric data when it is processed using a specific technical measure that allows for the unique identification of an individual.
But just because ChatGPT is not using this technology doesn’t mean it can’t learn a lot about you from your images.
AI and Ethics Concerns
But you might wonder, “Isn’t it a good thing that AI is being trained using a diverse range of photos?” After all, there have been widespread reports in the past of AI models struggling to recognize black faces because they were trained mostly on white faces. Similarly, there have been reports of bias within AI due to the information it receives. Doesn’t sharing from a wide range of users help combat that? Yes, but there is so much more that could be done with that data without your knowledge or consent.
One of the biggest risks is that the data can be manipulated for marketing purposes, not just to get you to buy products but also potentially to manipulate your behavior. Take, for instance, the Cambridge Analytica scandal, which saw AI used to manipulate voters, or the proliferation of deepfakes spreading false news.
Vazdar believes that AI should be used to promote human freedom and autonomy, not threaten it. It should be something that benefits humanity in the broadest possible sense, and not just those with the power to develop and profit from AI.
Responsible Artificial Intelligence
OPIT’s Master’s in Responsible AI combines technical expertise with a focus on the ethical implications of AI, diving into questions such as this one. Focusing on real-world applications, the course considers sustainable AI, environmental impact, ethical considerations, and social responsibility.
Completed over three or four 13-week terms, it starts with a foundation in technical artificial intelligence and then moves on to advanced AI applications. Students finish with a Capstone project, which sees them apply what they have learned to real-world problems.