Serpstat updates · 43 min read · April 15, 2022

Keyword Clustering: Algorithms and Approaches of Popular SEO Tools


Customer Support & Education Specialist at Serpstat
Keyword clustering in SEO is combining similar queries into groups and using whole groups instead of separate keywords for site optimization. It helps clean up the semantic core by dividing it into manageable groups.

Let’s dive deeper into the fundamentals of clustering, not only from the SEO point of view but also from the perspective of machine learning theory.

What is keyword clustering, and what is it for in SEO?

Keyword research is the primary task of any SEO strategy: finding the most profitable and relevant keywords to rank for. That’s why this task needs special approaches, which will save you time and money.

Your SEO content strategy can be elevated by using keyword clusters, which make your site more Google-friendly. Clustering is the process of dividing a set of objects into tight groups called clusters. Similar objects should fall into the same group, while objects in different groups should be as different as possible. Clustering helps you prioritize collected keywords and filter out irrelevant ones for better ranking.

In 2013, Google rolled out its Hummingbird update, and the algorithm started focusing on phrases instead of single keywords, which helped bring meaning to the words people were using in their queries. Later, in 2015, the RankBrain update was introduced, which was able to define the themes of search queries and find multiple similar keywords.

Building your clusters also gives you more opportunities to add internal links, increasing user engagement on your website. Internal links help Google understand which of your pages are the most important.

If your business has multiple products or areas of expertise, you will be able to build out more clusters on your website.

If you only sell one core product or service, the number of keyword clusters you identify will be fewer. Still, exploring your primary topic areas with lots of helpful content can help you outrank your rivals in less time.

One of the key tenets to doing impactful digital analysis is understanding what your visitors are trying to accomplish. One of the easiest methods to do this is by analyzing the words your visitors use to arrive on site (search keywords) and what words they are using while on the site (on-site search).

There are two major types of keyword clustering in SEO:

Lemma-based (by similarities in the meaning of keywords and morphological matches);
SERP-based (matches and similarities can be found using the search results page).
In this study, we will examine both approaches. We plan to test a few lemma-based platforms and run the same data set through three SERP-based algorithms to compare the resulting clusters; finally, the ready-made clusters will be checked using the Text analysis tool.

Why should you use automated keyword clustering?

Automated clustering, based on machine learning, saves time and is considerably more accurate than manual grouping. It takes an SEO specialist up to three days to cluster a thousand phrases by hand, with a high probability of placing a phrase in a cluster where it does not belong, or of skipping and losing keywords from the dataset. Automated clustering can do the same task in roughly three hours, depending on the number of valid keywords and the settings.

  • Clustering makes it possible to combine phrases by meaning and conduct a deeper analysis of your keyword pool.
  • By using keyword clustering, you can create a content plan that distributes the most relevant phrases across a series of pages to promote different parts of your site.
  • Clustering helps better understand user intent. Topic-focused SEO offers a more thorough response to users: putting similar phrases together, you target user intent instead of a single keyword coverage.
  • The resulting clusters will help you determine how the potential segments of your content should be connected. They also let you assess the semantic relationships between your pages in the overall site architecture.
  • Keyword clustering can allow you to create more effective landing pages to have a positive impact on generating traffic, leads, and CTR in your campaigns.
  • You can organize the structure of your website from scratch using a clustering hierarchy. Clustering helps to improve your overall website structure and UX, making it more navigable for your visitors.
  • Keywords from the same cluster can be placed on the appropriate page without risks of traffic cannibalization or mixed content on your website.
  • Non-clustered keywords can be used for other purposes.
  • Thanks to keyword grouping, you can boost your site’s visibility and authority, both for users and search engines.
  • Automated keyword clustering gives you all the above benefits quickly and efficiently.

There are different approaches to clustering methodology. For example, using "soft" clustering, some tools pick the keyword with the most significant search volume and compare the TOP search results shown for that keyword with the top results shown for the other keywords, based on the number of corresponding URLs within the search engine.
When the number of common URLs reaches the selected grouping accuracy level, the key phrases are grouped. With such approaches, clusters of keywords are linked but not necessarily related.
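To make that logic concrete, here is a small, hypothetical sketch of the "soft" comparison in Python. The inputs are assumptions for illustration: serp_urls maps each keyword to its set of top-30 URLs, volumes holds search volumes, and threshold stands in for the grouping accuracy level.

```python
# Hypothetical sketch of "soft" SERP-based grouping: take the highest-volume
# keyword as the seed and attach every keyword that shares enough top URLs.
def soft_group(serp_urls, volumes, threshold=3):
    seed = max(volumes, key=volumes.get)        # keyword with the largest search volume
    seed_urls = serp_urls[seed]
    cluster, unsorted_kws = [seed], []
    for kw, urls in serp_urls.items():
        if kw == seed:
            continue
        if len(seed_urls & urls) >= threshold:  # enough common URLs with the seed
            cluster.append(kw)
        else:
            unsorted_kws.append(kw)
    return cluster, unsorted_kws

serp_urls = {"nettop": {"a", "b", "c", "d"}, "mini pc": {"b", "c", "d"}, "mac mini": {"x", "y"}}
volumes = {"nettop": 900, "mini pc": 400, "mac mini": 250}
print(soft_group(serp_urls, volumes))   # (['nettop', 'mini pc'], ['mac mini'])
```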

There is also the "hard" clustering method, which requires a connection between all elements within the cluster. The downside of this algorithm is that it creates an excessive number of clusters that could be merged into larger ones: high-accuracy hard clustering may ignore the similarities between several groups, so semantically close keywords that end up in separate clusters could be united into one larger cluster. Intelligent hierarchical clustering combines such clusters into a supercluster.

For "broad" keywords, assignment to a group happens essentially at random when there is a collision, so in theory the same set of keywords may be split into different clusters each time clustering is restarted.

Manual clustering will require you to break each keyword into terms, define their intent, and make lists of keywords based on the parameters you need. The problem is still in phrases with different intents - this is especially true for homonyms or words with a broad meaning.

There are also many words whose intent has changed over time. Good examples of keywords whose meaning shifts depending on personalized results or country-specific SERPs are:

  • “Tesla”;
  • “Corona” (as a virus, lager beer, or software);
  • “Kafka” (as a writer and event streaming platform);
  • “Bayraktar” (as the famous tactical drones and Turkish surname, which means “Standard-bearer”).
Clustering, in this case, mainly serves the purpose of discovering underlying topics and partitioning search terms into different groups. This process works better for the exploratory scenario where topics are unknown. However, classification is a better choice with known cases or labels that you want to categorize the keywords into. Classification has a substantial advantage over clustering because it allows us to take advantage of our knowledge about the problem we are trying to solve. Instead of just letting the clustering algorithm determine what the classes should be, we can clarify what we know about the categories. The classification algorithm aims to find the most valuable models to select the classes.
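As a quick illustration of that difference (the keywords, labels, and thresholds below are invented for the example), a classifier is trained on intent categories we already know, whereas a clustering algorithm would have to invent the groups itself:

```python
# Supervised classification: the intent categories are known in advance,
# so we train on labeled keywords instead of letting clustering invent groups.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_kw = ["buy tattoo machine", "tattoo machine price",
            "how to tattoo at home", "tattoo aftercare tips"]
train_intent = ["transactional", "transactional", "informational", "informational"]

vec = TfidfVectorizer().fit(train_kw)
clf = LogisticRegression(max_iter=1000).fit(vec.transform(train_kw), train_intent)

print(clf.predict(vec.transform(["tattoo machine for sale"])))  # e.g. ['transactional']
```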

The interaction between computers and human language is studied in the field called Natural Language Processing (NLP). Artificial Intelligence (AI), in turn, is concerned with giving computers the ability to understand text and spoken words in much the same way human beings can. This knowledge helps assess how programs and computers can process and analyze large amounts of natural language data. Google made a historic shift in understanding the intent of users' searches in this area with BERT (Bidirectional Encoder Representations from Transformers, an algorithm that includes a method of pre-training language representations).

NLP combines computational linguistics with statistical, machine learning, and deep learning models. The technology became widely visible with Apple's introduction of Siri as part of the iOS operating system. Other examples of NLP at work are Alexa and Google Home devices, autocomplete in Google Search and Gmail, language translation software, spell and grammar check, spam filters, search, and chatbots.

Clustering algorithms in machine learning

Supervised learning is a machine learning approach based on labeled datasets, which are used to train models that accurately predict outcomes. With labeled inputs and outputs, the model can check its predictions for accuracy and learn incrementally. Supervised learning can be divided into two types: classification and regression.

In solving classification problems, for example, to sort spam into a separate email folder, these algorithms are used to accurately categorize test data. Linear classifiers, support vector machines, decision trees, and random forest are all common classification algorithms. Regression data models help you predict numbers based on point data, such as future sales revenue.

In the context of machine learning, clustering belongs to unsupervised learning, which infers a rule to describe hidden patterns in unlabeled data.

In unsupervised learning, machine learning algorithms are used to analyze and group raw datasets. These algorithms identify patterns in the data without human intervention. Unsupervised learning models are built to detect anomalies, improve recommendation services, predict customer behavior, etc.

Unsupervised learning models are used to perform three main tasks - clustering, association, and dimensionality reduction. Clustering is a data mining technique to group unlabeled data based on their similarities and differences. This method is suitable for market segmentation, image compression, etc. Association is an unsupervised learning method that uses certain rules to identify relationships between variables and a given set of data. These methods are often used to analyze shopping behavior, create recommendation services and select products in the "To buy with" categories. Dimensionality reduction is a technique that is used when there are too many features (or dimensions) in a certain data set. This technique is frequently used in the data preprocessing phase, to remove noise from visual data to improve image quality.

The goal of unsupervised learning is to get useful information from a huge amount of new data without corrections. In supervised learning, the algorithm "learns" by making predictions based on the training dataset and adjusting them until it gets the correct answer. Although supervised learning models are usually more accurate than unsupervised, they require direct human intervention and accurate data labeling. For example, a supervised learning model can predict how long it will take to get to work depending on the time of day, weather conditions, and so on.

Unsupervised learning requires powerful tools to deal with large amounts of unclassified data. These models independently learn the internal structure of unlabeled data. However, they still require little human intervention to validate the output variables. For example, an unsupervised learning model might reveal that online shoppers often buy groups of products at the same time, but a data scientist would need to check whether it makes sense for a recommendation service to group all of these products into one group.

There is no generally accepted classification of clustering methods, but several groups of approaches can be distinguished (some methods can be attributed to several of these loose groups at once; there are many methods, and methodologically they differ significantly):

General mathematical approaches

K-means. “K” is the number of clusters expected from the dataset, which means that before clustering you have to know the estimated number of future groups. This method can be used for recommendations and for spam or fake-news detection. Take movie clustering on Netflix with K-means as an example: given a set of films and the reviews each rater has given, the goal is to create a hundred or so groups of related movies. Each of the k initial points serves as the center of one of the k sets.

When using this algorithm, note that you have to set the approximate number of clusters in advance. You can try converting your keyword data to vectors to see how this approach works with Google ranking, as sketched below.
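Here is a minimal sketch of that idea with scikit-learn (an illustration, not Serpstat's implementation): the keyword phrases are turned into TF-IDF vectors and split into a preset number of groups.

```python
# Minimal K-means sketch: keyword phrases become TF-IDF vectors,
# then KMeans partitions them into a preset number of clusters (k).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

keywords = ["learn data science", "data science course", "data science online",
            "tattoo machine price", "buy tattoo machine"]

vectors = TfidfVectorizer().fit_transform(keywords)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(vectors)

for kw, label in zip(keywords, kmeans.labels_):
    print(label, kw)
```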

K-means visualization with the Serpstat dataset (parameters: Strength and Homogeneity, 10 clusters).
K-medians. This variant uses the median of each cluster instead of the mean as its center, which makes it more robust to outliers; for the same reason, the median is most often used to measure average income, because it represents the middle point.

Approaches based on artificial intelligence systems

  • Fuzzy clustering (C-means). Fuzzy C-means creates k clusters and then assigns each data point to every cluster, with a factor that defines how strongly the point belongs to that cluster (see the sketch after this list).
  • Kohonen networks. Kohonen networks are among the most basic self-organizing neural networks. Such self-organizing systems offer new possibilities: adaptation to previously unknown input data. This seems the most natural way of learning, similar to how our brains learn when no patterns are defined; instead, the patterns emerge during the learning process, combined with regular practice.
  • Genetic algorithm (GA-clustering). A Genetic Algorithm (GA) is a search-based optimization technique based on the principles of genetics and natural selection. It is frequently used to find optimal or near-optimal solutions to complex problems that would otherwise take a lifetime.
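A tiny hand-rolled illustration of the fuzzy membership idea (not a full C-means implementation): each point gets a degree of belonging to every cluster center rather than a single hard label. The points and centers below are made up.

```python
# Each row of `membership` sums to 1: a point belongs to every cluster
# to some degree, inversely proportional to its distance from each center.
import numpy as np

points = np.array([[0.0, 0.0], [1.0, 0.2], [5.0, 5.0]])
centers = np.array([[0.5, 0.1], [5.0, 5.0]])      # assumed, fixed cluster centers

dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9
weights = 1.0 / dist
membership = weights / weights.sum(axis=1, keepdims=True)
print(membership.round(2))
```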

Logical approach

In the logical approach, a dendrogram is constructed using a decision tree.
The dendrogram shows which keywords are most similar within their clusters: it starts with two general groups and breaks them down into smaller clusters containing the most similar keywords.

The visualization of such an approach you can find in Keyword Cupid.

Graph-theoretic approach

In graph theory, a branch of mathematics, a cluster is a graph formed from the disjoint union of complete graphs. This means that points with similar characteristics and connections can form distinct groups.

Hierarchical clustering (also called graph clustering or hierarchical cluster analysis) is a family of data ordering algorithms that create a hierarchy of nested clusters. The hierarchical approach assumes the presence of nested groups (clusters of different orders). The divisive (top-down) method separates data into groups based on some measure of similarity, determining how they're alike and different, and keeps narrowing them down; the agglomerative (bottom-up) method, on the contrary, starts from individual objects and consecutively combines them into larger groups.
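A short, hedged sketch of the bottom-up (agglomerative) variant using SciPy; the keyword list is invented, and the resulting linkage can be cut at any level to obtain nested clusters:

```python
# Agglomerative hierarchical clustering: keywords are merged step by step
# into a tree (dendrogram), which is then cut into a chosen number of clusters.
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.feature_extraction.text import TfidfVectorizer

keywords = ["buy tattoo machine", "tattoo machine price",
            "tattoo ink set", "buy tattoo ink"]
X = TfidfVectorizer().fit_transform(keywords).toarray()

Z = linkage(X, method="ward")                      # the full merge history
labels = fcluster(Z, t=2, criterion="maxclust")    # cut the tree into 2 clusters
print(dict(zip(keywords, labels)))
```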

Clustering algorithms of popular SEO tools

Serpstat

Most SEO services use the highest-search-volume keywords as central terms for clusters, relying on similar pages on the search engine results page. However, Serpstat considers such a heuristic method unsuitable, since a cluster may contain several equally high-volume keywords. We use a combination of algorithms based on a graph-theoretic approach to hierarchical clustering.

We build an adjacency matrix based on the number of common URLs. For example, the keyword "nettop" will share roughly the same number of common URLs (say, 12) with the misspelled "netope", while the keywords "nettop" and "Mac Mini" might share only 5 common URLs within the analyzed top 30 SERP results.
As a next step, we rearrange the matrix so that similar numerical values end up close together; if we then assign each value a color, we get the classic Czekanowski diagram.
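A rough sketch of that first step (the data are illustrative, not Serpstat's actual code): count the common URLs between every pair of keywords' SERPs.

```python
# Build an adjacency matrix where cell [i, j] holds the number of URLs
# shared by the top-30 SERPs of keyword i and keyword j.
import numpy as np

def adjacency_matrix(serp_urls):
    kws = list(serp_urls)
    A = np.zeros((len(kws), len(kws)), dtype=int)
    for i, ki in enumerate(kws):
        for j, kj in enumerate(kws):
            A[i, j] = len(serp_urls[ki] & serp_urls[kj])   # common URLs
    return kws, A

serp_urls = {"nettop": {"a", "b", "c"}, "netope": {"a", "b", "c"}, "mac mini": {"c", "d"}}
print(adjacency_matrix(serp_urls)[1])
```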

The list of checked URLs in the analyzed SERP, i.e., the pages competing for the cluster's keywords, is called the "metatop". The higher a page is in the metatop, the more relevant it is to the topic of the cluster.

With the Czekanowski method, the metatop functions as a centroid, i.e., a set of URLs representing the cluster. The proximity of a keyword to the cluster is then calculated as the similarity between the keyword's SERP and the analyzed metatop. We have also developed a unique iterative algorithm that finds and corrects inaccuracies in clustering.

Serpstat also has special metrics to describe the final clusters: Homogeneity and Connection Strength.

Cluster homogeneity indicates how closely the keywords in a cluster are related to each other (%). This metric is estimated from the SERPs of each keyword.

Connection Strength is the similarity between a metatop and a specific keyword’s SERP. Based on a scale of 0 to 100, it shows how close the keyword from the cluster is to the cluster's main topic.
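As a rough approximation only (Serpstat's exact formulas are not public), cluster homogeneity can be thought of as the average pairwise SERP overlap between the cluster's keywords:

```python
# Assumed approximation of Homogeneity: mean pairwise share of common URLs
# (out of the analyzed top-N results) between every two keywords in the cluster.
from itertools import combinations

def homogeneity(serp_urls, top_n=30):
    pairs = list(combinations(serp_urls.values(), 2))
    if not pairs:
        return 100.0
    return 100 * sum(len(a & b) / top_n for a, b in pairs) / len(pairs)

print(round(homogeneity({"kw1": {"a", "b", "c"}, "kw2": {"a", "b", "d"}}), 1))
```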

To check how different clustering approaches work, we will perform Text analysis in Serpstat on clusters we created using several SEO platforms with the same clustering algorithm - SERP analysis.

Text analytics in Serpstat is a tool that helps you to improve the relevance of your content based on the TOP 15 of your competitors via SERP analysis. This tool will show the occurrence of particular keywords in your text and allow you to understand if the text is overstuffed with specific phrases. Alternatively, if you didn't include some relevant keywords in your content, you will see them as recommended words in the Text analytics results. In addition, if you attached target URLs, it's also possible to check some technical issues on the page.

For Text analysis, and also to understand intent better, Serpstat uses a TF-IDF-CDF algorithm (TF: term frequency, IDF: inverse document frequency, and our own CDF: cluster document frequency). We use this approach to rank the keywords that define the topic of the entire cluster:
TF takes into account the number of occurrences of the keyword in the text;
IDF dampens uninformative keywords found in a large percentage of texts, such as stop-words;
CDF finds the most powerful keywords for each cluster.
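The CDF component is Serpstat's own, so the sketch below only approximates it, as the share of the cluster's competitor texts containing each term, on top of a standard TF-IDF. The documents are invented and the scoring is purely illustrative.

```python
# TF-IDF plus an assumed CDF-like factor: the share of the cluster's documents
# (competitor texts) that contain each term.
from sklearn.feature_extraction.text import TfidfVectorizer

cluster_docs = ["learn data science online course",
                "best data science classes and training",
                "data science course for beginners"]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(cluster_docs).toarray()
terms = vec.get_feature_names_out()

cdf = (tfidf > 0).mean(axis=0)          # share of documents containing the term
score = tfidf.mean(axis=0) * cdf        # combined weight per term
print(sorted(zip(terms, score), key=lambda x: -x[1])[:5])
```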

With Text analytics by cluster, you will get the most valuable keywords for your website's structure. Then, you can use these keywords to start a project in the Rank Tracker and observe your website's performance.

To monitor the quality of clusters done by different SEO tools, we will also compare metrics from the Text analytics, not only clustering. The same dataset and settings for clustering will be used in this experiment for three different platforms.

Cluster Army

Cluster Army is a free tool that generates keyword clusters using lemma similarities. The algorithm tries to find stems (the part of a word responsible for its lexical meaning). For example, the stem of "waiting" is "wait" – the part that is common to all inflected variants.
  • wait (infinitive)
  • wait (imperative)
  • waits (present, 3rd person, singular)
  • wait (present, other persons, or plural)
  • waited (simple past)
  • waited (past participle)
  • waiting (progressive)

A stem is a form to which affixes can be attached in one usage. Thus, in this usage, the English word friendships contains the stem friend, to which the derivational suffix -ship is attached to form a new stem, friendship, to which the inflectional suffix -s is attached. In a variant of this usage, the word's root (in the example, friend) is not counted as a stem.
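A small sketch of this lemma/stem-based grouping with NLTK's Porter stemmer (the keyword list is invented): phrases that reduce to the same set of stems end up in one group.

```python
# Group keywords whose words reduce to the same (order-insensitive) set of stems.
from collections import defaultdict
from nltk.stem import PorterStemmer   # pip install nltk

stemmer = PorterStemmer()
keywords = ["data science courses", "course data science",
            "data science class", "data science classes"]

groups = defaultdict(list)
for kw in keywords:
    key = " ".join(sorted(stemmer.stem(w) for w in kw.split()))
    groups[key].append(kw)

print(dict(groups))
```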

Cluster Army performs a 7-step process:
  1. To explore the imported list.
  2. To find the distribution (search volume) of each single stem; if the option is selected, stop-words are removed.
  3. To find the distribution of each single term; if the option is selected, stop-words are removed.
  4. To find the distribution of all term pairs, stop-words are not excluded.
  5. To find the distribution of all triples, stop-words are not excluded.
  6. To create the table with the first keyword associated with the high-volume stem, single, double and triple terms.
  7. Finally, the tool generates a chart for each distribution that you can use in your project.
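Steps 2 to 5 above amount to frequency counting; a rough sketch (stop-word handling omitted, data invented) could look like this:

```python
# Count how often single terms, term pairs, and term triples occur
# across the keyword list (roughly what steps 2-5 describe).
from collections import Counter
from itertools import combinations

keywords = ["data science course online", "online data science class",
            "best data science course"]

singles, pairs, triples = Counter(), Counter(), Counter()
for kw in keywords:
    terms = kw.split()
    singles.update(terms)
    pairs.update(" ".join(sorted(p)) for p in combinations(terms, 2))
    triples.update(" ".join(sorted(t)) for t in combinations(terms, 3))

print(singles.most_common(3))
print(pairs.most_common(3))
print(triples.most_common(3))
```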

SpyFu

Another example of clustering based on word similarities (lemma-based) can be found in SpyFu. This approach is most helpful for creating topic clusters: collections of interlinked articles related to one primary topic. Also known as pillar content, the main topic covers a broad subject, like digital marketing.

SpyFu's clustering tool works in this way:
  • You can paste your branded and long-tail keywords, and SpyFu will add data to them to give you the complete picture for your research.
  • Then, sort your keywords to see how the new data affect your priorities, or filter them down using the automatic groups.
  • Finally, export your new keyword list to other platforms or conveniently add it to the built-in tool inside SpyFu.

Contadu

This is a basic keyword grouping tool based on common text processing algorithms with a very basic method of text analysis.

The topic analysis process in Contadu involves a few steps:
  1. To collect keywords ideas based on your input.
  2. To check search volume, trends, CPC, and competition values.
  3. To collect search results for all keyword ideas.
  4. To build the similarity matrix between the keywords.
  5. To start clustering based on similarity levels.
Umbrellum

Umbrellum is an example of adaptive hierarchical clustering.

An unsupervised clustering mechanism is required to generate a self-organizing hierarchical structure for classification. Hence, Umbrella’s algorithm is based on spectral clustering, which identifies the structure of the data set and clusters them according to the degree of affinity.

To do so, Umbrellum uses clustering with Levenshtein distance — a string metric for measuring the difference between two sequences. The Levenshtein distance between two words is the minimum number of single-character edits required to change one word into the other.

Technically, it is a number that tells you how different two strings are. The higher the number, the more diverse the two strings are.
The typical illustration of the Levenshtein distance is the distinction between “kitten” and “sitting.” The answer is 3 since, at a minimum, 3 edits are required to change one into the other.
kitten → sitten (substitution of “s” for “k”)
sitten → sittin (substitution of “i” for “e”)
sittin → sitting (insertion of “g” at the end).
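For reference, here is a compact dynamic-programming implementation of the Levenshtein distance (in practice a library such as python-Levenshtein would be used for speed):

```python
# Classic DP over a rolling row: prev[j] holds the distance between
# the first i-1 characters of `a` and the first j characters of `b`.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))   # -> 3
```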
Results
An automatic keyword clustering service allows lemma-based grouping of up to 10,000 words or phrases in 5 seconds. The service works in two steps: first, a primary group of expressions will be created, then words will be grouped by similarity.
Results
The keyword grouping tool automatically gathers the questions, keywords, and suggestions from our in-depth research and groups them along with similar themes.

You can see the range of long-tail keywords each group represents and the total potential search volume for the queries you might rank for.

Each group can be sent to a content planner, where you can start to build an outline of the articles you plan to write, build up a cluster of relevant articles, boosting your topical authority.
Results
When working with lemma-based clustering services and generating your keyword list, consider the importance of relevance and search intent.

You only want to include keywords that will bring the right kinds of searchers to your website, those who are interested in the products or services you offer and are likely to convert.

Here are the criteria you should use to group these keywords into clusters:
  • Semantic Relevance. The keywords in your clusters must share similar search intent.
  • Search Volume and CPC. The core keywords in your clusters should have a reasonable search volume (otherwise, you optimize for nobody). This keyword pool should also have conversion potential (CPC).
  • Organic Difficulty (KD). Include only keywords your site has a realistic chance of ranking for.
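A hypothetical filter applying those three criteria might look like the sketch below; the thresholds and sample rows are invented and should be tuned to your own niche and site strength.

```python
# Keep only keywords that pass volume, CPC, and difficulty thresholds
# before sending them to clustering.
rows = [
    {"keyword": "data science course", "volume": 12000, "cpc": 4.1, "kd": 62},
    {"keyword": "buy tattoo ink",      "volume": 900,   "cpc": 1.3, "kd": 18},
]

def worth_clustering(row, min_volume=500, min_cpc=0.5, max_kd=40):
    return (row["volume"] >= min_volume
            and row["cpc"] >= min_cpc
            and row["kd"] <= max_kd)

shortlist = [r["keyword"] for r in rows if worth_clustering(r)]
print(shortlist)   # -> ['buy tattoo ink']
```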

In contrast to lemma-based keyword grouping, SERP-based keyword clustering produces groups of keywords that yield no morphological matches but conform to the search results. Using it, SEOs can create keyword structures that match what search engines require.

The primary research will mainly concentrate on the SERP-based algorithms of Serpstat, Keyword Cupid, and Spy SERP.

How to collect data for your project in Clustering using Serpstat?

To sort keywords into topic-centered groups, you need to collect a full list of keywords first. Collecting as many keywords as possible is your first fundamental task in the process of building and promoting your website. The initial research process will help you explore what users are searching for in your niche and how your competitors handle keywords.

There’s an array of free and paid tools that will help you find keywords for your website. Use Keyword Trends to track the top search queries and current tendencies in search results. While doing your research, note that there are different types of search queries: you can distinguish them by length and specificity (use long-tail keywords) and by user intent (navigational, informational, transactional). These differences will give you an idea of how intense the competition is for particular keywords.

The most credible reports you can use to expand your keywords list:
  • Keywords selection (organic keywords associated with the searched keyword);
  • Related keywords (all search queries that are semantically related to the searched keyword);
  • Search suggestions and Search questions (queries offered to users under the search bar, which complement and facilitate the wording of the original query; questions that include a selected keyword that users are looking for an answer to).
Make sure to include the keyword difficulty, search volume, and cost-per-click metrics of the keywords in your list. These metrics will help you prioritize which keywords have the most economic value and should be the “core” keywords in your clusters.

Using Serpstat, we have started our project with our data set. The following are step-by-step instructions.

The basics of clustering setup explained

We will use the domain towardsdatascience.com to assign pages to ready-made clusters inside the primary project.
The region and search engine we chose for analysis (Google/US/California/East Palo Alto):
Strength refers to the number of common pages in the top 30 search results for the analyzed keywords: "Weak" requires fewer common pages, and "Strong" requires more.
"Medium" requires at least 8 common URLs in the SERP between two keywords. If we apply "Weak", there will be many clusters containing keywords related by intent, with at least 3 common URLs. With "Strong", we need 12 common URLs, which leads to many unsorted keywords because of the stricter grouping requirements.

Cluster type: Soft means we do not need common URLs between ALL keywords to create a cluster.
If you choose Hard, you will get fewer groups, and most clusters will remain undefined (unsorted) because of a lack of common URLs.
We chose Medium strength and the Soft cluster type for this research; the two other services were analyzed with similar settings.
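The difference between the Soft and Hard types, combined with the Strength threshold, can be sketched as a simple membership test (illustrative names, not Serpstat's code):

```python
# Soft: a candidate needs enough common URLs with AT LEAST ONE keyword in the cluster.
# Hard: it needs enough common URLs with EVERY keyword in the cluster.
def can_join(candidate_urls, cluster_serps, strength=8, hard=False):
    overlaps = [len(candidate_urls & urls) for urls in cluster_serps]
    if hard:
        return all(o >= strength for o in overlaps)
    return any(o >= strength for o in overlaps)
```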
I added more keywords to my project from Domain analysis before I started automated clustering. You can also add them later and restart your project with new keywords.
We added 1K more keywords and skipped the invalid ones, which might otherwise affect the results.
You are able to cluster up to 50 K keywords in one project. Having separate groups of keywords can maximize the number of phrases your content ranks for. Furthermore, you can unlock more terms to include in your content when you decide which group corresponds to which page and section of your website. Finally, exploring the list of keywords within a group will allow you to find some new concepts and intents to cover. This way, you’ll make your content more trustworthy and on par with the user’s expectations.

The duration of the clustering procedure in Serpstat is from a few minutes to a few hours, depending on the number of keywords in your project.

The visualization of the clustering we got in Serpstat, by Homogeneity, Group size, and Group name, looks like this:
The next SERP-based clustering platform we checked is Spy SERP. Keyword clustering in Spy SERP operates in a few steps:
  • You have to start a project from scratch in the Rank Tracker to get SERP data. According to your settings, the tool collects data, and matches the web pages from SERPs to every keyword (from top 3 to top 100).
  • If the same pages appear for different keywords several times, the bot groups those keywords together. You can also set a clustering power (a minimum number of matches): the lower this value, the more clusters will be created. Keywords with no SERP matches are grouped separately.

There are different types of keyword clustering out there. Soft keyword clustering groups keywords around multiple clusters, taking into account keyword popularity. In another method, moderate keyword clustering, the SERPs are compared with each other; this way of grouping keywords around multiple groups is based on keyword relevance.
The last platform inspected in our research is Keyword Cupid, which we mentioned above. Keyword Cupid analyzes the first 5-10 pages of Google (depending on the niche and the number of total results) to create a multi-dimensional plane that unravels how closely related these keywords are. Naturally, a match on the 1st page carries more weight than a match on the 5th page.

The approach is unique because it uses two kinds of neural networks. The first neural network focuses on grouping the uploaded keywords into very tightly themed clusters to ensure that at least the first-level clusters are correct, even if this neglects some keyword relationships that would otherwise make sense.

The second neural network focuses on grouping the clusters created and uses a set of simple rules in its neurons to allow for more "flexible relationships".

Keyword Cupid doesn't use NLP algorithms, TF-IDF, or other relevancy scores to group keywords together. It also doesn't report the number of SERP matches indicating a solid relationship within the cluster. If Google comes up with the next version of its language model and becomes more competent, the results will improve.

The cluster name serves as the label of the resulting group. In other words, it is used as a model of the themes inside the node (the unit or topic).

To start the analysis of your dataset, you have to prepare a file with KD, CPC, and search volume. I did it via Serpstat Batch Analysis.

To check the quality of the created clusters, we chose one cluster at random that was common across the different tools and compared them in Text analysis.



In Serpstat it was “Data science learning” (13 keywords):
  1. data science learn
  2. learn data science
  3. learning data science
  4. data science learning
  5. data science training
  6. data science course
  7. data sciences courses
  8. data science courses
  9. data science classes
  10. course data science
  11. data science class
  12. online data science
  13. data science online
Homogeneity: 72%
In Keyword Cupid - “Data sciences courses” (10 keywords):
  1. 365 data science
  2. course data science
  3. data science class
  4. data science classes
  5. data science course
  6. data science courses
  7. data science online
  8. data science training
  9. data sciences courses
  10. online data science
In Spy SERP - “Data science courses” (8 keywords):
  1. course data science
  2. data science class
  3. data science classes
  4. data science courses
  5. data science online
  6. data sciences courses
  7. data science training
  8. online data science
The next step is to conduct Text analysis and compare the results.
The Text analysis started. It takes up to a few hours to check the TOP 15 URLs in SERP and calculate results, depending on the number of keywords. We are going to compare the valuable metrics inside this functionality and explore text relevance by each cluster we got in different tools.

Proximity level indicates the strength of a keyword’s relation to other keywords in the group based on their topics.

Relevance is the keyword and target page subject match. According to the TF-IDF formula, we consider the importance of each keyword in competitors' titles in the metatop. After that, we display the average value for each keyword.

LSI Rank, % - the importance of the keyword for Title/H1/Body in the context of the analyzed subject. This metric is calculated as the ratio of a keyword to a set of keywords used in the text of competitors.

Chance, % - shows how many competitors use this keyword (its popularity) and how important it is to include it: the higher the score, the more necessary the keyword.

The exact match metric (%) shows how often competitors use the exactly matching keyword.

Text analysis result example, Serpstat:
We have added the ability to create a new cluster in one of the latest updates, so let’s do it for two more platforms!

To add a new cluster, use the button:
After the first analysis finished, I moved the needed keywords to these clusters to start Text analysis with a new dataset.
First, the keywords were moved to a new cluster for the Keyword Cupid dataset. The keyword “365 data science” was in the “Unsorted” cluster in Serpstat, while Keyword Cupid grouped it into the main cluster.
We started Text analysis with the 10 keywords from Keyword Cupid.
Serpstat found 8 of the 10 keywords in the TOP 15.

Results
According to this result, the word “365 data science” isn’t relevant inside this cluster and is not worth using in the content.

Then, we moved the remaining 8 keywords to the Spy SERP cluster.
The platform didn't use two keywords from the Keyword Cupid dataset: “365 data science” and “data science course”. The latter keyword was in the “not clustered” group, according to the Spy SERP report.

As a result:
This dataset didn't contain the highly relevant keyword "data science course" and included the less suitable "online data science."
Table:

Features of Serpstat Clustering

SERP-based algorithms give you more options to provide clustering transparently, using common URLs as parameters. Serpstat quickly clusters keywords into groups, analyzes your published content, and creates keyword groups relevant to that page. You can see the SERP metatop used during clustering in the built group by clicking the same-named button.
Your target URLs will be colored green.
In this way, you can also track the position of the analyzed page in the region. Unlike many other SEO tools, Serpstat will collect data similar to the SERP Crawling Service outcome for grouping in the main clustering flow, and it will run in the background.
With Text analytics, you can check errors in meta tags with the current target URL. If the error in the list is colored gray, it means that it wasn't detected on the page.
Further, you can double-check this issue within our extension, Serpstat SEO Checker:
Using this tool, you can see overstuffed and “not included” keywords to improve the relevance of your content.

Use-case from author’s practice

I have my own example of using Serpstat to start a new website. During SEO courses run by Netpeak agency specialists, I created an MVP: an online shop selling tattoo equipment. My first step was to create the structure for the future site using Serpstat clustering and to prepare the technical assignment according to the pages needed.

What goals have I set?

By clustering my keyword pool, I wanted to create the most suitable structure for an online shop selling tattoo equipment.

This structure should be followed to ensure that content ranks well in Google US by focusing on high-volume and relevant keywords.

How exactly does Serpstat help to achieve the goals?

  1. I started my keyword research in Serpstat using Domain and Keyword analysis. While collecting the keyword list, I checked benchmarks in the tattoo business to find my rivals' keywords.
  2. I chose Google US, California, as the region for Clustering, with the Hard/Strong settings.
  3. After exporting my clustering results and working with filtering and sorting, I got the prototype of my future website structure.
  4. My next move was to create a Mind Map.
  5. Then I focused on the technical requirements.
  6. The categories and content were created. I opened my website for indexing, used Serpstat to assign specific keywords from the clusters to the existing pages, checked the relevance in Text analytics, and worked with Google Analytics and Tag Manager.

Results

Within a week, my website appeared in Serpstat Domain analysis for 5 keywords in the database, ranking between positions 22 and 85 on Google US!

I got the result I expected, and even more, just from prototyping a website for fun.

FAQ

How to create a keyword cluster?

To create a keyword cluster, sort your keywords by similarity, and group all related keywords or use Serpstat Keyword Clustering Tool.

What is keyword grouping?

Keyword grouping is a process of creating groups with related keywords.

How to group keywords?

Sort keywords by similarity and group related keywords in one cluster or use Serpstat Keyword Clustering Tool.

Conclusions

When you apply keyword clustering to your landing pages, you show Google that your website is an authority in your industry and demonstrate substantial breadth and depth of content. You also send search engines signals about rich content clusters, which helps their algorithms identify and promote your pages in search results. Clustering makes your website both more Google-friendly and more user-friendly.

In addition to increasing organic traffic through useful and well-structured content, clustering can also help you organize internal linking more efficiently and expand the keyword pool in your niche more easily.

Keyword Clustering is the easiest way to optimize one page for multiple keywords with the same search intent. For quick and easy work with the site's keyword pool, you can group up to 50K keywords into clusters using Serpstat. With keyword clustering, you will be able to conduct more accurate keyword research and save time.

Keyword clustering assists in planning and optimizing content that targets several related keywords on one web page. This grouping is based on matching search intent and relies on machine learning algorithms.

Using text analytics, you can improve the relevance of your content on top of the clusters and take into account the top 15 results in the region.

We investigated the clustering processes in the context of machine learning and compared the results using Text analytics. With similar SERP-based algorithms, we evaluated the relevance of each cluster created by the same dataset to improve your content.

Keyword clustering requires site owners to think bigger about their content.

The opinion of the guest post authors may not coincide with the opinion of the Serpstat editorial staff and specialists.
