SEO · 14 min read · November 28, 2018

How To Make A Niche Analysis With Our Crawling Tool

Alex Danilin
Lead Product Manager at Serpstat
Do you need to monitor your website's positions for hundreds of thousands of keywords? We have a solution: our SERP Crawling tool lets you build your own Serpstat inside your website! Are you intrigued?

Crawling overview

SERP Crawling is a service that allows you to:

  • scan the top of Google's search results
  • choose any region (or several)
  • select a convenient scanning schedule
  • receive data using the API

Search result elements that can be received via SERP parsing:

  • site position (according to the region and schedule you've set)
  • domain
  • target URL
  • title
  • description
  • the number of results
  • paid search results for the phrase
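To make the field list concrete, here is a hypothetical example of what a single result object could look like; the field names are illustrative assumptions for this sketch, not the documented API schema:

```json
{
  "keyword": "seo courses",
  "region": "g_us",
  "date": "2018-11-23",
  "position": 3,
  "domain": "udemy.com",
  "url": "https://www.udemy.com/topic/seo/",
  "title": "Top SEO Courses Online",
  "description": "Learn SEO from top-rated instructors...",
  "results_count": 128000000,
  "ads": []
}
```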

The process is carried out in the following way:

1. Visit our page and leave us a request.
2. We prepare a special offer for you based on the number of keywords you want to monitor: just enter the number of keywords you want to check and the databases you want to check them in. Note that the minimum for this feature is 500,000 keywords.
3. You can get data for any database in any language. Your queries should not contain search operators.
4. Data on the top results is gathered with a dedicated API method that is not available with regular API access.
5. Results are available for 24 hours; after that they are removed from our database.
6. Data is provided in JSON format.

You can store the parsed data in a database (MySQL, BigQuery, etc.) and, optionally, build reports and graphs on top of it in Data Studio.
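As a minimal sketch of steps 5 and 6 — assuming the API hands you one JSON object per line and a BigQuery table with a matching schema already exists — loading a day's results might look like this (the project, dataset, and file names are made up for the example):

```python
import json
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()
table_id = "my-project.serp_crawl.results"  # hypothetical dataset and table

# Assume results.jsonl holds one JSON object per line, as returned by the API.
with open("results.jsonl") as f:
    rows = [json.loads(line) for line in f]

# Stream the rows into BigQuery; the table schema must already match the objects.
errors = client.insert_rows_json(table_id, rows)
if errors:
    print("Some rows failed to load:", errors)
```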

Crawling features

1. Distribution of traffic share by market (one of the most popular use cases): in addition to positions, we can regularly scan the search volume of these phrases. Knowing a phrase's volume and its position for your domain and your competitors' domains, you can estimate approximate traffic, both overall and for individual categories (see the sketch after this list).
2. Tracking the dynamics of site positions over the month: you can find out which categories are in the lead, which are falling behind and, accordingly, which keywords are holding your website back.
3. Identifying the leaders in a category: which competitors have come closest to the positions of the promoted site, and for which phrases.
4. Detecting phrases that dropped out of the top, both for the website being promoted and for a competitor's.
5. Detecting phrases that entered the top: sorting the phrases and categories that have moved into the top. Based on this data, you can give these categories extra attention in your promotion.
6. Tracking regional search results: you can see in which regions competitors are in the lead and on which topics. It often happens that you are the leader in large cities, but there are cities where niche competitors overtake you on specific topics without you noticing.
7. Regular monitoring of paid results: it shows when a competitor's budget runs out, so you can work out the most convenient days for placing your own ads, as well as who the competitors are and on which days their ads run.
8. Tracking snippets: crawling captures the snippets of each site, so you can evaluate competitors' snippets, learn their weaknesses and improve your own.
9. Combining Google Analytics and Serpstat data in one report.
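The traffic estimate mentioned in the first feature boils down to multiplying a phrase's search volume by the expected CTR of the position it occupies. A minimal sketch, with a made-up CTR curve that you would replace with your own data:

```python
# Assumed CTR-by-position curve (placeholder values, not measured data).
CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                   6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018}

def estimated_traffic(volume: int, position: int) -> float:
    """Traffic estimate = search volume x CTR for that position."""
    return volume * CTR_BY_POSITION.get(position, 0.0)

# Example: a phrase with 5,400 searches a month ranking at position 3.
print(estimated_traffic(5400, 3))  # 540.0
```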

How to make a niche analysis

Imagine that you want to continuously analyze your site's niche from an SEO perspective: watch what is happening in the top and factor its changes into your promotion strategy. Let's see how crawling helps with this.

Data storage and enrichment

During crawling, the Serpstat API returns JSON objects, which are easily converted to a tabular view. For example, the parsing results can be loaded into BigQuery and analyzed there:
For a complete picture, we still need CTR and keyword volume. With this data added, you can calculate daily traffic for each domain in the top, or just the total traffic if you don't need that level of detail.

Let's take a real example. We want to analyze the SEO niche in Google USA, and we have already collected 64 queries for it. Every day we parsed the tops, received search volume from the Serpstat API, and used CTR data for Google search results by position. We won't go into the details of the technical implementation, because you are free to choose your own way of processing and storing JSON objects. I used the R + Google Sheets + Google BigQuery bundle, but you can do without R by using Apps Script.

We'll add three columns to the final data table — CTR, Queries (search volume), and Traffic. Then it's time to go to Data Studio and build interactive reports based on this table.
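For instance, with pandas the three columns could be joined in like this; the file names and column names here are assumptions for the sketch:

```python
import pandas as pd

# One row per (date, keyword, domain, position) from the daily crawls.
positions = pd.read_json("crawl_results.json")

volume = pd.read_csv("volume.csv")        # columns: keyword, queries
ctr = pd.read_csv("ctr_by_position.csv")  # columns: position, ctr

df = (positions
      .merge(volume, on="keyword", how="left")
      .merge(ctr, on="position", how="left"))
df["traffic"] = df["queries"] * df["ctr"]  # estimated visits per phrase
```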

Analyzing crawling results

In Data Studio, you connect the data source and within 15 minutes assemble a report on traffic and positions:
In the upper part of the report, the left graph shows the average position by day and the right graph the average position by domain, with several useful numbers (average, minimum, and maximum position) shown separately. At the bottom, the left graph shows total traffic by day and the right graph total traffic by domain.

We won't dwell on particular numbers, obvious or not; instead, let's see how to work with such a report. We also won't split positions and traffic into separate report sheets — this way all the data fits on one screen. I made up the CTR values for this report myself; any resemblance to real figures is coincidental :)

Dynamics of positions and traffic changes by domain

We choose the domain we need (let it be udemy.com — we'll deal with the SEO courses area) and see how the average position and traffic change over the selected period. In the example below, with a relatively small change in the average position, we see noticeable changes in traffic. The reason for the drop in traffic may be a drawdown in search volume on these phrases.

What happens to keywords in certain positions

Let's limit our report to the top 10. Here we see the same picture — the average position has risen while traffic has fallen. This can be explained by keywords that brought a fairly large amount of traffic dropping out of the top 10.
We also see that most of udemy.com's traffic comes from keywords in the top 10, so any position changes in this range strongly influence traffic.

Analysis of changes in the top by phrase

The phrase-selection drop-down list shows all the phrases in a given top. To enable this, you can build a separate Phrase — Date — Position table. You could also create tables with lists of phrases per domain for different dates to compare changes in the top, or start storing position changes across the domain in the report's data source.
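Continuing the pandas sketch above, the Phrase — Date — Position table for a single domain is one pivot away:

```python
# Rows are phrases, columns are dates, values are the best position that day.
udemy = df[df["domain"] == "udemy.com"]
phrase_by_date = udemy.pivot_table(index="keyword", columns="date",
                                   values="position", aggfunc="min")
```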
Let's take the keyword "SEO trainings" that brings traffic to Udemy and see what happens there:
The Positions report was not very informative, but looking at Traffic we can see that Udemy is the overall winner and its main competitor is Search Engine Watch. Let's compare these two domains.
Here the situation looks strange: with a worsening average position, Udemy's traffic has risen, while for Search Engine Watch the two indicators move together. This is explained by how the report is built: Udemy began to appear in additional positions as well, which pulled the average position down, and since I count traffic simply as the sum of traffic across all positions, it grew.

Comparison with competitors

Let's compare Udemy and Search Engine Watch across all queries. We see that with similar average positions, the two sites' traffic differs greatly.
For example, you can look at the traffic for each day right on the graph:
For each of the sites, you can see its share of traffic in the niche. You can create a separate report for this if you need to check the metric regularly; here, we'll just do a simple calculation.

We take the data for November 23. The total traffic is 237.3 thousand. Udemy's traffic is 2.4 thousand, which is about 1% (2.4 / 237.3 ≈ 0.010). Search Engine Watch's traffic is 1.7 thousand, which is about 0.7% (1.7 / 237.3 ≈ 0.007).

What we can do with the crawling results

In our example, we received data for only one region. If you parse the tops in different regions, you can compare a domain's performance across regions, or monitor how the results for a query change depending on the region.

We also didn't add the analysis of special SERP elements and contextual advertising. With this data, you can apply corrections to CTR depending on their presence, which gives you more accurate traffic estimates (see the sketch below).
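One hedged way to model such corrections is to discount the organic CTR when ads or special elements appear above a result; the penalty factors below are placeholders, not measured values:

```python
def corrected_ctr(base_ctr: float, ads_above: int, has_featured_snippet: bool) -> float:
    """Discount organic CTR for elements shown above the result (placeholder factors)."""
    ctr = base_ctr * (0.85 ** ads_above)  # assume each ad above costs ~15% of clicks
    if has_featured_snippet:
        ctr *= 0.7                        # assume a featured snippet costs ~30%
    return ctr
```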

Another possibility is parsing snippets. You can analyze the snippets of the sites in the top 10 and pull additional ideas from them for optimizing your pages. For example, if all your competitors offer free shipping right in the snippet and you don't, you are losing a competitive advantage (see the sketch below).
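Continuing the same pandas sketch (and assuming the crawl's description field was kept in the table), spotting which top-10 domains mention free shipping in their snippets could look like this:

```python
# Which domains in the top 10 mention free shipping in their snippets?
top10 = df[df["position"] <= 10]
hits = top10[top10["description"].str.contains("free shipping", case=False, na=False)]
print(hits["domain"].unique())
```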

You can add all this information to a Data Studio report and turn it into a comprehensive top-analysis system, which is always convenient to have on hand. Start by collecting the data and improve the report as needed.

Difference between Crawling, Rank Tracking and the API "Top for a keyword"

Here we collected the key differences between our popular tools:

Summing up

1. By working with Crawling, you can build a little Serpstat inside your site. You don't need to go to Serpstat to download data; you can build your own tool, based on the data we provide, in any form you like.
2. You can build regularly updated analytics for clients in their personal accounts on your website. That is, clients log into their account on the website of their SEO agency, and there are millions of graphs on their project.
3. Serpstat Crawling is used by well-known marketplaces, aggregators, and large in-house teams. You can easily follow their example :)
Do you have any questions? Ask them in the comments!
