How To Perform A Niche Analysis Using Our SERP Crawling Tool
2. Crawling features
3. How to make a niche analysis
3.1. Data storage and extension
3.2. Analyzing crawling results
3.3. Dynamics of positions and traffic changes by domain
3.4. What happens to keywords in certain positions
3.5. Analysis of changes in the top by keyword
3.6. Comparison with competitors
3.7. What we can do with the crawling results
4. Difference between Crawling, Rank Tracking and the API "Top for a keyword"
5. Summing up
- scan the Google top
- choose any region (or several)
- select a convenient scanning schedule
- receive data using the API
- site position (according to the region and schedule you've set)
- target URL
- the number of results
- paid search results for the phrase
A report looks like this:
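In code, one row of such a report might be represented like this. The JSON shape below is purely illustrative, invented for this example; the actual Serpstat response format may differ:

```python
import json

# Hypothetical crawl result for a single phrase (illustrative field names).
raw = """
{
  "keyword": "seo tools",
  "region": "us",
  "position": 4,
  "url": "https://example.com/blog/seo-tools",
  "results_count": 128000000,
  "ads_count": 3
}
"""

row = json.loads(raw)
print(row["position"], row["url"])
```

Each crawl then yields one such row per phrase per tracked domain, which is what the analysis below is built on.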
In addition to positions, we can regularly scan the search volume of these phrases. Knowing the volume and the position of each phrase for your domain and for competitor domains, you can already predict approximate traffic, both overall and for individual categories.
You can find out which categories are in the lead, which lag behind, and, accordingly, which keywords are holding your website back.
You can see which competitors have come closest to the positions of the promoted site, and for which phrases.
You can sort the phrases and categories that have moved into the top. Based on this data, you can focus on these categories during promotion.
You can track regional search volume for keywords to understand in which regions, and on which topics, competitors are in the lead. It often happens that you lead in large cities, but there are cities where niche competitors overtake you on specific topics, and you don't notice it.
Crawling records the snippets of each site, so you can evaluate competitors' snippets, learn from their weaknesses, and improve your own.
How to make a niche analysis
Data storage and extension
Let's look at an example. We want to research the SEO niche in Google USA, and we've already collected 64 queries for it. Every day we crawled the tops, pulled search volume from the Serpstat API, and, based on position, looked up CTR statistics for Google search results. We won't go into the technical implementation of this solution, because you can process and store the JSON objects in whatever way you prefer. I used a combination of R, Google Sheets, and Google BigQuery, although Apps Script could be used instead of R.
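Whatever the stack, the core of the storage step is flattening each day's crawl into flat rows that can be appended to a spreadsheet or a BigQuery table. A minimal sketch (the data here is made up, and the field names are assumptions):

```python
import datetime

# Made-up parsed crawl for one day: keyword -> list of (position, domain).
crawl = {
    "seo tools": [(1, "moz.com"), (2, "ahrefs.com"), (3, "example.com")],
    "rank tracker": [(1, "example.com"), (2, "semrush.com")],
}

def to_rows(crawl_results: dict, day: datetime.date) -> list[dict]:
    """Flatten {keyword: [(position, domain), ...]} into table rows."""
    rows = []
    for keyword, serp in crawl_results.items():
        for position, domain in serp:
            rows.append({
                "date": day.isoformat(),  # one snapshot per day
                "keyword": keyword,
                "position": position,
                "domain": domain,
            })
    return rows

rows = to_rows(crawl, datetime.date(2021, 11, 23))
print(len(rows))  # one row per (keyword, position) pair
```

Appending one such batch per day produces the long, date-stamped table that Data Studio reports are built on.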
We'll add three columns to the final data table: CTR, Queries (search volume), and Traffic. Then it's time to open Data Studio and build interactive reports based on this table.
Analyzing crawling results
We won't dwell on particular numbers, obvious or not; instead, let's see how to work with such a report. We also won't split positions and traffic across different sheets of the report, so that all of the data fits on one screen. I made up the CTR values for this report myself; any resemblance to real data is coincidental :)
Dynamics of positions and traffic changes by domain
What happens to keywords in certain positions
Analysis of changes in the top by keyword
Comparison with competitors
Let's take the data for November 23. The total traffic is 237.3 thousand. Udemy's traffic is 2.4 thousand, which is about 1%. Search Engine Watch's traffic is 1.7 thousand, about 0.7%.
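These shares are simple ratios of domain traffic to total traffic; as a quick check:

```python
total = 237_300  # total estimated traffic for November 23
udemy = 2_400
sew = 1_700      # Search Engine Watch

print(f"Udemy: {udemy / total:.1%}")              # -> Udemy: 1.0%
print(f"Search Engine Watch: {sew / total:.1%}")  # -> Search Engine Watch: 0.7%
```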
What we can do with the crawling results
We also left out the analysis of special SERP elements and contextual advertising. With this information, you can correct CTR based on their presence, which gives more accurate traffic estimates.
Another possibility is parsing snippets. You can analyze the snippets of sites in the top 10 and extract additional information for optimizing your pages. For example, all your competitors advertise free shipping directly in the snippet, but you don't, and you lose a competitive advantage because of that.
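A simple sketch of such a check: scanning stored snippet texts for a selling point like free shipping. The snippets and domains below are made up for illustration:

```python
# Made-up snippets captured during crawling, keyed by domain.
snippets = {
    "competitor-a.com": "Wide range of SEO books. Free shipping on all orders.",
    "competitor-b.com": "Best prices and free shipping over $25.",
    "yoursite.com": "Large catalog of SEO books at great prices.",
}

FEATURE = "free shipping"

# Which domains mention the feature in their snippet, and which don't.
have = [d for d, text in snippets.items() if FEATURE in text.lower()]
missing = [d for d in snippets if d not in have]

print("mention free shipping:", have)
print("missing it:", missing)  # -> missing it: ['yoursite.com']
```

Running this across the top 10 for each phrase quickly surfaces selling points that competitors highlight and you don't.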
You can add all this information to a report in Data Studio, and it will turn into a comprehensive top-analytics system that is always convenient to have on hand. Start by collecting the data and improve the report as needed.
Difference between Crawling, Rank Tracking and the API "Top for a keyword"
What is SERP crawling?
SERP crawling is the process of collecting data from the search engine results pages.
Why should you try Serpstat SERP crawling?
Serpstat SERP crawling lets you collect large amounts of data and build regularly updated analytics for your clients in their personal accounts on your website.