Crawled page count in Search Console
-
Hi Guys,
I'm working on a project (premium-hookahs.nl) where I've stumbled upon a situation I can't explain. Attached is a screenshot of the crawled pages in Search Console.
History:
Due to technical difficulties, this webshop didn't always noindex filter pages, resulting in thousands of duplicate pages. In reality this webshop has fewer than 1,000 individual pages. We took the following steps to resolve this:
- Noindex the filter pages.
- Exclude those filter pages in Search Console and robots.txt.
- Canonical the filter pages to the relevant category pages.
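For reference, the three measures would look roughly like the snippet below. The paths and patterns are placeholders, not the shop's real URLs:

```text
# robots.txt – block crawling of filter URLs (patterns are illustrative)
User-agent: *
Disallow: /*?color=
Disallow: /*?size=

<!-- in the <head> of each filter page -->
<meta name="robots" content="noindex">
<link rel="canonical" href="https://premium-hookahs.nl/hookahs/">
```

One caveat: a page blocked in robots.txt can never have its noindex or canonical tag seen, because Google isn't allowed to fetch the page at all, so combining all three measures can work against itself.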
This, however, didn't result in Google crawling fewer pages. Although the implementation wasn't always sound (technical problems during updates), I'm sure this setup has been stable for the last two weeks. I expected the number of crawled pages to drop, but it's still sky high. I can't imagine Google visits this site 40 times a day.
To complicate the situation:
We're running an experiment to gain positions on around 250 long-tail searches. A few filters will be indexed (size, color, number of hoses, and flavors), and three of them can be combined. This results in around 250 extra pages. Meta titles, descriptions, H1s, and texts are unique as well.
Questions:
- Excluding pages in robots.txt should result in Google not crawling them, right?
- Is this number of crawled pages normal for a website with around 1,000 unique pages?
- What am I missing?
-
Ben,
I doubt that crawlers access the robots.txt file for each request, but they still have to validate every URL they find against the list of blocked ones.
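Don's point can be illustrated with Python's built-in robots.txt parser: the rules are parsed once, and every discovered URL is then validated against the cached rules in memory rather than against a fresh robots.txt request (the paths below are made up):

```python
from urllib.robotparser import RobotFileParser

# Parse robots.txt once (here from literal lines instead of a fetch).
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /filter/",
])

# Each discovered URL is checked against the cached rules in memory.
print(rp.can_fetch("*", "https://premium-hookahs.nl/filter/color-red"))  # False
print(rp.can_fetch("*", "https://premium-hookahs.nl/hookahs"))           # True
```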
Glad to help,
Don
-
Hi Don,
Thanks for the clear explanation. I always thought disallow in robots.txt gave Google a sort of map at the start of a site crawl, listing the pages that shouldn't be crawled, so it wouldn't have to "check the locked cars."
If I understand you correctly, Google checks the robots.txt rules for every single URL it encounters?
That could definitely explain the high number of crawled pages per day.
Thanks a lot!
-
Hi Bob,
About the nofollow vs. blocked question: in the end I suppose you get the same result, but in practice it works a little differently. When you nofollow a link, it tells the crawler, the moment it encounters the link, not to request or follow that URL. When you block it via robots.txt, the crawler still has to consider the URL, only to find it isn't accessible.
Imagine I said: go to the parking lot and collect all the loose change in all the unlocked cars. Now imagine how much easier that task would be if all the locked cars had a sign in the window that said "Locked": you could easily ignore the locked cars and go directly to the unlocked ones. Without the sign you would have to physically check each car to see if it will open.
About link juice: if you have a link, juice will be passed regardless of the type of link. (You used to be able to use nofollow to preserve link juice, but no longer.) This is a bit unfortunate for sites that use search filters, because filters are such a valuable tool for users.
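The analogy maps onto crawler behavior roughly like this sketch (purely illustrative, not how any real crawler is implemented): nofollowed links are skipped before any lookup, while robots-blocked URLs still cost a rules check each.

```python
# Hypothetical link graph: (url, has_rel_nofollow)
links = [
    ("/hookahs", False),
    ("/filter/color-red", False),  # blocked in robots.txt
    ("/filter/size-xl", True),     # marked rel="nofollow"
]

blocked = {"/filter/color-red"}  # parsed robots.txt rules, cached

robots_checks = 0
fetched = []
for url, nofollow in links:
    if nofollow:
        continue  # the "sign in the window": skipped without any check
    robots_checks += 1  # walk over and try the door
    if url in blocked:
        continue  # locked: checked, but never fetched
    fetched.append(url)

print(robots_checks)  # 2 -- blocked URLs still cost a check
print(fetched)        # ['/hookahs']
```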
Don
-
Hi Don,
You're right about the sitemap; I've added it to the to-do list!
Your point about nofollow is interesting. Doesn't excluding in robots.txt give the same result?
Before we went ahead with the robots.txt, we didn't implement nofollow because we didn't want any link juice to drain away. Since we added the robots.txt rules, I assume this doesn't matter anymore, since Google won't crawl those pages anyway.
Best regards,
Bob
-
Hi Bob,
You can "suggest" a crawl rate to Google by logging into Google Webmaster Tools and adjusting it there.
As for indexing pages: I looked at your robots.txt and your site. It really looks like you need to apply nofollow to some of your internal links, specifically the product-page filters; that alone could reduce the total number of URLs the crawler even attempts to look at.
Additionally, your sitemap (http://premium-hookahs.nl/sitemap.xml) shows a change frequency of daily, and it should probably be split between pages and images, so you end up with two sitemaps: one for images and one for pages. You may also want to review what is in there: using Screaming Frog (free), the sitemap I made (link) only shows about 100 URLs.
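A split like Don describes is usually done with a sitemap index that references one child sitemap per content type; the filenames below are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://premium-hookahs.nl/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://premium-hookahs.nl/sitemap-images.xml</loc>
  </sitemap>
</sitemapindex>
```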
Hope it helps,
Don
-
Hi Don,
Just wanted to add a quick note: your input made me go through the indexation state of the website again, and it was worse than I thought. I will take some steps to get this resolved, thanks!
Would love to hear your input about the number of crawled pages.
Best regards,
Bob
-
Hello Don,
Thanks for your advice. What would your advice be if the main goal were reducing the number of crawled pages per day? I think we have the right pages in the index, and the old duplicates are mostly deindexed. At this point I'm mostly worried about Google spending its crawl budget on the right pages. Somehow it still crawls 40,000 pages per day, while we only have around 1,000 pages that should be crawled. Looking at the current setup (with almost everything excluded through robots.txt), I can't think of pages it could crawl to reach the 40k. And 40 full passes over the site per day sounds like way too many for a normal webshop.
Hope to hear from you!
-
Hello Bob,
Here is some food for thought. If you disallow a page in robots.txt, Google will not crawl that page. That does not, however, mean they will remove it from the index if it had previously been crawled. Google simply treats the page as inaccessible and moves on. It can take some time, months even, before Google finally says, "We have no fresh crawls of page X; it's time to remove it from the index."
On the other hand, if you specifically allow Google to crawl those pages and serve a noindex tag on them, Google now has a new directive it can act upon immediately.
So my evaluation of the situation would be to do one of two things:
1. Remove the disallow rules from robots.txt and allow Google to crawl the pages again, but this time use noindex, nofollow tags.
2. Remove the disallow rules from robots.txt and allow Google to crawl the pages again, but use canonical tags pointing to the main "filter" page to prevent further indexing of the specific filter pages.
Which option is best depends on the number of URLs involved: for a few thousand, canonicals would be my choice; for a few hundred thousand, noindex would make more sense.
Whichever option you choose, you will have to ensure Google re-crawls the pages, and then allow them time to re-index appropriately. Not a quick fix, but a fix nonetheless.
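In tag form, the two options come down to one of these snippets in the <head> of each filter page, once the robots.txt disallow is removed (the URL is illustrative):

```html
<!-- Option 1: deindex the page and stop following its links -->
<meta name="robots" content="noindex, nofollow">

<!-- Option 2: consolidate signals to the main category/filter page -->
<link rel="canonical" href="https://premium-hookahs.nl/hookahs/">
```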
Those are my thoughts; I hope it makes sense.
Don