Crawl Stats Decline After Site Launch (Pages Crawled Per Day, KB Downloaded Per Day)
-
Hi all,
I have been looking into this for about a month and haven't been able to figure out what is going on. We recently did a website redesign and moved from a separate mobile site to responsive. After the launch, I immediately noticed a decline in pages crawled per day and KB downloaded per day in the crawl stats. I expected the opposite, as I figured Google would crawl more pages for a while to figure out the new site. There was also an increase in time spent downloading a page; that has since gone back down, but pages crawled per day has never recovered. Some notes about the redesign:
- URLs did not change
- Mobile URLs were redirected
- Images were moved from a subdomain (images.sitename.com) to Amazon S3
- Had an immediate decline in both organic and paid traffic (roughly 20-30% for each channel)
I have not been able to find any glaring issues in Search Console: indexation looks good, there is no spike in 404s, and there are no mobile usability problems. Does anyone have an idea or insight into what caused the drop in pages crawled? Here is the robots.txt, and I'm attaching a photo of the crawl stats.
```
User-agent: ShopWiki
Disallow: /

User-agent: deepcrawl
Disallow: /

User-agent: Speedy
Disallow: /

User-agent: SLI_Systems_Indexer
Disallow: /

User-agent: Yandex
Disallow: /

User-agent: MJ12bot
Disallow: /

User-agent: BrightEdge Crawler/1.0 (crawler@brightedge.com)
Disallow: /

User-agent: *
Crawl-delay: 5
Disallow: /cart/
Disallow: /compare/
```
[Crawl stats screenshot](https://ibb.co/fSAOL0)
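For what it's worth, a quick way to sanity-check these rules against Googlebot is Python's standard-library robotparser. A minimal sketch (the domain and sample paths are placeholders):

```python
from urllib.robotparser import RobotFileParser

# Sanity check: which paths do these rules block for Googlebot?
# The domain and sample paths are placeholders.
SITE = "https://www.example.com"

rp = RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for path in ["/", "/category/widgets", "/cart/", "/compare/"]:
    verdict = "allowed" if rp.can_fetch("Googlebot", SITE + path) else "blocked"
    print(path, "->", verdict)

# Googlebot has no dedicated group here, so it falls under "User-agent: *".
# crawl_delay() will report the 5s value, though Google ignores Crawl-delay.
print("Crawl-delay applying to Googlebot:", rp.crawl_delay("Googlebot"))
```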
-
Yeah, that's definitely tricky. I'm assuming you haven't taken out any load balancing that was previously in place between the desktop and m-dot sites, and that your server isn't struggling a lot more as a result? The PageSpeed Insights tool can give you good info, but if possible I'd also look at real-user experience data (such as the Chrome User Experience Report) to get an idea of how users are actually experiencing the site.
Do you have any other subdomains that are performing differently in Search Console? Another port of call could be your server logs.
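If you can pull the logs, even a rough per-day tally of Googlebot hits will show whether the drop matches what Search Console reports. A minimal sketch, assuming combined-format access logs at a placeholder path (and note that the user-agent string alone can be spoofed):

```python
import re
from collections import Counter
from datetime import datetime

# Rough sketch: tally Googlebot requests per day from a combined-format
# access log. The log path is a placeholder; verify suspicious IPs via
# reverse DNS before trusting the numbers, since user agents can be faked.
hits_per_day = Counter()

with open("/var/log/nginx/access.log") as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        # Combined-format timestamps look like [12/Dec/2024:10:15:32 +0000]
        m = re.search(r"\[(\d{2}/\w{3}/\d{4})", line)
        if m:
            day = datetime.strptime(m.group(1), "%d/%b/%Y").date()
            hits_per_day[day] += 1

for day, count in sorted(hits_per_day.items()):
    print(f"{day}: {count} Googlebot requests")
```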
In terms of getting Google to crawl more, unfortunately my instinct at this point would be to keep optimising the site to make it as crawl-friendly as possible and wait for Google to ramp back up. It does look like the original spike in time spent downloading a page has subsided a bit, but it's still higher than it was. Without doing the maths: given that pages crawled and kilobytes downloaded have both dropped, some of the slowdown may have persisted, and the drop in those graphs could be Google easing back in response. I'd keep working on making the site as fast and consistent as possible and try to get that download-time line trending lower as an immediate tactic.
-
Hi Robin,
Thanks a lot for the reply. A lot of good information there.
- The crawl delay has been on the site for as long as I can remember, so it was left in place just to minimize changes
- Have not changed any of the settings in Search Console. It has remained at "Let Google optimize for my site"
- Have not received the notification for mobile-first indexing
- The redirects were one to one for the mobile site. I do not believe there are any redirect chains from those.
- The desktop pages remained roughly the same size, but on a mobile device pages are slightly heavier compared to the separate m-dot site. The m-dot site had a lot of content stripped out and was pretty bare in order to be fast. We introduced more image compression than we have ever used and also deferred image loading to make the user experience as fast as possible. The site scores in the 90s on Google's PageSpeed Insights tool.
- Yes, resizing based on viewport. Content is basically the same between devices. We have some information in accordions on product detail pages and show fewer products on the grids on mobile.
- They are not the same image files, but they are actually smaller than they were previously, since we were not compressing them before; we now also serve different sizes in different locations to minimize page weight.
I definitely lean towards it being performance related, as in the crawl stats there seems to be a correlation between time spent downloading a page and the other two metrics. I just wonder how you get Google to start crawling more once the performance is fixed, or whether they will figure it out on their own.
-
Hi there, thanks for posting!
Sounds like an interesting one. Some questions come to mind that I'd like to run through to make sure we're not missing anything:
- Why do you have Crawl-delay set for all user agents? Officially it's not something Google supports, but whatever prompted you to add it could be related to the cause of this
- Have you changed any settings in Search Console? There is a slider for how fast you want Google to crawl your site
- Have you had the Search Console notification that you're now on the mobile-first index?
- When you redirected the mobile site, was it all one-to-one redirects? Is there any possibility you've introduced redirect chains? (A quick way to check is sketched just after this list.)
- After the redesign - are the pages now significantly bigger (in terms of amount of data needed to fully load the page)? Are there any very large assets that are now on every page?
- When you say responsive, is it resizing based on viewport? How much duplication has been added to the page? Is there a bunch of content that is there for mobile but not loaded unless viewed from mobile (and vice versa)?
- When you moved the images, were they the same exact image files or might they now be the full-size image files?
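On the redirect-chain question, here's a minimal sketch to trace the hops for a sample of old m-dot URLs (this assumes the `requests` library, and the starting URL is a placeholder):

```python
import requests
from urllib.parse import urljoin

# Follow redirects one hop at a time so chains become visible.
def trace_redirects(url, max_hops=10):
    hops = []
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        hops.append((resp.status_code, url))
        if resp.status_code not in (301, 302, 303, 307, 308):
            break  # reached the final destination
        url = urljoin(url, resp.headers["Location"])  # Location may be relative
    return hops

# Placeholder URL: feed in a sample of your old m-dot URLs.
for status, url in trace_redirects("https://m.example.com/some-product"):
    print(status, url)

# More than one 3xx hop before the final 200 means there's a chain
# worth collapsing into a single 301.
```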
This is just first blush so I could be off the mark, but those graphs suggest to me that Google is having to work harder to crawl your pages and, as a result, is throttling the amount of time it spends on your site. If the redesign or the switch to responsive made the pages significantly "heavier" (additional JavaScript, bigger images, more content, etc.), that could cause exactly this effect. If you've got any sitespeed benchmarking in place, check whether things have changed; if not, the sketch below is a simple starting point. Google also uses page speed as a ranking factor, so that could explain the traffic drop.
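A crude baseline is better than none. Something along these lines (placeholder URLs, `requests` assumed) tracks download time and page weight for a few key templates; run it on a schedule and chart the results:

```python
import time
import requests

# Crude fetch benchmark: download time and page weight for a few
# representative templates. URLs are placeholders.
PAGES = [
    "https://www.example.com/",
    "https://www.example.com/category/widgets",
    "https://www.example.com/product/widget-123",
]

for url in PAGES:
    start = time.monotonic()
    resp = requests.get(url, timeout=30)
    elapsed = time.monotonic() - start
    size_kb = len(resp.content) / 1024
    print(f"{url}: HTTP {resp.status_code}, {elapsed:.2f}s, {size_kb:.0f} KB")
```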
The other thing to bear in mind is that combining the mobile and desktop sites was essentially a migration, particularly if you were on the mobile-first index. It may be that the traffic dip is less related to the crawl rate, but I understand why we'd make the connection there.