Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Crawl Stats Decline After Site Launch (Pages Crawled Per Day, KB Downloaded Per Day)
-
Hi all,
I have been looking into this for about a month and haven't been able to figure out what is going on. We recently did a website redesign and moved from a separate mobile site to responsive. After the launch, I immediately noticed a decline in pages crawled per day and KB downloaded per day in the crawl stats. I expected the opposite, as I figured Google would crawl more pages for a while to figure out the new site. There was also an increase in time spent downloading a page. That has since gone back down, but pages crawled has never gone back up. Some notes about the redesign:
- URLs did not change
- Mobile URLs were redirected
- Images were moved from a subdomain (images.sitename.com) to Amazon S3
- Had an immediate decline in both organic and paid traffic (roughly 20-30% for each channel)
I have not been able to find any glaring issues in Search Console: indexation looks good, there's no spike in 404s, and there are no mobile usability issues. Just wondering if anyone has an idea or insight into what caused the drop in pages crawled? Here is the robots.txt, and I'm attaching a screenshot of the crawl stats.
```
User-agent: ShopWiki
Disallow: /

User-agent: deepcrawl
Disallow: /

User-agent: Speedy
Disallow: /

User-agent: SLI_Systems_Indexer
Disallow: /

User-agent: Yandex
Disallow: /

User-agent: MJ12bot
Disallow: /

User-agent: BrightEdge Crawler/1.0 (crawler@brightedge.com)
Disallow: /

User-agent: *
Crawl-delay: 5
Disallow: /cart/
Disallow: /compare/
```

[fSAOL0](https://ibb.co/fSAOL0)
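As a quick sanity check on that robots.txt, Python's built-in parser can confirm Googlebot isn't accidentally blocked - the named disallows only target third-party bots, so Googlebot falls through to the wildcard group. The rules below are abridged from the file above (one named bot kept as an example), and the paths tested are hypothetical:

```python
# Sanity-check the posted robots.txt with Python's stdlib parser.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: MJ12bot
Disallow: /

User-agent: *
Crawl-delay: 5
Disallow: /cart/
Disallow: /compare/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Googlebot matches no named group, so the wildcard rules apply:
# ordinary pages are crawlable, /cart/ and /compare/ are not.
print(rp.can_fetch("Googlebot", "/some-product/"))   # True
print(rp.can_fetch("Googlebot", "/cart/checkout"))   # False
print(rp.can_fetch("MJ12bot", "/some-product/"))     # False
print(rp.crawl_delay("Googlebot"))                   # 5
```

So the file itself shouldn't be throttling Googlebot; Google ignores the Crawl-delay directive entirely.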
-
Yeah, that's definitely tricky. I'm assuming you haven't taken out any load balancing that was previously in place between the desktop and m-dot sites, meaning your server is now struggling a lot more? The PageSpeed Insights tool can give you good info, but if possible I'd have a look at the user experience data to get an idea of how real users are experiencing the site.
Another port of call could be your server logs. Do you have any other subdomains that are performing differently in Search Console?
In terms of getting Google to crawl more, unfortunately at this point my instinct would be to keep optimising the site to make it as crawl-friendly as possible and wait for Google to ramp back up. It does look like the original spike in time spent downloading has subsided a bit, but it's still higher than it was. Without doing the maths: given that pages crawled and kilobytes downloaded have both dropped, the slowdown may have persisted, and the drop in those graphs could be Google easing back in response. I'd keep working on making the site as efficient and consistent as possible and try to get that download-time line tracking lower as an immediate tactic.
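The server-log suggestion can be sketched quickly. This is a minimal example that counts Googlebot requests per day from a combined-format access log; the sample lines and dates are hypothetical, and a real check should also verify the IPs reverse-resolve to googlebot.com:

```python
# Count Googlebot requests per day from combined-format access log lines.
import re
from collections import Counter

# Matches the date portion of the timestamp, e.g. "[01/Feb/2019:10:00:00"
DATE = re.compile(r'\[(\d{2}/\w{3}/\d{4})').search

def googlebot_hits_per_day(lines):
    """Return {date: request_count} for lines whose user agent mentions Googlebot."""
    counts = Counter()
    for line in lines:
        if "Googlebot" not in line:
            continue
        m = DATE(line)
        if m:
            counts[m.group(1)] += 1
    return dict(counts)

sample = [
    '66.249.66.1 - - [01/Feb/2019:10:00:00 +0000] "GET / HTTP/1.1" 200 5120 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [01/Feb/2019:10:00:05 +0000] "GET /widget HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '10.0.0.1 - - [01/Feb/2019:10:01:00 +0000] "GET / HTTP/1.1" 200 5120 "-" "Mozilla/5.0"',
]
print(googlebot_hits_per_day(sample))  # {'01/Feb/2019': 2}
```

Plotting those daily counts against the Search Console graph would confirm whether the drop shows up in your own logs too.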
-
Hi Robin,
Thanks a lot for the reply. A lot of good information there.
- The crawl delay has been on the site for as long as I have known, so it was left in place just to minimize changes
- Have not changed any of the settings in Search Console. It has remained at "Let Google optimize for my site"
- Have not received the notification for mobile first indexing
- The redirects were one to one for the mobile site. I do not believe there are any redirect chains from those.
- The desktop pages remained roughly the same size, but on a mobile device, pages are slightly heavier compared to the separate m-dot site. The separate m-dot site had a lot of content stripped out and was pretty bare in order to be fast. We introduced more image compression than we have ever used and also deferred image loading to make the user experience as fast as possible. The site scores in the 90s in Google's PageSpeed Insights tool.
- Yes, resizing based on viewport. Content is basically the same between devices. We have some information in accordions on product detail pages and show fewer products in the grids on mobile.
- They are not the same image files, but they are actually smaller than they were previously, since before we were not compressing them or using different sizes in different locations to minimize page weight.
I definitely lean towards it being performance related as in the crawl stats there seems to be a correlation between time spent downloading a page and the other two stats. I just wonder how you get Google to start crawling more once the performance is fixed or if they will figure it out.
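Alongside the performance angle, one thing worth ruling out is redirect chains from the old m-dot URLs, since every extra hop costs crawl budget. A small offline sketch (the URLs and mappings are hypothetical; in practice you'd issue HEAD requests and follow Location headers):

```python
# Walk a redirect mapping and report the chain of URLs visited.
def redirect_chain(url, redirects, limit=10):
    """Return the list of URLs visited, starting at `url`.

    `redirects` maps source URL -> destination URL; `limit` guards
    against redirect loops.
    """
    chain = [url]
    while url in redirects and len(chain) <= limit:
        url = redirects[url]
        chain.append(url)
    return chain

# A clean one-to-one redirect (one hop) vs. a chain where the m-dot URL
# bounces through http before reaching https (two hops).
good = {"http://m.sitename.com/widget": "https://www.sitename.com/widget"}
bad = {
    "http://m.sitename.com/widget": "http://www.sitename.com/widget",
    "http://www.sitename.com/widget": "https://www.sitename.com/widget",
}

print(len(redirect_chain("http://m.sitename.com/widget", good)) - 1)  # 1 hop
print(len(redirect_chain("http://m.sitename.com/widget", bad)) - 1)   # 2 hops
```

If any m-dot redirects land on an http URL that then redirects to https, collapsing those into single hops is a cheap win.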
-
Hi there, thanks for posting!
Sounds like an interesting one. Some questions come to mind that I'd just like to run through to make sure we're not missing anything:
- Why do you have Crawl-delay set for all user agents? Officially it's not something Google supports, but whatever prompted you to add it could be related to the cause of this
- Have you changed any settings in search console? There is a slider for how often you want Google to crawl a site
- Have you had the Search Console notification that you're now on the mobile-first index?
- When you redirected the mobile site, was it all one-to-one redirects? Is there any possibility you've introduced redirect chains?
- After the redesign - are the pages now significantly bigger (in terms of amount of data needed to fully load the page)? Are there any very large assets that are now on every page?
- When you say responsive, is it resizing based on viewport? How much duplication has been added to the page? Is there a bunch of content that is there for mobile but not loaded unless viewed from mobile (and vice versa)?
- When you moved the images, were they the same exact image files or might they now be the full-size image files?
This is just first blush, so I could be off the mark, but those graphs suggest to me that Google is having to work harder to crawl your pages and, as a result, is throttling the amount of time it spends on your site. If the redesign or switch to responsive made the pages significantly "heavier" - whether that's additional JavaScript, bigger images, more content, etc. - that could cause this effect. If you've got any site-speed benchmarking in place, you could check whether things have changed. Google also uses page speed as a ranking factor, so that could explain the traffic drop.
The other thing to bear in mind is that combining the mobile and desktop sites was essentially a migration, particularly if you were on the mobile-first index. It may be that the traffic dip is less related to the crawl rate, but I understand why we'd make the connection there.
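One way to test the "heavier pages" theory from the graphs themselves: divide KB downloaded per day by pages crawled per day to get the average size Google pulls per page, before and after launch. The numbers below are placeholders, not the site's real stats:

```python
# Average crawled-page size = KB downloaded per day / pages crawled per day.
def avg_page_kb(kb_downloaded_per_day, pages_crawled_per_day):
    return kb_downloaded_per_day / pages_crawled_per_day

# Hypothetical before/after readings from the two crawl-stats graphs.
before = avg_page_kb(kb_downloaded_per_day=60_000, pages_crawled_per_day=3_000)
after = avg_page_kb(kb_downloaded_per_day=30_000, pages_crawled_per_day=1_200)
print(before, after)  # 20.0 25.0
```

If the after-launch average is higher, that supports the idea that Google is spending a similar byte budget on fewer, heavier pages.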