Crawl Stats Decline After Site Launch (Pages Crawled Per Day, KB Downloaded Per Day)
-
Hi all,
I have been looking into this for about a month and haven't been able to figure out what is going on. We recently did a website redesign and moved from a separate mobile site to responsive. After the launch, I immediately noticed a decline in pages crawled per day and KB downloaded per day in the crawl stats. I expected the opposite to happen, as I figured Google would crawl more pages for a while to figure out the new site. There was also an increase in time spent downloading a page. That has since gone back down, but pages crawled has never gone back up. Some notes about the redesign:
- URLs did not change
- Mobile URLs were redirected
- Images were moved from a subdomain (images.sitename.com) to Amazon S3
- Had an immediate decline in both organic and paid traffic (roughly 20-30% for each channel)
I have not been able to find any glaring issues in search console as indexation looks good, no spike in 404s, or mobile usability issues. Just wondering if anyone has an idea or insight into what caused the drop in pages crawled? Here is the robots.txt and attaching a photo of the crawl stats.
```
User-agent: ShopWiki
Disallow: /

User-agent: deepcrawl
Disallow: /

User-agent: Speedy
Disallow: /

User-agent: SLI_Systems_Indexer
Disallow: /

User-agent: Yandex
Disallow: /

User-agent: MJ12bot
Disallow: /

User-agent: BrightEdge Crawler/1.0 (crawler@brightedge.com)
Disallow: /

User-agent: *
Crawl-delay: 5
Disallow: /cart/
Disallow: /compare/
```

[Crawl stats screenshot](https://ibb.co/fSAOL0)
-
Yeah, that's definitely tricky. I'm assuming you haven't taken out any load balancing that was previously in place between the desktop and m-dot sites, meaning your server is struggling a lot more? The PageSpeed Insights tool can give good info, but if possible I'd also have a look at that user experience index to get an idea of how real users are experiencing the site.
A next port of call could be your server logs? Do you have any other subdomains which are performing differently in search console?
In terms of getting Google to crawl more, unfortunately at this point my instinct would be to keep optimising the site to make it as crawl-friendly as possible and wait for Google to start crawling more. It does look like the original spike in time spent downloading has subsided a bit, but it's still higher than it was. Without doing the maths, given that pages crawled and kilobytes downloaded have both dropped, the slowdown may have persisted, and the drop in those graphs could have been caused by Google easing back. I'd keep working on making the site as efficient and consistent as possible and try to get that download-time line tracking lower as an immediate tactic.
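On the server-log suggestion, here's a rough sketch for counting Googlebot hits per day so you can line the trend up against the crawl stats graph. It assumes an Apache/Nginx combined-format access log and a hypothetical `access.log` filename, so adjust both to your setup:

```python
import re
from collections import Counter

# Matches the date portion of a combined-format log timestamp,
# e.g. [10/Oct/2018:13:55:36 +0000] -> "10/Oct/2018".
DATE = re.compile(r"\[(\d{2}/\w{3}/\d{4}):")

def googlebot_hits_per_day(path):
    """Count requests per day whose user-agent string contains 'Googlebot'.

    Note: the UA string alone can be spoofed; verify suspicious IPs
    with a reverse-DNS lookup if the numbers look odd.
    """
    daily = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if "Googlebot" not in line:
                continue
            m = DATE.search(line)
            if m:
                daily[m.group(1)] += 1
    return daily
```

Running `googlebot_hits_per_day("access.log")` over a few weeks of logs and printing the sorted results should show whether the drop in the Search Console graph matches what your server actually saw.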
-
Hi Robin,
Thanks a lot for the reply. A lot of good information there.
- The crawl delay has been on the site as long as I have known so it was left in place just to minimize changes
- Have not changed any of the settings in Search Console. It has remained at "Let Google optimize for my site"
- Have not received the notification for mobile first indexing
- The redirects were one to one for the mobile site. I do not believe there are any redirect chains from those.
- The desktop pages remained roughly the same size, but on a mobile device, pages are slightly heavier compared to the separate m-dot site. The separate m-dot site had a lot of content stripped out and was pretty bare to be fast. We introduced more image compression than we have ever done and also deferred image loading to make the user experience as fast as possible. The site scores in the 90s on Google's PageSpeed Insights tool.
- Yes, resizing based on viewport. Content is basically the same between devices. We have some information in accordions on product detail pages and show fewer products on the grids on mobile.
- They are not the same images files but they are actually smaller than they were previously as we were not compressing them and using different sizes in different locations to minimize page weight.
I definitely lean towards it being performance related as in the crawl stats there seems to be a correlation between time spent downloading a page and the other two stats. I just wonder how you get Google to start crawling more once the performance is fixed or if they will figure it out.
-
Hi there, thanks for posting!
Sounds like an interesting one, some questions that come to mind which I'd just like to run through to make sure we're not missing anything;
- Why do you have Crawl-delay set for all user agents? Google doesn't officially support it, but whatever prompted you to add it could be related to the cause of this
- Have you changed any settings in search console? There is a slider for how often you want Google to crawl a site
- Have you had the Search Console notification that you're now on the mobile-first index?
- When you redirected the mobile site, was it all one-to-one redirects? Is there any possibility you've introduced redirect chains?
- After the redesign - are the pages now significantly bigger (in terms of amount of data needed to fully load the page)? Are there any very large assets that are now on every page?
- When you say responsive, is it resizing based on viewport? How much duplication has been added to the page? Is there a bunch of content that is there for mobile but not loaded unless viewed from mobile (and vice versa)?
- When you moved the images, were they the same exact image files or might they now be the full-size image files?
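On the redirect-chain question, here's a small standard-library sketch for tracing each hop of a redirect one at a time (the function name and hop limit are my own; point it at a sample of your old m-dot URLs):

```python
import urllib.error
import urllib.parse
import urllib.request

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None stops urllib from auto-following redirects,
    # so each hop surfaces as an HTTPError we can record.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

_opener = urllib.request.build_opener(_NoRedirect)

def trace_redirects(url, max_hops=10):
    """Follow a URL hop by hop; returns a list of (status, url) pairs.

    More than two entries for an old m-dot URL means you have a
    redirect chain rather than a clean one-to-one redirect.
    """
    hops = []
    for _ in range(max_hops):
        try:
            resp = _opener.open(url)
            hops.append((resp.status, url))
            break  # 2xx: final destination reached
        except urllib.error.HTTPError as e:
            hops.append((e.code, url))
            location = e.headers.get("Location")
            if e.code in (301, 302, 303, 307, 308) and location:
                url = urllib.parse.urljoin(url, location)
            else:
                break
    return hops
```

A clean redirect should come back as exactly `[(301, old_url), (200, final_url)]`; anything longer is worth fixing, since every extra hop costs crawl budget.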
This is just first blush so I could be off the mark, but those graphs suggest to me that Google is having to work harder to crawl your pages and, as a result, is throttling the amount of time spent on your site. If the redesign or switch to responsive made the pages significantly "heavier" (whether that's additional JavaScript, bigger images, more content, etc.), that could cause this effect. If you've got any site-speed benchmarking in place, you could have a look at that to see whether things have changed. Google also uses page speed as a ranking factor, so that could explain the traffic drop.
The other thing to bear in mind is that combining the mobile and desktop sites was essentially a migration, particularly if you were on the mobile-first index. It may be that the traffic dip is less related to the crawl rate, but I understand why we'd make the connection there.