Crawl rate dropped to zero
-
Hello, I recently moved my site at GoDaddy from cPanel to Managed WordPress hosting. I bought this transfer directly from GoDaddy customer service. In the process they accidentally changed my domain from www to non-www. I changed it back after the migration, but as a result the site's crawl rate in Search Console fell to zero and has not risen at all since then.
Apart from this, the website does not display any other errors; I can ask Google to manually fetch my pages and it works as before, only the crawl rate seems to have dropped permanently. GoDaddy customer service also claims they do not see any errors, but I think they caused this during the migration when the URL changed, since the timing matches perfectly. Also, when they accidentally removed the www, the crawl rate of my site's non-www version went up, but it fell back to zero when I changed it back to the www version. Now the crawl rate of both the www and non-www versions is zero. How do I get it to rise again? Customer service also said the problem may be related to the FTP data of Search Console, but they were not able to help any further. Would someone here be able to help me with this in any way, please?
-
Hello, answers to the questions in bold:
- At this rate, how long would it take Google to crawl all of your pages, (maybe it feels 10-15 is fast enough)? **Over 50 days: at 10-15 pages a day, that implies well over 500 pages to get through.** I still cannot believe it would be just a coincidence that the crawl rate dropped so suddenly only because Google suddenly decided my pages should not be crawled that often. After all, the amount of new content, the quality of new links, and all the other factors keep improving on my site, and before the drop the crawl rate increased steadily. It has to be some technical issue?
- Has the average response time increased? If so, maybe Google feels it's overloading the server & backing off. **No, it has actually gone down a little bit (not much, though).**
-
Interesting. I have 2 more thoughts:
- At this rate, how long would it take Google to crawl all of your pages, (maybe it feels 10-15 is fast enough)?
- Has the average response time increased? If so, maybe Google feels it's overloading the server & backing off.
-
Crawl rate is still extremely slow, averaging 10-15 pages per day, except when I submit pages to be manually crawled; then it crawls those pages. Before the drop the crawl rate was never under 200 per day and was usually over 1,000. Is there anything more I can do? It seems to have no effect on my rankings or anything else as far as I can see, but I would still like this to be fixed. It has to be something to do with the fact that I changed my hosting to GoDaddy Managed WordPress hosting, but they have no clue what could cause this. The robots.txt change seemed to have no effect, or a very minimal one.
-
Not that I'm aware of, unfortunately. Patience is an important skill when dealing with Google.
-
Thanks! I will try that. I see that Search Console shows crawl rates with a few days' delay; is there somewhere I could check whether it is working instantly?
-
I thought of one other possibility: Your sitemap.xml is probably auto-generated, so this shouldn't be a problem, but check to make sure that the URLs in the sitemap.xml have the www.
Other than that I'm out of ideas - I would wait a few days to see what happens, but maybe someone else with more experience watching Google will have seen this before. If it does resolve, I'd like to know what worked.
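If you'd rather not eyeball the whole sitemap by hand, here is a rough way to scan it automatically. This is only a sketch: the sitemap URL is a placeholder, and it assumes the standard sitemap namespace.

```python
# Rough scan of a sitemap for URLs that are missing the www prefix.
# The sitemap URL below is a placeholder; substitute your real one.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # hypothetical location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP_URL) as resp:
    tree = ET.parse(resp)

for loc in tree.findall(".//sm:loc", NS):
    url = (loc.text or "").strip()
    if not url.startswith(("http://www.", "https://www.")):
        print("Missing www:", url)
```

Any URL it prints is one Google is being told to crawl on the wrong hostname.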
-
I'm not convinced that robots.txt is causing your problem, but it can't hurt to change it back. In fact, while looking for instructions on how to change it I came across this blog post by Joost de Valk, (aka Yoast), that pretty much says you should remove everything that's currently in your robots.txt, and his arguments hold up on every point:
- Blocking wp-content/plugins will stop Google from loading JS and/or CSS resources that it might need to render the page properly.
- Blocking wp-admin is redundant: if wp-admin is linked anywhere it can still be found, and the important pages already carry an X-Robots HTTP header that says not to index them.
If you're using Yoast SEO, here are instructions on how to change the robots.txt file.
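For reference, a stripped-down robots.txt along the lines Yoast suggests might look something like this (a sketch; the sitemap URL is a placeholder):

```
# Allow everything; noindex on sensitive pages is handled by X-Robots headers.
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```

An empty Disallow line means nothing is blocked, which is exactly the point of the advice.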
-
Hi, one more thing. Are you 100% sure that the robots.txt file has nothing to do with this? It changed at the same time the problems started to occur. It used to be:
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
But now it is:
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
At the same time, "blocked resources" notifications started to appear in Search Console:
Blocked Resources > Rendering without certain resources can impair the indexing of your web pages. Status 3/19/16: 152 pages with blocked resources.
This has to have something to do with it, right?
-
Thank you for your answer; my answers are in bold below:
- Do you see any crawl errors in the Google Search Console? **Nothing new after the crawl rate dropped, just some old soft 404 errors and old not-found errors.**
- If you search for your site on Google, what do you see, (does your snippet look normal)? **Yes, everything looks perfectly normal, just like before the crawl rate dropped.**
- How many pages does Google say it has indexed? Is it possible it's indexed everything and is taking a break, (does it even do that?) **I don't think this is possible, since the crawl rate dropped almost instantly from an average of 400 to zero after the site migration.**
One theory is: When you moved to the non-www version of the site, Google started getting 301s redirecting it from www to non-www, and now that you've gone back to www it's getting 301s redirecting it from non-www to www, so it's got a circular redirect. **If this is the problem, how should I start to get it fixed?**
Here's what I would do to try to kick-start indexing, if you haven't already:
- Make sure you have the "Preferred Domain" set to the www version of your site in both the www and non-www versions of your site in Google Search Console. **Yes, that is how it has been the whole time.**
- In the Search Console for the www version of your site, re-submit your sitemap. **Done.**
- In the Search Console for the www version of your site, do a Fetch as Google on your homepage, and maybe a couple of other pages, and when the Fetch is done use the option to submit those pages for indexing, (there's a monthly limit on how much of this you can do). **I have done this many times since I noticed the problem; Fetch as Google works normally without any issues.**
Is there anything more I can do? If I want to hire someone to fix this, are there any recommendations? I am not a tech guy, so this is quite a difficult task for me.
-
I don't know why this is happening, but this is what I would check:
- Do you see any crawl errors in the Google Search Console?
- If you search for your site on Google, what do you see, (does your snippet look normal)?
- How many pages does Google say it has indexed? Is it possible it's indexed everything and is taking a break, (does it even do that?)
One theory is: When you moved to the non-www version of the site, Google started getting 301s redirecting it from www to non-www, and now that you've gone back to www it's getting 301s redirecting it from non-www to www, so it's got a circular redirect.
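One way to rule a loop in or out is to follow the redirect chain from both hostnames and see where each one lands. Here is a quick sketch using only Python's standard library (example.com is a placeholder for your domain):

```python
# Print each hop in the redirect chain for both hostname variants.
# "example.com" is a placeholder; substitute the real domain.
import urllib.request

class ChainLogger(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        print(f"{req.full_url} -> {code} -> {newurl}")
        return super().redirect_request(req, fp, code, msg, headers, newurl)

opener = urllib.request.build_opener(ChainLogger)
for start in ("http://example.com/", "http://www.example.com/"):
    try:
        final = opener.open(start, timeout=10)
        print("Final:", final.geturl(), final.status)
    except Exception as err:  # a circular redirect surfaces as a redirect-loop error
        print(start, "failed:", err)
```

A healthy setup shows exactly one hop from non-www to www and none the other way; if both starting points keep bouncing between hostnames, the host's redirect rules are the thing to fix.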
Here's what I would do to try to kick-start indexing, if you haven't already:
- Make sure you have the "Preferred Domain" set to the www version of your site in both the www and non-www versions of your site in Google Search Console.
- In the Search Console for the www-version of your site, re-submit your sitemap.
- In the Search Console for the www-version of your site, do a Fetch as Google on your homepage, and maybe a couple of other pages, and when the Fetch is done use the option to submit those pages for indexing, (there's a monthly limit on how much of this you can do).
Good luck!
-
That's not so horrible - it just says not to crawl the plugins directory or the admin, and to delay a second between requests. You probably don't want your plugins or admin directories being indexed, and according to this old forum post Google ignores the crawl-delay directive, so the robots.txt isn't the problem.
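If you want to verify what that file actually blocks for Googlebot rather than reading it by eye, Python's standard-library robots.txt parser can tell you. A sketch, with example.com as a placeholder:

```python
# Ask Python's robots.txt parser what Googlebot may fetch.
# "example.com" is a placeholder domain.
from urllib import robotparser

rp = robotparser.RobotFileParser("https://www.example.com/robots.txt")
rp.read()

for url in (
    "https://www.example.com/",
    "https://www.example.com/wp-admin/",
    "https://www.example.com/wp-content/plugins/style.css",
):
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "blocked"
    print(url, "->", verdict)

print("Crawl-delay for Googlebot:", rp.crawl_delay("Googlebot"))
```

That confirms the file behaves as written; whether Google honors the crawl-delay line is a separate matter, as noted above.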
-
Hi, my robots.txt file looks like this:
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
This is not how it is supposed to look, right? Could this cause the problem?