Googlebot takes 5 times longer to crawl each page
-
Hello All
From about mid-September my GWMT has shown that the average time to crawl a page on my site has shot up from around 130ms to around 700ms, with peaks at 4,000ms.
I have checked my server error logs and found nothing there. I have also checked with the hosting company, and there are no issues with the server or with other sites on the same server.
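(For anyone checking the same thing: here is a rough sketch of how average response times for Googlebot could be pulled from an access log. It assumes an Apache-style log where the last field is the response time in microseconds, as written by the `%D` directive; your log path and format will likely differ.)

```shell
# Average response time for Googlebot requests, assuming the LAST
# field of each access-log line is the response time in microseconds
# (Apache's %D directive). Adjust the path and field for your own logs.
grep 'Googlebot' /var/log/apache2/access.log |
  awk '{ total += $NF; n++ }
       END { if (n) printf "%.0f ms average over %d hits\n", total / n / 1000, n }'
```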
Two weeks after this my rankings fell by about 950 places for most of my keywords. I am really just trying to eliminate this as a possible cause of the ranking drops. Or was it the Panda/EMD algorithm that did it?
Many Thanks
Si
-
Thank you for having a look
I made no structural changes around the time the issues started.
On the third graph in GWMT, yes, there was a spike in time spent downloading, and it is still a lot higher than previously. I have added an image of it below.
There were two Google updates about two weeks later: the latest Panda and the new EMD.
Most of the content has been written by me from my own experience. There are some pages, which are the same as on other sites, that I am in the process of removing or changing.
Until 4 months ago the layout was in fixed-size nested tables; I am just about getting my head around CSS, trying to drag the site into the 21st century.
-
Hi.
Based on the site size (number of pages) and format (code, elements and structure), two speed tests I just ran on it, and a trace-route (from Austria), it looks like you don't have any issues with it from a technical point of view.
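(If you want to reproduce a quick speed check yourself, curl's timing write-out variables give a rough picture; this is just a sketch, and the numbers will vary with your location and server load:)

```shell
# Rough single-request timing: time to first byte vs. total download time.
curl -o /dev/null -s \
  -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' \
  'http://www.growingyourownveg.com/'
```

Running it a few times at different hours gives a better feel for whether the slowdown is constant or load-related.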
One thing still worth checking is the "time spent downloading a page" graph (the third one) within GWMT. Did this spike at the same time that pages crawled per day went down?
A few other questions you should consider:
-
Did you make any changes, especially structural changes, around the same time you noticed the issues?
-
Are there any public Google updates in the same timeframe as the changes you noticed? (You can check them here: http://www.seomoz.org/google-algorithm-change )
-
Is your content duplicated? (I mean against external sources, not internally.)
Please don't get me wrong: I would be OK with the format of the site if it were very old, from before 2000. But the domain is from 2008, so you should get on track with current trends in layout, content format and website format in general.
Hope it helps.
-
-
Hi
I am as sure as I can be, but not being a full expert on these things, I may have missed something technical.
I have been making changes to the site since then, mainly to the CSS layout.
The site is www.growingyourownveg.com
Thanks
-
Hi,
As far as I know, a low crawl rate won't lead to bad rankings, but bad rankings will lead to a lower crawl rate.
If you are sure, and I mean really sure, that you don't have any technical issues on your side that could influence the crawl rate (and possibly also rankings), then you should consider that you may actually have a -950 filter causing your rankings to drop: Google doesn't consider your site an authority, and for that reason it won't crawl your site as often as it used to.
Can you share the URL of the site? Just to have a look and see if, at first glance, there is any obvious reason for Google to dislike your site.
Cheers !