Google has deindexed 40% of my site because it's having problems crawling it
-
Hi
Last week I got my fifth email saying 'Google can't access your site'. The first one arrived in early November. Since then the site has gone from almost 80k pages indexed to fewer than 45k, and the number keeps dropping even though we publish about 100 new articles daily (it's an online newspaper).
The site I'm talking about is http://www.gazetaexpress.com/
We have to deal with DDoS attacks most of the time, so our server admin implemented a firewall to protect the site from these attacks. We suspect the firewall is blocking Googlebot from crawling and indexing the site. But here's where it gets interesting: some parts of the site are being crawled regularly and others not at all. If the firewall were stopping Googlebot from crawling the site, why are some sections crawled without problems while others aren't?
The screenshot attached to this post shows how Google Webmaster Tools is reporting these errors.
In this link, it says that if the 'Error' status keeps happening you should contact Google Webmaster support, because something is preventing Google from fetching the site. I used the feedback form in Google Webmaster Tools to report this error about two months ago but haven't heard back. Did I use the wrong form to contact them? If so, how can I reach them and tell them about my problem?
If you need more details, feel free to ask. I'd appreciate any help.
Thank you in advance
-
Great news! Strange that these 608 errors didn't appear when crawling the site with Screaming Frog.
-
We found the problem: it was website compression (GZIP). I discovered this after crawling the site with Moz and seeing lots of pages with a 608 error code. Then I searched Google and found a response by Dr. Pete to another question here in Moz Q&A (http://moz.com/community/q/how-do-i-fix-608-s-please).
After we disabled GZIP compression, Google could crawl the site without problems.
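In case it helps anyone else, here is a rough way to check whether a server's gzip responses decompress cleanly; a minimal sketch in plain Python (the URL is just our homepage, and the check assumes nothing about your stack):

```python
import gzip
import urllib.request

# Ask for a gzip-compressed response and try to decode the raw bytes
# ourselves. urllib does not auto-decompress, so if the server claims
# gzip in Content-Encoding but the bytes don't decompress cleanly,
# that's the kind of response a crawler chokes on.
req = urllib.request.Request(
    "http://www.gazetaexpress.com/",
    headers={"Accept-Encoding": "gzip", "User-Agent": "Mozilla/5.0"},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    body = resp.read()
    encoding = resp.headers.get("Content-Encoding", "")

if "gzip" in encoding:
    try:
        html = gzip.decompress(body)
        print("gzip OK, decoded %d bytes" % len(html))
    except (OSError, EOFError) as err:
        print("gzip response is corrupt: %s" % err)
else:
    print("server did not compress the response")
```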
-
Dirk
Thanks a lot for your help. Unfortunately the problem remains the same. More than 65% of the site has been de-indexed, and it's making our work very difficult.
I'm hoping somebody here has an idea of what's causing this so we can fix it.
Thank you all for your time.
-
Hi
Not sure if the indexing problem is solved now, but I did a few other checks. Most of the tools I used were able to fetch the problem URL without much trouble, even from California IPs and while simulating Googlebot.
I noticed that some pages (example: http://www.gazetaexpress.com/fun/) are quite empty if you browse them without JavaScript active. Navigating the site with JavaScript is extremely slow, and a lot of links don't seem to respond. When I tried to go from /fun/ to /sport/ without JavaScript, I got a 504 Gateway Timeout.
Google is normally capable these days of indexing content by executing the JavaScript, but it's always better to have a non-JavaScript fallback that can always be indexed (http://googlewebmastercentral.blogspot.be/2014/05/understanding-web-pages-better.html). The article states explicitly:
- If your web server is unable to handle the volume of crawl requests for resources, it may have a negative impact on our capability to render your pages. If you’d like to ensure that your pages can be rendered by Google, make sure your servers are able to handle crawl requests for resources.
This could be the reason for the strange errors you see when using Fetch as Google.
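A quick way to approximate what a non-JavaScript crawler sees is to fetch the raw HTML without rendering and count the text that survives; a crude sketch (using the /fun/ URL mentioned above, with tags stripped by regexes rather than a proper HTML parser):

```python
import re
import urllib.request

# Fetch the raw HTML without executing any JavaScript, then strip
# script/style blocks and remaining tags. What's left approximates
# the text a non-rendering crawler can see; if almost nothing
# survives, the page depends on JavaScript for its content.
url = "http://www.gazetaexpress.com/fun/"
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
html = urllib.request.urlopen(req, timeout=30).read().decode("utf-8", "replace")

text = re.sub(r"(?is)<(script|style).*?</\1>", " ", html)  # drop script/style blocks
text = re.sub(r"(?s)<[^>]+>", " ", text)                   # drop remaining tags
print("%s -> %d words of non-JS text" % (url, len(text.split())))
```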
Hope this helps,
Dirk
-
Hi Dirk
Thanks a lot for your reply.
Today we turned off the firewall for a couple of hours and tried to fetch the site as Google. It didn't work; the results were the same as before.
This problem is getting pretty ugly, because Google has now stopped showing our mobile results as 'mobile-friendly' even though we have a mobile version of the site. We use rel=canonical and rel=alternate, and we 302-redirect smartphone users from the desktop pages to the mobile ones.
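For reference, our redirect logic is roughly equivalent to the sketch below. The mobile hostname and the Flask framework are illustrative only, not our actual stack; the point is the 302 plus the Vary: User-Agent header that Google expects for separate mobile URLs:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

# m.gazetaexpress.com and the UA tokens are illustrative assumptions.
MOBILE_HOST = "http://m.gazetaexpress.com"

def looks_like_smartphone(ua):
    ua = ua.lower()
    # Android phones carry both "android" and "mobile"; tablets don't.
    return "iphone" in ua or ("android" in ua and "mobile" in ua)

@app.route("/<path:page>")
def desktop_page(page):
    if looks_like_smartphone(request.headers.get("User-Agent", "")):
        # Smartphone users get a temporary redirect to the mobile URL.
        resp = redirect("%s/%s" % (MOBILE_HOST, page), code=302)
    else:
        # The real desktop template would also carry
        # <link rel="alternate" media="only screen and (max-width: 640px)" href="...">
        # while the mobile page points back with rel="canonical".
        resp = app.make_response("<html>desktop version of %s</html>" % page)
    # Tell caches (and Googlebot) that the response differs by device.
    resp.headers["Vary"] = "User-Agent"
    return resp
```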
Any other idea what might be causing this?
Thanks in advance
-
Hi,
It seems that your pages are extremely heavy to load. I ran two tests, one on your homepage and one on the /moti-sot page.
Your homepage needed a whopping 73 seconds to load (http://www.webpagetest.org/result/150312_YV_H5K/1/details/); the /moti-sot page is quicker, but 8 seconds is still rather high (http://www.webpagetest.org/result/150312_SK_H9M/).
I sometimes noticed a crash of the Shockwave Flash plugin, but I'm not sure if this is related to your problem. I also crawled your site with Screaming Frog, and it didn't really find any indexing problems: while you have a lot of pages very deep in your site structure, the bot didn't seem to have any specific trouble accessing them. Web-Sniffer returns a normal 200 code when checking your site, even with the user agent "Google".
So I guess you're right about the firewall: maybe it's blocking the IP addresses used by Googlebot. Does the firewall report which traffic it blocks? Try searching your logfiles for the Googlebot user agent and see whether that traffic is rejected. The fact that some sections are indexed and others aren't could be related to the firewall's configuration and/or the IP addresses Googlebot uses to check your site (the bot doesn't always use the same IP address).
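When you check those logfiles, keep in mind that anyone can fake the Googlebot user agent. Google's documented way to verify a hit is a reverse DNS lookup followed by a forward confirmation; a small sketch (the example IPs are illustrative):

```python
import socket

def is_real_googlebot(ip):
    """Verify a crawler IP the way Google documents it: reverse DNS
    must resolve to *.googlebot.com or *.google.com, and the forward
    lookup of that hostname must return the original IP."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False

# Feed this the source IPs pulled from the access log or the firewall
# report; the addresses below are just examples.
for ip in ("66.249.66.1", "203.0.113.50"):
    print(ip, is_real_googlebot(ip))
```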
Hope this helps,
Dirk