Google not crawling the website from 22nd October
-
Hi, this is Suresh. I made changes to my website, and I see that Google has been unable to crawl it since 22nd October. It is not even showing any content when I use cache:www.vonexpy.com. Can anybody help me understand why Google is unable to crawl my website? Is there a technical issue with the site? The website is www.vonexpy.com
Thanks in advance.
-
Looks like it's not just us - this is starting to get more attention: https://www.seroundtable.com/google-cache-dated-19497.html
-
Our home page was cached today, but many of the pages on our site haven't been indexed since late October. It looks like this is a widespread issue.
-
I can see that 90% of the pages I look at haven't been cached since 22nd Oct, and I mean pages picked at random from hotels.com, booking.com and other big sites - is there a wider Google cache issue going on?
-
I understand, but we are not talking about black hat SEO, just about getting a site crawled that hasn't been crawled since October. And we all know that Google sometimes tells half-truths. I was suggesting doing something new to get different results than the ones he already had. Anyway, I appreciate your point of view.
-
Google the queries: black hat SEO, link buying. Click the first 100 results for both, read Google's SEO guide, and then you'll probably get the point ;-).
-
Why not, Martijn? That tool has helped me several times - not in the same case, but for backlinks.
-
You are not being serious?
-
Thanks Martijn,
I made some design changes to my website. When you check the cache, you can see that the content was not crawled because the website contained JavaScript links. I replaced them with text links, so Google should now be able to crawl and index the site even though there is no new content - the old spider was unable to crawl the website because of the image-based links. Also, I only submitted the sitemap to Webmaster Tools after the design change; it indexed all 30 URLs just 2 days ago.
@Hemani I already submitted to Webmaster Tools and used Fetch as Google on the site, but there is still no crawling.
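To illustrate the point about JavaScript links, here is a minimal sketch (the HTML and URLs are made up). A crawler that does not execute scripts only discovers plain `<a href>` links in the static HTML, which is why swapping script-based links for text links matters:

```python
# Hypothetical sketch: list the links visible to a crawler that does NOT
# run JavaScript. Only plain <a href> anchors are discoverable; a link
# produced by an onclick handler is invisible to it.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

html = """
<a href="/contact.html">Contact</a>
<span onclick="window.location='/about.html'">About</span>
"""
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # only the plain text link is found
```

Running this prints just `['/contact.html']` - the script-driven "About" link never reaches the extractor, which mirrors what a non-rendering spider sees.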
-
Hi Suresh,
It looks like there are no problems on the site itself that prevent Google from crawling it, so that's a good thing. As the sitemap only contains 31 (different) URLs, chances are that Google found the pages unchanged each time it crawled them. If that is really the case, I wouldn't worry too much about Google not crawling the content every now and then.
-
This is a common problem, and I have come across it quite a few times on my blog. The best approach is to submit your sitemap in Google Webmaster Tools and get tweets and social likes for your website - you will see Google start crawling your website within days!
Hope this helps!
Related Questions
-
Website Migration - Very Technical Google "Index" Question
This is my understanding of how Google's search works, and I am unsure about one thing in specific:

- Google continuously crawls websites and stores each page it finds (let's call it the "page directory")
- Google's "page directory" is a cache, so it isn't the "live" version of the page
- Google has separate storage called "the index", which contains all the keywords searched. These keywords in "the index" point to the pages in the "page directory" that contain the same keywords.
- When someone searches a keyword, that keyword is accessed in the "index" and returns all relevant pages in the "page directory"
- These returned pages are given ranks based on the algorithm

The one part I'm unsure of is how Google's "index" connects to the "page directory". I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as the live website? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I ask is that I am starting to work with a client who has a newly developed website. The old website's domain and files were located on a GoDaddy account. The new website's files have completely changed location and are now hosted on a separate GoDaddy account, but the domain has remained in the same account. The client has set up domain forwarding/masking to access the files on the separate account. From what I've researched, domain masking and SEO don't get along very well. Not only can you not link to specific pages, but if my above assumption is true, wouldn't Google have a hard time crawling and storing each page in the cache?
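The mental model described above can be sketched as a toy inverted index (purely illustrative - the URLs and page text are made up): a "page directory" keyed by URL, plus an "index" mapping each keyword to the URLs whose cached text contains it.

```python
# Toy sketch of the model above: a "page directory" (cache keyed by URL)
# and an inverted "index" from keywords to the URLs containing them.
# All URLs and text here are invented for illustration.
page_directory = {
    "www.website.com/page1": "cheap hotel deals in berlin",
    "www.website.com/page2": "berlin travel guide",
}

index = {}
for url, text in page_directory.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

# A search looks the keyword up in the index, then pulls the matching
# cached pages out of the page directory for ranking.
results = [page_directory[u] for u in index.get("berlin", ())]
print(sorted(results))
```

In this simplification, the "index" entries really do just hold URLs pointing back into the cache, which matches the intuition in the question; the real system is of course far more involved.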
Technical SEO | reidsteven750
-
How to optimize for different Google search centers (google.de, google.ch)?
We all use the German language and .com domains for our sites. I rank well in google.com, but not so well in google.de and google.ch, where my competitors rank much better. I checked most of their outbound links but got little information. Do links from .de domains, or links from sites located in Germany, help rankings in a specific Google search center (google.de, google.ch)? Or are there other factors I missed? Please help.
Technical SEO | sunvary0
-
I am trying to figure out why a website is not getting fully indexed by google. Any ideas?
I am trying to figure out why a website is not getting fully indexed by Google. The website was built with GoDaddy's website designer, so maybe this is the problem. Originally, the internal links throughout the navigation pointed to "pages" within the site. I went in and changed all of these navigation links to point to the actual URLs throughout the site instead of relative links pointing to pages on the server. I thought this would solve the problem, because perhaps Google was not able to follow the original relative links. When I check how many pages are in the Google index, I still see the same number. What is going on? Should this website be rebuilt using more search-engine-friendly code like WordPress? Is there a simple fix that will enable Google to find all of this content created by GoDaddy's design software? I appreciate any help offered. Here is the site: http://www.securehomeusa.com/
Technical SEO | ULTRASEM0
-
CDN Being Crawled and Indexed by Google
I'm doing an SEO site audit, and I've discovered that the site uses a Content Delivery Network (CDN) that's being crawled and indexed by Google. Two sub-domains from the CDN are being crawled and indexed, and a small number of organic search visitors have come through them. So in a small number of cases, the CDN-based content is out-ranking the root domain. It's a huge duplicate content issue (tens of thousands of URLs being crawled) - what's the best way to prevent the crawling and indexing of a CDN like this? Exclude it via robots.txt? Additionally, the use of relative canonical tags (instead of absolute) appears to be contributing to this problem as well. As I understand it, these canonical tags are telling the search engines that each sub-domain is the "home" of the content/URL. Thanks! Scott
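On the relative-canonical point, here is a small sketch (with made-up hostnames and paths) of why the same relative canonical tag behaves differently depending on which host serves it. Resolving a relative canonical against the page URL keeps it inside whatever domain served the page, so the CDN copy ends up declaring itself the "home" of the content:

```python
# Sketch: a relative canonical resolves against the host that served it.
# Hostnames and paths below are hypothetical examples.
from urllib.parse import urljoin

canonical = "/products/widget"  # a relative canonical tag value

# Served from the root domain, the canonical stays on the root domain:
print(urljoin("https://www.example.com/products/widget", canonical))

# The identical tag served from a CDN sub-domain resolves to the CDN URL,
# telling search engines the CDN copy is canonical:
print(urljoin("https://cdn1.example.com/products/widget", canonical))
```

An absolute canonical (`https://www.example.com/products/widget`) sidesteps this entirely, since it resolves to the same target no matter which host serves the page.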
Technical SEO | Scott-Thomas0
-
Crawl issue
Hi, I have a problem with crawl stats. Crawls only return 3k pages, while my site has 27k pages indexed (mostly duplicate-content pages). Why such a low number of pages crawled? Any help is more than welcome. Dario PS: I have more campaigns in place - might that be the reason?
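One way to sanity-check the duplicate-content theory is to group URLs by a hash of their page text; a large cluster of URLs sharing one digest is exactly the kind of redundancy that can make a crawler fetch far fewer pages than are nominally indexed. A rough sketch (the URLs and bodies are invented):

```python
# Sketch: group URLs by a digest of their content to spot duplicates.
# URLs and page bodies below are made-up examples.
import hashlib

pages = {
    "/hotels?sort=price": "list of hotels ...",
    "/hotels?sort=name": "list of hotels ...",  # same body, different URL
    "/about": "about us",
}

seen = {}
for url, body in pages.items():
    digest = hashlib.sha256(body.encode()).hexdigest()
    seen.setdefault(digest, []).append(url)

# Any digest with more than one URL is a duplicate-content cluster.
duplicates = [urls for urls in seen.values() if len(urls) > 1]
print(duplicates)
```

In practice you would run this over real crawled HTML (ideally after stripping boilerplate), but even this crude version shows how 27k URLs can boil down to far fewer distinct pages.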
Technical SEO | Mrlocicero0
-
Will a "blog=example" parameter at the end of my URLs affect Google's crawling of them?
For example, I'm wondering if www.example.com/blog/blog-post is better than www.example.com/blog/blog-post?blog=example. I'm currently using the www.example.com/blog/blog-post?blog=example structure as our canonical page for content. I'm also wondering, if the parameter doesn't affect crawling, whether it would hurt rankings in any way. Thanks!
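To a crawler, the parameterized URL and the clean URL are simply two distinct addresses, which is why the choice of canonical matters. A quick standard-library sketch of stripping the query string to recover the clean form:

```python
# Sketch: the ?blog=example URL and the clean URL are distinct strings to
# a crawler; stripping the query string recovers the clean form.
from urllib.parse import urlparse, urlunparse

url = "http://www.example.com/blog/blog-post?blog=example"
parts = urlparse(url)
clean = urlunparse(parts._replace(query=""))
print(clean)  # http://www.example.com/blog/blog-post
```

If both forms are served, a canonical tag (or a redirect) pointing at one of them keeps the two addresses from competing with each other.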
Technical SEO | Intridea0
-
Our Development team is planning to make our website nearly 100% AJAX and JavaScript. My concern is crawlability or lack thereof. Their contention is that Google can read the pages using the new #! URL string. What do you recommend?
Discussion around AJAX implementations and if anybody has achieved high rankings with a full AJAX website or even a partial AJAX website.
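For context, the #! scheme the development team is referring to is Google's AJAX crawling scheme (since deprecated), which mapped hash-bang URLs onto `?_escaped_fragment_=` requests that the server had to answer with a static HTML snapshot. A rough sketch of that mapping, using a hypothetical URL:

```python
# Sketch of the (deprecated) AJAX crawling scheme's URL mapping:
# a #! URL is requested by the crawler as ?_escaped_fragment_=<encoded>,
# and the server must return a static HTML snapshot for it.
from urllib.parse import quote

def escaped_fragment(url: str) -> str:
    base, _, fragment = url.partition("#!")
    sep = "&" if "?" in base else "?"
    return base + sep + "_escaped_fragment_=" + quote(fragment, safe="")

print(escaped_fragment("http://www.example.com/#!/products/42"))
# http://www.example.com/?_escaped_fragment_=%2Fproducts%2F42
```

The catch is that the scheme only works if the server can actually render those snapshots, which is a significant build-out for a near-100% AJAX site.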
Technical SEO | DavidChase0
-
Google Sandboxing
I have a new site with a new domain that ranked well for the first week or so after it was indexed, then totally dropped off the SERPs. My question is: does Google sandboxing affect new sites on new domains that don't have any incoming links? The site dropped off before I began link building - from what I've read, unnatural link building is often the cause. Can you still be sandboxed without any link building? If so, are there things I can do to get out of the sandbox? Thanks folks, Jason
Technical SEO | OptioPublishing0