Why might Google be crawling via an old sitemap, when the new one has been submitted and verified?
-
We have recently relaunched Scoutzie.com and re-submitted our new sitemap to Google. When I look in Webmaster Tools, our new sitemap shows as submitted just fine, but at the same time Google is finding a lot of 404s when crawling the site. My understanding is that it is still crawling the old links, which no longer exist. How can I tell Google to refresh its index and stop looking at all the old links?
-
Yes, it should. However, as Alan mentioned below, if you still have links pointing to the 404 pages, Google will keep attempting to crawl them and will keep reporting those errors to you.
If you do have external links to those 404 pages, you can 301 redirect them to an appropriate page using .htaccess. This way you'll preserve the link value and also clear the Webmaster Tools errors.
If you don't have any links to them, then yes, Google will eventually stop trying to crawl them.
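A minimal .htaccess sketch of that 301 approach (the paths below are hypothetical; substitute the URLs from your old sitemap and their closest new equivalents):

```apache
# Redirect a single retired URL to its closest new equivalent.
Redirect 301 /old-portfolio /designers

# If a whole section moved, one pattern rule can cover all of its pages:
# anything under /old-section/ is sent to the matching path under /new-section/.
RedirectMatch 301 ^/old-section/(.*)$ /new-section/$1
```

Because these rules return a permanent (301) status, Google transfers the old URLs' link value to the targets and eventually drops the old URLs from the crawl-error report.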
-
It's very likely that we do. Given that I cannot track down 1,000+ links that now 404, will they eventually fall out by themselves, or do I have to tell Google that everything that 404s should be dropped from its index? Thanks!
-
What if I simply pushed the new sitemap over the old one? In other words, scoutzie.com/sitemap is the same link, except now it contains the new map. That should be okay, right?
-
You may still have links pointing to those 404 pages, either on your site or externally. If not, then eventually they will fall out of the index.
-
Hey scoutzie,
This is actually covered pretty well in Joe Robison's blog post on fixing Webmaster Tools crawl errors: http://moz.com/blog/how-to-fix-crawl-errors-in-google-webmaster-tools
I'll quote the related info:
"One frustrating thing that Google does is it will continually crawl old sitemaps that you have since deleted to check that the sitemap and URLs are in fact dead. If you have an old sitemap that you have removed from Webmaster Tools, and you don’t want being crawled, make sure you let that sitemap 404 and that you are not redirecting the sitemap to your current sitemap."
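In .htaccess terms (hypothetical paths again), that advice means: redirect the old content URLs, but write no rule at all for the old sitemap file itself, so that requests for it return 404:

```apache
# Old pages: 301 to their new homes.
RedirectMatch 301 ^/old-section/(.*)$ /new-section/$1

# Old sitemap: deliberately NO redirect rule. With the file deleted and no
# rule matching it, /old-sitemap.xml returns 404, which tells Google the old
# sitemap (and the URLs it listed) is gone for good. Redirecting it to
# /sitemap.xml would instead keep Google re-crawling the old one.
```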
Hope this helps, good luck!