Sitemap as Referrer in Crawl Error Report
-
I have just downloaded the SEOmoz crawl error report, and a number of pages are listed, all showing FALSE.
The only common denominator is the referrer: the sitemap.
I can't find anything wrong with the pages themselves. Should I be worried that they're appearing in the error report?
-
Thanks, Tom.
The sitemap is pointing to the correct pages, and when I visit the pages from the search engines, no problems arise.
I don't understand why these pages are listed in the crawl error report when I can't see any obvious issue.
-
Hi Christina,
If the referrer is the sitemap, it means the SEOmoz crawler was directed to that page by the sitemap you have submitted.
If you're getting 404 errors or access errors for certain pages, and those pages can only be reached via the sitemap, then it's a good idea to remove those URLs from the sitemap altogether. It doesn't make sense to list URLs in your sitemap that don't exist or have restricted access.
A cleaner sitemap will ultimately help in the long run. Hope this helps.
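One way to act on this advice is to script a check of every sitemap URL and flag anything that doesn't return a 200, since flagged entries are the candidates for removal. A minimal sketch in Python (the fetch_status callable is a placeholder here; in practice you'd issue a HEAD request per URL, e.g. with urllib.request):

```python
# Sketch: find sitemap entries that return errors, so they can be removed.
# Assumes a standard sitemap.xml using the sitemaps.org namespace.
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Extract all <loc> values from a sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

def broken_entries(xml_text, fetch_status):
    """Return URLs whose HTTP status is not 200 (candidates for removal).

    fetch_status is any callable mapping a URL to its status code.
    """
    return [u for u in sitemap_urls(xml_text) if fetch_status(u) != 200]
```

Anything this returns is a URL the crawler was sent to by the sitemap but couldn't fetch cleanly, which is exactly the pattern described in the question.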
Related Questions
-
Subdomain 403 error
Hi Everyone, A crawler from our SEO tool detects a 403 error on a link from our main domain to a couple of subdomains. However, these subdomains are perfectly accessible. What could be the problem? Is this error caused by the server, the crawlbot, or something else? I would love to hear your thoughts.
Technical SEO | WeAreDigital_BE
-
Why does Google crawl parameter URLs?
Hi SEO Masters, Google is indexing these parameter URLs:
1. xyz.com/f1/f2/page?jewelry_styles=6165-4188-4184-4192-4180-6109-4191-6110&mode=li_23&p=2&filterable_stone_shapes=4114
2. xyz.com/f1/f2/page?jewelry_styles=6165-4188-4184-4192-4180-4169-4195&mode=li_23&p=2&filterable_stone_shapes=4115&filterable_metal_types=4163
I have set the Google parameter handling like this:
jewelry_styles = Narrows, Let Googlebot decide
mode = None, Representative URL
p = Paginates, Let Googlebot decide
filterable_stone_shapes = Narrows, Let Googlebot decide
filterable_metal_types = Narrows, Let Googlebot decide
and the canonical for both pages is xyz.com/f1/f2/page?p=2. So can you suggest why Google has indexed all the related pages rather than just xyz.com/f1/f2/page?p=2? I have no issue with the first page, xyz.com/f1/f2/page (with any parameters); the canonical of the first page is working perfectly. Thanks
Technical SEO | Rajesh.Prajapati
-
Duplicate content and 404 errors
I apologize in advance, but I am an SEO novice and my understanding of code is very limited. Moz has issued a lot (several hundred) of duplicate content and 404 error flags on the ecommerce site my company takes care of. For the duplicate content, some of the pages it says are duplicates don't even seem similar to me. Additionally, a lot of them are static pages where we embed images of size charts that we use as popups on item pages. It says these issues are high priority, but how bad is this? Is this just an issue because, if pages have similar content, the engine spider won't know which one to index? Also, what is the best way to handle these URLs bringing back 404 errors? I should probably have a developer look at these issues, but I wanted to ask the extremely knowledgeable Moz community before I do 🙂
Technical SEO | AliMac26
-
Absurdly High Crawl Stats
Over the past month and a half, our crawl stats have been rising sharply. A few weeks ago, our crawl stats rose such that the pages crawled per day worked out to the entire site being crawled 6 times a day, with a corresponding rise in KB downloaded per day. Last week, the crawl rate jumped again, such that the site is being crawled roughly 30x a day. I'm not seeing any chatter out there about an algorithm change, and I've checked and double-checked the site for signs of duplicate content, changes in our backlink profile, or anything else. We haven't seen appreciable changes in our search volume either, in impressions or clicks. Any ideas what could be going on?
Technical SEO | Tyler-Brown
-
Crawl errors: which ones should I sort out?
Hi, I've just had my website updated to Joomla 3.0 and I have around 4,000 URLs not found. I have been told I need to redirect these, but I would just like to check on here to make sure I am doing the right thing and that the advice I have been given is correct. I have been told these errors are the reason for the drop in rankings. I need to know if I should redirect all of these 4,000 URLs or only the ones that are being linked to from outside of the site. I think about 3,000 of these have no links from outside the site, but if I do not redirect them all, then I am going to keep getting the error messages. Around 2,000 of these URL-not-found errors are from the last time we updated the site, which was a couple of years ago, and I thought they would have died off by now. Any advice on what I should do would be great.
Technical SEO | ClaireH-184886
-
Document.referrer: is that harmful to my website?
Someone (maybe my competitors) opened a subdomain on tumblr.com, like keywordxxx.tumblr.com, and used the following script to redirect to my website:

var s = document.referrer;
if (s.indexOf("google") > 0 || s.indexOf("bing") > 0 || s.indexOf("yahoo") > 0 || s.indexOf("aol") > 0) {
    self.location = 'mywebsiteurl';
}

So if anyone searches in Google and clicks on keywordxxx.tumblr.com, this script will direct the user to my website. I have found many cases like this. Is that harmful? How can I avoid it?
Technical SEO | sunvary
-
Increase in authorization permission errors after site switch
We launched our new site 2 days ago. Since the site was down for 12 hours for maintenance, Google Webmaster Tools has been showing this error. Google hasn't crawled since; it's been 36 hours. Do we need to do anything? Google crawled close to a million pages before, and I am wondering if this will affect anything.
Technical SEO | tpt.com
-
Domain restructure, sitemaps and indexing
I've got a handcoded site with around 1,500 unique articles and a handcoded sitemap. Very old school. The URL structure is a bit of a mess, so to make things easier for a developer who'll be making the site database-driven, I thought I'd recategorise the content: same content, but with a new URL structure (I thought I'd juice up the URLs for SEO purposes while I was at it). To this end, I took categories like:

/body/amazing-big-shoes/
/style/red-boots/
/technology/cyber-boots/

and rehoused all the content like so, doing it all manually with FTP:

/boots/amazing-boots/
/boots/red-boots/
/boots/cyber-boots/

I placed 301 redirects in the .htaccess file like so:

redirect 301 /body/amazing-boots/ http://www.site.co.uk/boots/amazing-boots/

(not doing redirects for each article, just for categories, which seemed to make the articles redirect nicely.) Then I went into sitemap.xml and manually overwrote all the entries to reflect the new URL structure, keeping the old dates of the original entries, like so:

<url>
  <loc>http://www.site.co.uk/boots/amazing-boots/index.php</loc>
  <lastmod>2008-07-08</lastmod>
  <changefreq>monthly</changefreq>
  <priority>0.5</priority>
</url>

And I resubmitted the sitemap to Google Webmasters. This was done 4 days ago. Webmaster Tools said that the 1,400 of 1,500 articles indexed had dropped to 860, and today it's climbed to 939. Did I adopt the correct procedure? Am I going about things the right way? Given a little time, can I expect Google to re-index the new pages nicely? I appreciate I've made a lot of changes in one fell swoop, which could be a bit of a no-no...? PS: Apologies if this question appears twice on Q&A; hopefully I haven't double-posted.
Technical SEO | magdaknight
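For a category remap like the one described in this question, the per-category redirect rules can be generated rather than hand-typed. A minimal Python sketch, assuming (as in the question's own example) that the article slug stays the same and only the category prefix changes; the prefix map and domain are taken from the question:

```python
# Sketch: generate one .htaccess "redirect 301" rule per old path for the
# category restructure described above. Assumes slugs are unchanged; the
# domain default is the poster's placeholder.
PREFIX_MAP = {
    "/body/": "/boots/",
    "/style/": "/boots/",
    "/technology/": "/boots/",
}

def redirect_rule(old_path, domain="http://www.site.co.uk"):
    """Return an Apache mod_alias 'redirect 301' line, or None if no prefix matches."""
    for old, new in PREFIX_MAP.items():
        if old_path.startswith(old):
            return "redirect 301 {} {}{}{}".format(
                old_path, domain, new, old_path[len(old):]
            )
    return None
```

Running this over the old sitemap's paths reproduces lines like the .htaccess example in the question, and makes it easy to spot old URLs that have no mapping at all.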