XML and Disallow
-
I was just curious about the potential side effects for a client who is essentially using a catch-all approach: generating their XML sitemap with a spider, and then disallowing some of the directories included in that sitemap via robots.txt.
i.e.
XML contains 500 URLs
50 URLs contain /dirw/
I don't want anything with /dirw/ indexed because those pages are fairly useless: no content, just one image. They use the robots.txt file with "Disallow: /dirw/".
Let's say they do this for maybe three separate directories, making up roughly 30% of the URLs in the XML sitemap.
I am advising that they redo the sitemaps, since that shouldn't be too difficult, but I am curious about the actual ramifications of this, if there are any, beyond "it isn't a clear and concise signal to the search engines and therefore should be fixed."
Thanks!
-
Hi Thomas,
I don't think there is technically a problem with adding URLs to a sitemap and then blocking some of them with robots.txt.
I wouldn't do it, however, and I would give the same advice you did: regenerate the sitemap without this content. The main reason is that it goes against the core goals of a sitemap: helping bots crawl your site and providing valuable metadata (https://support.google.com/webmasters/answer/156184?hl=en). Another consideration is that Google reports the percentage of each sitemap's URLs that are indexed; from that perspective, URLs that are blocked from crawling have no place in a sitemap. Normally, Webmaster Tools will also generate warnings to let you know there are issues with the sitemap.
Taking it one step further, Google could consider you a bit of a sloppy webmaster if you keep these URLs in the sitemap. I'm not sure that's actually the case, but for something that can be corrected so easily, I wouldn't take the risk (even a very minor one).
There are crawlers (like Screaming Frog) that can generate sitemaps while respecting the directives in robots.txt; in my opinion, that would be a better option.
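As a quick sanity check before regenerating, you can also find out which sitemap URLs a given robots.txt would block. This is a minimal sketch using Python's standard-library urllib.robotparser; the rules and URLs are hypothetical, mirroring the scenario above:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules matching the scenario above
robots_txt = """User-agent: *
Disallow: /dirw/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Hypothetical URLs as they might appear in the sitemap
sitemap_urls = [
    "https://example.com/page1.html",
    "https://example.com/dirw/image-page.html",
    "https://example.com/page2.html",
]

# Split the sitemap into crawlable URLs and blocked ones
allowed = [u for u in sitemap_urls if parser.can_fetch("*", u)]
blocked = [u for u in sitemap_urls if not parser.can_fetch("*", u)]

print(blocked)  # these are the URLs to drop before regenerating the sitemap
```

Anything in the `blocked` list is a candidate for removal from the sitemap rather than for a robots.txt workaround.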
rgds,
Dirk
-
For syntax I think you'll want:
User-agent: *
Disallow: /dirw/

If the content of /dirw/ isn't worthwhile to the engines, then it should be fine to disallow it. It's important to note, though, that Google asks that CSS and JavaScript not be disallowed. Run the site through their PageSpeed tool to see how this setup currently affects that. Cheers!
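For completeness, a regenerated sitemap would then simply omit the disallowed entries. A minimal fragment (URLs hypothetical) might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Only URLs that robots.txt allows crawling -->
  <url>
    <loc>https://example.com/page1.html</loc>
  </url>
  <url>
    <loc>https://example.com/page2.html</loc>
  </url>
  <!-- No /dirw/ URLs listed -->
</urlset>
```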