Will an XML sitemap override a robots.txt?
-
I have a client whose robots.txt file is blocking an entire subdomain, entirely by accident. Not realizing the robots.txt error, their original solution was to submit an XML sitemap to get their pages indexed.
I did not think this tactic would work, as the robots.txt should take precedence over the XML sitemap. But it worked... I have no explanation as to how or why.
Does anyone have an answer to this? Or any experience with a website that has had a clear Disallow: / for months, yet somehow has pages in the index?
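For reference, a site-wide block like the one in question looks like this in robots.txt (an illustration of the rule being discussed, not their actual file):
User-agent: *
Disallow: /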
-
The robots.txt file will stop Google from showing further information on the disallowed pages, but it doesn't prevent indexation.
They're still indexed (that's why you're seeing them), but with no meta description or text taken from the page, because Google wasn't allowed to retrieve more information.
If you want them to start showing information, you'll just need to remove that rule from the robots.txt, and soon you'll see those pages' information appear. But if you want them out of the index, add the noindex meta tag to each page (the only directive that will actually prevent indexation) and then use GWT to remove them from the index.
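For clarity, the noindex directive mentioned above is a single tag in each page's head section, along these lines:
<meta name="robots" content="noindex">
One caveat worth adding: Google has to be able to crawl a page to see that tag, so the robots.txt block needs to be lifted first or the noindex will never be read.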
-
I assumed the same thing, but I performed a site: search while they were still prospects, and they had 1 result present, with the explanation "A description for this result is not available because of this site's robots.txt – learn more".
They uploaded an XML sitemap before I could tell them to remove the robots.txt, and 1 week later the entire site is now in the index.
I have used robots.txt to properly block websites before; it usually takes 2-3 for all results to drop out of the index, so I don't know how that could explain it either.
-
I agree; the only way I could think this would work would be if the robots.txt file was on the root domain. Also, check Webmaster Tools: it will tell you under the Sitemaps section about "Error: URL was blocked by robots.txt".
One thing to remember is that robots.txt is technically a suggestion asking search engines not to crawl your site. They can choose to ignore it, though personally I don't know of any cases in which this has happened.
-
An XML sitemap shouldn't override robots.txt. If you have Google Webmaster Tools set up, you will see warnings on the Sitemaps page that the pages being submitted are blocked by robots.txt.
Now, robots.txt does not prevent indexation, just crawling. So if the pages were indexed before they implemented robots.txt, they may continue to be indexed. Google will also display just the URL for pages that it's discovered, but can't crawl because of robots.txt.
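To make the crawl-versus-index distinction concrete, here is a minimal sketch (assuming Python's standard-library robotparser and a hypothetical blocked subdomain) of the check a compliant crawler performs before fetching a page; that decision is separate from whether the URL can appear in the index:

from urllib.robotparser import RobotFileParser

# Hypothetical subdomain whose robots.txt contains "Disallow: /"
parser = RobotFileParser("https://sub.example.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

# A compliant crawler runs this check before fetching any page...
print(parser.can_fetch("Googlebot", "https://sub.example.com/some-page"))  # False under Disallow: /

# ...but a URL discovered elsewhere (links, an XML sitemap) can still be
# indexed as a bare URL, because indexing is a separate decision from crawling.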