Sitemap_index.xml = noindex,follow
-
I was running a report with the Screaming Frog SEO Spider and I saw:
(Tab) Directives > Noindex:
https://compleetverkleed.nl/sitemap_index.xml/ has X-Robots-Tag 1 set to noindex,follow
Does this mean my sitemap isn't indexed?
If anyone has more tips for our website, feel free to share some suggestions (the website is far from complete).
-
Great, thanks!
-
Hi There
I don't think you need to worry about the sitemap being indexed or not - it's an XML sitemap, not an HTML page users will need to find. It's accessible to Google, and they will use it to crawl the site. Have you submitted the XML sitemap to webmaster tools? If so, make sure it's free of errors and you should be all set!
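If you want to double-check what Screaming Frog is reporting, the directive lives in the HTTP response headers (visible with, e.g., `curl -I` against the sitemap URL). A minimal sketch of the parsing side only, assuming you have already captured the X-Robots-Tag header value — the function names here are my own, not from any tool:

```python
def parse_x_robots_tag(header_value):
    """Split an X-Robots-Tag header value into individual directives."""
    return [d.strip().lower() for d in header_value.split(",") if d.strip()]

def is_noindexed(header_value):
    """True if the header asks search engines not to index the URL."""
    return "noindex" in parse_x_robots_tag(header_value)

# The case from the question: noindexed, but harmless for an XML sitemap,
# since Google reads the file for crawling rather than indexing it as a page.
print(is_noindexed("noindex,follow"))   # True
print(is_noindexed("index, follow"))    # False
```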
-
Hi Patrick,
Thanks for the support. I'm only wondering how to fix this problem on a WordPress website:
https://www.compleetverkleed.nl/sitemap_index.xml/
The link was set in our footer, pointing to the sitemap, and I removed the trailing "/". But in Screaming Frog I still see:
https://compleetverkleed.nl/sitemap_index.xml has X-Robots-Tag 1 set to noindex,follow
Where can I fix this? Like this? (Google Webmaster Tools copy)
| # | Sitemap | Type | Processed | Issues | Items submitted | Indexed |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | /sitemap_index.xml | Sitemap index | 19 May 2015 | - | Pending | Pending |
Hi there
This doesn't appear to be your sitemap. Your sitemap lives at:
https://www.compleetverkleed.nl/sitemap_index.xml
However, this works as well:
https://www.compleetverkleed.nl/sitemap_index.xml/
There should not be a trailing slash at the end of this URL, and this needs to be fixed as soon as possible.
I would also make sure that your non-www sitemap redirects to https://www.compleetverkleed.nl/sitemap_index.xml.
This should clear up your problem. Make sure this URL reflects in your Google and Bing Webmaster Tools.
Let me know if this helps - good luck!
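One common way to get the non-www-to-www redirect mentioned above is a 301 rule at the server level. A sketch for Apache's .htaccess, assuming your host runs Apache with mod_rewrite enabled (typical for WordPress, but verify before applying):

```apacheconf
# Hypothetical .htaccess rules: 301-redirect all non-www requests
# (including /sitemap_index.xml) to the www host.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^compleetverkleed\.nl$ [NC]
RewriteRule ^(.*)$ https://www.compleetverkleed.nl/$1 [R=301,L]
```

The `R=301` flag makes the redirect permanent, so search engines consolidate signals onto the www version.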
Related Questions
-
Sitemap.xml strategy for site with thousands of pages
I have a client that has a HUGE website with thousands of product pages. We don't currently have a sitemap.xml because it would take so much processing power to generate one. I have thought about creating a sitemap for just the key pages on the website, but I didn't want to hurt the SEO of the thousands of product pages. If you have a sitemap.xml that only lists some of the pages on your site, will it negatively impact the other pages that Google has indexed but that are not listed in the sitemap.xml?
Technical SEO | jerrico10
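For what it's worth, generating a sitemap for thousands of URLs doesn't have to be resource-intensive: the sitemaps.org protocol caps each file at 50,000 URLs, and larger sets are split across files referenced by a sitemap index. A minimal sketch of the single-file case (the URLs are illustrative):

```python
from xml.sax.saxutils import escape

MAX_URLS_PER_SITEMAP = 50_000  # per-file limit in the sitemaps.org protocol

def build_sitemap(urls):
    """Render one <urlset> document for up to 50,000 URLs."""
    if len(urls) > MAX_URLS_PER_SITEMAP:
        raise ValueError("too many URLs: split across files and use a sitemap index")
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

print(build_sitemap(["https://example.com/a", "https://example.com/b"]))
```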
XML sitemap and rel alternate hreflang requirements for Google Shopping
Our company implemented Google Shopping for our site for multiple countries, currencies, and languages. Every combination of language and country is accessible via a URL path, for all site pages, not just the pages with products for sale. I was not part of the project. We support 18 languages and 14 shop countries. When the project was finished, we had a total of 240 language/country combinations listed in our rel alternate hreflang tags for every page, 240 language/country combinations in our XML sitemap for each page, and unique canonicals for every one of these pages. My concern is duplicate content. I can also see that odd language/country URL combinations (like a country paired with a language spoken by a very low percentage of people in that country) are being crawled, indexed, and appearing in SERPs. This uses up my crawl budget on pages I don't care about. I don't think it is wise to disallow URLs in robots.txt that we are simultaneously listing in the XML sitemap. Is it true that Google Shopping requires an XML sitemap and rel alternate hreflang entries for every language/country combination?
Technical SEO | awilliams_kingston0
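On the requirements part of the question: Google accepts hreflang annotations either in on-page markup or in the XML sitemap, so listing them in both places is redundant rather than required. A sketch of the sitemap-based form for one page in two locales (the URLs are illustrative, and the `xmlns:xhtml` declaration on `<urlset>` is required):

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/en-gb/product/</loc>
    <!-- Each URL lists all alternates, including itself -->
    <xhtml:link rel="alternate" hreflang="en-gb"
                href="https://example.com/en-gb/product/"/>
    <xhtml:link rel="alternate" hreflang="nl-nl"
                href="https://example.com/nl-nl/product/"/>
  </url>
</urlset>
```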
Sitemap & noindex inconsistency?
Hey Moz Community! On the CMS in question, the sitemap and robots.txt file are locked down and can't be edited or modified whatsoever. If I noindex a page but it is still in the XML sitemap, will it get indexed? Thoughts, comments, and experience greatly appreciated and welcome.
Technical SEO | paul-bold0
Noindex Success?
Has anyone had success implementing noindex/follow on pages of a site that has been hit by a Panda penalty? Our site has a lot of duplicate content in its product descriptions, which we had permission to use from our distributor (who is also online). We went ahead and applied noindex/follow to those pages in the hope that Google will focus on the products we carry that do have original descriptions (about 1/3 of our products). We didn't want to just remove those products, since they are actually beneficial to our customers. Most of the duplicated content is in the form of ingredients lists.
Technical SEO | dustyabe0
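For reference, the noindex/follow setup described in the question comes down to a single robots meta tag in each affected page's `<head>`; a minimal sketch:

```html
<!-- Ask search engines to drop this page from the index
     while still following (and passing equity through) its links -->
<meta name="robots" content="noindex,follow">
```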
Should I no follow all external links?
I have worked with a few different SEO firms lately, and a lot of them have recommended "no-following" all external links on the sites I was working on. On the one hand, this traps all the link equity/PageRank on the site; on the other, I would think this practice is frowned upon by Google. What are some opinions on this?
Technical SEO | MarloSchneider0
Timely use of robots.txt and meta noindex
Hi, I have been checking every possible resource on content removal, but I am still unsure how to remove already-indexed content. When I use robots.txt alone, the URLs remain in the index, although no crawl budget is wasted on them; still, having e.g. 100,000+ completely identical login pages among the omitted results can't mean anything good. When I use meta noindex alone, I keep my index clean, but I also keep Googlebot busy crawling these no-value pages. When I use robots.txt and meta noindex together on existing content, I am asking Google to ignore my content while at the same time preventing it from ever crawling the pages and seeing the noindex tag. Robots.txt plus the URL removal tool is still not a good solution either, as I have failed to remove directories this way; it seems only exact URLs can be removed like that. I need a clear solution that solves both issues (index and crawling). What I am trying now is the following: I remove these directories (one at a time, to test the theory) from the robots.txt file, and at the same time I add the meta noindex tag to all the pages within the directory. The number of indexed pages should start decreasing (while useless page crawling increases), and once the number of indexed pages is low or zero, I would put the directory back in robots.txt and keep the noindex on all of the pages within it. Can this work the way I imagine, or do you have a better way of doing it? Thank you in advance for all your help.
Technical SEO | Dilbak0
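The crawl side of the plan above can be sanity-checked with Python's standard robots.txt parser. A minimal sketch, assuming `/login/` is the directory just removed from robots.txt (so its meta noindex can finally be seen) while `/admin/` is still blocked — the directory names are hypothetical:

```python
from urllib.robotparser import RobotFileParser

rules = RobotFileParser()
# Simulate the robots.txt state *after* removing one directory from it
rules.parse([
    "User-agent: *",
    "Disallow: /admin/",  # still blocked: a noindex tag here is never seen
])

# /login/ is crawlable again, so Googlebot can now read its meta noindex
print(rules.can_fetch("Googlebot", "https://example.com/login/page1"))  # True
print(rules.can_fetch("Googlebot", "https://example.com/admin/page1"))  # False
```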
Google SERPs and NoIndex directives.
We have pages that have been added to robots.txt as URL patterns in Disallow rules. We also have meta noindex tags on the pages themselves. But we are still finding the pages in the index. I don't think they rank highly, and they don't have any descriptions, previews, or cached copies. Why does Google show these pages? Could it be due to internal or external linking?
Technical SEO | gaganc0
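A likely cause of the symptom described in that question is the robots.txt Disallow itself: if Googlebot can't crawl the page, it never sees the meta noindex, yet can still index the bare URL from links pointing at it. Once crawling is allowed again, the noindex can also be delivered as a header, which works for non-HTML files too. A sketch for Apache, assuming mod_headers is available (the filename is hypothetical):

```apacheconf
# Hypothetical: once robots.txt no longer blocks the path, serve the
# noindex directive as a response header (works even for PDFs etc.)
<Files "internal-report.pdf">
  Header set X-Robots-Tag "noindex"
</Files>
```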
How to add "no follow" to feeds
Hey all, I just had a crawl test done on my site (created using WordPress) and received a ton of missing-meta-description errors to fix. The odd thing is, I use the "All in One" SEO tool, and the actual pages and posts on the site do have meta descriptions; however, I noticed that an RSS feed is automatically generated for every post, and this feed is the culprit without a meta description. I am totally clueless about how to resolve these errors, as I haven't installed any WP plugins that generate feeds automatically. Has anyone encountered this problem before, or does anyone know how to fix it? The site url is http:// GovernmentGrantsAustralia . org (I have left spaces above to avoid being a link dropper 🙂). Would really appreciate it if anyone can help! Thanks a million, Jus
Technical SEO | justin990