I have two robots.txt files, one for the www version and one for the non-www version. Will that be a problem?
-
There are two robots.txt files: one for the www version and another for the non-www version, even though I have moved the site to the non-www version.
-
It won't affect your SEO; you just don't need the one on the www version.
-
Hi ramb,
Short answer: no, it won't affect your ability to rank in Google, unless both versions (www and non-www) end up competing for the same search term because neither is blocked in its corresponding robots.txt file.
If you can, set up a redirect rule so that everything on the www version goes to the non-www version.
What puzzles me is why you aren't redirecting the www version completely to the non-www version. Two possibilities come to mind:
- You can't redirect the whole www version due to some app or technical need. In this case, both versions, if accessible to Google, will be treated as different sites, so you must make sure that each robots.txt file is correct for its host.
- You have a separate website with content that differs from the non-www version (this usually happens with subdomains serving different page types, such as products.abc.com and categories.abc.com). In this case, be sure you know what you want blocked on each host and keep each robots.txt file at the root of its own subdomain.
Keep in mind that robots.txt only controls where you don't want Googlebot to go on the public version of your website. When a page or group of pages is blocked in robots.txt, Google stops crawling them and can no longer tell whether they have what it takes to rank for a given search term. Those pages may rank lower, and users will see a note in the search results, leading to a lower CTR.
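If you want to verify what a given robots.txt actually blocks, Python's standard library ships a basic parser. A minimal sketch, assuming abc.com stands in for your own host; note this parser implements the original robots.txt spec, without Google's wildcard extensions:

```python
from urllib import robotparser

# abc.com is a placeholder -- point this at your own host.
rp = robotparser.RobotFileParser("https://abc.com/robots.txt")
rp.read()  # fetch and parse the live file

for url in ("https://abc.com/", "https://abc.com/private/page.html"):
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "blocked"
    print(url, verdict)
```

Running this once per host (www and non-www) quickly shows whether the two files agree.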
Hope it helps.
Best luck.
Gaston
-
Are you redirecting everything on www to non-www? If so, you don't really need a robots.txt to be served for the www subdomain. Google will ignore the original robots.txt file if the request for it returns a 301 anyway.
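One way to confirm what the www host actually serves is to request the file without following redirects. A rough sketch using only the Python standard library, with www.abc.com as a placeholder:

```python
import http.client

# www.abc.com is a placeholder for your www host.
conn = http.client.HTTPSConnection("www.abc.com")
conn.request("GET", "/robots.txt")
resp = conn.getresponse()  # http.client does not follow redirects itself

# A 301 plus a Location header pointing at the non-www file means
# crawlers will follow the redirect and use the non-www robots.txt.
print(resp.status, resp.getheader("Location"))
```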
-
Hi Gaston,
Thank you for your response. Currently, the www version of the site is redirected to the non-www version, which is the primary (or root) domain.
But the problem is, I have two robots.txt files resolving for the same site, i.e. the same robots.txt file loads on both the www and non-www versions (for example, https://www.abc.com/robots.txt and https://abc.com/robots.txt).
Does it affect my site's SEO?
Should I redirect the www version of the file to the non-www version?
Your feedback will be highly appreciated. Thank you,
R.
-
Hi ramb,
It's totally fine to have different robots.txt files for different subdomains.
That said, http://domain.com and http://www.domain.com are different subdomains; consider the non-www one the full root domain. In case it is needed, here is Google's official resource about robots.txt: Learn about robots.txt files - Search Console Help.
Hope it helps.
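To make the "different subdomains" point concrete, a minimal sketch (domain.com is a placeholder): each host maps to its own robots.txt URL, and crawlers fetch and apply the two files independently.

```python
from urllib.parse import urlsplit

# domain.com is a placeholder; every host gets its own robots.txt.
for page in ("http://domain.com/some/page", "http://www.domain.com/some/page"):
    host = urlsplit(page).netloc
    print(f"robots.txt for {host}: http://{host}/robots.txt")
```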
Best luck.
Gaston
Related Questions
-
Solved: Should I consolidate my "www" and "non-www" pages?
My page rank for www and non-www is the same. In one keyword instance, my www version performs SO much better. I want to consolidate to one or the other. My question is whether all these issues would ultimately resolve to my chosen consolidated domain (www or non-www) regardless of which one I choose, or whether it would be smart to choose the one where I am already ranking high for this significant keyword phrase. Thank you in advance for your help.
Technical SEO | meditationbunny
-
Robots.txt in subfolders and hreflang issues
A client recently rolled out their UK business to the US. They decided to deploy with two WordPress installations:
UK site - https://www.clientname.com/uk/ - robots.txt location: https://www.clientname.com/uk/robots.txt
US site - https://www.clientname.com/us/ - robots.txt location: https://www.clientname.com/us/robots.txt
We've had various issues with /us/ pages being indexed in Google UK, and /uk/ pages being indexed in Google US. They have hreflang tags across all pages. We changed the x-default page to .com two weeks ago (we've tried both /uk/ and /us/ previously). Search Console says there are no hreflang tags at all. Additionally, we have a robots.txt file on each site which links to the corresponding sitemap files, but when viewing the robots.txt tester in Search Console, each property shows the robots.txt file for https://www.clientname.com only, even though when you actually navigate to this URL (https://www.clientname.com/robots.txt) you get redirected to either https://www.clientname.com/uk/robots.txt or https://www.clientname.com/us/robots.txt depending on your location. Any suggestions how we can remove UK listings from Google US and vice versa?
Technical SEO | lauralou82
-
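One detail worth noting here: crawlers only request robots.txt at the root of a host, so files sitting at /uk/robots.txt or /us/robots.txt are never consulted directly, which is consistent with the tester only ever showing the root file. A minimal sketch of how the effective robots.txt URL is derived for any page (the URL is just an example from the question):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Crawlers only fetch robots.txt at the host root; a file at
    /uk/robots.txt is not a standard robots.txt location."""
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("https://www.clientname.com/uk/some-page/"))
# -> https://www.clientname.com/robots.txt
```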
Should I block Map pages with robots.txt?
Hello, I have a website that was started in 1999. On the website I have map pages for each of the offices listed on my site, of which there are about 120. Each of the 120 maps is on its own separate HTML page, with no content on the page other than the map. I know all of the offices love having the map pages, so I don't want to remove them. So, my question is: would these pages with no real content be hurting the rankings of the other pages on our site, and should I therefore block them with my robots.txt? Would I also have to remove these pages (in Webmaster Tools?) from Google for blocking by robots.txt to really work? I appreciate your feedback, thanks!
Technical SEO | imaginex
-
Adding multi-language sitemaps to robots.txt
I am working on a revamped multi-language site that has moved to Magento. Each language runs off the core coding, so there are no sub-directories per language. The developer has created sitemaps, which have been uploaded to their respective GWT accounts. They have placed the sitemaps in new directories such as:
/sitemap/uk/sitemap.xml
/sitemap/de/sitemap.xml
I want to add the sitemaps to the robots.txt but can't figure out how to do it. Also, should they have placed the sitemaps in a single location with the file name identifying each language:
/sitemap/uk-sitemap.xml
/sitemap/de-sitemap.xml
What is the cleanest way of handling these sitemaps, and can/should I get them into robots.txt?
Technical SEO | MickEdwards
-
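For what it's worth, the Sitemap directive in robots.txt takes absolute URLs and may appear anywhere in the file, so a single root robots.txt can list sitemaps living in per-language directories. A minimal sketch that just assembles such a file (example.com and the paths are placeholders):

```python
# example.com and these paths are placeholders.
sitemaps = [
    "https://www.example.com/sitemap/uk/sitemap.xml",
    "https://www.example.com/sitemap/de/sitemap.xml",
]

# Sitemap lines take absolute URLs and can sit anywhere in robots.txt.
lines = ["User-agent: *", "Disallow:"] + [f"Sitemap: {url}" for url in sitemaps]
print("\n".join(lines))
```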
Determining When to Break a Page Into Multiple Pages?
Suppose you have a page on your site that is a couple thousand words long. How would you determine when to split the page into two, and are there any SEO advantages to doing this, like being more focused on a specific topic? I noticed the Beginner's Guide to SEO is split into several pages, although it would concentrate the link juice if it were all on one page. Suppose you have a lot of comments: is it better to move comments to a second page at a certain point? Sometimes the comments are not super focused on the topic of the page compared to the main text.
Technical SEO | ProjectLabs
-
Internal search : rel=canonical vs noindex vs robots.txt
Hi everyone, I have a website with a lot of internal search results pages indexed. I'm not asking if they should be indexed or not; I know they should not, according to Google's guidelines, and they create a bunch of duplicate pages, so I want to solve this problem. The thing is, if I noindex them, the site is going to lose a non-negligible chunk of traffic: nearly 13% according to Google Analytics! I thought of blocking them in robots.txt. This solution would not keep them out of the index, but the pages appearing in Google SERPs would then look empty (no title, no description), so their CTR would plummet and I would lose a bit of traffic too... The last idea I had was to use a rel=canonical tag pointing to the original search page (which is empty, without results), but it would probably have the same effect as noindexing them, wouldn't it? (I've never tried, so I'm not sure.) Of course I did some research on the subject, but each of my findings recommended only one of the 3 methods! One even recommended noindex + robots.txt block, which is pointless, because the block would keep Google from ever seeing the noindex... Is there somebody who can tell me which option is the best to keep this traffic? Thanks a million
Technical SEO | JohannCR
-
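If the eventual choice is noindex without a robots.txt block, one common pattern is sending the directive as an HTTP response header rather than editing templates. A hypothetical sketch assuming a Flask app whose internal search lives under /search (the framework and path are assumptions, not from the question):

```python
from flask import Flask, request

app = Flask(__name__)

@app.after_request
def noindex_internal_search(response):
    # Hypothetical path -- adjust to wherever internal search results live.
    # "noindex, follow" drops the pages from the index while still letting
    # crawlers reach them and follow their links.
    if request.path.startswith("/search"):
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response
```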
Can I Disallow Faceted Nav URLs - Robots.txt
I have been disallowing /*?, so I know that works without affecting crawling. I am wondering if I can disallow the faceted nav URLs, i.e.:
Disallow: /category.html/*?
Disallow: /category2.html/*?
Disallow: /category3.html/*?
to prevent the price-faceted URLs from being cached, such as:
/category.html?price=1%2C1000
and
/category.html?price=1%2C1000&product_material=88
Thanks!
Technical SEO | tylerfraser
-
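Major crawlers like Googlebot treat * as a wildcard and $ as an end anchor in these rules, matching from the start of the path. A simplified sketch of that matching (not an official implementation) to test which URLs a pattern would catch:

```python
import re

def robots_pattern_to_regex(pattern: str) -> "re.Pattern[str]":
    # Naive translation: '*' matches any run of characters and '$'
    # anchors the end. A simplified sketch of wildcard matching.
    parts = []
    for ch in pattern:
        if ch == "*":
            parts.append(".*")
        elif ch == "$":
            parts.append("$")
        else:
            parts.append(re.escape(ch))
    return re.compile("".join(parts))

path = "/category.html?price=1%2C1000"
for pattern in ("/category.html/*?", "/category.html*?", "/*?"):
    rule = robots_pattern_to_regex(pattern)
    print(pattern, "matches" if rule.match(path) else "does not match", path)
```

Note that with the slash before the *, /category.html/*? does not match /category.html?price=..., so the wildcard likely belongs directly after .html.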
Should I set up a disallow in the robots.txt for catalog search results?
When the crawl diagnostics came back for my site, they showed around 3,000 pages of duplicate content, almost all of them catalog search results pages. I also did a site: search on Google, and they have most of the results pages in their index too. I think I should just disallow the bots in the /catalogsearch/ subfolder, but I'm not sure if this will have any negative effect?
Technical SEO | JordanJudson
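A prefix rule like Disallow: /catalogsearch/ needs no wildcards, so Python's standard-library parser can sanity-check it offline before deployment. A minimal sketch (the sample paths are placeholders):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /catalogsearch/
""".splitlines())

# Sample paths are placeholders for real URLs on the site.
for path in ("/catalogsearch/result/?q=shoes", "/category/shoes.html"):
    print(path, "blocked" if not rp.can_fetch("*", path) else "allowed")
```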