I have two robots.txt files, one for the www version and one for the non-www version. Will that be a problem?
-
There are two robots.txt files: one for the www version and another for the non-www version, though I have moved to the non-www version.
-
It won't affect your SEO; you just don't need the one for the www version.
-
Hi ramb,
Short answer: no, it won't affect your ability to rank in Google, unless both sites (the non-www and www versions) compete for the same search term and one of them isn't blocked in the corresponding robots.txt file.
If you can, make sure you have a redirection rule so that everything on the www version goes to the non-www version (there's a sketch of such a rule after the list below).
It puzzles me that you aren't redirecting the whole www version to the non-www one.
Two possibilities come to mind:
1. You can't redirect the whole www version due to some app or technical need. In this case, both versions, if accessible to Google, will be treated as different sites, so you must be sure that each robots.txt file is correct for its subdomain.
2. You have a separate website that contains different content from the main version (this usually happens with subdomains serving different page types, such as products.abc.com and categories.abc.com). In this case, be sure you know what you want blocked and keep each robots.txt file on its own subdomain.
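For reference, a site-wide redirect like the one suggested above might look like this in an Apache .htaccess file (a minimal sketch, assuming Apache with mod_rewrite enabled; example.com is a placeholder, not your actual domain):

    # Sketch: 301-redirect every www request, including /robots.txt,
    # to the non-www host. Assumes mod_rewrite; example.com is a placeholder.
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
    RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]

Equivalent rules exist for nginx and most other servers; the key point is a 301 (permanent) redirect that covers every path.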
Keep in mind that the robots.txt file only controls which parts of the public version of your website you don't want Googlebot to access. When a page or group of pages is blocked in robots.txt, Google won't access them anymore, and thus can't tell whether those pages have what it takes to rank for a given search term. They might rank lower, and users will see a note in the search results, leading to a lower CTR.
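To make that concrete, here is a minimal robots.txt sketch that blocks a hypothetical /private/ section for all crawlers while leaving the rest of the site open (the path is purely an illustration, not something from your site):

    # Hypothetical example: /private/ stands in for whatever you want blocked.
    User-agent: *
    Disallow: /private/

Anything not matched by a Disallow rule remains crawlable.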
Hope it helps.
Best luck.
Gaston wrote: "You can't redirect the whole www version due to some app or technical need."
-
Are you redirecting everything on www to non-www? If so, you don't really need a robots.txt to be served for the www subdomain; if Google gets a 301 for it, it will just follow the redirect and use the destination file anyway.
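If you want to verify that yourself, a short standard-library Python script can show whether the www copy of robots.txt answers with a 301 (a hedged sketch; www.example.com is a placeholder for your own www hostname):

    # Sketch: check whether the www robots.txt is redirected rather than served.
    # www.example.com is a placeholder hostname, not from this thread.
    import urllib.error
    import urllib.request

    class NoRedirect(urllib.request.HTTPRedirectHandler):
        # Stop urllib from silently following redirects so the 301 is visible.
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None

    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open("https://www.example.com/robots.txt")
        print(resp.status)  # 200 means the www host serves its own file
    except urllib.error.HTTPError as e:
        # A 301/302 ends up here because we refused to follow it.
        print(e.code, e.headers.get("Location"))

A 301 with a Location header pointing at the non-www file is exactly the behaviour described above.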
-
Hi Gaston,
Thank you for your response. Currently, the www version of the site is redirected to the non-www version, which is the primary (root) domain.
But the problem is that I have two robots.txt files running for the same site, i.e. the same robots.txt file loads on both the www and non-www versions (for example, https://www.abc.com/robots.txt and https://abc.com/robots.txt).
Does it affect my site's SEO?
Should I redirect the www version of the file to the non-www version?
Your feedback will be highly appreciated. Thank you,
R.
-
Hi ramb,
It's totally fine to have different robots.txt files for different subdomains.
That said, http://domain.com and http://www.domain.com are different subdomains; consider the non-www one the full root domain.
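As a side note, you can test how a given URL is treated by each subdomain's own robots.txt with Python's standard urllib.robotparser (a sketch; abc.com is the example domain from your question, and /some-page is a hypothetical path):

    # Sketch: parse each subdomain's own robots.txt and test a sample URL.
    # /some-page is a hypothetical path used purely for illustration.
    import urllib.robotparser

    for host in ("https://abc.com", "https://www.abc.com"):
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(host + "/robots.txt")
        rp.read()  # fetches and parses that subdomain's robots.txt
        print(host, rp.can_fetch("Googlebot", host + "/some-page"))

If the www host 301-redirects its robots.txt, the parser simply follows the redirect and reads the non-www file, which is also what Google does.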
In case it's needed, here is Google's official resource about robots.txt: Learn about robots.txt files - Search Console help.
Hope it helps.
Best luck.
Gaston