I have two robots.txt files for the www and non-www versions. Will that be a problem?
-
There are two robots.txt files: one for the www version and another for the non-www version, though I have moved to the non-www version.
-
It won't affect your SEO; you just don't need the duplicate www version.
-
Hi ramb,
Short answer: no, it won't affect your ability to rank in Google, unless both sites (the non-www and www versions) compete for the same search term and one of them isn't blocked in the corresponding robots.txt file.
If you can, set up a redirection rule so that everything on the non-www version goes to the www version.
It puzzles me that you aren't redirecting the whole non-www site to the www version.
Two possibilities come to mind:
- You can't redirect the whole non-www version due to some app or technical need. In this case, if both versions are accessible to Google, they will be treated as different sites, so you must make sure each robots.txt file is correct for its subdomain.
- You have a separate website whose content differs from the www version (this usually happens with subdomains serving different page types, such as products.abc.com and categories.abc.com). In this case, be sure you know exactly what you want blocked, and keep each robots.txt file on its own subdomain.
Keep in mind that the robots.txt file only controls where you don't want Googlebot to crawl on the public version of your website. When a page or group of pages is blocked in robots.txt, Google stops crawling them, so it can't tell whether those pages have what they need to rank for any given search term. Google might rank them lower, and users may see a note in the search results, leading to a lower CTR.
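If you want to verify what each hostname actually serves, here's a minimal sketch using only the Python 3 standard library. The domain abc.com is just the placeholder used in this thread, so swap in your own hostnames before running it.

```python
# Minimal sketch (Python 3 standard library): fetch robots.txt from both
# hostnames and report whether they end up serving the same file.
# "abc.com" is a placeholder domain, not a real site to test against.
from urllib.request import urlopen

def fetch_robots(host):
    """Return (final_url, body) for a host's robots.txt, following redirects."""
    with urlopen("https://" + host + "/robots.txt") as resp:
        return resp.geturl(), resp.read().decode("utf-8", errors="replace")

www_url, www_body = fetch_robots("www.abc.com")
root_url, root_body = fetch_robots("abc.com")

# urlopen follows redirects, so geturl() reports the final address; if the
# www request was 301-redirected, both final URLs will match.
print("www resolved to: ", www_url)
print("root resolved to:", root_url)
print("identical content:", www_body == root_body)
```

If both requests resolve to the same final URL with identical content, there is effectively one canonical robots.txt and nothing to fix.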
Hope it helps.
Best Luck.
Gaston
-
Are you redirecting everything on www to non-www? If so, you don't really need a robots.txt to be served for the www subdomain: when the www robots.txt URL itself returns a 301, Google follows the redirect and reads the destination file instead.
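To confirm that behaviour yourself, here's a quick sketch (Python standard library only) that requests the www robots.txt without following redirects; www.abc.com is the example hostname from this thread.

```python
# Quick sketch: check whether the www robots.txt URL answers with a
# redirect rather than serving its own file. http.client does not follow
# redirects, so we can inspect the raw response. "www.abc.com" is a
# placeholder hostname from this thread.
import http.client

conn = http.client.HTTPSConnection("www.abc.com")
conn.request("GET", "/robots.txt")
resp = conn.getresponse()

# A 301 (or 308) with a Location header pointing at the non-www host
# means crawlers will simply follow it to the canonical file.
print(resp.status, resp.reason)
print("Location:", resp.getheader("Location"))
conn.close()
```

A 301 status with a Location header pointing at the non-www robots.txt is exactly the setup described above.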
-
Hi Gaston,
Thank you for your response. Currently, the www version of the site is redirected to the non-www version, which is the primary (root) domain.
But the problem is that I have two robots.txt files serving for the same site, i.e. the same robots.txt file loads on both the www and non-www versions (for example, https://www.abc.com/robots.txt and https://abc.com/robots.txt).
Does it affect my site's SEO?
Should I redirect the www version of the file to the non-www version?
Your feedback will be highly appreciated. Thank you,
R.
-
Hi ramb,
It's totally fine to have different robots.txt files for different subdomains.
That said, http://domain.com and http://www.domain.com are different subdomains; consider the non-www one to be the full root domain.
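To see that independence in practice, here's a minimal sketch using Python's built-in urllib.robotparser; the domain.com hostnames match the example above, and /some-page/ is a hypothetical path used purely for illustration.

```python
# Minimal sketch: each subdomain's robots.txt is fetched and parsed
# independently, just as crawlers treat them. "domain.com" and
# "/some-page/" are placeholders for illustration only.
from urllib import robotparser

for host in ("https://domain.com", "https://www.domain.com"):
    rp = robotparser.RobotFileParser()
    rp.set_url(host + "/robots.txt")
    rp.read()  # fetches and parses that subdomain's own robots.txt
    allowed = rp.can_fetch("Googlebot", host + "/some-page/")
    print(host, "allows /some-page/ for Googlebot:", allowed)
```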
In case it's needed, here is Google's official resource: Learn about robots.txt files - Search Console Help. Hope it helps.
Best luck.
Gaston
Related Questions
-
Role of Robots.txt and Search Console parameters settings
Hi, I'm wondering if anyone can point me to resources or explain the difference between these two. If a site has URL parameters disallowed in robots.txt, is it redundant to set the Search Console parameter settings to anything other than "Let Googlebot Decide"?
Technical SEO | LivDetrick
-
Non Published Wordpress Pages
Hi, are there any negative SEO consequences from having too many pages private or unpublished? Can it slow the site down, or does it not matter? Someone in my department has many pages started but not completed, and besides being messy, I wonder if it has any negative impact on the site. Thanks
Technical SEO | aua
-
Multiple robots.txt files on server
Hi! I previously hired a developer to put up my site and noticed afterwards that he did not know much about SEO. This led me to start learning myself and applying changes step by step. One of the things I am currently doing is adding a sitemap reference to the robots.txt file (it was not there before). But just now, when I wanted to upload the file via FTP, I found multiple robots.txt files on my server, in different sizes, and I don't know what to do with them. Can I remove them? I have downloaded and opened them, and they seem to be two text files and two duplicates. Names:
robots.txt (original duplicate)
robots.txt-Original (original)
robots.txt-NEW (other content)
robots.txt-Working (other content duplicate)
Would really appreciate help and expert suggestions. Thanks!
Technical SEO | mjukhud
-
One robots.txt file for multiple sites?
I have 2 sites hosted with Blue Host and was told to put the robots.txt in the root folder and just use the one robots.txt for both sites. Is this right? It seems wrong. I want to block certain things on one site. Thanks for the help, Rena
Technical SEO | renalynd27
-
Robots.txt on pages with a 301 redirect
We currently have a series of help pages that we would like to disallow in our robots.txt. The thing is that these help pages are located on our old website, which now has a 301 redirect to the current site. What is the proper way to go about this? 1. Add the pages we want to disallow to the robots.txt of the new website? 2. Break the redirect momentarily and add the pages to the robots.txt of the old one? Thanks
Technical SEO | Kilgray
-
Product Pages Outranking Category Pages
Hi, we are noticing an issue where some product pages outrank our relevant category pages for certain keywords. For a made-up example, a "heavy duty widgets" product page might rank for the keyword phrase Heavy Duty Widgets instead of our Heavy Duty Widgets category page appearing in the SERPs. We've noticed this happening primarily in cases where the name of the product page contains at least a partial match for the desired keyword phrase we want the category page to rank for. However, we've also found isolated cases where the specified keyword points to a completely irrelevant page instead of the relevant category page. Has anyone encountered a similar issue before, or have any ideas as to what may cause this to happen? Let me know if more clarification of the question is needed. Thanks!
Technical SEO | ShawnHerrick
-
Google insists robots.txt is blocking... but it isn't.
I recently launched a new website. During development, I'd enabled the option in WordPress to prevent search engines from indexing the site. When the site went public (over 24 hours ago), I cleared that option. At that point, I added a specific robots.txt file that only disallowed a couple of directories. You can view the robots.txt at http://photogeardeals.com/robots.txt. Google (via Webmaster Tools) insists that my robots.txt file contains a "Disallow: /" on line 2 and that it's preventing Google from indexing the site and preventing me from submitting a sitemap. These errors show up both in the sitemap section of Webmaster Tools and in the Blocked URLs section. Bing's webmaster tools can read the site and sitemap just fine. Any idea why Google insists I'm disallowing everything even after telling it to re-fetch?
Technical SEO | ahockley
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets