Using one robots.txt for two websites
-
I have two websites that are hosted in the same CMS. Rather than having two separate robots.txt files (one for each domain), my web agency has created one which lists the sitemaps for both websites, like this:
User-agent: *
Disallow:
Sitemap: https://www.siteA.org/sitemap
Sitemap: https://www.siteB.com/sitemap
Is this OK? I thought each website needed its own robots.txt file providing the URL of its sitemap. Will having both sitemap URLs listed in one robots.txt confuse the search engines?
-
Hi @gpainter,
Thanks for your help. I can't see anything specific in that link that says you can't have two sitemaps in one robots.txt. Where it mentions sitemaps it does say "You can specify multiple sitemap fields", although I'm not sure whether that means you can list multiple sitemap URLs under a single mention of 'Sitemap'.
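For what it's worth, a minimal sketch of how "multiple sitemap fields" is usually read: each Sitemap line carries exactly one absolute URL, and the line is simply repeated once per sitemap (domains reused from the question above):
# One URL per Sitemap line, repeated as many times as needed
Sitemap: https://www.siteA.org/sitemap
Sitemap: https://www.siteB.com/sitemap
# rather than several URLs packed into a single Sitemap field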
-
@ciehmoz Hey, I've replied to the other thread too.
The best option here is to use a separate robots.txt file for each of the websites.
You could only have used the same robots.txt file if the other site was on the same subdomain.
Don't forget to include the corresponding sitemap in each new robots.txt file. Hope this works out, cheers.
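As a rough sketch of that setup, reusing the domains from the question (each file lives at the root of its own domain, and the sitemap paths are whatever each site actually uses):
# https://www.siteA.org/robots.txt
User-agent: *
Disallow:
Sitemap: https://www.siteA.org/sitemap

# https://www.siteB.com/robots.txt
User-agent: *
Disallow:
Sitemap: https://www.siteB.com/sitemap
Crawlers only look for robots.txt at the root of the host they are currently crawling, which is why each domain generally gets its own copy.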
-
Hey @ciehmoz
Just replied to your other thread: you will need one robots.txt per site. Referring to two sitemaps in one robots.txt will confuse Google.
Info here - https://developers.google.com/search/docs/advanced/robots/robots_txt
Good Luck
Related Questions
-
I found that the link from my YouTube channel to my website was do-follow through MozBar, but it is not being indexed. Why?
I found that the link from my YouTube channel to my website is do-follow (I checked through MozBar), but it is not being indexed. Why is that?
Technical SEO | knoiwtall0
-
One robots.txt file for multiple sites?
I have 2 sites hosted with Blue Host and was told to put the robots.txt in the root folder and just use the one robots.txt for both sites. Is this right? It seems wrong. I want to block certain things on one site. Thanks for the help, Rena
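For illustration, a minimal sketch of the one-robots.txt-per-site alternative, using hypothetical domains and directory names; the blocking rules live only in the file of the site they apply to:
# https://www.site-one.example/robots.txt  (hypothetical domain and path)
User-agent: *
Disallow: /members-only/

# https://www.site-two.example/robots.txt  (hypothetical; nothing blocked here)
User-agent: *
Disallow: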
Technical SEO | renalynd270
-
Can you use more than one Google Local rich snippet on a single site / on a single page?
I am currently working on a website for a business that has multiple office locations. As I am trying to target all four locations, I was wondering if it is okay to have more than one local rich snippet on a single page. (For example, they list all four locations and addresses within their footer, and I was wondering if I could make these local rich snippets.) What about having more than one on a single website? For example, if a company has multiple offices located in several different cities and has set up individual contact pages for these cities, can each page have its own local rich snippet? Will Google look at these multiple "local rich snippets" as spamming, or will it recognize the multiple locations and count them towards the company's local SEO?
Technical SEO | webdesignbarrie1
-
Merge two different domains into one
Hi all SEO folks. We want to merge two different domains into one. The products are similar to each other, but domain1 is the online product and domain2 is the offline version of the product (a newly launched product). Both domains have a Domain Authority of 47, but domain1 is optimized for the online product and most keywords (branded keywords) are in the top 3, whereas domain2 has been used for another concept which has now been closed down (at the beginning of May) and replaced with the offline product. To boost sales of the offline product, we want to move the content from domain1 (the online product) to domain2 (the offline product), and of course we want to keep the rankings in the top 3.
So this is what I know I have to do: set up 301 redirects from the old content URLs to the new content URLs, and contact external domains to ask them to link to the new domain. Furthermore, I read that it is important to place everything from the old site in exactly the same locations on the new site (and keep the content the same), and in that way preserve the internal URL structure and link juice. This is where my question comes in: if we move all the content to exactly the same locations on the new site, there will be way too many links on the front page and the new offline product won't have the dominating position.
Do any of you have experience with moving the content from domain1.com to domain2.com/online and still being able to keep the rankings in the SERPs, or would you recommend always moving the content from root to root?
PS: Sorry about my English; English is not my main language. Hope you understand my question anyway :).
Technical SEO | Bulpen
-
Is having no robots.txt file the same as having one and allowing all agents?
The site I am working on currently has no robots.txt file. However, I have just uploaded a sitemap and would like to point the robots.txt file to it. Once I upload the robots.txt file, if I allow access to all agents, is this the same as when the site had no robots.txt file at all? And do I need to specify crawler access, or can the robots.txt file just contain the link to the sitemap?
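For illustration, a minimal sketch of an "allow everything, plus sitemap" robots.txt (the sitemap URL is a placeholder):
User-agent: *
Disallow:
# An empty Disallow value blocks nothing, so all crawlers may fetch everything

Sitemap: https://www.example.com/sitemap.xml
# Placeholder URL; the Sitemap line can live in the same file
As far as I know, Google treats a missing robots.txt (a 404) much the same as an allow-all file, so the practical difference is mainly that the file gives you somewhere to declare the sitemap.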
Technical SEO | pugh0
-
RegEx help needed for robots.txt potential conflict
I've created a robots.txt file for a new Magento install and used an existing site-map that was on the Magento help forums, but the trouble is I can't decipher something. It seems that I am allowing and disallowing access to the same expression for pagination. My robots.txt file (and a lot of other Magento site-maps, it seems) includes both Allow: /*?p= and Disallow: /?p=&. I've searched for help on RegEx and I can't see what "&" does, but it seems to me that I'm allowing crawler access to all pagination URLs, but then possibly disallowing access to all pagination URLs that include anything other than just the page number? I've looked at several resources and there is practically no reference to what "&" does... Can anyone shed any light on this, to ensure I am allowing suitable access to a shop? Thanks in advance for any assistance.
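A hedged note on those patterns: robots.txt rules are not regular expressions. Only * (any sequence of characters) and $ (end of URL) are special, so & is matched literally like any other query-string character. A small annotated sketch; the second rule is written as a hypothetical variant with explicit asterisks, since the rule as pasted may have lost them to formatting:
Allow: /*?p=       # matches any URL containing "?p=", e.g. /category?p=2
Disallow: /*?p=*&  # hypothetical form: matches URLs where "&" appears after "?p=",
                   # e.g. /category?p=2&dir=asc (pagination plus extra parameters)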
Technical SEO | MSTJames0
-
Robots.txt
Hi everyone, I just want to check something. If you have this entered into your robots.txt file:
User-agent: *
Disallow: /fred/
This wouldn't block /fred-review/ from being crawled, would it? Thanks.
Technical SEO | PeterM22
-
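On the /fred/ versus /fred-review/ question above: robots.txt Disallow values are plain prefix matches against the URL path, so the trailing slash is what decides it. A small sketch:
Disallow: /fred/   # blocks /fred/ and anything under it, e.g. /fred/page.html,
                   # but not /fred-review/ (that path does not start with "/fred/")
Disallow: /fred    # this broader prefix would also block /fred-review/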
Robots.txt
My campaign hse24 (www.hse24.de) is not being crawled any more ... Do you think this could be a problem with the robots.txt? I always thought that Google and friends interpret the file correctly, given that the site was still being crawled up until last week. Thanks a lot, Bernd.
NB: Here is the robots.txt:
User-Agent: *
Disallow: /
User-agent: Googlebot
User-agent: Googlebot-Image
User-agent: Googlebot-Mobile
User-agent: MSNBot
User-agent: Slurp
User-agent: yahoo-mmcrawler
User-agent: psbot
Disallow: /is-bin/
Allow: /is-bin/INTERSHOP.enfinity/WFS/HSE24-DE-Site/de_DE/-/EUR/hse24_Storefront-Start
Allow: /is-bin/INTERSHOP.enfinity/WFS/HSE24-AT-Site/de_DE/-/EUR/hse24_Storefront-Start
Allow: /is-bin/INTERSHOP.enfinity/WFS/HSE24-CH-Site/de_DE/-/CHF/hse24_Storefront-Start
Allow: /is-bin/INTERSHOP.enfinity/WFS/HSE24-DE-Site/de_DE/-/EUR/hse24_DisplayProductInformation-Start
Allow: /is-bin/INTERSHOP.enfinity/WFS/HSE24-AT-Site/de_DE/-/EUR/hse24_DisplayProductInformation-Start
Allow: /is-bin/INTERSHOP.enfinity/WFS/HSE24-CH-Site/de_DE/-/CHF/hse24_DisplayProductInformation-Start
Allow: /is-bin/intershop.static/WFS/HSE24-Site/-/Editions/
Allow: /is-bin/intershop.static/WFS/HSE24-Site/-/Editions/Root%20Edition/units/HSE24/Beratung/
Technical SEO | remino630
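A general point that may help when reading a file like the one above: a crawler obeys only the single group whose User-agent line matches it most specifically and ignores the rest, so a group naming Googlebot replaces the blanket "User-Agent: *" group for Googlebot rather than adding to it. A simplified, hypothetical sketch:
User-agent: *
Disallow: /           # applies only to crawlers not named in a more specific group

User-agent: Googlebot
Disallow: /private/   # Googlebot follows just this group; the "*" group above
                      # does not also apply to it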