Block Domain in robots.txt
-
Hi.
We had some URLs from a www1 subdomain indexed in Google. We have now disabled those URLs (they return a 404; for other reasons we cannot redirect from www1 to www) and blocked the subdomain via robots.txt. But the number of indexed pages keeps increasing (for two weeks now). Unfortunately, I cannot set up Webmaster Tools for this subdomain to tell Google to back off...
Any ideas why this could be and whether it's normal?
I can send you more domain info by personal message if you want to have a look at it.
-
Hi Philipp,
I have not heard of Google going rogue like this before; however, I have seen it with other search engines (Baidu).
I would first verify that the robots.txt is configured correctly (a minimal sketch follows the quote below), and verify there are no links anywhere to the subdomain. The reason I mentioned this earlier is this official notice from Google: https://support.google.com/webmasters/answer/156449?rd=1
While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web. As a result, the URL of the page and, potentially, other publicly available information such as anchor text in links to the site, or the title from the Open Directory Project (www.dmoz.org), can appear in Google search results.
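As a sanity check, a robots.txt that blocks all crawlers from the entire host would look something like this; note it must be served from the subdomain itself (www1.example.com/robots.txt, where example.com stands in for your domain):
User-agent: *
Disallow: /
As the quote above explains, though, this only stops crawling, not indexing, so URLs Google discovers elsewhere can still show up in search results.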
My next thought would be: did Google start crawling the site before the robots.txt blocked it from doing so? This may have caused Google to start the indexing process, which is not instantaneous, so new URLs keep appearing after the robots.txt went into effect. The solution is to add a noindex meta tag, or to put an explicit block on the server as I mention in my earlier reply.
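A minimal sketch of the noindex option (with one caveat: Googlebot can only see the tag on pages it is allowed to crawl, so the robots.txt block would have to be lifted for it to take effect):
<meta name="robots" content="noindex">
Or, if you can change the server configuration, the equivalent HTTP response header:
X-Robots-Tag: noindex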
If you are worried about duplicate content issues, you may be able to at least point canonical tags from the subdomain URLs to the correct URLs.
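That would be a tag like this in the head of each www1 page, pointing at its www counterpart (the URL here is a placeholder):
<link rel="canonical" href="https://www.example.com/some-page/">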
Hope that helps and good luck
-
Hi Don,
Thanks for your hint. It doesn't look like there are any links to the www1 subdomain. Also, since we made the www1 subdomain return 404s and blocked it in robots.txt, the number of indexed pages has increased from 39'300 to 45'100, which is more than anybody would link to... It's really strange that Google just ignores robots.txt and keeps indexing...
-
Hi Phil,
Is it possible that Google is finding the links on another site (like somebody else has your links on their site)? Depending on your situation, a good catch-all block is to secure the www1 subdomain with .htaccess/.htpasswd; this would force anybody (even bots) to provide credentials to see or explore the site. Of course, everybody who needs access to the site would have the credentials. So in theory you shouldn't see any more URLs getting indexed.
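A minimal sketch, assuming Apache (the file path, realm name, and username are placeholders): put this in an .htaccess file in the www1 document root,
AuthType Basic
AuthName "Restricted"
AuthUserFile /path/to/.htpasswd
Require valid-user
then create the password file with something like:
htpasswd -c /path/to/.htpasswd someuser
Once that is in place, crawlers receive a 401 instead of the page content.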
Hope that helps, Don
-
Thanks for the resource, Chris! The strange thing is that Google keeps indexing new URLs even though the subdomain is clearly blocked via robots.txt...
But I guess I'll just wait for these 90 days to pass then...
-
Philipp,
If you've deleted the URLs, there's not much else for you to do. You're experiencing the lag between when Google crawls and indexes new pages and when it finds a 404 and removes that URL from its index.
You should think of 90 days as an approximate time frame for your page count in the index to start dropping. Here's more from Google:
https://support.google.com/webmasters/answer/1663419
Related Questions
-
Multiple robots.txt files on server
Hi! I previously hired a developer to put up my site and noticed afterwards that he did not know much about SEO. This led me to start learning myself and applying some changes step by step. One of the things I am currently doing is inserting a sitemap reference in the robots.txt file (which was not there before). But just now, when I wanted to upload the file via FTP to my server, I found multiple ones, in different sizes, and I don't know what to do with them. Can I remove them? I have downloaded and opened them, and they seem to be two text files and two duplicates. Names: robots.txt (original duplicate), robots.txt-Original (original), robots.txt-NEW (other content), robots.txt-Working (other content duplicate). Would really appreciate help and expert suggestions. Thanks!
Technical SEO | Jan 10, 2018, 8:06 AM | mjukhud
-
Robots.txt on pages with a 301 redirect
We currently have a series of help pages that we would like to disallow in our robots.txt. The thing is that these help pages are located on our old website, which now has a 301 redirect to the current site. What is the proper way to go about this? 1) Add the pages we want to disallow to the robots.txt of the new website? 2) Break the redirect momentarily and add the pages to the robots.txt of the old one? Thanks
Technical SEO | Mar 7, 2016, 2:13 PM | Kilgray
-
Robots.txt to disallow /index.php/ path
Hi SEOmoz, I have a problem with my Joomla site (yeah, me too!). I get a large number of /index.php/ URLs despite using a program to handle these issues. The URLs cause indexation errors with Google (404). Now, I fixed this issue once before, but the problem persists. So I thought, instead of wasting more time, couldn't I just disallow all paths containing /index.php/? I don't use that extension, but would it cause me any problems from an SEO perspective? How do I disallow all index.php URLs? Is it as simple as: Disallow: /index.php/
Technical SEO | Nov 29, 2012, 7:30 AM | Mikkehl
-
.ca and .com domains
Hello, currently the main site I'm working on is a .com, but we have the .ca version purchased from register.com. Should I set this up to redirect to the .com site? Will Google see these as duplicate content? We have the .ca for our Canadian customers, but both sites are identical. Thank you
Technical SEO | Sep 1, 2012, 1:41 AM | TP_Marketing
-
Invisible robots.txt?
So here's a weird one... A client comes to me for some simple changes; it turns out there are some major issues with the site, one of which is that none of the correct content pages are showing up in Google, just ancillary (outdated) ones. It looks like an issue, because even the main homepage isn't showing up in a "site:domain.com" search. So, I add the site to Webmaster Tools and, after an hour or so, I get the red bar of doom: "robots.txt is blocking important pages." I check it out in Webmasters and, sure enough, it's a "User-agent: *, Disallow: /". ACK! But wait... there's no robots.txt to be found on the server. I can go to domain.com/robots.txt and see it, but nothing via FTP. I upload a new one and, thankfully, that is now showing, but I've never seen that before. Question is: can a robots.txt file be stored in a way that can't be seen? Thanks!
Technical SEO | Jan 1, 2017, 8:34 PM | joshcanhelp
-
Block a sub-domain from being indexed
This is a pretty quick and simple (I'm hoping) question. What is the best way to completely block a subdomain from being indexed by all search engines? One item I cannot use is the meta "nofollow" tag. Thanks! - Kyle
Technical SEO | Aug 25, 2011, 5:24 AM | kchandler
-
Is blocking RSS Feeds with robots.txt necessary?
Is it necessary to block an RSS feed with robots.txt? It seems they are automatically not indexed (http://googlewebmastercentral.blogspot.com/2007/12/taking-feeds-out-of-our-web-search.html). And Google says here that it's important not to block RSS feeds (http://googlewebmastercentral.blogspot.com/2009/10/using-rssatom-feeds-to-discover-new.html). I'm just checking!
Technical SEO | Jul 9, 2011, 12:15 PM | nicole.healthline
-
Should I set up a disallow in the robots.txt for catalog search results?
When the crawl diagnostics came back for my site, they showed around 3,000 pages of duplicate content. Almost all of them are the catalog search results page. I also did a site: search on Google, and they have most of the results pages in their index too. I think I should just disallow the bots in the /catalogsearch/ subfolder, but I'm not sure if this will have any negative effect.
Technical SEO | May 6, 2011, 3:55 PM | JordanJudson