Duplicate site (disaster recovery) being crawled and creating two indexed search results
-
I have a primary domain, toptable.co.uk, and a disaster recovery site for it at uk-www.gtm.opentable.com. In the event of a disaster, toptable.co.uk would be CNAMEd (DNS-aliased) to the .gtm site. Naturally, the .gtm disaster recovery domain is an exact copy of the toptable.co.uk domain.
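For illustration, the failover described above would amount to a DNS zone change along these lines (hostnames are from the question; the TTL and the commented-out primary record are placeholders):

```text
; Normal operation: the primary host resolves directly
; www.toptable.co.uk.  300  IN  A      <primary IP>

; Disaster failover: alias the primary host to the DR site
www.toptable.co.uk.    300  IN  CNAME  uk-www.gtm.opentable.com.
```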
Unfortunately, Google has crawled the uk-www.gtm.opentable.com site, and it's showing up in search results. In most cases the gtm URLs don't get redirected to toptable; to the user they appear as an entirely separate domain. The strong feeling is that this duplicate content is hurting toptable.co.uk, especially as the .gtm site sits under the .opentable.com domain, which has significant authority. So we need a way of stopping Google from crawling gtm.
There seem to be two potential fixes. Which is best for this case?
1) Use robots.txt to block Google from crawling the .gtm site
2) Canonicalize the gtm URLs to toptable.co.uk
In general Google seems to recommend the canonical approach, but in this special case a robots.txt change could be best.
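For reference, the robots.txt fix would be a file served at the root of the .gtm hostname along these lines (it must live on that subdomain itself, not on toptable.co.uk):

```text
# Served at http://uk-www.gtm.opentable.com/robots.txt
# Blocks all compliant crawlers from the entire DR subdomain
User-agent: *
Disallow: /
```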
Thanks in advance to the SEOmoz community!
-
It's a little tricky. Andrea is right that robots.txt is not great for removal once pages/domains are already indexed, but you can block the sub-domain with robots.txt and then request removal in Google Webmaster Tools (you need to create a separate account for the sub-domain itself). That's often the fastest way to remove something from the index, and since the sub-domain has no search value, I might go that route. Just proceed with caution - it's a delicate procedure.
Doing 1-to-1 canonicalization or adding 301 redirects is probably the next strongest signal (NOINDEX is a bit weaker, IMO). However, Google will have to re-crawl the sub-domain to see either one, so you'll need to keep the paths open rather than blocking them in robots.txt.
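As a sketch of the 301 option, a host-based redirect on the DR server might look like this (assuming an Apache server with mod_rewrite enabled; the hostnames are from the question):

```apache
# Redirect any request arriving on the DR hostname to the
# same path on the primary domain (assumes mod_rewrite)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^uk-www\.gtm\.opentable\.com$ [NC]
RewriteRule ^(.*)$ http://toptable.co.uk/$1 [R=301,L]
```

Note that in a real failover you'd need to disable this rule first, or traffic CNAMEd over to the DR site would bounce straight back to the dead primary.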
-
First, if the pages are already indexed, then robots.txt won't make them go away. A meta noindex tag on the pages is the better solution: it allows search engines to read your page, see the noindex tag, and then work to remove the pages from the index. A robots.txt block doesn't necessarily accomplish the same result.
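A minimal version of that meta noindex tag, placed in the `<head>` of every page served from the DR hostname:

```html
<!-- Lets Googlebot fetch the page, see the directive, and drop it
     from the index. Do not also block the page in robots.txt, or
     the tag will never be re-crawled. -->
<meta name="robots" content="noindex">
```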
-
If you can do a 1-to-1 page canonicalization (each page on the .gtm backup is canonicalized to the equivalent page on .co.uk), then I would do that.
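A 1-to-1 canonical on the DR copy would look like this (the page path here is hypothetical, for illustration):

```html
<!-- In the <head> of http://uk-www.gtm.opentable.com/some-page,
     pointing search engines at the primary copy -->
<link rel="canonical" href="http://toptable.co.uk/some-page">
```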
Otherwise, I would noindex the backup site.