Canonical OR redirect
-
Hi,
I have a sports site that covers matches, and each match gets its own page.
Last week there was a match between T1 and T2, so a page was created:
www.domain.com/match/T1vT2 - Page1
This week T2 hosts T1, so there's a new page:
www.domain.com/match/T2vT1 - Page2
Each page has unique content with Authorship, but the URL, title, description, and H1 look very similar, because the only difference is that T2 comes before T1.
Although Page2 has been available for a few days via on-site links and the sitemap, for the search query "T2 T1 match" it's Page1 that appears (high up) on the SERP.
Of course I want Page2 to appear on the SERP for that query, since it's the relevant match.
I don't see Page2 anywhere on the SERP, and I don't think it has been indexed.
Questions:
1. Do you think Google sees both pages as duplicates even though the content is different?
2. Is there a difference between searching for "T1 vs T2" and "T2 vs T1"?
3. Should I 301-redirect Page1 to Page2? Keep in mind that all of Page1's content and its G+ Authorship would be lost.
4. Should I put a rel=canonical on Page1 pointing to Page2?
5. Should I just let Google sort it out?
I know it's a long one; thanks for your patience.
Thanks,
Assaf
-
Thanks for everything.
I'll stick with the slower method and see what's going on in the index.
-
(2) It could take a while, yes. There is no speedy way to de-index a lot of content that is no longer crawlable, I'm afraid, unless it's currently in a directory that can be removed in Google Webmaster Tools.
(3) So, basically, let's say all the pages live under "/events" - you'd create "/events2", put all the new events in that going forward, and then remove "/events" in GWT?
It could work for removal, but changing your site architecture that way carries a significant amount of risk. You'll also have to make sure that you have a plan going forward for de-indexing new content that becomes outdated, because this is not something you want to do every couple of months. Honestly, unless you know the old content is harming your rankings, I probably wouldn't do this. I'd stick to the slower method.
-
Dear Dr. Meyers,
Very insightful!
I must clear out all the irrelevant pages, and the sooner the better.
(1) Could take months or years.
(2) Sounds like a very good approach. I'm building my sitemap with code, so that's not a problem. The only issue is that, at a few hundred at a time, it could also take a long while. And wouldn't Google spend a lot of crawl time on those pages and index fewer of the fresh new ones?
(3) What about the Google removal tool? This connects to my point in the last post about setting up a new site architecture:
- For all new match pages, create a new directory (without the irrelevant pages).
- Ask the WMT removal tool to remove the old directory, and with it all the irrelevant pages (following the guidelines for that tool, of course).
What do you think about this approach?
Thanks again for all your help, I really appreciate it!
Assaf.
-
Oh, wow - yeah if only 2K are current and 120K are indexed, you definitely should be proactive about this. Unfortunately de-indexing content that's already been indexed is tough. Robots.txt isn't terribly effective after-the-fact, and the folder-based approach you've described won't work. You can move the pages and remove the folder (either with Robots.txt or in Webmaster Tools), but you haven't tied the old URLs to the new URLs. To remove them, first you have to tell Google they've moved.
First, pick your method. If these old events have any links/traffic/etc., then you may want to rel=canonical or 301-redirect. Otherwise, you could META NOINDEX or even 404. It depends a bit on their value. Then, a couple of options:
(1) You can wait and see. Let Google clear out the old events over time. If you're not at any risk, this may be fine. Monitor and see what happens.
(2) Encourage Google to re-crawl the old pages by creating a new, stand-alone sitemap. Then, monitor that sitemap in GWT for indexation. You don't have to do all 120K at once, but you could start with a few hundred (hopefully, you can build the XML with code, not by hand) and see how it progresses.
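For what it's worth, a minimal sketch of building that stand-alone sitemap with code might look something like this in Python (the domain, the file name, and the list of old match URLs are placeholders; you'd pull the real URLs from your database):

import xml.etree.ElementTree as ET

# Placeholder batch of old event URLs - in practice, pull a few hundred at a time from the DB.
old_event_urls = [
    "http://www.domain.com/match/T1vT2",
    "http://www.domain.com/match/T2vT1",
]

# Build the <urlset> root with the standard sitemap namespace.
urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url in old_event_urls:
    entry = ET.SubElement(urlset, "url")
    ET.SubElement(entry, "loc").text = url

# Write the stand-alone sitemap file, which you would then submit and monitor in GWT.
ET.ElementTree(urlset).write("sitemap-old-events.xml", encoding="utf-8", xml_declaration=True)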
-
Dear Dr. Meyers,
I'm starting to understand that I have a much bigger problem.
All finished matches are no longer relevant, and although you can still reach their pages from the SERP or by direct URL, they don't appear in on-site links or the sitemap. So the best idea is to remove all these old pages from Google's index: they don't contribute anything, and they've pushed my index status to 120K pages while only 2,000 are currently relevant.
This wastes Google's crawl on irrelevant pages, and there's a risk Google may see some of them as duplicates, because in some cases most of the page is relatively similar.
One suggestion I got is: after a match finishes, programmatically add a noindex tag to the page and Google will remove it from its index. But will it remove the page if there are no links or sitemap entries pointing to it?
But I also have to handle the problem of the huge index. The above approach may (or may not) take care of pages from now on, but what about all the far older pages with finished matches? How can I remove them all from the index?
-
Adding <meta name="robots" content="noindex,follow"> to all of them could take months or more to clean the index, because they're probably rarely crawled.
-
A more aggressive approach would be to change the site architecture and use robots.txt to restrict the folder that holds all the past, irrelevant pages.
So if today a match URL looks like this: www.domain.com/sport/match/T1vT2
restrict www.domain.com/sport/match/ in robots.txt,
and from now on create all new matches in a different folder, like: www.domain.com/sport/new-match/T1vT2
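Just to make it concrete, the robots.txt restriction I have in mind would look roughly like this (folder names as in the example above):

User-agent: *
Disallow: /sport/match/
# /sport/new-match/ is not listed, so the new folder stays crawlable.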
-
Is this a good solution?
-
Wouldn't Google penalize me for removing a directory with 100K pages?
-
If it's a good approach, how much time will it take for Google to clear all those pages from its index?
I know it's a long one, and I'll really appreciate your response.
Thanks a lot,
Assaf.
-
The problem with (2) is that, if you cut the crawl path, Google can't process any on-page directives, like 301s, canonicals, etc. Now, eventually, they might try to re-crawl from the index (knowing the URL used to exist), but that can take a long time. So, while canonical is probably appropriate here, you may have to leave the old event/URL active long enough for Google to process the tag.
If these are really isolated cases, I wouldn't worry too much. Maybe rel=canonical them, and eventually Google will flush out the old URL. If this starts happening a lot, I'd really consider some kind of permanent URL for certain match-ups and events.
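For reference, the tag in the old page's <head> would look something like this (using the example URLs from your original question; swap in your real pages):

<link rel="canonical" href="http://www.domain.com/match/T2vT1" />

That goes on Page1 (the old T1vT2 page), pointing at Page2, and it only gets processed if Google can still crawl Page1.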
There's no easy answer. This stuff is very site-specific and can be tricky.
-
I've got some good responses, but I'm still not sure what to do.
Any other opinions would be highly appreciated.
Thanks!
-
Hi Dr. Meyers,
Thanks for your detailed response.
I just wanted to refine my scenario:
1. The case of pairs (a repeat match after a short time) is rare, but I have encountered it.
2. There are no links or sitemap entries for a match that has already finished, but Google keeps it in the index. The page is reachable ONLY by direct URL or from the SERP.
3. I don't think I can force Google to automatically remove the old match from the index, and doing it manually for thousands of matches is not an option.
4. I thought Google looked at the content of each page to determine whether it's a duplicate, not only at the URL/title. According to a comparison tool, the content is only 66% similar.
5. Currently I have this problem twice: for one case I've added a rel=canonical, and for the other I'm letting Google decide. When Google encounters a rel=canonical, does it go to the URL of the canonical?
Thanks,
Assaf.
-
This is a pretty common problem with event-oriented sites, and there's no easy solution. It's a trade-off: if you keep creating new URLs every time a new event is listed, you risk producing a lot of near duplicates and eventually diluting your index. At best, you could have dozens or hundreds of pages competing for the same keywords.
You could canonical or 301-redirect to the most recent event, but that has trade-offs, too. For one, a huge number of either can look odd to Google. Also, the latest event may not always be an appropriate target page, especially if more than just the data is changing. Unfortunately, without seeing the content, it's really tough to tell.
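If you did go the 301 route and you're on Apache, a single line in .htaccess along these lines would handle one of these pairs (the paths are just the example URLs from your question):

# .htaccess (Apache) - example paths only
Redirect 301 /match/T1vT2 http://www.domain.com/match/T2vT1

A rel=canonical, by contrast, lives in the <head> of the old page and leaves it accessible to visitors while hinting to Google which URL to index.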
The other option is to create a static URL for every pairing and update the content on that page (maybe creating archival URLs for the old content that sit lower in the site architecture). That way, the most current URL never changes. Again, this depends a lot on the site and the scope.
If you're just talking about a couple of URLs for a handful of events, I wouldn't worry too much about it. I probably wouldn't reverse the URL ("A vs. B" --> "B vs. A"), as it doesn't gain you much, but I also wouldn't lose sleep over it. If each pairing can generate dozens of URLs, though, I think you may want to consider a change in your site architecture.
-
Thanks Jesse!
1. The content is different. According to a comparison tool the pages are 64% similar, and considering the menus, the site header, and other elements that appear on every page, you could say they're unique, couldn't you? Even so, Google hasn't indexed the second page and it has been up for 5 days; the sitemap indexing rate is 90% according to Google Webmaster Tools. So what's wrong here?
2. Including the date seems like a good idea! But two questions about it:
- Won't the URL look messy with those numbers in it?
- The same match can be repeated in the future; isn't it a good thing that the page is already indexed? I mean, the URL would stay the same and just the content would be different.
Thanks,
Assaf.
-
Highland, thanks for your quick response.
The pages are created dynamically, because at any given moment we have more than 1,000 matches in our DB. It's impossible to create a manual URL for each page.
The case I described is rare, but it happened for a very important match.
-
1. If the content is different then you should have no problem, and you can allow both pages to be indexed without needing to noindex or canonicalize either page.
2. Could you perhaps include the date in the URL?
As long as each page does have different content, I would say you are fine. I would definitely consider adding the date to the URL. What if the two teams play again at a later date? Adding the date would help differentiate those pages even more and, I believe, help Google.
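For example, something like this (the date format here is just an illustration):

www.domain.com/match/2013-10-06/T1vT2
www.domain.com/match/2013-10-13/T2vT1

Each meeting of the two teams then gets its own unambiguous URL.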
-
You need to better differentiate the content. T1vsT2 is not the best way to segment your content. So I would actually change URL structures to something like
www.domain.com/match/week1/T1vT2
www.domain.com/match/week2/T2vT1
It better segments your content and makes it obvious there's a difference, because to an end user the original URLs are confusing, and that confusion has extended to Google. Google will not see the order as important unless you quote your search (which normal users won't do). Google matches content and context first.