Canonical for stupid _GET parameters or not? [deep technical details]
-
Hi,
I'm currently working on www.kupwakacje.pl, which is essentially a travel agency: people can search for holidays and buy or reserve them. I know about plenty of problems on my website, and thanks to SEOmoz I will hopefully be able to fix them, but one is crucial and, I think, hard to fix. The search engine is provided by an external party in the form of a simple API which ultimately responds with formatted HTML - which is completely pointless, but that's not the main problem. Let's dive in:
So, for example, a visitor goes to the homepage, selects Egypt and hits the search button. He will be redirected to
(and this is not a joke)
'wczasy-egipt' is my invention, obviously, and it means 'holidays-egypt'. I've tried to have at least 'something' in the URL that makes Google think it's actually related to Egypt. The rest, the complicated ep3[] thing, is a bunch of encoded parameters. In the first step this renders a list of hotels, in the next a hotel-specific offer, and in the next the reservation page. The problem is that all the links generated by this so-called API only change sub-parameters inside the ep3[] parameter, so for example clicking on a single hotel changes the URL to:
www.kupwakacje.pl/wczasy-egipt/?url=wczasy-egipt/&ep3[]=%3Fsid%3Db5onrj4hdnspb5eku4s2iqm1g3lomq91%26lang%3Dpl%26drt%3D30%26sd%3D10.06.2011%26ed%3D30.12.1999%26px%3D99999%26dsr%3D11%253A%26ds%3D11%253A%26sp%3D
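As an aside, the ep3[] blob is just doubly URL-encoded query parameters. A quick Python sketch (using the value from the URL above) makes the hidden session ID and search parameters visible:

```python
from urllib.parse import unquote, parse_qs

# The ep3[] value from the URL above (reproduced from the question).
# It is URL-encoded twice: once by the API, once inside the ep3[] wrapper.
ep3 = ("%3Fsid%3Db5onrj4hdnspb5eku4s2iqm1g3lomq91%26lang%3Dpl%26drt%3D30"
       "%26sd%3D10.06.2011%26ed%3D30.12.1999%26px%3D99999"
       "%26dsr%3D11%253A%26ds%3D11%253A%26sp%3D")

decoded = unquote(ep3)  # first pass: ?sid=...&lang=pl&drt=30&...
# parse_qs performs the second decoding pass (%253A -> %3A -> ":").
params = parse_qs(decoded.lstrip("?"), keep_blank_values=True)
print(params)
```

Every one of those parameters (session ID, dates, price cap, blank values) multiplies the number of crawlable URL variants for what is essentially one results page.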
Obviously this doesn't look very different from the first one. What I would like to know is: should I make all pages starting with 'wczasy-egipt' rel-canonical to the first one (www.kupwakacje.pl/wczasy-egipt), or shouldn't I? According to Webmaster Central, Google recognizes the page and the URL, but reports mass duplicate content. And what about positioning my website for the hotel names, i.e. long-tail optimization?
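For reference, pointing every variant at the landing page would mean emitting one extra tag per page. A minimal sketch, assuming www.kupwakacje.pl/wczasy-egipt/ is the preferred URL:

```html
<!-- Emitted in the <head> of every parameterized wczasy-egipt variant;
     the absolute href is the clean landing page from the question. -->
<link rel="canonical" href="http://www.kupwakacje.pl/wczasy-egipt/" />
```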
I know it's a long and complicated post. Thanks for reading, and I would be very happy with any tip or response.
-
Also, here's a SEOmoz blog post discussing Google, internal search results pages, and thin content: http://www.seomoz.org/blog/fat-pandas-and-thin-content
"Google has often taken a dim view of internal search results (sometimes called “search within search”, although that term has also been applied to Google’s direct internal search boxes). Essentially, they don’t want people to jump from their search results to yours – they want search users to reach specific, actionable information.
While Google certainly has their own self-interest in mind in some of these cases, it’s true that internal search can create tons of near duplicates, once you tie in filters, sorts, and pagination. It’s also arguable that these pages create a poor search experience for Google users.
The Solution
This can be a tricky situation. On the one hand, if you have clear conceptual duplicates, like search sorts, you should consider blocking or NOINDEXing them. Having the ascending and descending version of a search page in the Google index is almost always low value.
Likewise, filters and tags can often create low-value paths to near duplicates.
Search pagination is a difficult issue and beyond the scope of this post, although I'm often in favor of NOINDEXing pages 2+ of search results. They tend to convert poorly and often look like duplicates."
-
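The NOINDEXing described in that excerpt is a single meta tag on the sort/filter/pagination variants. A minimal sketch ('follow' keeps the links crawlable):

```html
<!-- On search sorts, filters, and pages 2+ of results: let Google
     follow the links but keep the page itself out of the index. -->
<meta name="robots" content="noindex, follow" />
```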
Yeah, the iframe idea seems to be the easiest to implement and would give you a nice amount of control over both the URLs and the content on the pages. Generally Google tries to avoid indexing other sites' internal search results pages, so if you can add content around the iframe that helps make those search pages unique, that will help.
-
OK, to be honest I will try all of this advice. I'm 99% sure I can't do much about the GET parameters, but I will check.
The second idea, making some kind of static pages and pulling in the search response with an iframe, seems really nice and is definitely doable. I will dive into that.
The third one is the most obvious, but I doubt I will manage to do it (even though I'm really not a bad developer ;)). There are probably about 30 parameters which would need to be rewritten. It might be a better idea to rewrite just a few main ones (like which step the user is at, which destination, which hotel, etc.). But can Apache decode JavaScript?
hmm..
Thanks for the answers so far!
-
First, I'd look for a way to shorten the URL via the API. There are a TON of blank variables in that URL, so I'm guessing the API has everything turned on even though you're not pulling results for all of those variables. If you can, get it to return data only for the things actually being searched for.
Next, if the API is just too unmanageable, I'd look into building static pages that pull search results into them via an iFrame. That way you could control all the URLs and content for several hundred popular searches, have nice clean URLs, but still have the dynamic search results as a portion of the page.
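A static landing page wrapping the dynamic results might look like the sketch below; the title, body copy, and iframe src are assumptions, not the site's real markup:

```html
<!-- Hypothetical static landing page for one popular search. -->
<!DOCTYPE html>
<html lang="pl">
<head>
  <title>Wczasy Egipt - hotele i oferty</title>
</head>
<body>
  <h1>Wczasy Egipt</h1>
  <p>Unique editorial content about Egypt holidays goes here,
     so the page is more than just the embedded search results.</p>
  <!-- The API's dynamic results, isolated from the page's own clean URL. -->
  <iframe src="/search-frame/?dest=egipt" width="100%" height="800"></iframe>
</body>
</html>
```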
A last option, if possible, would be to set up URL rewrites to change the popular searches into normal-sounding pages, but that could be difficult and cause things to break if the API changes suddenly or throws more random variables into the mix.
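Such a rewrite could be sketched with Apache mod_rewrite roughly as below; the pattern, the parameter names, and the search.php endpoint are all hypothetical, not the site's actual setup:

```apache
RewriteEngine On
# Hypothetical: map one clean hotel URL onto the parameterized search script.
# e.g. /wczasy-egipt/hotel-luxor/ -> search.php?dest=egipt&hotel=hotel-luxor
RewriteRule ^wczasy-egipt/([a-z0-9-]+)/?$ search.php?dest=egipt&hotel=$1 [L,QSA]
```

The [QSA] flag appends any remaining query string, which is exactly where a changed API could still sneak unexpected variables back in.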