Need help de-indexing URL parameters on my website.
-
Hi,
Need some help.
So this is my website: https://www.memeraki.com/
If you hover over any of the products, there's a quick view option that opens a popup window for that product.
That popup is triggered by this URL: https://www.memeraki.com/products/never-alone?view=quick
In the URL you can see the parameter "view=quick", which is in fact responsible for the popup. The problem is that Google, and even your Moz crawler, is picking up this URL as a separate webpage, resulting in crawl issues like missing tags.
I've already used Google Webmaster Tools to block the "view" parameter URLs on my website from indexing, but it's not fixing the issue.
Can someone please provide some insight into how I can fix this?
-
Hi Imran - Oh, that must be frustrating! Have you tried contacting the theme developer for support? You may also be able to get guidance in the Shopify forum, especially if you are using a more common theme.
-
Haven't found a feasible solution yet. The code for the popup is not available to edit, so inserting a manual canonical tag is not possible. Does anybody have something else to offer?
-
Awesome. Good luck!
-
Okay, I'll try and see if I can do that. It's a Shopify-based website, so I don't have as much freedom to tweak things. The popup came built into the theme. I'll see if I can include the canonical link in the popup source code.
-
This is exactly my point: the view=quick version of the page needs a canonical pointing back to the proper version of the page. Yes, it's not a normal webpage, but that is how search engines are viewing it, because it's missing the tag.
This will signal to search engines that the 'view=quick' version of the page is a duplicate of the normal page and that it should not be ranked within search results.
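Since this is a Shopify site, one way to cover every page at once, rather than editing the popup itself, is a single canonical line in the theme layout. This is only a sketch, assuming the theme's layout file is editable and that Shopify's built-in canonical_url object resolves to the clean product URL on your plan and theme:

```liquid
<!-- In layout/theme.liquid, inside <head> -->
<!-- canonical_url is Shopify's built-in canonical for the current page, -->
<!-- so a ?view=quick URL should point back at the clean product URL -->
<link rel="canonical" href="{{ canonical_url }}" />
```

If the theme already outputs a canonical tag (many do), check whether it is missing only on the view=quick rendering before adding a second one.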
-
You mean the "never-alone" one?
As I said, that one is not actually a webpage; it's just a URL that triggers the popup.
If you check the URL without the "view=quick" parameter, you'll find a canonical tag in there. I've added screenshots of the HTML for our home page and that product page.
-
I've clicked on the link you mentioned above and there is no canonical on the page?
-
We've already added the canonical tags in the header section. Do you mean we have to do this for each and every individual product page?
This is the script we've added:
-
Hey there,
This is a funny issue that has plagued SEO for years. I find that the best fix is to ensure that canonicals are automatically added to query-parameter versions of pages.
In this instance, I would ensure a canonical tag is added in the below format:
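Using the product page from this thread as the example, that would be a standard rel="canonical" link element in the head of the parameterised version:

```html
<!-- In the <head> of https://www.memeraki.com/products/never-alone?view=quick -->
<link rel="canonical" href="https://www.memeraki.com/products/never-alone" />
```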
This ensures that search engines understand which page is the main one you want to rank, and encourages them not to rank the query-parameter versions.
All the best,
Sean
Related Questions
-
Duplicate URLs on eCommerce site caused by parameters
Hi there, We have a client with a large eCommerce site with about 1,500 duplicate URLs caused by parameters in the URLs (such as the sort parameter, where the list of products is sorted by price, age, etc.). Example: www.example.com/cars/toyota
First duplicate URL: www.example.com/cars/toyota?sort=price-ascending
Second duplicate URL: www.example.com/cars/toyota?sort=price-descending
Third duplicate URL: www.example.com/cars/toyota?sort=age-descending
Originally we had advised adding a robots.txt file to block search engines from crawling the URLs with parameters, but this hasn't been done. My question: if we add the robots.txt now and exclude all URLs with filters, how long will it take for Google to disregard the duplicate URLs? We could ask the developers to add canonical tags to all the duplicates, but there are about 1,500... Thanks in advance for any advice!
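A robots.txt rule along the lines described in this question might look like the sketch below. This assumes Google's support for the * wildcard in Disallow patterns; also note that robots.txt only blocks crawling, it does not remove URLs that are already indexed:

```
# Block crawling of the sort-parameter duplicates described above
User-agent: *
Disallow: /*?sort=
Disallow: /*&sort=
```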
Intermediate & Advanced SEO | | Gabriele_Layoutweb0 -
Need some help/input about my Joomla sitemap created by XMap
Here is my current sitemap for my site: http://www.yakangler.com/index.php?option=com_xmap&view=xml&tmpl=component&id=1
I have some questions about its current settings. I have a component called JReviews for which Xmap produces a separate link for each category, e.g.:
http://www.yakangler.com/fishing-kayak-review/265-2013-hobie-mirage-adventure-island (lastmod 2014-09-03, monthly, 0.4)
http://www.yakangler.com/fishing-kayak-review/266-2012-wilderness-systems-tarpon-140 (lastmod 2014-06-03, monthly, 0.4)
http://www.yakangler.com/fishing-kayak-review/343-wilderness-systems-tarpon-120-ultralite (lastmod 2013-11-25, monthly, 0.4)
Whereas my other articles are only linked by the content category, e.g.:
http://www.yakangler.com/news (monthly, 0.4)
http://www.yakangler.com/tournaments (monthly, 0.4)
http://www.yakangler.com/kayak-events (monthly, 0.4)
http://www.yakangler.com/spotlight (monthly, 0.4)
Which option is better?
Intermediate & Advanced SEO | | mr_w
-
Blocking Certain Site Parameters from Google's Index - Please Help
Hello, So we recently used Google Webmaster Tools in an attempt to block certain parameters on our site from showing up in Google's index. One of our site parameters is essentially for user location and accounts for over 500,000 URLs. This parameter does not change page content in any way, and there is no need for Google to index it. We edited the parameter in GWT to tell Google that it does not change site content and to not index it. However, after two weeks, all of these URLs are still definitely getting indexed. Why? Maybe there's something we're missing here. Perhaps there is another way to do this more effectively. Has anyone else run into this problem? The path we used to implement this action: Google Webmaster Tools > Crawl > URL Parameters. Thank you in advance for your help!
Intermediate & Advanced SEO | | Jbake
-
Please help :) Troubles getting 3 types of content de-indexed
Hi there,
I know that it takes time, and I already submitted a URL removal request 3-4 months ago. But I would really appreciate some advice on this topic. Thank you in advance to everyone who contributes!
1) De-indexing archives: Google had indexed all my /tag/ and /authorname/ archives. I set them to noindex a few months ago, but they still appear in search results. Is there anything I can do to speed up this de-indexing?
2) De-indexing the /plugins/ folder on a WordPress site: Google has also indexed my entire /plugins/ folder, so I added Disallow: /plugins/ to my robots.txt 3-4 months ago, but /plugins/ URLs still appear in search results. What can I do to get the /plugins/ folder de-indexed? Is my Disallow: /plugins/ rule in robots.txt making it worse, because Google has already indexed the folder and now can't access it? How do you solve this?
3) De-indexing a subdomain: I had created a subdomain containing adult content and completely deleted it from my cPanel 3 months ago, but it still appears in search engines. Is there anything else I can do to get it de-indexed? Thank you in advance for your help!
Intermediate & Advanced SEO | | Ltsmz
-
Hash URLs
Hi Mozzers, Happy Friday! I have a client that has created some really nice pages from their old content, and we want to redirect the old ones to the new pages. The way the web developers have built these new pages is to use hashbang URLs, for example www.website.co.uk/product#newpage My question is: can I redirect URLs to these kinds of pages? Would it be done using the .htaccess file? Thanks in advance, Karl
Intermediate & Advanced SEO | | KarlBantleman0 -
Links with Parameters
The links from the home page to some internal pages on my site have been coded in the following format by my tech guys: www.abc.com/tools/page.html?hpint_id=xyz If I specify within Google Webmaster Tools that the hpint_id parameter should be ignored and that content for the user does not change, will Google credit me for a link from the home page, or am I losing something here? Many thanks in advance
Intermediate & Advanced SEO | | harmit360 -
What Should I Do With My URL Names?
I release property on my blog each week, and it has come to the point where we get property in the same area as we have had in the past. So I name my URL /blah-blah-blah-[area of property]/ for the first property in that area, right? Now I get a different property in that same area, and the URL will have to be named /blah-blah-blah-[area of property]-2/. I'm not sure if this is a major issue or not, but I'm sure there must be a better way than this, and I don't really want to take down our past properties - unless you can give me good reason to, of course? So before I start getting URLs like this: /blah-blah-blah-[area of property]-2334343534654/ (well, ok, maybe not that bad! But you get my point) I wanted to see what everyone's opinion on it is 🙂 Thanks in advance!
Intermediate & Advanced SEO | | JonathanRolande0 -
How to deal with old, indexed hashbang URLs?
I inherited a site that used to be in Flash and used hashbang URLs (i.e. www.example.com/#!page-name-here). We're now off of Flash and have a "normal" URL structure that looks something like this: www.example.com/page-name-here

Here's the problem: Google still has thousands of the old hashbang (#!) URLs in its index. These URLs still work because the web server doesn't actually read anything that comes after the hash. So, when the web server sees the URL www.example.com/#!page-name-here, it basically renders the page www.example.com/# while keeping the full URL structure intact (www.example.com/#!page-name-here). Hopefully, that makes sense.

So, in Google you'll see this URL indexed (www.example.com/#!page-name-here), but if you click it you are essentially taken to our homepage content (even though the URL isn't exactly the canonical homepage URL, which should be www.example.com/). My big fear here is a duplicate content penalty for our homepage. Essentially, I'm afraid that Google is seeing thousands of versions of our homepage. Even though the hashbang URLs are different, the content (i.e. title, meta description, page content) is exactly the same for all of them. Obviously, this is a typical SEO no-no. And I've recently seen the homepage drop like a rock for a search of our brand name, which has ranked #1 for months. Now, admittedly we've made a bunch of changes during this whole site migration, but this #! URL problem just bothers me. I think it could be a major cause of our homepage tanking for brand queries.

So, why not just 301 redirect all of the #! URLs? Well, the server won't accept traditional 301s for the #! URLs, because the hash fragment never reaches the server (it doesn't acknowledge what comes after the #). I think our only option here is to try to add some 301-style redirects via JavaScript. Yeah, I know that spiders have a love/hate (well, mostly hate) relationship with JavaScript, but I think that's our only resort... unless someone here has a better way?
If you've dealt with hashbang URLs before, I'd LOVE to hear your advice on how to deal with this issue. Best, -G
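The JavaScript fallback described in this question could be sketched as follows. The function name and URL mapping are illustrative; since the hash fragment never reaches the server, client-side script is the only place it can be read:

```javascript
// Map an old hashbang fragment (example.com/#!page-name) onto the new clean
// path (example.com/page-name). Kept as a pure function so the mapping can
// be tested separately from the browser redirect.
function hashbangToPath(hash) {
  if (hash && hash.startsWith("#!")) {
    // Drop the "#!" prefix and any extra leading slashes, then re-root the path.
    return "/" + hash.slice(2).replace(/^\/+/, "");
  }
  return null; // not a hashbang URL; leave the page alone
}

// In the browser, run on load and redirect if a mapping exists.
if (typeof window !== "undefined") {
  const target = hashbangToPath(window.location.hash);
  if (target) {
    // replace() keeps the old hashbang URL out of the browser history.
    window.location.replace(target);
  }
}
```

A client-side redirect like this carries less signal than a server-side 301, so submitting a sitemap of the clean URLs alongside it may help search engines pick up the new locations.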
Intermediate & Advanced SEO | | Celts180