Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.
Can using nofollow on Magento layered navigation hurt?
-
Howdy Mozzers!
We would like to use noindex, nofollow on our Magento layered navigation pages once any two filters are selected. (We are using single-filter pages as landing pages, so we would like those indexed.)
Is it OK to use noindex, nofollow on these filter pages? Are there disadvantages to using nofollow on internal pages?
Matt Cutts mentioned refraining from using nofollow internally: https://www.youtube.com/watch?v=4SAPUx4Beh8
But we would like to conserve crawl budget and PageRank flow across potentially hundreds of thousands of irrelevant/duplicate filter pages.
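For reference, the directive being discussed is a meta robots tag emitted in the page head. A rough sketch (not Magento-specific code; the trigger condition is whatever server-side logic counts the applied filters):

```html
<!-- Emitted in the <head> of any layered-navigation URL with two or more filters applied -->
<meta name="robots" content="noindex, nofollow">
```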
-
I understand I might be a little late, but I experienced this issue first-hand with a Magento site. Once I added a wildcard exclusion in the robots.txt file, my impressions and clicks improved noticeably.
-
Hi,
That is quite a few pages!
If the main issue is crawl-related then robots.txt is probably the best way to go; the meta tags will still allow the pages to be crawled (they have to be crawled for the tag to be read). Check out the comments in this and this post for wildcard matching in robots.txt, which should do what you need. If the pages are already indexed, it might be wise to leave some time for the noindex tags to be picked up, and then implement the crawl blocking in robots.txt (and test in GWT to make sure you are not accidentally blocking more than you think). In that case I think you could still leave out the nofollow meta tag, but this might just be personal opinion now: I'm not sure it would make much difference in practice once you have noindexed the pages and blocked crawling!
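A wildcard pattern along these lines could block the multi-filter URLs. This is only a sketch: the filter parameter names are hypothetical and depend on how Magento builds the filter URLs on the site in question, so test it in GWT before relying on it.

```
User-agent: *
# Block any URL whose query string contains a second filter parameter,
# i.e. an ampersand followed by another known filter name
Disallow: /*?*&color=
Disallow: /*?*&size=
Disallow: /*?*&price=
```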
-
Hi Lynn,
Thank you for your valuable input on the matter. Yes, using meta tags in the header. We are currently submitting the filter pages we want indexed through the sitemap, so Googlebot should be able to reach those pages. Also, we are displaying noindex, nofollow tags only on filter pages with a combination of more than two filters selected, as we do not need to go any deeper than that.
I understand your point about using noindex, follow instead of noindex, nofollow to prevent unexpected crawl issues. But on the contrary, don't you think we could conserve crawl budget by using noindex, nofollow tags on filter pages that serve no purpose being crawled and probably won't be externally linked to either?
We currently have around 7 filters, some with many values. This can create combinations of more than 500,000 filter pages...
Thanks
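The combinatorics behind that 500,000+ figure can be sketched roughly. The per-filter value counts below are hypothetical (the thread only says "around 7 filters, some with many values"), but they show how quickly the page count explodes:

```python
from math import prod

# Hypothetical value counts for the site's 7 filters (e.g. colour, size, brand, ...)
values_per_filter = [6, 8, 5, 10, 4, 7, 3]

# Each filter is either left unselected or set to one of its values, and every
# distinct combination yields its own layered-navigation URL.
# Subtract 1 to exclude the unfiltered base category page.
total_filter_pages = prod(n + 1 for n in values_per_filter) - 1

print(total_filter_pages)  # 665279 crawlable URL variations for one category
```

Even modest value counts per filter multiply out to hundreds of thousands of URLs, which is why crawl budget is the real concern here.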
-
Hi,
I assume you mean in a meta tag in the page header? As a general rule I would avoid using nofollow and simply noindex the pages in question. If you implement this with a meta tag, the pages will be reached from the layered navigation links anyway, so they would then be a dead end for both PageRank and the crawler, with the potential to cause unexpected crawl problems rather than optimising crawl flow.
As long as you are addressing, as best you can, any duplicate content issues caused by the layered navigation (check out this post for a good rundown of the various solutions), I would leave the noindex in place and let the crawler follow the links as normal.
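In other words, the suggestion is to drop the nofollow directive and rely on noindex alone, e.g.:

```html
<!-- "follow" is the default, so content="noindex" alone is equivalent -->
<meta name="robots" content="noindex, follow">
```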
Related Questions
-
How safe is it to use a meta-refresh to hide the referrer?
Hi guys, So I have a review site and I'm affiliated with several partnership programs whose products I advertise on my site. I don't want these affiliate programs to see the source of my traffic (my site), so I'm looking for a safe solution to hide the referrer URL. I have recently added a rel="noreferrer" tag to all my affiliate links, but this method isn't perfect, as not all browsers respect that rule. After doing some research and checking my competitors, I noticed that some of them use a meta-refresh, which seems more reliable in this regard. So, how safe is it to use a meta-refresh as a means of hiding the referrer URL? I'm worried that implementing a meta-refresh redirect might negatively affect my SEO. Does anybody have any suggestions on how to hide the referrer URL without damaging SEO? Thank you.
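For context, a meta-refresh redirect of the kind described is an intermediate page whose head contains a tag like this (the destination URL is a placeholder):

```html
<!-- A 0-second refresh on an interstitial page strips the original referrer in many browsers -->
<meta http-equiv="refresh" content="0; url=https://partner.example.com/offer">
```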
Intermediate & Advanced SEO | Ibis150
-
Can you index a Google doc?
We have updated and added completely new content to our state pages. Our old state content is sitting in our Google Drive. Can I make these public to get them indexed and provide a link back to our state pages? In theory it sounds like a great link-building strategy... TIA!
Intermediate & Advanced SEO | LindsayE1
-
Using "nofollow" internally can help with crawl budget?
Hello everyone. I was reading this article on semrush.com, published last year, and I'd like to know your thoughts about it: https://www.semrush.com/blog/does-google-crawl-relnofollow-at-all/ Is that really the case? I thought that Google crawls and "follows" nofollowed links even though it doesn't pass any PR to the destination link. If instead Google really doesn't crawl internal links tagged as "nofollow", can that really help with crawl budget?
Intermediate & Advanced SEO | fablau0
-
Does it hurt your SEO to have an inaccessible directory in your site structure?
Due to CMS constraints, there may be some nodes in our site tree that are inaccessible and will automatically redirect to their parent folder. Here's an example: www.site.com/folder1/folder2/content, /folder2 redirects to /folder1. This would only be for the single URL itself, not the subpages (i.e. /folder1/folder2/content and anything below that would be accessible). Is there any real risk in this approach from a technical SEO perspective? I'm thinking this is likely a non-issue but I'm hoping someone with more experience can confirm. Another potential option is to have /folder2 accessible (it would be 100% identical to /folder1, long story) and use a canonical tag to point back to /folder1. I'm still waiting to hear if this is possible. Thanks in advance!
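The canonical option mentioned would be a tag in the head of the /folder2 pages pointing at their /folder1 twins. A sketch using the example URLs from the question (scheme assumed):

```html
<!-- On www.site.com/folder1/folder2/, which is 100% identical to /folder1/ -->
<link rel="canonical" href="https://www.site.com/folder1/">
```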
Intermediate & Advanced SEO | digitalcrc0
-
Using the same content on different TLD's
Hi everyone, We have clients we are going to work with in different countries, but sometimes with the same language. For example, we might have a client in a competitive niche working in Germany, Austria and Switzerland (Swiss German), i.e. we're potentially going to rewrite our website three times in German. We're thinking of using Google's hreflang tags and using pretty much the same content - is this a safe option? Has anyone actually tried this successfully or otherwise? All answers appreciated. Cheers, Mel.
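An hreflang setup for the German-language variants described might look like this (the domain and paths are placeholders; each page must list all alternates, including itself):

```html
<link rel="alternate" hreflang="de-DE" href="https://example.com/de-de/">
<link rel="alternate" hreflang="de-AT" href="https://example.com/de-at/">
<link rel="alternate" hreflang="de-CH" href="https://example.com/de-ch/">
<link rel="alternate" hreflang="x-default" href="https://example.com/">
```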
Intermediate & Advanced SEO | dancape1
-
Using subdomains for related landing pages?
Seeking subdomain usage and related SEO advice... I'd like to use multiple subdomains for multiple landing pages, all with content related to the main root domain. Why? Cost: I only have to register one domain. One root domain for better 'branding'. Multiple subdomains that each focus on one specific reason (and set of specific keywords) people would search when looking for a solution - and a reason to hire us (or our competition).
Intermediate & Advanced SEO | nodiffrei0
-
Follow or nofollow to subdomain
Hi, I run a hotel booking site and the booking engine is set up on a subdomain. The subdomain is blocked from being indexed in robots.txt. Should the links from the main domain have a nofollow to the subdomain? What are your thoughts? Thanks!
Intermediate & Advanced SEO | vmotuz
-
Can PDF be seen as duplicate content? If so, how to prevent it?
I see no reason why PDFs couldn't be considered duplicate content, but I haven't seen any threads about it. We publish loads of product documentation provided by manufacturers, as well as white papers and case studies. These give our customers and prospects a better idea of our solutions and help them along their buying process. However, I'm not sure if it would be better to make them non-indexable to prevent duplicate content issues. Clearly we would prefer a solution where we benefit from the keywords in the documents. Does anyone have insight on how to deal with PDFs provided by third parties? Thanks in advance.
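One common way to keep PDFs out of the index is an X-Robots-Tag HTTP header, since a meta robots tag can't be placed inside a PDF. A sketch assuming an Apache server with mod_headers enabled:

```
# In .htaccess or the server config: send noindex for every PDF response
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```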
Intermediate & Advanced SEO | Gestisoft-Qc1