Pagination: is it better to use noindex, follow?
-
Is it better to use the robots meta noindex, follow tag for pagination (page 2, page 3) of category pages which list the items within each category, or just let Google index these pages?
Before Panda I was not using noindex, because I figured that if page 2 is in Google's index, then the items on page 2 are more likely to be in Google's index too. It also means each item has an internal link.
After I got hit by Panda, my thinking was: page 2 has no unique content, only a list of links with a short excerpt from each item, and that excerpt can also be found on each item's own page, so maybe that contributed to the Panda penalty. So I placed the meta tag noindex, follow on every page 2, 3, etc. of each category page. Page 1 of each category page has a short introduction, so I hope that is enough to make it "thick" content (is that a word? :-)). My visitors don't want long introductions; it hurts bounce rate and time on site.
Now I'm wondering if that is common practice, and whether items on page 2 are less likely to be indexed since they have no internal links from an indexed page.
Thanks!
-
Hi Theo, this is an old post you commented on, but I wanted to expand on the question and ask your thoughts.
I have a real estate website where I show MLS listings (properties for sale shared by Realtors), which means these MLS listings also exist on 100+ other real estate sites. For my various MLS result pages I use rel=prev/next on the paginated pages. Now, here is the question: should I also add a "noindex, follow" on these paginated pages? According to a Google blog post, there is no need to when using rel=prev/next. However, in my case these pages are very similar to pages elsewhere on the web and are not original content. Yes, I know I could make them more unique by adding content, but that is not what my users want; I need a simple, clean look with minimal words.
So, if I have a result set with 10 pages, would noindex, follow on 9 of those pages make sense to reduce the duplicate content on my website? Or is the issue that my result pages will look "thin" compared to competitors', and that will impact my ranking negatively?
-
Google just announced some tags to help support pagination. They say that if you have a view-all option that doesn't take too long to load, searchers generally prefer it, so you can rel=canonical to that page. However, if you don't have a view-all page, you can put these nifty rel="next" and rel="prev" tags in to let Google know your page is paginated, and where the next and previous pages are.
View all: http://googlewebmastercentral.blogspot.com/2011/09/view-all-in-search-results.html
next/prev: http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
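To make that concrete, here is a rough sketch of what the markup could look like in the head of page 2 of a paginated series (the example.com URLs are just placeholders, not from this thread):

```html
<!-- Hypothetical <head> of /widgets?page=2 -->
<!-- Tell Google where the previous and next pages in the series live -->
<link rel="prev" href="https://www.example.com/widgets?page=1">
<link rel="next" href="https://www.example.com/widgets?page=3">

<!-- Option from the view-all post: if a fast-loading view-all page exists,
     the paginated pages can point their canonical at it instead -->
<!-- <link rel="canonical" href="https://www.example.com/widgets/view-all"> -->
```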
-
I was talking about the same concept you're describing when I mentioned category listings. The next/previous and related-item links sound exactly like the kind of thing I would recommend to get links to the items on pages > 1! Lastly, yes, the canonical URL should be the page we're actually viewing, and not always page 1.
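As a rough illustration (URLs made up for the example), on page 2 that means:

```html
<!-- Hypothetical <head> of /widgets?page=2 -->
<!-- The canonical points at page 2 itself... -->
<link rel="canonical" href="https://www.example.com/widgets?page=2">

<!-- ...not at page 1, which would tell Google to treat page 2
     as a duplicate of page 1 and drop it from consideration -->
<!-- <link rel="canonical" href="https://www.example.com/widgets?page=1"> -->
```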
-
What do you mean by category listings? I'm talking about category pages where each item in the category is listed.
I do link from product or item pages to each other using next, previous and related items.
Also, I'm pretty sure about this but just asking: the rel=canonical for pages 2 and 3 should be that page itself, and not page 1?
-
You're welcome! It is a link from one page of your website to another, and thus an internal link; I don't see how noindex,follow would change that. Yes, they will receive link juice. Because of the "follow" in the robots tag, the pages (even though they aren't indexed) still pass link juice. Like I said in my original post, though, it is best to have other pages (such as category listings, for example) link to these items as well.
-
Thanks for the answer.
Does a link from a page with noindex,follow count as an internal link? Will the items on page 2 receive any link juice, if their only internal link is from a noindexed page?
What do you think?
-
From what I've read on the internet, it is best to "noindex,follow" all pages >1. This issue bugged me for quite some time as well, and I struggled to find good resources explaining why that solution was best. Now that I've actually given the subject some thought, and finally managed to read some quality material on the matter, it all makes sense.
It's basically a checklist. Do you want search engines to:
- index your paginated result pages: yes / no
- reach the items that are listed in your paginated result pages: yes / no
In most cases you don't want your paginated result pages to be indexed. With or without Panda, visitors get little value from actually landing on 'page 7' of your result pages; that page on its own provides little or no value to them. However, you DO want the items listed on those paginated pages to be crawled, especially when you don't have any other pages linking to them (which you should have, by the way). This boils down to:
- Don't nofollow your paginated links (because you want search engine spiders to follow them and reach the items).
- Put "noindex,follow" in the meta robots tag on all pages >1 (so page 2 and greater), so the engines will not index those paginated results but will still crawl through to the pages behind the listings (see the example snippet below).
Good luck!