Rel="prev" / "next"
-
Hi guys,
The tech department implemented rel="prev" and rel="next" on this website a long time ago.
We also added a self-referencing canonical tag on each page. We're talking about the following situation:
However, we still see a lot of paginated pages appearing in the SERPs.
Is this just a case of rel="prev" and rel="next" being hints (suggestions) to Google rather than strict directives?
And in this specific case, is Google deciding not only to show the first page in the SERPs, but also most of the paginated pages? Please let me know what you think.
Regards,
Tom -
Interesting development which may be of interest to you Ernst:
Google admitted just the other day that they "haven't supported rel=next/prev for years." https://searchengineland.com/google-apologizes-for-relnext-prev-mixup-314494
"Should you remove the markup? Probably not. Google has communicated this morning in a video hangout that while it may not use rel=next/prev for search, it can still be used by other search engines and by browsers, among other reasons. So while Google may not use it for search indexing, rel=prev/next can still be useful for users. Specifically some browsers might use those annotations for things like prefetching and accessibility purposes."
-
I was looking into this today and happened across this line in Google's Search Console Help documents:
rel="next" and rel="prev" are compatible with rel="canonical" values. You can include both declarations in the same page. For example, a page can contain both of the following HTML tags:
Here's the link to the doc - https://support.google.com/webmasters/answer/1663744?hl=en
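The actual tags didn't survive the paste above, but as a hedged sketch of what the doc describes (all URLs are placeholders), page two of a paginated series could carry both declarations like this:

```html
<!-- Hypothetical page 2 of a paginated series; example.com is a placeholder -->
<link rel="canonical" href="https://example.com/category?page=2">
<link rel="prev" href="https://example.com/category?page=1">
<link rel="next" href="https://example.com/category?page=3">
```

Note the canonical here is self-referencing, which is the combination the doc says is compatible.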
But I wouldn't combine a canonical pointing somewhere else with the rel="next"/"prev" annotations.
-
I had never actually considered that. My thought is: no. I'd leave canonicals entirely off ambiguous URLs like that. I've seen a lot of instances lately where over-zealous sculpting has led to loss of traffic. In the instance of this exact comment/reply it's just my hunch, but I'd remove the tag entirely. There's always risk in adding layers of unneeded complexity, even if it's not immediately obvious.
-
I'm going to second what @effectdigital is outlining here. Google does what they want, and sometimes they index paginated pages on your site. If you have things set up properly and you are still seeing paginated pages when you do a site: search in Google, then you likely need to strengthen your content elsewhere, because Google still sees these paginated URLs as authoritative for your domain.
I have a question for you @effectdigital - do you still self-canonical with rel="prev"/"next"? I mean, I knew that you wouldn't want to canonical to another URL, but I hadn't really thought about the self-canonical until I read what you said above. Hadn't really thought about that one haha.
Thanks!
-
Both are hints to Google rather than hard directives. All of the "rel=" links work this way, including hreflang, alternate/mobile, AMP, and prev/next.
It's not really necessary to use a canonical tag in addition to any of the other "rel=" family links.
A canonical tag says to Google: "I am not the real version of this page, I am non-canonical. For the canonical version of the page, please follow this canonical tag. Don't index me at all; index the canonical destination URL."
The pagination-based prev/next links say to Google: "I am the main version of this page, or one of the other paginated URLs. If you follow this link, you can find and index more pages of content if you want to."
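To make that concrete, here's a hedged sketch (all URLs are placeholders) of the conflicting combination on a hypothetical page 2 that canonicals away to page 1:

```html
<!-- Hypothetical /category?page=2 with the contradictory setup -->
<link rel="canonical" href="https://example.com/category">      <!-- "don't index me, index page 1" -->
<link rel="prev" href="https://example.com/category">           <!-- "do crawl my paginated siblings" -->
<link rel="next" href="https://example.com/category?page=3">
```

The canonical and the prev/next annotations are each asking Google to do opposite things with the same set of URLs.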
So the problem you create by using both, is creating the following dialogue to Google:
1.) "Hey Google. Follow this link to index paginated URLs if they happen to have useful content on them"
*Google goes to paginated URL
2.) "WHAT ARE YOU DOING HERE Google!? I am not canonical, go back where you came from #buildawall"
*Google goes backwards to non-paginated URL
3.) "Hey Google. Follow this link to index paginated URLs if they happen to have useful content on them"
*Google goes to paginated URL
4.) "WHAT ARE YOU DOING HERE Google!? I am not canonical, go back where you came from"
*Google goes backwards to non-paginated URL
... etc.
As you can see, it's confusing to tell Google to crawl and index URLs with one tag, then tell them not to with another. All your indexation signals (canonical tags, other rel links, robots tags, HTTP X-Robots-Tag headers, sitemaps, robots.txt files) should tell the SAME logical story, not different stories that directly contradict each other.
If you point to a web page via any indexation method (rel links, sitemap links), don't turn around and say "actually, no, I've changed my mind, I don't want this page indexed" by canonicalling that URL elsewhere. If you didn't want a page to be indexed, then don't point to it via other indexation methods in the first place.
A) If you do want those URLs to be indexed by Google:
1) Keep in mind that by using rel="prev"/"next", Google will know they are pagination URLs and won't weight them very strongly. If, however, Google decides that some paginated content is very useful, it may decide to rank such URLs
2) If you want this, remove the canonical tags and leave rel=prev/next deployment as-is
B) If you don't want those URLs to be indexed by Google:
1) This is only a hint; Google can disregard it, but it will be much more effective if you aren't contradicting yourself
2) Remove the rel="prev"/"next" markup completely from paginated URLs. Leave the canonical tag in place and also add a meta noindex tag to paginated URLs
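As a hedged sketch of what option (B)'s markup would look like on a paginated URL (placeholder URLs, adjust to your own paths):

```html
<!-- Hypothetical /category?page=2 under option (B): no prev/next markup at all -->
<meta name="robots" content="noindex, follow">
<link rel="canonical" href="https://example.com/category">
```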
Keep in mind that just because you block Google from indexing the paginated URLs, it doesn't necessarily mean the non-paginated URLs will rank in the same place (with the same power) as the paginated URLs, which will be mostly lost from the rankings. You may get lucky in that area, you may not; it depends on the content similarity of both URLs, and on whether Google's perceived reason to rank that URL hinged on a piece of content that exists only in the paginated variant.
My advice? Don't be a control freak; skip option (B) and use option (A). Free traffic is free traffic, so don't turn your nose up at it.