Blog Page Titles - Page 1, Page 2 etc.
-
Hi All,
I have a couple of crawl errors coming up in Moz that I am trying to fix.
They are duplicate page title issues in my blog area.
For example, we have a URL of www.ourwebsite.com/blog/page/1, and as we have quite a few blog posts, they spill over onto another page, e.g. www.ourwebsite.com/blog/page/2. Both of these URLs have the same heading, title, meta description, etc.
I was just wondering whether this is an actual SEO problem and, if so, whether there is a way to fix it.
For reference, I am using WordPress, but I can't see anywhere to access the settings for these pages.
Thanks
-
I am having this very problem, but it is probably down to a fundamental misunderstanding of search engines on my part, so bear with me.
I have used Yoast SEO to turn on "noindex, follow" for archives and categories, but not for www.cpresearch.net/blog. The reason is that I am presuming that indexing the blog is necessary for Google to find posts other than the current ones. If that is not the case, what link is Google following to find the canonicalized posts once they scroll off the one I list on the homepage? And do I need to be crawled by Google daily to make sure my canonicalized URLs stay indexed? I fear they will be orphaned...
Thanks for any insight.
-
Thanks for clearing this up.
It sounds like noindexing might actually make the most sense then.
Thanks everyone!
Regards
-
If you put noindex,follow on the pages /2, /3, etc., those pages will not be indexed; however, the blog posts they link to will still be indexed (as Google will follow the links).
In most cases, pages that just contain links to blog articles have little value as landing pages, which is why I think noindex,follow is the more appropriate option. rel=next/previous is normally meant for articles cut into several pieces (publishers do this a lot to increase pageviews, i.e. a bigger ad inventory). A rough sketch of what that markup could look like follows below.
Without knowing your site it's difficult to judge which is the best solution.
Dirk
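Purely to illustrate the rel=next/previous option mentioned above (not a recommendation for this particular site), here is a rough sketch of how those link tags could be printed on a WordPress blog's paginated pages. The hook and functions are standard WordPress, but the snippet is untested and would need checking against the actual theme.

<?php
// Rough sketch, for illustration only: print rel=prev/next links in the head
// of the blog index and its paginated pages (/blog/page/2, /blog/page/3, ...)
// to signal that they belong to one series.
add_action( 'wp_head', function () {
    if ( ! is_home() ) {
        return; // only the main posts index and its /page/N variants
    }
    $page  = max( 1, (int) get_query_var( 'paged' ) );
    $total = (int) $GLOBALS['wp_query']->max_num_pages;
    if ( $page > 1 ) {
        printf( '<link rel="prev" href="%s">' . "\n", esc_url( get_pagenum_link( $page - 1 ) ) );
    }
    if ( $page < $total ) {
        printf( '<link rel="next" href="%s">' . "\n", esc_url( get_pagenum_link( $page + 1 ) ) );
    }
} );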
-
But if pages 2/3/etc are displaying duplicate content from your actual blog posts, then why would you want the paginated pages indexed by Google?
Ask yourself: what do I expect people to Google to land on page 2 of my blog, and would I rather they land on a blog post instead? If the pages 2/3/etc provide no value to searchers and only serve as navigation for users, why confuse Google by keeping them indexed?
-
Yes, surely noindexing them would mean that the content in the blog posts on those pages wasn't being read by the search engine? Not ideal by any means!
I will look into the rel next/previous option.
Thank you for your input.
-
In addition to Ria's answer: make them noindex,follow.
If these pages (/2, /3, etc.) had any value for inclusion in the SERPs, you could consider using rel=next/previous, indicating that the pages belong together and should be treated as one page. The way I understand your question, though, noindex,follow is probably the better solution.
Dirk
-
This shouldn't be too much of an issue at all, really.
My recommendation would be to noindex these /page/2, etc. pages if you're using WordPress. Various WordPress plugins allow you to do this easily; my favourite is Yoast SEO, which lets you noindex those paginated pages and tag pages too. If you use tags, they would be more of an SEO concern than the paginated pages.
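If you'd rather not rely on a plugin for this, below is a minimal sketch of how paginated archive pages could be marked noindex,follow from a theme's functions.php. It is illustrative only, assumes WordPress 5.7 or later (for the wp_robots filter), and hasn't been tested against the asker's setup; Yoast SEO achieves the same result through its settings, as described above.

<?php
// Minimal sketch, assuming WordPress 5.7+: mark paginated archive pages such
// as /blog/page/2 as noindex,follow so the duplicate titles drop out of the
// index while the links to the individual posts are still followed.
add_filter( 'wp_robots', function ( $robots ) {
    if ( is_paged() ) { // true on /page/2, /page/3, ... of any archive
        $robots['noindex'] = true;
        $robots['follow']  = true;
    }
    return $robots;
} );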