MozBot Finding Duplicate Pages That Aren't Duplicates
-
I've been reviewing the technical audits for my campaign in Moz and noticed a number of duplicate content issues that I'm not sure how to address. When I click through to see what the duplicates are, they are all different URLs with different content and images.
Based on what others wrote in the forum, this could be because the underlying code is nearly the same between these pages, and many of them use query parameters (I'm assuming that is why the code is almost exactly the same across these pages).
For example: website.com/tags/KEYWORD1?type=KEYWORD2 is reported as a duplicate of website.com/tags/KEYWORD3?type=KEYWORD4.
I read that I can use the URL Parameters area in Google Search Console, but Search Console says Googlebot isn't experiencing issues, so I wasn't sure that was the right move. I can't use canonicals because these pages all have different content on them, and I know duplicate content is a big SEO issue, so I really wasn't sure what my next steps should be.
Thanks for the help!
-
Hi there! Tawny from Moz's help team here.
The best way to prevent our crawler from reporting duplicate content for pages you aren't concerned about and don't intend to change is to block our crawler from those pages using the site's robots.txt file. For example, it looks like most of the pages reported as duplicates include URL parameters, so you should be able to add a disallow directive for that parameter (and any others) to block our crawler from accessing them. It would look something like this:
User-agent: Rogerbot
Disallow: /*?type=
...and so on, until you have blocked all of the parameters that may be causing these duplicate content errors. You can also use the wildcard user-agent * to block all crawlers from those pages, if you prefer.
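If several different parameters are producing duplicates, the finished file might look something like this sketch (here sort is a hypothetical second parameter name used only for illustration, and the pattern assumes the crawler honors * wildcards in Disallow paths):
User-agent: Rogerbot
Disallow: /*?type=
Disallow: /*?sort=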
Here is a great resource about the robots.txt file that might be helpful: https://moz.com/learn/seo/robotstxt
I'd recommend checking your robots.txt file in this handy Robots Checker Tool once you make changes to avoid any nasty surprises.
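You can also sanity-check simple rules locally with Python's standard-library parser. This is only a rough sketch: urllib.robotparser follows the original robots.txt spec and does not understand * wildcards, so the example below uses a plain path-prefix rule (blocking everything under /tags/, which is broader than blocking a single parameter):

```python
# Sanity-check robots.txt rules locally with Python's standard-library parser.
# Caveat: urllib.robotparser implements the original robots.txt spec and does
# NOT understand * wildcards, so this sketch tests a plain path-prefix rule.
import urllib.robotparser

rules = """\
User-agent: Rogerbot
Disallow: /tags/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Everything under /tags/ is blocked for Rogerbot...
print(rp.can_fetch("Rogerbot", "https://website.com/tags/KEYWORD1?type=KEYWORD2"))  # False
# ...while other pages remain crawlable.
print(rp.can_fetch("Rogerbot", "https://website.com/blog/"))  # True
```

Swap in your live robots.txt contents (or point RobotFileParser at the file's URL with set_url and read) to confirm the rules block exactly the pages you intend.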
Let us know if we can help with anything else! Just drop us a line at help@moz.com and we'll do our best to get things straightened out for ya.
Related Questions
-
Limit MOZ crawl rate on Shopify or when you don't have access to robots.txt
Hello. I'm wondering if there is a way to control Moz's crawl rate on our site. It is hosted on Shopify, which does not allow any control over the robots.txt file to add a rule like this:
User-agent: rogerbot
Crawl-delay: 5
Because of this, we get a lot of 430 error codes (mainly on our products), and this certainly prevents Moz from getting the full picture of our shop. Can we rely on Moz's data when critical pages are not being crawled due to 430 errors? Is there any alternative fix? Thanks
Moz Bar | AllAboutShapewear2
-
How to find the pages on a site that rank for zero keywords
I have around three thousand pages on my website. How do I find the list of pages that rank for zero keywords?
Moz Bar | srinivasan.n1
-
I keep getting a 429 Too Many Requests error for the wp-login page on my website. Is there a way to prevent that, or fix it outside of redirecting on the back end of WordPress?
I have a client that keeps getting 429 Too Many Requests critical crawl errors. I redirected some of them on the back end of WordPress, but more keep coming in. The URL has wp-login and directs to the back-end login section. Why would that come up as an error, how can I prevent it from happening again, and how can I fix the remaining errors without redirecting back to /? Thanks, Kalyn Lengieza
Moz Bar | GrindstoneConsult0
-
I'm checking keyword difficulty for two different sites. Would love to view the results by site instead of just one large list. Is that possible? Or would it just be easier to keep the lists separate in Excel and just import when I want an updated report?
I have keyword lists for two sites. Is there a way to label them in the keyword difficulty tool (List A, List B) so I can just view results for a particular site? Or do I need to run the report with List A, export results, delete those keywords, then run the report for List B?
Moz Bar | JohnNovakLV0
-
Domain.com isn't recognized by On-Page Grader, but domain.com/index.php is
I am running a website through On-Page Grader as www.domain.com, and it scores an "F" for a specific keyword. When it's run as www.domain.com/index.php, it scores an "A" for that same keyword and has everything checked other than "keyword in the domain name". There are no other files such as index.htm or index.html that would interfere, and I can't figure out why this page is not being recognized. I checked the robots and .htaccess files but do not see anything that would hinder it. Could this be a server issue?
Moz Bar | werkbot0
-
OnPage Reports - Duplicate titles and meta descriptions
Hi Moz, I know you changed your interface a while back, but I have a question about the new reports. On the old interface, a report would automatically run when I created a new account, letting me know where the duplicate titles and meta descriptions were across an entire site. Where can I find this report on the new interface? Thanks, Carla
Moz Bar | Carla_Dawson1
-
On Page Grader Returning Large # of Keywords
When using the On-Page Grader, the results show the following for a page with a specific keyword appearing in the body only 5 times: "We found this keyword used 1100 times." Any idea why this would show such a high number? The keywords are in the Thai language, but there is a space before and after the keyword. Thanks.
Moz Bar | brettjohn670
Moz "Crawl Diagnostics" doesn't respect robots.txt
Hello, I've just had a new website crawled by the Moz bot. It's come back with thousands of errors saying things like: duplicate content, overly dynamic URLs, duplicate page titles. The duplicate content and URLs it found are all blocked in the robots.txt, so why am I seeing these errors?
Here's an example of some of the robots.txt that blocks things like dynamic URLs and directories (which the Moz bot ignored):
Disallow: /?mode=
Disallow: /?limit=
Disallow: /?dir=
Disallow: /?p=*&
Disallow: /?SID=
Disallow: /reviews/
Disallow: /home/
Many thanks for any info on this issue.
Moz Bar | Vitalized