Duplicate page titles in SEOMoz
-
My on-page reports are showing a good number of duplicate title tags, but they are all caused by a URL tracking parameter that tells us which link the visitor clicked on. For example, http://www.example.com/example-product.htm?ref=navside and http://www.example.com/example-product.htm are the same page, but are treated as two different URLs in SEOMoz. This inflates the number of duplicate page titles in my reports.
This has not been a problem with Google, but SEOMoz treats these as separate pages, and it's muddying my data. Is there a way to specify this as a URL parameter in the Moz software?
Or does anybody have another suggestion? Should I specify this in GWT and BWT?
-
The best way to handle this, for all crawlers including Google, Yahoo and Moz, is to make sure you have proper canonical tags on those URLs that point to the non-parameterized URL.
So http://www.example.com/example-product.htm?ref=navside will have a canonical that points to http://www.example.com/example-product.htm
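As a minimal sketch, the tag in the `<head>` of the parameterized URL would look something like this (adjust the href to your actual product URL):

```html
<!-- Placed in the <head> of http://www.example.com/example-product.htm?ref=navside -->
<link rel="canonical" href="http://www.example.com/example-product.htm" />
```

Every parameterized variant of the page should carry the same tag pointing at the clean URL, so crawlers consolidate them into one page.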
-
My understanding is that the Moz crawler should be checking the canonical, in which case it will ignore the duplicate content and title tag issues. If you find this is not the case with your crawl, please let our help team know at help @ moz.com
-
No, Moz's tool doesn't check the canonical, so it won't ignore these duplicates.
-
Do you think that setting a canonical url tag might help fix this?
-
Hi Robert,
Yes, I'm sorry, but you're overlooking something. Ignoring parameters is something you should do with regard to SEO, but it won't stop Google Analytics from tracking those parameters.
-
I'm not sure why I'd want to ask google to ignore those parameters... we're explicitly adding the ones that they suggested we use from here:
The issue that I'm having is that Moz analytics is showing duplicate pages as an issue to resolve when the only difference is that these params exist.
Am I overlooking something here?
-
Hi Robert,
I wouldn't handle this via robots.txt if you only want to do this for Rogerbot. The best way to tell Google to ignore your UTM parameters is via Google Webmaster Tools. Under Crawl > URL Parameters you've got the option to add parameters that don't change any of the content and are solely used for tracking purposes.
-
I'm having a similar issue. Is there an example of how to add this to the robots.txt file to ignore the utm stuff for RogerBot?
Our scenario is that we send out PDFs with links to pages on our site and those links have utm parameters included... and are showing up as duplicate content.
Thanks in advance,
Robby
-
I think this is the answer I was looking for... Yeah, GWT already has a bunch of our parameters added, and hasn't had a problem with this one. It's not showing these pages as duplicate like SEOMoz does.
Thanks guys!
-
Hi,
What you might want to do to get rid of these issues within SEOMoz is add the parameters to your robots.txt file and specifically target SEOMoz's user agent: Rogerbot. That way SEOMoz won't crawl the links with this parameter, and therefore won't warn you about these duplicate titles.
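A minimal sketch of what that robots.txt block might look like, assuming Rogerbot honors the `*` wildcard in Disallow patterns (the `ref` and `utm_` parameter names here are just examples from this thread — substitute your own):

```
User-agent: rogerbot
Disallow: /*?ref=
Disallow: /*utm_
```

Because this block names only Rogerbot, other crawlers such as Googlebot are unaffected; it just keeps the parameterized URLs out of your Moz crawl reports.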
Hope this helps!
Btw, as James already mentioned, I would also recommend configuring these parameters within Google Webmaster Tools.
-
You could set it up in GWT, but it sounds like you are using utm tags on internal links so you can see which physical links on a page are driving clicks. If that's the case, a cleaner solution is to upgrade your Google Analytics code for enhanced link attribution. I'm assuming you are using GA; if so, this will let you see which links are driving clicks without creating tons of duplicate page titles in SEOmoz.
See link: http://support.google.com/analytics/bin/answer.py?hl=en&answer=2558867
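As a rough sketch: if you're on the analytics.js (Universal Analytics) tracking snippet, enhanced link attribution is enabled by requiring the `linkid` plugin. This assumes the standard snippet; the older ga.js setup uses a different plugin declaration, and `UA-XXXXX-Y` is a placeholder property ID.

```javascript
// Standard analytics.js snippet with enhanced link attribution enabled.
ga('create', 'UA-XXXXX-Y', 'auto');
ga('require', 'linkid');   // enhanced link attribution plugin
ga('send', 'pageview');
```

With this in place, GA can distinguish multiple links on a page that point to the same URL, so you no longer need per-link tracking parameters on internal links.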
Let me know if you have questions,
JS