How to Fix Duplicate Title or Content Errors?
-
The latest Crawl Diagnostic has found 160 errors on my site.
The error is that the same content or title is used on two different pages:
on both my root domain (han-mark.com) and the www subdomain. What does it matter (with or without www)?
How serious is that error?
Do I need to fix all the errors (and the hundreds of warnings too)?
What's the best practice?
Is there any guide on how to do it,
or tools for doing it the fast way?

Viggo Joergensen
-
Hi Viggo,
What you're describing is a common issue, but it's also easy to fix.
If you can access and edit the .htaccess file on your server, you only need a simple rule that redirects all traffic to either the www or the non-www version of your website via a 301 redirect.
If you want to force the use of "www" in all cases, your rule should look like this:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
RewriteCond %{HTTP_HOST} !^$
RewriteRule ^/?(.*) http://www.example.com/$1 [L,R=301]
You can refer to the Apache mod_rewrite documentation for more information.
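Conversely, if you prefer the non-www version as canonical, the reverse rule is a sketch along the same lines (example.com is a placeholder; substitute your own domain):

```apache
RewriteEngine On
# Redirect any request arriving on the www host to the bare domain
RewriteCond %{HTTP_HOST} ^www\.example\.com [NC]
RewriteRule ^/?(.*) http://example.com/$1 [L,R=301]
```

Either direction works; what matters is picking one version and redirecting the other to it consistently.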
-
Hi Viggo,
The canonical tag was created to resolve duplicate content caused by multiple paths to the same content. In the "eyes" of search engines, the www and non-www versions of a site are two different paths to the same content, which is what causes the duplication.
It is good practice to resolve this issue by 301-redirecting to one of these versions.
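For reference, a canonical tag is just a link element in the page head pointing to the preferred URL; a minimal sketch (the href is a placeholder for your chosen version):

```html
<head>
  <!-- Tells search engines which URL is the preferred version of this page -->
  <link rel="canonical" href="http://www.example.com/page.html" />
</head>
```

A server-side 301 redirect is still the stronger fix, since it keeps visitors and crawlers on a single version rather than merely hinting at one.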
I hope that helped,
Istvan
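To make the redirect behavior both answers describe concrete, here is a minimal Python sketch of the host-canonicalization logic (a hypothetical illustration for clarity, not Moz or Apache code; the function name is my own):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_redirect(url, force_www=True):
    """Return the 301 target for a request URL, or None if no redirect is needed."""
    scheme, host, path, query, frag = urlsplit(url)
    has_www = host.startswith("www.")
    if force_www and not has_www:
        host = "www." + host          # e.g. example.com -> www.example.com
    elif not force_www and has_www:
        host = host[len("www."):]     # e.g. www.example.com -> example.com
    else:
        return None                   # already on the canonical host
    return urlunsplit((scheme, host, path, query, frag))
```

For example, with `force_www=True`, a request for `http://example.com/page?x=1` would redirect to `http://www.example.com/page?x=1`, while requests already on the www host pass through untouched.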