El "Crawl Diagnostics Summary" me indica que tengo contenido duplicado
-
The "Crawl Diagnostics Summary" reports duplicate content, but in reality these are the same pages with and without the www. What can I do to clear up this error?
-
If both the www and non-www versions of your site work, the "Crawl Diagnostics Summary" will report them as two different pages.
Use this in your .htaccess file (on Apache):
Options +FollowSymlinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^example.com [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
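The redirect logic those rules implement can be sketched in Python for clarity (a hypothetical helper for illustration; example.com is a placeholder domain, and the case-insensitive match mirrors the [NC] flag):

```python
import re

def canonical_redirect(host, path):
    """Mirrors the .htaccess rule: a bare example.com host 301s to www."""
    if re.match(r"^example\.com$", host, re.IGNORECASE):
        return 301, "http://www.example.com" + path
    return None  # host is already canonical; serve the page normally

# The non-www host gets a permanent redirect; the www host is untouched.
print(canonical_redirect("example.com", "/about"))
print(canonical_redirect("www.example.com", "/about"))
```

Once every non-www request answers with a single 301 to the www version, the crawler sees one page instead of two, and the duplicate content warnings should clear on the next crawl.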
Related Questions
-
My "tag" pages are showing up as duplicate content. Is this harmful?
Hi. I ran a Moz site crawl. I see "Yes" under "Duplicate Page Content" for each of my tag pages. Is this harmful? If so, how do I fix it? This is a WordPress site. Tags are used in both the blog and ecommerce sections of the site; ecommerce is a very small portion. Thank you.
Moz Pro | | dlmilli1 -
Crawl diagnostics incorrectly reporting duplicate page titles
Hi guys, I have a question about the duplicate page titles being reported in my crawl diagnostics. It appears that the URL parameter "?ctm" is causing the crawler to think that duplicate pages exist. In GWT, we've specified to use the representative URL when that parameter is used. It appears to be working, since when I search site:http://www.causes.com/about?ctm=home, I am served a single search result for www.causes.com/about. That begs the question: why is the SEOmoz crawler saying there are duplicate page titles when Google isn't (they don't appear under the HTML improvements for duplicate page titles)? A canonical URL is not used for this page, so I'm assuming that may be one reason why. The only other thing I can think of is that Google's crawler is simply "smarter" than the Moz crawler (no offense, you guys put out an awesome product!). Any help is greatly appreciated, and I'm looking forward to being an active participant in the Q&A community! Cheers, Brad
Moz Pro | | brad_dubs0 -
Only a few pages (308 of 1,000+) have been crawled and diagnosed in 4 days; how many days until the entire website crawl is complete?
I set up a campaign about 4-5 days ago, and yesterday rogerbot reported that 308 pages had been crawled, with diagnostics provided. This website has 1,000+ pages, and I would like to know how long it will take for roger to crawl the entire website and provide diagnostics. Thanks!
Moz Pro | | TejaswiNaidu0 -
Slowing down SEOmoz Crawl Rate
Is there a way to slow down the SEOmoz crawl rate? My site is pretty huge, and I'm getting 10k pages crawled every week, which is great. However, I sometimes get multiple page requests in one second, which slows down my site a bit. If this feature exists I couldn't find it; if it doesn't, it would be a great one to have, in a similar way to how Googlebot does it. Thanks.
Moz Pro | | corwin0 -
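If no such setting exists in the tool, a robots.txt Crawl-delay directive is the conventional way to ask a crawler to pace itself. A minimal sketch using Python's stdlib robots parser (whether rogerbot honors Crawl-delay, and "rogerbot" as its user-agent token, are assumptions to confirm with Moz):

```python
from urllib.robotparser import RobotFileParser

# Parse a hypothetical robots.txt that asks rogerbot to wait 10 seconds
# between requests.
rp = RobotFileParser()
rp.parse([
    "User-agent: rogerbot",
    "Crawl-delay: 10",
])

# A compliant crawler reads the delay back for its user-agent.
print(rp.crawl_delay("rogerbot"))
```

A 10-second delay caps a crawler at roughly 8,600 requests per day, which still covers a 10k-page weekly crawl while avoiding bursts of multiple requests per second.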
How to resolve Duplicate Content crawl errors for Magento Login Page
I am using the Magento shopping cart, and 99% of my duplicate content errors come from the login page. The URL looks like: http://www.site.com/customer/account/login/referer/aHR0cDovL3d3dy5tbW1zcGVjaW9zYS5jb20vcmV2aWV3L3Byb2R1Y3QvbGlzdC9pZC8xOTYvY2F0ZWdvcnkvNC8jcmV2aWV3LWZvcm0%2C/ Or the same URL, but with a long string different from the one above. This link is available at the top of every page in my site, but I have made sure to add "rel=nofollow" as an attribute to the link in every case (easily done by modifying the header links template). Is there something else I should be doing? Do I need to add a canonical to the login page? If so, does anyone know how to do it using XML?
Moz Pro | | kdl01 -
Can I exclude pages from my Crawl Diagnostics?
Right now my crawl diagnostic information is being skewed because it's including the onsite search from my website. Is there a way to remove certain pages like search from the errors and warnings of the crawl diagnostic? My search pages are coming up as: Long URL Title Element Too Long Missing Meta Description Blocked by meta-robots (Which is how I want it) Rel Canonical Here is what the crawl diagnostic thinks my page URL looks like: website.com/search/gutter%25252525252525252525252525252525252525252525252525252525 252525252525252525252525252525252525252525252525252525252525252 525252525252525252525252525252525252525252525252525252525252525 252525252525252525252525252525252525252525252525252525252525252 52525252525252525252525252525252525252525252525252Bcleaning/ Thank you, Jonathan
Moz Pro | | JonathanGoodman0 -
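Those runs of %2525… in the search URLs are the signature of repeated percent-encoding: each pass re-encodes the "%" sign itself, so the string grows on every round trip. A minimal demonstration of one plausible way such URLs arise:

```python
from urllib.parse import quote

# Start with an already-encoded space ("%20"). Re-encoding it escapes
# the "%" as "%25" each time, producing the runs of %2525... seen in
# the crawl report.
s = "%20"
for _ in range(3):
    s = quote(s)
print(s)
```

Blocking the /search/ path in robots.txt (or excluding it from the campaign, if the tool supports it) stops the crawler before these self-inflating URLs are ever requested.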
Canonical tags and SEOmoz crawls
Hi there. Recently, we've made some changes to http://www.gear-zone.co.uk/ to implement canonical tags on some dynamically generated pages to stop duplicate content issues. Previously, these were blocked with robots.txt. In Webmaster Tools, everything looks great - pages crawled has shot up, and overall traffic and sales have seen a positive increase. However, the SEOmoz crawl report is now showing a huge increase in duplicate content issues. What I'd like to know is whether SEOmoz registers a canonical tag as preventing a piece of duplicate content, or simply adds it to the notices report. That is, if I have 10 pages of duplicate content, all with correct canonical tags, will I still see 10 errors in the crawl, plus 10 notices showing a canonical has been found? Or should it be 0 duplicate content errors, but 10 notices of canonicals? I know it's a small point, but it could potentially make a big difference. Thanks!
Moz Pro | | neooptic0 -
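Whichever way the crawler counts them, the canonical a page actually declares can be checked programmatically. A sketch using Python's stdlib HTML parser (the /widgets/ URL is a made-up example):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of a page's <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

finder = CanonicalFinder()
finder.feed('<head><link rel="canonical" '
            'href="http://www.gear-zone.co.uk/widgets/"></head>')
print(finder.canonical)
```

Running this against a few of the flagged dynamic pages confirms the tags are in place, regardless of how the crawl report chooses to classify them.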
Why is Roger crawling pages that are disallowed in my robots.txt file?
I have specified the following in my robots.txt file: Disallow: /catalog/product_compare/ Yet Roger is crawling these pages = 1,357 errors. Is this a bug, or am I missing something in my robots.txt file? Here's one of the URLs that Roger pulled: example.com/catalog/product_compare/add/product/19241/uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/ Please let me know if my problem is in robots.txt or if Roger spaced this one. Thanks!
Moz Pro | | MeltButterySpread0
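The directive itself can be sanity-checked with Python's stdlib robots parser; if a compliant parser blocks the URL, the rule is fine and the issue lies with the crawler. The "rogerbot" user-agent token and the example.com host below are assumptions for illustration:

```python
from urllib.robotparser import RobotFileParser

# Reproduce the robots.txt rule from the question.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /catalog/product_compare/",
])

# A URL under the disallowed path should be blocked for any
# compliant crawler, rogerbot included.
blocked = "http://example.com/catalog/product_compare/add/product/19241/"
print(rp.can_fetch("rogerbot", blocked))
```

If this prints False, the rule is syntactically valid and the disallow matches, which points to a crawler-side issue rather than a robots.txt mistake.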