150 Duplicate page error
-
I am told that I have 150 pages with duplicate content. It seems to be the login link on each of my pages. Is this an error? Is it something I have to change?
Thanks
Login/Register at
http://irishdancingdress.com/wp-login.php?redirect_to=http%3A%2F%2Firishdancingdress.com%2Fdress
-
This one's a bit weird - your main "Login" link is fine - this is happening down in the comments section (under "Leave a Reply"). That login link tags the source page in a redirect_to parameter, so that you can return to the post after logging in.
In this case, I think I'd actually nofollow that and it's probably fine to block it in Robots.txt. This is where things get really situational, as normally I'd advise against that - see my recent post:
http://www.seomoz.org/blog/logic-meet-google-crawling-to-deindex
In your situation, though, Google only seems to be indexing 2 of those URLs currently, so you can probably cut this off before it becomes a problem. Our crawler is being a bit more aggressive in this situation (and, honestly, these links could pose a problem long-term).
If you had a ton of these pages indexed, I'd agree with Slava and recommend rel-canonical, because Robots.txt is pretty ineffective for de-indexing (plus, nofollow can cause the problem described in my post).
Sorry, I'm making this clear as mud. I think a nofollow and blocking are fine here, because basically the problem hasn't happened yet - you're trying to prevent future problems. You could also monitor for these URLs in Google's index for a few weeks, using this command:
site:irishdancingdress.com/wp-login.php
...if that number stays low (it's currently 2), then you're good to go.
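If you do decide to block these URLs in robots.txt, you can sanity-check the rules locally before deploying them. A minimal sketch using Python's standard urllib.robotparser, with the paths from the answer below (note this parser does not understand the trailing * wildcards that Google supports, so they are dropped here; a Disallow path already matches as a prefix):

```python
# Sanity-check proposed robots.txt rules locally before deploying.
from urllib import robotparser

rules = """\
User-agent: *
Disallow: /tag/
Disallow: /wp-login.php

User-agent: rogerbot
Disallow: /tag/
Disallow: /wp-login.php
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The login page should be blocked for rogerbot...
print(rp.can_fetch("rogerbot", "http://irishdancingdress.com/wp-login.php"))  # False
# ...while normal content stays crawlable.
print(rp.can_fetch("rogerbot", "http://irishdancingdress.com/dress"))  # True
```

This only checks prefix matching, but it catches typos like a fused User-agent line before the file goes live.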
-
Keith, I think the only way to stop Roger and Google from indexing those pages is to disallow them in the robots.txt file.
I made some things global, but Roger seemed to ignore those, so I gave him his own section.
Just modify these to suit your setup.
User-agent: *
Disallow: /tag/*
Disallow: /wp-login.php*

User-agent: rogerbot
Disallow: /tag/*
Disallow: /wp-login.php*
-
Rel Canonical may not be what you need here.
The first question you need to ask yourself is whether the login page is something that needs to be indexed by search engines. If the answer is no, block it with your robots.txt, then use rel="nofollow" on your login links.
If you have a reason for your login page to be indexed, then you'll need a rel="canonical" link tag pointing to the absolute root of the page; based on your URL, I would assume that is "http://irishdancingdress.com/wp-login.php".
Hope that helps
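To illustrate both options with hypothetical markup (the link text and placement are assumptions based on a standard WordPress login link):

```html
<!-- Option 1: login page should NOT be indexed. -->
<!-- Block it in robots.txt and nofollow the links pointing at it: -->
<a href="http://irishdancingdress.com/wp-login.php?redirect_to=http%3A%2F%2Firishdancingdress.com%2Fdress"
   rel="nofollow">Log in to Reply</a>

<!-- Option 2: login page SHOULD stay indexable. -->
<!-- Collapse every redirect_to variant onto the bare login URL by
     adding this to the <head> of wp-login.php: -->
<link rel="canonical" href="http://irishdancingdress.com/wp-login.php" />
```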
-
Do you use the rel=canonical tag? I think if you use it, it will solve your problem.
Related Questions
-
40+ pages have been removed from the index and this page has been selected as the Google-preferred canonical
40+ pages have been removed from the index and this page has been selected as the Google-preferred canonical: https://studyplaces.com/about-us/ The pages affected by this include: https://studyplaces.com/50-best-college-party-songs-of-all-time-and-why-we-love-them/ and https://studyplaces.com/15-best-minors-for-business-majors/ As you can see, the content on these pages is totally unrelated to the content on the about-us page. Any ideas why this is happening and how to resolve it?
Technical SEO | pnoddy
-
Duplicate title error in GWT over spelling in URL
Hi, how do I resolve a duplicate title error in GWT over spelling in the URL? Title of post: Minneapolis Median Home Sales Price Up 16 Percent. Not sure how this happened, but I have two URL versions showing up. Even with a 301 redirect, they both remain errors in GWT. /real-estate-blog/Minneapolis-median-home-sales-price-up-16-percent and /real-estate-blog/minneapolis-median-home-sales-price-up-16-percent
Technical SEO | jessential
-
Are duplicate page titles fixed by the canonical tag
Google Webmaster Tools is saying that some of my pages have duplicate page titles because of pagination. However, I have implemented the canonical tag on the paginated pages, which I thought would keep my site from being penalized for duplicate page titles. Is this correct? Or does the canonical tag only relate to duplicate content issues?
Technical SEO | Santaur
-
Duplicate page content - index.html
Roger is reporting duplicate page content for my domain name and www.mydomainname/index.html. Example: www.just-insulation.com and www.just-insulation.com/index.html. What am I doing wrong, please?
Technical SEO | Collie
-
Why is 4XX (Client Error) shown for valid pages?
My Crawl Diagnostics summary says I have 5,141 errors of the 4XX (Client Error) variety, yet when I view the list of URLs, they all resolve to valid pages. Here is an example: http://www.ryderfleetproducts.com/ryder/af/ryder/core/content/product/srm/key/ACO 3018/pn/Wiper-Blade-Winter-18-Each/erm/productDetail.do These pages are all dynamically created from search or browse using a database where we offer 36,000 products. Can someone help me understand why these are errors?
Technical SEO | jimaycock
-
New Domain Page 7 Google but Page 1 Bing & Yahoo
Hi, just wondered what other people's experience is with a new domain. Basically, I have a client with a domain registered at the end of May this year, so less than 3 months old! The site ranks for his keyword choice (not very competitive), which is in the domain name. I'm not at all surprised by Google's low ranking after such a short period, but I was quite surprised to see it ranking on page 1 of Bing and Yahoo. No SEO work has been done yet and there are no inbound links. Does anyone else have experience of this? Should I be surprised, or is that normal in the other two search engines? Thanks in advance, Trevor
Technical SEO | TrevorJones
-
50+ duplicate content pages - Do we remove them all or 301?
We are working on a site that has 50+ pages that all have duplicate content (one for each state, pretty much). Should we 301 all 50 of the URLs to one URL, or should we just get rid of all the pages entirely? Are there any steps to take when removing pages completely (e.g., submit an updated sitemap to Google Webmaster Tools)? Thanks!
Technical SEO | Motava
-
On Page 301 redirect for html pages
For PHP pages you've got:
<?php
header( "HTTP/1.1 301 Moved Permanently" );
header( "Location: http://www.example.com" );
?>
Is there anything similar for HTML pages? Or is placing this line in the .htaccess the only way to properly 301 redirect HTML pages?
Redirect 301 /old/old.htm http://www.you.com/new.php
Thanks!
Technical SEO | shupester