Duplicate page error
-
SEOmoz gives me a duplicate page error because my homepage www.monteverdetours.com is the same as www.monteverdetours.com/index. Is this actually an error? And is Google penalizing me for it?
-
The site should not look different. We changed some URLs for unranked keywords on minor pages (with 301 redirects). We added the canonical tag.
We got rid of https and 301-redirected it to http,
and implemented some of the suggestions you made above.
No major stuff, and the site is not recovering... so strange. I think we must have done something structurally wrong. I would happily pay someone to review my site and make suggestions. I am at my wits' end. Need someone familiar with MODX.
Can you see something obviously wrong with the site?
-
What other changes happened at that same time? The site seems different.
-
Between the 12th and 16th of July it dropped from page 8 to page 45, and from the 20th to the 24th from page 45 to page 86.
-
Do you know which day it dropped on? There really isn't any reason that anything above should cause a drop. There have been updates - so let's figure out what is going on first.
-
Hi Mat
Thank you so much for your detailed answer; I really appreciated it. So here is an update... I asked my webmaster to implement the changes you suggested for www.monteverdetours.com. The site has dropped from page 2 or 3 to page 89, and we thought it would bounce back after a few days, but it hasn't. Do you have any idea why this would be so?
Best Regards,
Janet
-
Thank you so much!
-
You are diluting your homepage strength, as you could have some links pointing to one version of the page and some to another. I would create a 301 redirect from /index to the plain .com version. In Google's eyes you have two pages with the same content; this is a common mistake on a lot of websites' homepages.
For more info read:
http://www.seomoz.org/learn-seo/duplicate-content
http://www.seomoz.org/learn-seo/redirection
http://www.seomoz.org/blog/url-rewrites-and-301-redirects-how-does-it-all-work
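If the site runs on Apache, the /index-to-root redirect is a couple of lines of mod_rewrite in .htaccess. This is just a sketch, assuming the duplicate URLs are /index and /index.html on this domain; adjust the pattern to whatever variants actually resolve:

```apache
# Send /index and /index.html to the bare homepage with a permanent (301) redirect
RewriteEngine On
RewriteRule ^index(\.html)?$ http://www.monteverdetours.com/ [R=301,L]
```

With this in place, any inbound link pointing at /index passes its strength on to the single canonical homepage instead of splitting it.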
-
A great answer.
-
Yes and no! (That was helpful, wasn't it??!)
Google is smarter than the SEOmoz crawler when it comes to dealing with this issue. SEOmoz seems to flag homepage variants quite often, but I haven't seen this cause a problem with a major search engine in years. Generally, then, it's pretty safe.
However, you do have some similar problems. To check the above, I did a couple of searches for phrases that appear on your home page, limiting the results to pages on your domain. Whilst the domain.com vs domain.com/index issue doesn't seem to be a problem, you do have something weird going on with your home page.
The following pages do appear to be duplicates of your home page, and these ARE appearing in the index:
https://www.monteverdetours.com/~desafio/
https://www.monteverdetours.com/index.html?iframe=true&width=95%25&height=95%25
And your home page isn't being listed properly.
What you need to do ASAP:
- Consistently link to your home page: where you have the home link next to the sitemap link, point it at the home page the same way throughout the site, using only the www.domain.com version
- Log in to Google Webmaster Tools and tell it to ignore the following URL parameters:
- iframe
- width
- height
- Look at getting a canonical tag added throughout your site to ensure that the correct URL is always indexed
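For the canonical tag, a minimal sketch of what goes in the <head> of the homepage, assuming the www version is the one you want indexed (in MODX this would normally live in the shared site template rather than each page):

```html
<head>
  <link rel="canonical" href="http://www.monteverdetours.com/" />
</head>
```

Every variant of the page (/index.html, the ?iframe=... versions, and so on) should carry this same tag, so the search engines consolidate them all to one URL.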
I hope that is helpful.
Related Questions
-
Will redirecting a logged in user from a public page to an equivalent private page (not visible to google) impact SEO?
Hi, We have public pages that can obviously be visited by our registered members. When they visit these public pages and they are logged in to our site, we want to redirect them to the equivalent (richer) page on the private site, e.g. a logged-in user visiting /public/contentA will be redirected to /private/contentA. Note: our /public pages are indexed by Google whereas /private pages are excluded. a) Will this affect our SEO? b) If not, is 302 the best HTTP status code to use? Cheers
Technical SEO | bernienabo
Duplicate Page Content Issue
Hello, I recently solved the www / no-www duplicate issue for my website, but now I am in trouble with duplicate content again. This time something that I cannot understand is happening: in the Crawl Issues Report, I received Duplicate Page Content for http://yourappliancerepairla.com (DA 19) and http://yourappliancerepairla.com/index.html (DA 1). Could you please help me figure out what is happening here? By default, index.html is being loaded, but this is the only index.html I have in the folder. And it looks like the crawler sees two different pages with different DA... What should I do to handle this issue?
Technical SEO | kirupa
www.xyz.com vs. xyz.com creating duplicate pages
I just put my site into Moz Analytics. The crawl results say I have duplicate content. When I look at the pages, it is because one page is www.xyz.com and the duplicate is xyz.com. What causes this, and how can it be fixed? I'm not a developer, so be kind and speak a language I can understand. Thanks for your help 🙂
Technical SEO | Britewave
SEOMOZ and non-duplicate duplicate content
Hi all, Looking through the lovely SEOMOZ report, by far its biggest complaint is that of perceived duplicate content. It's hard to avoid given the nature of e-commerce sites that ostensibly list products in a consistent framework. Most advice about duplicate content is about canonicalisation, but that's not really relevant when you have two different products being perceived as the same. Thing is, I might have ignored it, but Google ignores about 40% of our sitemap for, I suspect, the same reason. Basically I don't want us to appear "spammy". Actually, we do go to a lot of trouble to photograph and write a little flavour text for each product (in progress). I guess my question is, given over 700 products, why would 300-ish of them be considered duplicates and the remaining not? Here is a URL and one of its "duplicates" according to the SEOMOZ report: http://www.1010direct.com/DGV-DD1165-970-53/details.aspx
http://www.1010direct.com/TDV-019-GOLD-50/details.aspx Thanks for any help, people
Technical SEO | fretts
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance www.three-clearance.co.uk/apple-phones.html ..could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter and the canonical meta tag used to indicate our preference. As expected we encountered no duplicate content issues and everything was good. This is the chain of events: Site migrated to new platform following best practice, as far as I can attest to. Only known issue was that the verification for both google analytics (meta tag) and GWMT (HTML file) didn't transfer as expected so between relaunch on the 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified. URL structure and URIs were maintained 100% (which may be a problem, now) Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I. Run, not walk, to google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page, in the index, the only variation being the ?ref= URI) Checked BING and it has indexed each root URL once, as it should. Situation now: Site no longer uses ?ref= parameter, although of course there still exists some external backlinks that use it. This was intentional and happened when we migrated. I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today is at over 1,000 (another wtf moment) I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and HTML site-map page. 
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and of course the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe content penalty. Or maybe call us a spam farm. Who knows. Options that occurred to me (other than maybe making our canonical tags bold or locating a Google bug submission form 😄 ) include: A) robots.txt-ing *?ref=*, but to me this says "you can't see these pages", not "these pages don't exist", so isn't correct. B) Hand-removing the URLs from the index through a page removal request per indexed URL. C) Applying a 301 to each indexed URL (hello BING dirty sitemap penalty). D) Posting on SEOMoz because I genuinely can't understand this. Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂 Edited to add: as of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There's no message explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
Technical SEO | Tinhat
Do Collections in Shopify create Duplicate Pages according to Google/Bing/Yahoo?
I'm using the e-commerce platform Shopify to host an e-store. We've put our products into different collections, and Shopify automatically creates different URL paths to a product in multiple collections. I'm worried that the same product listed in different collections is seen as different pages, and therefore duplicate content, by Google/Bing/Yahoo. Would love to get your opinion on this concern! Thanks! Matthew
Technical SEO | HappinessDigital
Wordpress duplicate pages
I am using WordPress and getting a duplicate content crawler error for the following two pages: http://edustars.yourstory.in/tag/edupristine/ http://edustars.yourstory.in/tag/education-startups/ These two are tags which take you to the same page. All the other tags/categories which take you to the same page or have the same title are also throwing errors. How do I fix it?
Technical SEO | bhanu2217
Seomoz is showing duplicate page content for my wordpress blog
Hi Everyone, My SEOmoz crawl diagnostics is indicating that I have duplicate content issues in the WordPress blog section of my site, located at http://www.cleversplash.com/blog/. What is the best strategy to deal with this? Is there a plugin that can resolve this? I really appreciate your help, guys. Martin
Technical SEO | RogersSEO