How do we remove duplicate content that is still indexed but no longer linked to?
-
Dear community
A bug in the tool we use to create search-engine-friendly URLs (sh404sef) changed our whole URL structure overnight, and we only noticed after Google had already indexed the pages.
Now we have a massive duplicate content issue, causing a harsh drop in rankings. Webmaster Tools shows over 1,000 duplicate title tags, so I don't think Google understands what is going on.
<code>Right URL: abc.com/price/sharp-ah-l13-12000-btu.html
Wrong URL: abc.com/item/sharp-l-series-ahl13-12000-btu.html (created by mistake)</code>
After that, we ...
- Changed back all URLs to the "Right URLs"
- Set up 301 redirects from all "Wrong URLs" a few days later
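For reference, per-URL 301s like these are often set up in Apache's .htaccess. A minimal sketch, assuming an Apache server and using the hypothetical example pair from above (note that a pattern-based rule only works if the slugs match between the two URL structures, which they don't here, so one `Redirect` line per URL may be needed):

```apache
# One explicit 301 per wrong URL, since the slugs differ:
Redirect 301 /item/sharp-l-series-ahl13-12000-btu.html /price/sharp-ah-l13-12000-btu.html

# Only if the slug were identical in both structures could a single
# pattern cover the whole /item/ section:
# RewriteEngine On
# RewriteRule ^item/(.*)$ /price/$1 [R=301,L]
```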
Now a massive number of pages is still in the index twice. Since we no longer link internally to the "Wrong URLs", I am not sure Google will re-crawl them any time soon.
What can we do to solve this issue and tell Google that all the "Wrong URLs" now redirect to the "Right URLs"?
Best, David
-
Yes David, your link is very helpful.
-
Found the perfect answer:
http://www.seomoz.org/blog/uncrawled-301s-a-quick-fix-for-when-relaunches-go-too-well
-
Thanks a lot, Sanket.
Do you think it might help to submit a sitemap that also contains the "Wrong URLs", so we can trigger a recrawl of those pages? Maybe then Google will notice the 301 redirects.
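If you go that route, such a temporary sitemap is just a plain list of the old URLs. A hypothetical sketch using the example "Wrong URL" from above (Googlebot fetching each entry would then hit the 301):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- old "Wrong URL"; fetching it should return the 301 -->
  <url>
    <loc>http://abc.com/item/sharp-l-series-ahl13-12000-btu.html</loc>
  </url>
</urlset>
```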
-
Hi David
The best thing in this situation is to wait a while longer. You only just redirected the wrong URLs to the right URLs, so it will take some time. You will see the changes in Webmaster Tools later, because its data is updated roughly every 15 days or monthly, depending on the website. URLs that are 301-redirected should drop out of the search results, so the duplication problem should be sorted out shortly, so don't worry. You can also verify that the redirects are set up correctly with a redirect checker tool such as http://www.internetofficer.com/seo-tool/redirect-check/.
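Beyond an online checker, you can verify the status code and Location header yourself with a short script. A minimal sketch using only Python's standard library, run here against a local stand-in server rather than the live site (the URL paths are just the hypothetical example pair from the question):

```python
import http.server
import threading
import urllib.error
import urllib.request


class NoFollowRedirects(urllib.request.HTTPRedirectHandler):
    """Refuse to follow redirects so the raw 301 response can be inspected."""

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # urllib then raises HTTPError carrying the 301


def check_redirect(url):
    """Return (status_code, Location header) without following the redirect."""
    opener = urllib.request.build_opener(NoFollowRedirects)
    try:
        resp = opener.open(url)
        return resp.status, resp.headers.get("Location")
    except urllib.error.HTTPError as err:
        return err.code, err.headers.get("Location")


# Local stand-in for the site: answers every request with a 301
# to the "Right URL", just as the live redirect rules should.
class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", "/price/sharp-ah-l13-12000-btu.html")
        self.end_headers()

    def log_message(self, *args):  # keep the output quiet
        pass


server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

status, location = check_redirect(
    f"http://127.0.0.1:{port}/item/sharp-l-series-ahl13-12000-btu.html"
)
print(status, location)  # expect a 301 pointing at the Right URL
server.shutdown()
```

Pointing `check_redirect` at a real "Wrong URL" on your domain should likewise report a 301 and the matching "Right URL" in the Location header.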
One suggestion to get your pages crawled faster: increase the "Crawl Rate" under the Settings option in Webmaster Tools.
Hope my response helps. If you need anything else, feel free to ask.