How do I get rid of a duplicate page error when you cannot access that page?
-
How do I get rid of a duplicate page error when I cannot access that page? I am using Yahoo Store Manager, and I do not know code. The only way I can get to this page is by copying the link that the error message gives me. This is the duplicate that I cannot find in order to delete it: http://outdoortrailcams.com/busebo.html
-
Thanks for the help. I will look into it right now.
-
Hi Anthony,
No... what I was talking about would give you two almost identical URLs.
For example:
http://outdoortrailcams.com/busebo.html and http://www.outdoortrailcams.com/busebo.html
Since the page you have a problem with is a blank page, this help page is more likely what you need: How to Delete a Page in Yahoo Store. Do be careful to ensure that it is not a "section" page as described there, since pages related to those would also be deleted.
Hope that helps,
Sha
-
I am not sure that is it, because only 1 page out of the 106 is doubled.
http://outdoortrailcams.com/busebo.html
http://outdoortrailcams.com/bushnell-security-boxes.html
The bottom one is correct; the top is just a blank page. Please let me know if that is still the problem, and thank you for the response. I will see if that fixes it.
-
Hi Anthony,
If the issue is that you have both www and non-www versions of the pages, then you can implement a 301 redirect within the Yahoo Stores system to take care of this.
If you go to this Yahoo Stores 301 Help Page, you will find a detailed explanation of the various ways that you can implement redirects.
The part that applies to this situation is in the following paragraph:
You can also choose to redirect all hostnames and domain to your selected domain so your non-www address (acme.com) points for example to your www domain (www.acme.com). This setting also redirects your index page (for example www.acme.com/index.html) to your selected redirect setting (for example www.acme.com). This allows you to have one url (also known as a canonical url) rather than multiple so search engine relevancy factors will accrue to one url rather than being spread out across multiple urls.
The instructions are directly below that paragraph, and the diagram shows you where to choose the redirect you want to use (A).
Hope that helps,
Sha
-
Related Questions
-
Filter Pages
Howdy Moz Forum! I have a headache of a job over here in the UK and I'd welcome any advice! It's sunny today, only 1 of 5 days in a year, and I'm stuck on this! I have a client that currently has 22,000 pages indexed in Google, with almost 4,000 showing as duplicate content. The site has a "jobs" and a "candidates" list. This can cause all sorts of variations such as job title, language, location, etc. The filter pages all seem to be indexed, plus the static pages are indexed. For example, if there were 100 jobs at Moz being advertised, it displays the jobs on the following URL structure:
/moz
/moz/moz-jobs
/moz/moz-jobs/page/2
/moz/moz-jobs/page/3
/moz/moz-jobs/page/4
/moz/moz-jobs/page/5
etc., with some going up to page/250. I have checked GA data and can see that although there are tons of pages indexed this way, none of them past the "/moz/moz-jobs" URL get any sort of organic traffic. So, my first question: should I use rel=canonical tags on all the /page/2 and /page/3 etc. results and point them all at the /moz/moz-jobs parent page? The reason is that these pages have the same title and content and fall very close to "duplicate" content, even though they do pull in different jobs. I hope I'm making sense? There are also a lot of pages indexed in a way such as https://www.examplesite.co.uk/moz-jobs/search/page/9/?candidate_search_type=seo-consulant&candidate_search_language=blank-language These are filter pages and, as far as I'm concerned, shouldn't really be indexed. Second question: should I "no follow" everything after /page in this instance, to keep things tidy? I don't want all the variations indexed! Any help or general thoughts would be much appreciated! Thanks.
Moz Pro | | Slumberjac
-
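As a sketch of the rel=canonical idea from the "Filter Pages" question above: each /page/N URL would carry a canonical link pointing back at its parent listing. The URL pattern and domain here are hypothetical placeholders taken from the question, not the client's real structure, and this illustrates the mechanism rather than endorsing canonicalizing pagination.

```python
# Sketch: point each paginated listing URL (/moz/moz-jobs/page/N)
# back at its parent listing page via rel=canonical. Pattern and
# domain are hypothetical placeholders from the question.
import re

PAGINATION = re.compile(r"^(?P<parent>/.+?)/page/\d+/?$")

def canonical_tag(path: str) -> str:
    """Build the <link rel="canonical"> tag for a listing path."""
    m = PAGINATION.match(path)
    canonical = m.group("parent") if m else path
    return f'<link rel="canonical" href="https://www.examplesite.co.uk{canonical}">'
```

Every page in the /page/N series would then emit the same canonical URL, consolidating them onto the parent listing.
-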
How can you tell how many pages rank for a particular keyword?
I know that with Rank Tracker I can see a page that ranks for a certain keyword. What if I have multiple pages that rank for this keyword? How can I see all the pages that rank for it?
Moz Pro | | Sika22
-
5XX (Server Error) on all urls
Hi, I created a couple of new campaigns a few days back and waited for the initial crawl to be completed. I have just checked, and both are reporting a 5XX (Server Error) on all the pages they tried to look at (on one site I have 110 of these; on the other it only crawled the homepage). This is very odd. I have checked both sites on my local PC, an alternative PC, and via my Windows VPS browser, which is located in the US (I am in the UK), and they all work fine. Any idea what could be the cause of this failure to crawl? I have pasted a few examples from the report:
500 : TimeoutError | http://everythingforthegirl.co.uk/index.php/accessories.html | 500 | 1 | 0
500 : Error | http://everythingforthegirl.co.uk/index.php/accessories/bags.html | 500 | 1 | 0
500 : Error | http://everythingforthegirl.co.uk/index.php/accessories/gloves.html | 500 | 1 | 0
500 : Error | http://everythingforthegirl.co.uk/index.php/accessories/purses.html | 500 | 1 | 0
500 : TimeoutError | http://everythingforthegirl.co.uk/index.php/accessories/sunglasses.html | 500 | 1 | 0
I am extra puzzled that the messages say timeout. The dedicated server has 8 cores and 32 GB of RAM, and the pages load for me in about 1.2 seconds. What is the rogerbot crawler timeout? Many thanks, Carl
Moz Pro | | GrumpyCarl
-
How to increase page authority
I wonder how to increase page authority, or domain authority to begin with. It seems you put a lot of weight on this in your analysis.
Moz Pro | | wcsinc
-
"Too many on-page links" warning on all of my Wordpress pages
Hey guys, I've got about 120 "Too many on-page links" warnings in my crawl diagnostics section. They are all pages of my WordPress blog. Is this an acceptable and expected warning for WordPress, or does something need to be better optimized? Thanks.
Moz Pro | | SheffieldMarketing
-
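For the "too many on-page links" question above, you can get a rough idea of which pages trip the warning by counting anchor tags yourself. The commonly cited threshold of roughly 100 links per page is a guideline, and this stdlib-only counter is a sketch of the idea, not how Moz's crawler actually counts.

```python
# Sketch: count <a href> tags on a page to spot candidates for a
# "too many on-page links" warning. The ~100-link threshold is a
# common guideline, not Moz's exact rule.
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Counts <a> tags that carry an href attribute."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.count += 1

def count_links(html: str) -> int:
    parser = LinkCounter()
    parser.feed(html)
    return parser.count
```

On a WordPress blog, running this over archive and tag pages usually shows where the link counts balloon (menus, widgets, tag clouds).
-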
After fixing errors, can I re-crawl for diagnostics?
As I fix errors, will the campaign automatically update to show where I have fixed issues?
Moz Pro | | eidna22
-
Broken Links and Duplicate Content Errors?
Hello everybody, I'm new to SEOmoz and I have a few quick questions regarding my error reports:
1. In the past, I have used IIS as a tool to uncover broken links, and it has revealed a large number of varying types of "broken links" on our sites. For example, some of them were links on my site that went to external sites that are no longer available; others were missing images in my CSS and JS files. According to my campaign in SEOmoz, however, my site has zero broken links (4XX). Can anyone tell me why the IIS errors don't show up in my SEOmoz report, and which of these two reports I should really be concerned about (for SEO purposes)?
2. Also in the "errors" section, I have many duplicate page title and duplicate page content errors. Many of these "duplicate" content reports are actually showing the same page more than once. For example, the report says that "http://www.cylc.org/" has the same content as "http://www.cylc.org/index.cfm", and that, of course, is because they are the same page. What is the best practice for handling these duplicate errors? Can anyone recommend an easy fix?
Moz Pro | | EnvisionEMI