404 and Duplicate Content.
-
I just submitted my first campaign, and it's coming up with a LOT of errors. Many of them, I feel, are out of my control, since we use a CMS built for RV dealerships.
But I have a couple of questions.
I got a 404 error, and SEOmoz tells me the broken link but won't tell me where that link originated from, so I don't know where to go to fix it.
I also got a lot of duplicate content, and it seems a lot of it is coming from the "tags" on my blog. Is that something I should be concerned about?
I will probably have a lot more questions as I'm new to using this tool. Thanks for the responses!
-Brandon
Here is my site: floridaoutdoorsrv.com
I welcome any advice or input!
-
There should be more information there. Mind sending an email to help@seomoz.org? We'll help you figure it out from that end. Thanks!
-
Okay, I did that, and only one of them had a URL. One had nothing and the other had a keyword. Any ideas?
-
Hi Brandon,
It should tell you -- scroll over to the referral column. There's more information on this help hub page: http://www.seomoz.org/help/fixing-crawl-diagnostic-issues
-
Okay, actually I did download it, and it didn't tell me. It only tells me the link that is bad, not where it came from.
-
I'm not sure I have that kind of control. It's sort of a closed CMS system for RV dealerships. Though SEOmoz did find almost 9,000 rel=canonical tags, so I think they are being used.
I'm a little concerned because I have close to 4,000 errors. But since it is an e-commerce site, I wonder if the backend is creating some of the problems.
The two big ones are duplicate content and duplicate title tags. I try to make the content unique, but there must still be a lot of content I haven't switched over. I'm not entirely sure what my next step should be.
-
Thanks! That's the answer I think I need!
-
Also, if you export the CSV of your errors, SEOmoz will tell you where those 404s came from, too.
-
I forgot to address your question about duplicate content. Are you using canonical tags on your blog? If you place a rel=canonical tag on each of your blog pages with the full URL of the page you want to be treated as the source of the original content, that should solve the duplicate content problem. If you already have tags in place, then you may have another issue: go through and make sure they don't all point to the same URL. The tags should be specific to each page. This may be something you've already done, and I might be explaining it in a way that's too basic. If so, I apologize. Just trying to make sure you're covered!
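To illustrate the point about page-specific canonical tags (the URLs below are made up for the example, not taken from Brandon's site):

```html
<!-- Problem pattern: every page carries the same canonical URL, -->
<!-- which tells search engines all pages are copies of the homepage. -->
<link rel="canonical" href="http://www.example.com/" />

<!-- Correct pattern: each page's canonical tag points to that page's own preferred URL. -->
<link rel="canonical" href="http://www.example.com/blog/my-post-title/" />
```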
-
Hi Brandon,
If your site is connected to Google Webmaster Tools, you can find out which page is the source of the link producing the 404. Log into your GWT dashboard, click Site Health, then "Crawl Errors," and then the "Not Found" tab. You will see a list of links producing 404 errors. Click the link you want to investigate and you'll get a pop-up window with more info, showing three tabs: "Error details," "In sitemaps," and "Linked from." Click "Linked from" and you'll see the information you're after.
If you are not connected to Google Webmaster Tools yet, the process is fairly simple, even if you have limited access to your site. There are several ways to add your site to GWT and verify ownership, including simply installing a meta tag or uploading a small file to your root directory. GWT offers a wealth of information that can be a great supplement to the info you get from SEOmoz.
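For reference, the meta-tag verification route is a single line in the head of your homepage. The tag below is just the general shape; the content value is a placeholder, and GWT generates the real one for your account, so paste theirs exactly:

```html
<head>
  <!-- Placeholder token: use the exact tag Google Webmaster Tools generates for you. -->
  <meta name="google-site-verification" content="YOUR-VERIFICATION-TOKEN" />
</head>
```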
I hope this helps!
Dana
-
Related Questions
-
Is it possible to deindex old URLs that contain duplicate content?
Our client is a recruitment agency, and their website used to contain a substantial amount of duplicate content, as many of the listed job descriptions were repeated and recycled. As a result, their rankings rarely progress beyond page 2 on Google. Although they have started using more unique content for each listing, it appears that old job listing pages are still indexed, so our assumption is that Google is holding down the rankings due to the amount of duplicate content present (one tool returned a score of 43% duplicate content across the website).
Looking at other recruitment websites, it appears that they block the actual job listings via the robots.txt file. Would blocking the job listings from being indexed, either by robots.txt or by a noindex tag, reduce the negative impact of the duplicate content, but also remove any link juice coming to those pages?
In addition, expired job listing URLs stay live, which is likely increasing the overall duplicate content. Would it be worth removing these pages and setting up 404s, given that any links to these pages would be lost? If these pages are removed, is it possible to permanently deindex these URLs?
Any help is greatly appreciated!
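As a sketch of the two blocking approaches the question weighs (the paths below are hypothetical), here is a noindex robots meta tag on each listing page versus a robots.txt rule. One caveat: the two don't combine well, because a URL blocked by robots.txt can't be crawled, so Google never sees a noindex tag placed on it. Pick one mechanism.

```html
<!-- Per-page approach: allow crawling, but ask engines not to index the listing. -->
<meta name="robots" content="noindex, follow" />
```

```text
# Sitewide approach (robots.txt): stop crawling of the listings directory.
# The directory path is an assumption for illustration.
User-agent: *
Disallow: /jobs/
```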
Technical SEO | ClickHub-Harry
-
Duplicate Content Issues - Where to start?
Dear All,
I have recently joined a new company, Just Go Holidays - www.justgoholidays.com.
I used the SEOmoz tools (yesterday) to review the site and see that I have lots of duplicate content/pages and also lots of duplicate titles, all of which I am looking to deal with. Many of the duplicate pages appear to stem from additional parameters that are used on our site to refine and/or track various marketing campaigns. I have therefore been into Google Webmaster Tools and defined each of these parameters. I have also built a new XML sitemap and submitted it.
It looks as if we have two versions of the site, one at www.justgoholidays.com and the other without the www. It appears that there are no redirects from the latter to the former; do I need to use 301s here, or is it OK to use canonicalisation instead?
Any thoughts on an action plan to address these issues in the right order and the right way would be gratefully received, as I am feeling a little overwhelmed at the moment. (We also use a CMS that is not particularly friendly, and I think I will have to go directly to the developers to make many of the required changes, which is sure to cost - so I really don't want to get this wrong.)
All the best,
Matt
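If the site runs on Apache with mod_rewrite enabled (an assumption; the question doesn't say what the CMS sits on), the non-www to www consolidation described above is typically a sitewide 301 in .htaccess along these lines:

```apache
RewriteEngine On
# 301-redirect any request on the bare domain to the www hostname, preserving the path
RewriteCond %{HTTP_HOST} ^justgoholidays\.com$ [NC]
RewriteRule ^(.*)$ http://www.justgoholidays.com/$1 [R=301,L]
```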
Technical SEO | MattByrne
-
Duplicate Content Error
I am getting a duplicate content error for the URLs of the "tags" or category pages for my blog. These are some of the URLs that SEOmoz is flagging as errors, or duplicate pages:
http://sacmarketingagency.com/blog/?Tag=Facebook
http://sacmarketingagency.com/blog/?Tag=content+marketing
http://sacmarketingagency.com/blog/?Tag=inbound+marketing
As you can see, they are just pages that aggregate certain blog posts based on how we tagged them with the appropriate category. Is this really a problem for our SEO, and if so, any suggestions on how to fix it?
Technical SEO | TalkingSheep
-
Duplicate content
I'm getting an error showing that two separate pages have duplicate content. The pages are:
Help System: Domain Registration Agreement - Registrar Register4Less, Inc. - http://register4less.com/faq/cache/11.html
Help System: Domain Registration Agreement - Register4Less Reseller (Tucows) - http://register4less.com/faq/cache/7.html
These are both registration agreements, one for us (Register4Less, Inc.) as the registrar, and one for Tucows as the registrar. The pages are largely the same, but are in fact different. Is there a way to flag these pages as not being duplicate content?
Thanks, Doug.
Technical SEO | R4L
-
How do I fix this type of duplicate page content problem?
Sample URLs with this duplicate page content:

URL | Internal Links | External Links | Page Authority | Linking Root Domains
http://rogerelkindlaw.com/index.html | 30 | 0 | 26 | 1
http://www.rogerelkindlaw.com/index.html | 30 | 0 | 20 | 1
http://www.rogerelkindlaw.com/ | 1,630 | 613 | 43 | 110

As you can see, there are three duplicate pages:
http://rogerelkindlaw.com/index.html
http://www.rogerelkindlaw.com/index.html
http://www.rogerelkindlaw.com/
What would be the best and most efficient way to fix this problem, and how can I prevent it from happening again? Thank you.
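One common approach, assuming the site runs on Apache (the question doesn't say what server is in use), is to 301 every variant to a single preferred URL so the link equity of all three pages consolidates:

```apache
RewriteEngine On
# Fold the bare domain into the www hostname, preserving the path
RewriteCond %{HTTP_HOST} ^rogerelkindlaw\.com$ [NC]
RewriteRule ^(.*)$ http://www.rogerelkindlaw.com/$1 [R=301,L]
# Send direct requests for index.html to the site root
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/ [NC]
RewriteRule ^index\.html$ http://www.rogerelkindlaw.com/ [R=301,L]
```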
Technical SEO | brianhughes
-
Duplicate Content for our Advertising Sites Showing in Search Results
Hello,
My company has a couple of different sites (Magento stores) for organic, AdWords, and adCenter purposes. They are mirror sites of each other except for the phone number, contact form, etc.
Here is our organic site: http://www.oxygenconcnetratorstore.com/
AdWords and adCenter sites, respectively:
http://www.oxygenconcnetratorstore.com/portable/
http://www.oxygenconcnetratorstore.com/oxygen/
The problem is, both the AdWords and adCenter stores appear in Google SERPs when you put in the exact URL. I have a "noindex/nofollow" tag on both of the advertising sites, but they are still showing in search results. I feel we are getting hurt for basically having three sites of duplicate content. Is there a reason why the sites would be showing in search results even with the noindex/nofollow tags? Any help would be awesome. Thanks.
Technical SEO | chuck-layton
-
Duplicate content across multiple domains
I have come across a situation where we have discovered duplicate content across multiple domains. We have access to each domain, and within the past two weeks we added a 301 redirect that dynamically sends each page to the proper page on the desired domain.
My question relates to the removal of these pages. There are thousands of these duplicates. Looking at a number of the cached pages in Google, I have found that the cached copies are roughly 30 days old or older. Will these pages ever get removed from Google's index? Will the 301 redirects even be read by Google, and if so, when?
Are we better off submitting a full site-removal request for the sites that carry the duplicate content at this point? These smaller sites do bring traffic on their own, but I'd rather not wait three months for the content to be removed, since my assumption is that this content is competing with the main site. I suppose another option would be to include a no-cache meta tag on these pages.
Any thoughts or comments would be appreciated.
Technical SEO | jmsobe
-
CGI Parameters: should we worry about duplicate content?
Hi,
My question is about CGI parameters. I was able to dig up a bit of content on this, but I want to make sure I understand the concept of CGI parameters and how they can affect the indexing of pages. Here are two pages:
No CGI parameter appended to the end of the URL: http://www.nytimes.com/2011/04/13/world/asia/13japan.html
CGI parameter appended to the end of the URL: http://www.nytimes.com/2011/04/13/world/asia/13japan.html?pagewanted=2&ref=homepage&src=mv
Questions:
1. Can we safely say that CGI parameters = URL parameters appended to the end of a URL, or are they different?
2. Given that rel=canonical is implemented correctly on your pages, will search engines index only the URL specified in that tag?
Thanks in advance for your insights. I look forward to your response.
Best regards, Jackson
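On the rel=canonical half of the question, a sketch: a canonical tag served on the parameterized variant would point back at the clean URL, like the one below (using the NYT URL from the question purely as an illustration; whether that site actually does this isn't established here):

```html
<!-- Served on .../13japan.html?pagewanted=2&ref=homepage&src=mv -->
<link rel="canonical" href="http://www.nytimes.com/2011/04/13/world/asia/13japan.html" />
```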
Technical SEO | jackson_lo