Duplicated page error
-
Hi, I am trying to figure out how to fix duplicated page errors.
Most of them are from the WordPress "feed" URLs.
Does anyone know how to fix this problem?
Wedding Photographer San Antonio | Soobumim Photography 210-863-9878
http://www.soobumimphotography.com/feed/?paged=11 21 1 0 Wedding Photographer San Antonio | Soobumim Photography 210-863-9878
http://www.soobumimphotography.com/feed/?paged=12 21 1 0 Wedding Photographer San Antonio | Soobumim Photography 210-863-9878
-
Hi!
I'm guessing that you're using a campaign in the SEOmoz toolset? This is perfectly normal, as SEOmoz only crawls your site and doesn't check against Google's index.
So this means that Roger (the SEOmoz robot) found duplicate content in your feed URLs. Google is pretty good at identifying duplicate content from feeds, but you should always double-check whether these URLs are actually indexed:
Phew! Looks like you're clear
If you want to be extra sure on your WordPress site, you can install this plugin, http://yoast.com/noindex-for-rss-feeds/, which adds a NOINDEX tag to the feed.
Don't worry too much; it looks like you're clear, but it wouldn't hurt to install that plugin anyway!
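If you'd like to double-check the noindexing programmatically, here's a minimal sketch (the `is_noindexed` helper is hypothetical, not a Moz or Yoast tool; the X-Robots-Tag header and meta robots tag themselves are the standard mechanisms): it looks for a noindex directive in either the HTTP header or an HTML meta tag.

```python
import re

def is_noindexed(headers: dict, body: str) -> bool:
    """Return True if a noindex directive appears in the
    X-Robots-Tag header or a <meta name="robots"> tag."""
    # HTTP header check: RSS feeds rely on this header, since
    # XML feeds have no <meta> tag to carry robots directives.
    tag = headers.get("X-Robots-Tag", "")
    if "noindex" in tag.lower():
        return True
    # HTML meta tag check, for regular pages.
    m = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        body, re.IGNORECASE)
    return bool(m and "noindex" in m.group(1).lower())

# The feed URLs from the crawl report above; in practice you would
# fetch each one (e.g. with urllib.request) and pass the real
# response headers and body to is_noindexed().
urls = [
    "http://www.soobumimphotography.com/feed/?paged=11",
    "http://www.soobumimphotography.com/feed/?paged=12",
]
```

Note this sketch assumes exact header-name casing; a real client should treat header names case-insensitively.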
Cheers,
Dave
Related Questions
-
Moz Crawl shows over 100 times more pages than my site has?
The latest crawl stats are attached. My site has just over 300 pages, so what have I done wrong? RRv3fR0
Reporting & Analytics | | Billboard20120 -
Is there an automated way to determine which pages of your website are getting 0 traffic?
I'm doing a content audit on my company website and want to identify pages with zero traffic. I can use GA for low traffic, but not zero traffic. I can do this manually, but it would take a long time. Are there any tools to help me determine these pages?
Reporting & Analytics | | Ksink0 -
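One common approach to the question above (a sketch with made-up URLs, not a specific tool): export the set of URLs that received at least one pageview from Google Analytics, export your full page list from the sitemap or CMS, and take the set difference. Anything in the sitemap that never appears in the GA export got zero traffic.

```python
# Hypothetical example data; in practice, load these from a sitemap
# crawl and a Google Analytics "All Pages" export (CSV).
sitemap_urls = {"/", "/about/", "/services/", "/old-post/", "/contact/"}
ga_urls = {"/", "/about/", "/services/", "/contact/"}  # pages with >= 1 pageview

# Pages in the sitemap that never show up in GA are the zero-traffic set.
zero_traffic = sorted(sitemap_urls - ga_urls)
print(zero_traffic)  # ['/old-post/']
```

One caveat: GA only records pages that carry the tracking code, so an untagged page will also look like "zero traffic" this way.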
Conversion Rate Higher Than Landing Page Visits?
Interesting to see in Google Analytics that the conversion rate is higher than landing page visits - could it be attributed to a visitor clicking the CTA button multiple times? Or perhaps there is duplicate GA code on the conversion page since we utilize both Google Analytics and HubSpot. (see attached funnel screenshot) Screen-Shot-2014-09-26-at-10.49.09-AM.png
Reporting & Analytics | | W210 -
What is The Bounce Rate of Single Page Website?
Hi All, I just want to clear up some of my confusion regarding bounce rate. Does bounce rate depend on time? If yes, then how? What will the bounce rate be for a single-page website? Will a single-page website have the same bounce rate and exit rate?
Reporting & Analytics | | RuchiPardal0 -
800,000 pages blocked by robots...
We made some mods to our robots.txt file. We added many PHP and HTML pages that should not have been indexed. Well, not sure what happened or if there was some type of dynamic conflict with our CMS and one of these pages, but in a few weeks we checked Webmaster Tools and, to our great surprise and dismay, the number of pages blocked by robots.txt was up to about 800,000 out of the 900,000 or so we have indexed.
1. So, first question is, has anyone experienced this before? I removed the files from robots.txt and the number of blocked files has still been climbing. Changed the robots.txt file on the 27th. It is the 29th and the new robots.txt file has been downloaded, but the blocked pages count has been rising in spite of it.
2. I understand that even if a page is blocked by robots.txt, it can still show up in the index, but does anyone know how blocking affects the ranking? i.e., while it might still show up even though it has been blocked, will Google show it at a lower rank because it was blocked by robots.txt?
Our current robots.txt just says:
User-agent: *
Disallow:
Sitemap: oursitemap
Any thoughts? Thanks! Craig
Reporting & Analytics | | TheCraig0 -
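As a sanity check on robots.txt syntax, here is a sketch using Python's standard `urllib.robotparser` (the example.com URLs and paths are placeholders): an empty `Disallow:` value blocks nothing, so a file like the one quoted in the question should not block any pages, whereas a populated `Disallow:` path does.

```python
from urllib.robotparser import RobotFileParser

# An empty Disallow value, as in the file quoted above, blocks nothing.
allow_all = ["User-agent: *", "Disallow:"]
rp = RobotFileParser()
rp.parse(allow_all)
print(rp.can_fetch("*", "http://example.com/anything.php"))  # True

# By contrast, a populated Disallow path does block URLs under it.
block_private = ["User-agent: *", "Disallow: /private/"]
rp2 = RobotFileParser()
rp2.parse(block_private)
print(rp2.can_fetch("*", "http://example.com/private/page.html"))  # False
```

If the blocked-page count keeps climbing with a file like the first one, the cause is more likely Webmaster Tools lag or a dynamically generated robots.txt than the syntax itself.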
Homepage on page 2 for site:domain
Hi all, today I noticed that our homepage is located on page 2 if you do the site:domain query. As far as I know, the site:domain results mirror importance in the eyes of Google. Some time ago, our homepage was the first result. I have to say that we do not often have changing elements or new content on the homepage; it is more like a static page, but still the most linked-to page on the domain... What conclusion can I come to? Is our homepage of lower importance to Google than some time ago? Is it a problem for SEO? As we scaled back our advertisements, traffic from branded keywords fell over the last months; could this be an explanation? And, most important: do I have to worry? (Besides, the SEO traffic is fine and growing...)
Reporting & Analytics | | accessKellyOCG0 -
Setting up Webmaster Tools correctly - naked domain DNS error and sub-domains question
I'm trying to get our domain (verdantly.com) set up correctly in Google Webmaster Tools. Currently, I have three "sites" setup: blog.verdantly.com (wordpress.com blog redirected to this subdomain) www.verdantly.com verdantly.com The subdomain blog and www show up without errors. However, the naked domain shows a DNS error. I've checked the DNS settings at the registrar and don't see any issues. So here are my questions: 1. Am I correct in setting up the naked domain AND the subdomains separately in Webmaster tools? 2. How do I track down / resolve the source of the DNS errors at the naked domain? Thanks!
Reporting & Analytics | | letsdothis0 -
Google.co.uk (The Web or Pages From UK) Query?
Hi, Google.co.uk is ambiguous at best. It is geo-targeted for the UK; however, by default all results incorporate "The Web", meaning results from outside the UK. If a user wishes to filter to "Pages From UK", then they have to click that specifically. Now my clients regularly ask me whether the traffic they are getting is from Google.co.uk (The Web) or Google.co.uk (Pages from UK). In Analytics these two are combined as a single source = Google.co.uk without any further breakdown. Is there a way to figure this out? If I can split the figures then I can run the necessary additional comparisons etc. Regards, Ausaf
Reporting & Analytics | | conversiontactics0