Does SEOmoz recognize duplicate URLs blocked in robots.txt?
-
Hi there:
Just a newbie question...
I found some duplicate URLs in the SEOmoz Crawl Diagnostics reports that should not be there.
They are meant to be blocked by the site's robots.txt file.
Here is an example URL (Joomla + VirtueMart structure):
http://www.domain.com/component/users/?view=registration
and here is the blocking rule in the robots.txt file:
User-agent: *
Disallow: /components/
My questions are:
Will this kind of duplicate URL error be removed from the error list automatically in a future crawl?
Should I keep track of which errors don't really belong in the error list?
What is the best way to handle these errors?
Thanks and best regards
Franky
-
Hello Franky,
Yes, our crawler obeys robots.txt files. If you recently made that change to your robots.txt, it should be reflected in your next crawl. If the error doesn't go away, feel free to let us know at help@seomoz.org. Thanks for letting us know!
-Abe
-
Don't be too worried about SEOmoz's errors; just be aware of them. If you have set up your robots.txt correctly for search engine robots, they should take notice and there shouldn't be any issues. Always be sure to check Google Webmaster Tools for errors as well; those are the ones you should fix ASAP.
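One practical way to handle these errors is to test your rules locally before waiting for the next crawl. The sketch below uses Python's standard-library robotparser with the domain and rules from the example above (a minimal check, not a simulation of any particular crawler). Note one thing worth double-checking in the original example: the rule disallows /components/ (plural) while the sample URL path begins with /component/ (singular), so a strict prefix match would not actually block that URL.

```python
from urllib import robotparser

url = "http://www.domain.com/component/users/?view=registration"

# The rule as posted: "components" (plural).
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /components/",
])
# /component/users/ does not start with /components/, so fetching is allowed.
print(rp.can_fetch("*", url))  # True

# A rule matching the actual path prefix would block the URL.
rp2 = robotparser.RobotFileParser()
rp2.parse([
    "User-agent: *",
    "Disallow: /component/",
])
print(rp2.can_fetch("*", url))  # False
```

If the URLs you expect to be blocked come back as fetchable here, the robots.txt rule itself needs adjusting; if they come back blocked, the crawl report should clear on the next run.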