Website blocked by Robots.txt in OSE
-
When viewing my client's website in OSE under the Top Pages tab, it shows that ALL pages are blocked by Robots.txt. This is extremely concerning because Google Webmaster Tools is showing me that all pages are indexed and OK. No crawl errors, no messages, no nothing. I did a "site:website.com" in Google and all of the pages of the website returned.
Any thoughts? Where is OSE picking up this signal? I cannot find a blocked robots tag in the code or anything.
-
No worries - glad to help!
-
Thanks for responding - I did, and I noticed that we are blocking a bunch of other spiders, including the one that crawls for OSE. So that explains why it cannot retrieve the data.
Again, thanks.
-
Have you looked at your robots.txt file to see if you are blocking specific bots? Visit yoursite.com/robots.txt and check whether you have something like this:

User-agent: [example]
Disallow: /

But you may also have a separate rule specifying that Googlebot is allowed to crawl the site:

User-agent: googlebot
Allow: /
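If you want to check programmatically which crawlers a robots.txt file blocks, Python's standard library ships a parser for exactly this. A minimal sketch; the rule set below is an assumed example (rogerbot is the user-agent Moz's crawler identifies itself as), not the client's actual file:

```python
# Check which crawlers a robots.txt blocks, using Python's stdlib parser.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: blocks Moz's crawler, explicitly allows Googlebot.
ROBOTS_TXT = """\
User-agent: rogerbot
Disallow: /

User-agent: googlebot
Allow: /
"""

def is_allowed(robots_txt: str, user_agent: str,
               url: str = "http://example.com/") -> bool:
    """Return True if `user_agent` may fetch `url` under these rules."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

print(is_allowed(ROBOTS_TXT, "rogerbot"))   # False - blocked by Disallow: /
print(is_allowed(ROBOTS_TXT, "googlebot"))  # True - explicitly allowed
```

Note that a crawler with no matching group (and no `User-agent: *` fallback) is allowed by default, which is why OSE can report pages as blocked while Google indexes them just fine.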
Related Questions
-
Does MOZ still do deep crawls of the website?
In the past you could get Moz to crawl your website; now I don't see this option, nor do I see a crawl at the beginning of the month. Has this changed? I saw this as a useful feature.
Moz Pro | cdgospel -
OSE for Facebook
Hi, I recall being able to use OSE on Facebook pages. Take https://www.facebook.com/VICE/, a URL we know would have many backlinks. It isn't registering any. Has this always been the case?
Moz Pro | wearehappymedia -
Duplicate content across two websites
Hi. I'm looking for ways to compare duplicate content across two different websites, instead of just one site as with the Moz crawler. It would flag up duplicates present on both sites A and B.
Moz Pro | Blink-SEO -
A tool to tell a websites estimated traffic
I am new to Moz (as a member), so I am not sure if Moz has the tool I need. I don't want this post to be about self-promotion, so I will keep it short. Our business helps increase conversions and sales for online businesses. Our ideal prospects belong to some key categories of businesses like e-commerce, SaaS, etc. However, I would like to know the estimated volume of traffic for a website before approaching them and introducing our service. So if there were a tool I could use to estimate the volume of visitors a specific website receives on average per day or month, it would be hugely beneficial. Obviously, these are prospective clients, so we do not have access to their systems or their analytics; I just want an estimate. For example, if I entered the domain abc.com into the system, I would hope it could tell me that abc.com gets an average of 900 unique visitors a day. I don't need too much detail like geographic locations, but having that additional information would be a bonus. I also don't mind paying for a quality tool, so it doesn't have to be free.
Moz Pro | RyanShahed -
Good tool to track external links from the website
I am in search of a tool that lists the links going from my site to other sites. Is there a piece of software or a tool that can scan the whole site and report the external links it contains?
Moz Pro | csfarnsworth -
Robots.txt
I have a page used for reference that lists 150 links to blog articles. I use it in a training area of my website. I now get warnings from Moz that it has too many links, so I decided to disallow this page in robots.txt. Below is what appears in the file:

Robots.txt file for http://www.boxtheorygold.com
User-agent: *
Disallow: /blog-links/

My understanding is that this simply has Google bypass the page and not crawl it. However, in Webmaster Tools, I used the Fetch tool to check a couple of my blog articles. One returned an expected result; the other returned "access denied" due to robots.txt. Both blog articles are linked from the /blog-links/ reference page. Question: why does Google refuse to crawl the one article (using the Fetch tool) when it is not referenced at all in the robots.txt file? Why is access denied? Should I have used a noindex on this page instead of robots.txt? I am fearful that robots.txt may be blocking many of my blog articles. Please advise. Thanks,
Ron
Moz Pro | Rong -
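The disallow rule quoted above can be sanity-checked with Python's built-in robots.txt parser. A small sketch (the individual article path is a made-up example):

```python
# Verify what the quoted robots.txt rule actually blocks.
from urllib.robotparser import RobotFileParser

rules = RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /blog-links/",
])

# The reference page itself is blocked for every crawler...
print(rules.can_fetch("Googlebot",
                      "http://www.boxtheorygold.com/blog-links/"))       # False
# ...but an individual blog article (hypothetical path) is not.
print(rules.can_fetch("Googlebot",
                      "http://www.boxtheorygold.com/blog/some-article")) # True
```

Disallow only stops crawling of paths that match the pattern; it does not by itself remove already-indexed pages, which is what a noindex meta tag is for.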
How come there are no links to my website according to SEOmoz Competitive Domain Analysis, while in Google Webmaster Tools I do see links?
I don't see any links at all when I do a Competitive Domain Analysis in SEOmoz. However, I do see links in Google Webmaster Tools, which strikes me as odd. Also, when I use Open Site Explorer my website doesn't seem to be found. In Google I'm on page 9 for my focus keyword, so I do think there are links to my site. I would like to know what I can do so I can analyse my links in SEOmoz Competitive Domain Analysis. Many thanks. URL: http://www.sadpanda.nl
Moz Pro | Aquive -
New OSE
Since the rollout of the new OSE, many of the sites I run are showing a huge loss of links overnight. Is there an issue with OSE, either before or after the update, or am I missing something? Thanks
Moz Pro | blocker0408