Can't work out robots.txt issue.
-
Hi
I'm getting crawl errors saying Moz isn't able to access my robots.txt file, but it seems completely fine to me. Can anyone help me understand what the issue might be?
-
I would follow the advice from Roman. I tried to access the file a few different ways as well, and it loads without any issue.
-
OK, I ran a quick test of your robots.txt file and it looks fine. An HTTP status code test returns a 200, which is what you want. You also need to make sure the file is accessible to the Moz crawler:
- Make sure the file is in the top-level directory of your web server.
- Check that your hosting provider isn't blocking third-party crawlers at the server level.
You can test the status code of your robots.txt file here: https://httpstatus.io/. If you're still having trouble, contact Moz support at help@moz.com.
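Beyond an online status checker, Python's standard library can confirm that the file parses and show exactly what a given crawler is allowed to fetch. A minimal sketch, assuming a hypothetical robots.txt and example.com URLs (rogerbot is the user-agent Moz's crawler identifies itself as):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content -- paste in your live file's text,
# or use parser.set_url(...) plus parser.read() to fetch it over HTTP.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# rogerbot is Moz's crawler; verify it isn't accidentally disallowed.
print(parser.can_fetch("rogerbot", "https://www.example.com/"))        # True
print(parser.can_fetch("rogerbot", "https://www.example.com/admin/"))  # False
```

Note this only checks the file's directives; it won't catch server-level blocking, which is why a status-code test against the live URL is still worth running.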
Best of luck!
Related Questions
-
Solving URL Too Long Issues
Moz.com is reporting that many URLs are too long. These particularly affect product URLs, where the URL is typically https://www.domainname.com/collections/category-name/products/product-name (you guessed it, we're using Shopify). However, we use canonicals that ignore most of the URL and are just structured https://www.domainname.com/product-name, so Google should be reading the canonical and not the long-winded version. However, Moz cannot seem to spot this... does anyone else have this problem, and how can we solve it so that we satisfy the Moz.com crawl engine?
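For context, the canonical the poster describes is the standard link element in the page head; a hypothetical sketch (the domain and product name are placeholders taken from the question):

```html
<link rel="canonical" href="https://www.domainname.com/product-name" />
```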
Moz Pro | Tildenet0
-
Spammy inbound links: Don't Fix It If It's Not Broken?
Hi Moz community, Our website is nearing the end of a big redesign to be mobile-responsive. We decided to delay any major changes to text content so that if we do suffer a rankings drop upon launch, we'll have some ability to isolate the cause. In the meantime I'm analyzing our current SEO strengths and weaknesses.

There is a huge discrepancy between our rankings and our inbound link profile. Specifically, we do great on most of our targeted keywords and in fact had a decent surge in recent months. But Link Profiler turned up hundreds of pages of inbound links from spammy domains, many of which don't even display a webpage when I click there (shown in uploaded image). "Don't fix it if it's not broken" conflicts with my natural repulsion to these sorts of referrals. Assuming we don't suffer a rankings drop from the redesign, how much of a priority should this be? There are too many, and most are too spammy to contact the webmasters, so we'll need to do it through a disavow. I couldn't even open the one at the top of the list because our business web proxy identified it as adult content.

A common conception seems to be that if Google hasn't penalized us for these links yet, they eventually will. Are we talking about the algorithm just stumbling upon these links and hurting us, or is this something we would find in Manual Actions (or both)? How long after the launch should we wait before attacking these bad links? Is there a certain spam score you'd consider the threshold for "yes, definitely get rid of it"? And when we do, should we disavow one domain at a time to monitor any potential drops, or all at once? (This seems kind of obvious, but if spam score and domain authority alone are enough of a signal that it won't hurt us, we'd rather get it done ASAP.)

How important is this compared to creating fresh new content on all the product pages? Each one will have new images as well as product reviews, but the product descriptions will be the same ones we've had up for years. I have new content written, but it's delayed pending any fallout from the redesign. Thanks for any help with this!
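For reference, Google's disavow tool accepts a plain text file with one entry per line: `domain:` entries drop every link from a domain, bare URLs drop a single page, and `#` lines are comments. A hypothetical sketch (the domains below are made-up examples, not actual referrers):

```text
# Google disavow file -- one entry per line.
# Disavow every link from an entire spammy domain:
domain:spammy-links.example
domain:adult-content.example

# Or disavow a single URL:
http://another-site.example/bad-page.html
```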
Moz Pro | jcorbo0
-
Why doesn't my domain authority update?
Hi people, I want to know why my new domain "https://reise-visum-australien.org/" (registered about 2 months ago) doesn't have any authority. I built many good links for this site throughout these 2 months, but the domain authority doesn't update.
Moz Pro | vahidafshari450
-
My account recently got deactivated due to billing issues. I reactivated my account, but all of my campaigns are gone. Is there a way I can get back my campaigns without having to redo all of them??
Moz Pro | DorianDDR10
-
What's with measuring search rank?
Okay, here are 6 different ways to measure search rank on Google U.S., and I get 6 different answers. Looking for the search term "Lunch Bags" for the site Zappos (just a random example, no connection to me):
- SEOmoz Rank Tracker: #49
- SEOCentro: #36 (http://www.seocentro.com/tools/search-engines/keyword-position.html)
- ZendProxy.com: #21 (an anonymizer proxy)
- anonymouse.org: no rank (another anonymizer)
- Mike's Marketing Tools: no rank (http://www.mikes-marketing-tools.com/ranking-reports/)
- RankingCheck.com: #38
Can you duplicate a similar variation? What gives? Which, if any, of these would you rely on? Thanks... Darcy
Moz Pro | 945010
-
You've recently updated your brand rules. We're fetching your new data, and we should have it ready for you within the hour.
Why do I always see this message when entering a certain campaign? "You've recently updated your brand rules. We're fetching your new data, and we should have it ready for you within the hour." I didn't change a thing since I started this campaign two or three weeks ago...
Moz Pro | alsvik0
-
How can you set SEOmoz to work with your dev site behind an htpasswd?
All sites need to be developed, from the small to the grand, and this takes time. Development usually takes place on a subdomain different from our live domain. It is locked down behind an htpasswd during development so it's not picked up by search engines, which could create duplicate content issues if, when the site goes live, the crawler has already scanned it on the development server. It's also a security measure to keep the site away from prying eyes before it's ready for launch; there could be security holes that have not been tweaked yet.

What's the best strategy to get SEOmoz involved in this scenario? Its tools are invaluable to the SEO part of the build, but the SEOmoz crawler bot has a different IP address (being cloud-based), so we cannot just let a single IP address through our htpasswd. Also, is there a way to link the dev and live sites in SEOmoz, so that when the site goes live we maintain all the same logs without having to create two separate campaigns? Thanks!
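Since you can't whitelist the crawler's IP, one thing you can at least verify yourself is that the dev site answers HTTP Basic Auth correctly for tools that support sending credentials. A minimal sketch in Python (the function names and the dev URL are placeholders of my own, not a Moz feature):

```python
import base64
import urllib.request

def basic_auth_header(user, password):
    """Build the Authorization header an htpasswd-protected server expects."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def fetch_status(url, user, password):
    """Return the HTTP status of a Basic-Auth-protected page."""
    req = urllib.request.Request(url, headers=basic_auth_header(user, password))
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# Example header for the RFC 7617 sample credentials:
print(basic_auth_header("Aladdin", "open sesame")["Authorization"])
# Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==

# fetch_status("https://dev.example.com/", "staging", "s3cret")  # expect 200
```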
Moz Pro | dseo2410
-
Which would you choose? A link on a PA56 with 88 OBLs and 80 IBLs, or a link on a PA75 with 225 OBLs & 40 IBLs (same domain)?
Which would you choose? A link on a PA56 with 88 OBLs and 80 IBLs, or a link on a PA75 with 225 OBLs & 40 IBLs (same domain)? Pretty self-explanatory. I want to know which metrics SEOMozzers rely on most. If you are not an expert at evaluating links for large-scale link development, please don't muddy the waters on this question with a thin and vague answer.
Moz Pro | DavidWolf580