Robots.txt - "File does not appear to be valid"
-
Good afternoon Mozzers!
I've got a weird problem with one of the sites I'm dealing with. For some reason, one of the developers changed the robots.txt file to disallow every page on the site - not a wise move!
To rectify this, we uploaded a new robots.txt file to the domain's root as per Webmaster Tools' instructions. The live file now contains only: User-agent: * (http://www.savistobathrooms.co.uk/robots.txt)
I've submitted the new file in Webmaster Tools and it's pulling it through correctly in the editor. However, Webmaster Tools is not happy with it for some reason. I've attached an image of the error.
Does anyone have any ideas? I'm managing another site with the exact same robots.txt file and there are no issues.
Cheers,
Lewis
-
Thanks for the quick response, Patrick. If this robots.txt file is incorrect, why does it yield no errors on the other sites we use it on?
Cheers,
Lewis
-
Hi there,
I'd say that file needs an...
Allow: /
...or at least one rule line to complete the group (what Google's documentation calls a group's member records).
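For example, a minimal permissive file might look like this (a sketch of the suggested fix, not the poster's live file - an empty Disallow line is generally treated as equivalent to Allow: /):

```
User-agent: *
Allow: /
```

or, equivalently:

```
User-agent: *
Disallow:
```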
I would take a look at the robots.txt specification on Google Developers and see where you have opportunities to remedy this issue.
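As a quick sanity check, you can also parse a robots.txt yourself with Python's standard-library robotparser. This sketch assumes the permissive file suggested above (the Allow: / line is the proposed fix, not the site's current live file):

```python
from urllib import robotparser

# Hypothetical robots.txt contents - the permissive form suggested above.
ROBOTS_TXT = """\
User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# With a blanket Allow, every path should be fetchable for any user agent.
print(rp.can_fetch("*", "http://www.savistobathrooms.co.uk/"))
print(rp.can_fetch("Googlebot", "http://www.savistobathrooms.co.uk/some-page.htm"))
```

Both calls should print True; if you paste in the broken disallow-everything file instead, they'll flip to False.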
Hope this helps! Good luck!