Robots.txt Disallow: / in Search Console
-
Two days ago I found out through Search Console that my website's robots.txt has changed to:
User-agent: *
Disallow: /
When I check the robots.txt on the website itself it looks fine - it only shows as blocked in Search Console (in the robots.txt Tester). When I try to Fetch as Google on the homepage, I see it's blocked. Any idea why robots.txt would block my website? It was fine until the weekend.
- Before that, over the last 3 months, I had blocked resources on the website and brought pages back with Fetch as Google.
Any ideas?
-
Hello Ran,
Just to clarify: in Search Console, when you go to Crawl -> robots.txt Tester and click "See live robots.txt" on the middle right, does it show the correct file?
It could be that Google isn't recrawling the new robots.txt yet.
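If the live file looks correct but the tester still shows Disallow: /, it is worth confirming exactly what crawlers receive, since Google typically caches robots.txt for up to about 24 hours and may be acting on a stale copy. A minimal sketch using only the Python standard library (https://example.com is a placeholder for your own domain):

# Sketch: print the live robots.txt and test whether Googlebot may fetch the homepage.
from urllib.request import urlopen
from urllib.robotparser import RobotFileParser

ROBOTS_URL = "https://example.com/robots.txt"  # placeholder domain
print(urlopen(ROBOTS_URL).read().decode("utf-8"))  # the file crawlers see right now

parser = RobotFileParser(ROBOTS_URL)
parser.read()
print(parser.can_fetch("Googlebot", "https://example.com/"))  # False means blocked

If this prints True while the tester says blocked, the tester is most likely working from Google's cached copy and should catch up on the next recrawl.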
Related Questions
-
How do I access the old search console?
Hello, Moz community! I need to access the old search console in order to submit a change of address. I used to be able to switch from a toggle on the main menu, but I can't seem to find that anymore. Does anyone have any ideas on how I can access it?
Technical SEO | eddiewang25 -
Should I add my html sitemap to Robots?
I have already added the .xml sitemap to robots.txt. But should I also add the HTML version?
Technical SEO | Trazo -
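For what it's worth, the Sitemap: directive in robots.txt is intended for XML (or RSS/text) sitemap files, not HTML sitemap pages - an HTML sitemap is just a regular page that crawlers discover through links. A small sketch (Python 3.8+, placeholder example.com URL) showing how the directive is read programmatically:

# Sketch: list the Sitemap: entries a robots.txt advertises (Python 3.8+).
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://example.com/robots.txt")  # placeholder domain
parser.read()
print(parser.site_maps())  # e.g. ['https://example.com/sitemap.xml'], or None if absent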
How to remove Parameters from Google Search Console?
Hi All, following is the parameter configuration in Search Console:
Parameter - fl
Does this parameter change page content seen by the user? - Yes: changes, reorders, or narrows page content.
How does this parameter affect page content? - Narrows
Which URLs with this parameter should Googlebot crawl? - Let Googlebot decide (default)
Query: it is actually a filter parameter. I have already set a canonical on the filter pages. I am now tracking filter pages via the data layer and Tag Manager, but in Google Analytics I am not able to see the filter URLs because of this parameter. So I want to delete this parameter. Can anyone please help me? Thanks!
Technical SEO | adamjack -
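Independent of the Search Console setting, one way to reconcile the analytics view is to normalize URLs by stripping the parameter before reporting on them. A sketch using the Python standard library (the "fl" name comes from the question; the URL itself is hypothetical):

# Sketch: remove one query parameter (here "fl") from a URL for reporting.
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_param(url: str, param: str) -> str:
    parts = urlparse(url)
    # Keep every query pair except the parameter being removed.
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunparse(parts._replace(query=urlencode(query)))

print(strip_param("https://example.com/category?fl=red&page=2", "fl"))
# -> https://example.com/category?page=2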
"Extremely high number of URLs" warning for robots.txt blocked pages
I have a section of my site that is exclusively for tracking redirects for paid ads. All URLs under this path do a 302 redirect through our ad tracking system: http://www.mysite.com/trackingredirect/blue-widgets?ad_id=1234567 --302--> http://www.mysite.com/blue-widgets. This path of the site is blocked by our robots.txt, and none of the pages show up for a site: search.
User-agent: *
Disallow: /trackingredirect
However, I keep receiving messages in Google Webmaster Tools about an "extremely high number of URLs", and the URLs listed are in my redirect directory, which is ostensibly not indexed. If not by robots.txt, how can I keep Googlebot from wasting crawl time on these millions of /trackingredirect/ links?
Technical SEO | EhrenReilly -
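One thing worth ruling out first is a rule mismatch, by testing a sample of the reported URLs against the exact rules in the file. A sketch with the Python standard library (rules and URLs taken from the question); note that robots.txt prevents crawling, not discovery, so Google can still report URLs it finds via links even though it never fetches them:

# Sketch: verify the Disallow rule actually covers the reported URLs.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /trackingredirect
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

for url in ("http://www.mysite.com/trackingredirect/blue-widgets?ad_id=1234567",
            "http://www.mysite.com/blue-widgets"):
    print(url, "->", "blocked" if not parser.can_fetch("Googlebot", url) else "crawlable")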
301 Redirect / cross-domain canonical to a URL w/ Ampersand
I have a question regarding ampersands: we need to redirect to a URL with an ampersand in it: http://local.sfgate.com/b18915250/Sam-&-Associates-Insurance-Agency. Will Google pass page authority/juice despite the ampersand in the URL if we 301 redirect or cross-domain canonical to it? Should we 301 redirect to http://local.sfgate.com/b18915250/Sam-%26-Associates-Insurance-Agency instead of http://local.sfgate.com/b18915250/Sam-&-Associates-Insurance-Agency? I don't have the option of removing the ampersand. Thank you for your time!
Technical SEO | Gatelist -
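A hedged note on the encoding side: per RFC 3986, a raw & and its percent-encoded form %26 in a path are not automatically the same URL, so the safest target is whichever form the destination server actually answers with a 200; an ampersand by itself does not stop a 301 from passing signals. A quick standard-library sketch of the round-trip:

# Sketch: percent-encoding a path segment that contains an ampersand.
from urllib.parse import quote, unquote

segment = "Sam-&-Associates-Insurance-Agency"
encoded = quote(segment)   # unreserved characters (letters, digits, - _ . ~) pass through
print(encoded)             # Sam-%26-Associates-Insurance-Agency
print(unquote(encoded))    # Sam-&-Associates-Insurance-Agency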
How to allow one directory in robots.txt
Hello, is there a way to allow a certain child directory in robots.txt but keep all others blocked? For instance, we've got external links pointing to /user/password/, but we're blocking everything under /user/. And there are too many /user/somethings/ to just block every one BUT /user/password/. I hope that makes sense... Thanks!
Technical SEO | poolguy -
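Most major crawlers (and the current robots.txt standard, RFC 9309) support Allow, with the most specific matching rule winning, so the usual pattern is to pair a specific Allow with the broad Disallow. A sketch testing that pattern with Python's standard-library parser - note it applies rules in file order rather than by longest match, which is why the Allow line is listed first, making both interpretations agree:

# Sketch: allow one child directory while blocking its parent.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Allow: /user/password/
Disallow: /user/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("*", "https://example.com/user/password/reset"))  # True
print(parser.can_fetch("*", "https://example.com/user/profile"))         # False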
Robots.txt file question? Never seen this command before
Hey Everyone! Perhaps someone can help me. I came across this command in the robots.txt file of our Canadian corporate domain. I looked around online but can't seem to find a definitive answer (only slightly relevant results). The command line is as follows:
Disallow: /*?*
I'm guessing this might have something to do with blocking PHP query-string searches on the site? It might also have something to do with blocking sub-domains, but the "?" mark puzzles me 😞 Any help would be greatly appreciated! Thanks, Rob
Technical SEO | RobMay -
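For context: Google, Bing, and RFC 9309 treat * in a rule as a wildcard, so Disallow: /*?* blocks any URL containing a ? - that is, anything with a query string. It has nothing to do with sub-domains, which robots.txt cannot control anyway (each sub-domain serves its own file). A hand-rolled matcher as an illustration, since Python's built-in robotparser does not understand wildcards:

# Sketch: Google-style wildcard matching for a robots.txt rule path.
import re

def robots_rule_matches(rule: str, path: str) -> bool:
    # "*" matches any run of characters; a trailing "$" anchors at the end;
    # otherwise the rule is a prefix match from the start of the path.
    regex = re.escape(rule).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    return re.match(regex, path) is not None

print(robots_rule_matches("/*?*", "/search.php?q=widgets"))  # True  (has a query string)
print(robots_rule_matches("/*?*", "/about-us"))              # False (no "?" anywhere)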
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages for our 10K-page PRO account, most of which are not SEO-worthy. In fact, only about 2,000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl. However, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed, as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:
User-agent: rogerbot
Disallow: /-p/
However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV. To my dismay I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm. The value for the column "Search Engine blocked by robots.txt" = FALSE; does this mean blocked for all search engines? Then it's correct. If it means blocked for rogerbot, then it shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on attaining my goal would REALLY be appreciated; I've been trying for weeks now. Honestly - virtual beers for everyone! Carlo
Technical SEO | AspenFasteners
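One likely culprit is in the rule itself: without a wildcard, Disallow: /-p/ only matches paths that begin with /-p/, while these product folders merely end in -p, so the rule would need to be Disallow: /*-p/ under wildcard-aware matching (assuming rogerbot honors wildcards the way Googlebot does - worth confirming in Moz's rogerbot documentation). A small sketch of the difference, using the product URL from the question:

# Sketch: literal prefix matching vs. wildcard matching for "-p" folders.
from fnmatch import fnmatchcase

path = "/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm"

# Original robots.txt spec: a rule is a literal prefix of the path.
print(path.startswith("/-p/"))       # False - the rule never fires

# With a wildcard, "/*-p/" covers any folder whose name ends in -p
# (fnmatch's * also crosses slashes, like the robots.txt wildcard).
print(fnmatchcase(path, "/*-p/*"))   # True - the product page would be blocked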