Have a Robots.txt Issue
-
I have a robots.txt file error that is causing me loads of headaches and is making my website fall off the search engine grid. Moz and other tools are saying that I've blocked all search engines from finding it. Could it be as simple as this: I created a new website and forgot to re-create a robots.txt file for the new site, so crawlers were still finding the old one? I just created a new one.
Google Search Console still shows severe health issues found in the property, and that robots.txt is blocking important pages. Does this take time to refresh? Is there something I'm missing that someone here in the Moz community could help me with?
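For reference, when tools report that an entire site is blocked, the robots.txt usually contains a catch-all disallow like this:

User-agent: *
Disallow: /

whereas a file that allows everything to be crawled leaves the Disallow value empty:

User-agent: *
Disallow: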
-
Hi primemediaconsultants!
Did this get cleared up?
-
You don't always have to do this. If you go to domain.com/robots.txt, you may find the blocking rule has already been removed. If that's the case, you should start to see an increase in the number of pages crawled in Google Search Console.
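To confirm what crawlers actually see, you can fetch the live file directly. A minimal sketch using only Python's standard library (www.example.com is a placeholder for your own domain):

from urllib import request

# Download robots.txt exactly as a crawler would request it.
robots = request.urlopen("https://www.example.com/robots.txt").read().decode("utf-8")
print(robots)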
-
This seems very helpful, as I did remove it and fetched as Google, but I'm a complete novice. How do you clear the server cache?
-
What does your robots.txt file contain? (or share the link)
Try removing it, clearing the server cache, and fetching as Google again.
Related Questions
-
Main Nav Redirects Issue: Unnecessary Evil or Better Solution?
Hi, I'm somewhat stumped on the best course of action for our navigation menu. Our site is "divided" into two areas: informational and transactional. Because of compliance, transactional users have to be geo-targeted; therefore, we serve them a specific page (we have 6 different regions: uk, aus, eu, etc.). If users visit the informational side, it's not geo-specific. Example: https://site/charts https://site/uk/money Within our main nav, we don't specify the geo transaction page and instead use a generic https://site/money/ (a page that doesn't exist); when a user clicks that link, we detect their location and serve a 301 redirect to the correct geo page, as sketched below. This has obviously caused a ton of unnecessary redirects and a waste of powerful link equity from the header of the site. It's been recommended to dynamically change the linked URL in the header based on the location of the user. That sounds good, but what about Google? Since we can't detect the Google crawler's IP, we would have to pick a default geo URL like /uk/money. If we do that, the other regional URLs suffer in link equity. How do we minimize redirects and make Google happy for all our geo pages? Hope this makes sense, and thanks for your time!
Intermediate & Advanced SEO | Bragg
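As an illustration of the redirect pattern described above, here is a minimal sketch in Flask; the framework, the route, and the detect_region helper are hypothetical stand-ins, not the site's actual stack:

from flask import Flask, redirect, request

app = Flask(__name__)

def detect_region(ip_address):
    # Hypothetical geo-IP lookup; a real site would query a geo-IP service.
    return "uk"

@app.route("/money/")
def money():
    # The generic /money/ URL never renders content itself; it 301s
    # every visitor to their regional version, e.g. /uk/money/.
    region = detect_region(request.remote_addr)
    return redirect(f"/{region}/money/", code=301)

-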
Block session id URLs with robots.txt
Hi, I would like to block all URLs with the parameter '?filter=' from being crawled by including them in the robots.txt. Which directive should I use:

User-agent: *
Disallow: ?filter=

or

User-agent: *
Disallow: /?filter=

In other words, is the forward slash at the beginning of the disallow directive necessary? Thanks!
Intermediate & Advanced SEO | Mat_C
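One way to sanity-check the difference is Python's built-in urllib.robotparser, which applies plain prefix matching to the rule path; Google's parser may be more lenient, so treat this as a rough check rather than a definitive answer:

from urllib import robotparser

def allowed(disallow_value, url):
    # Build a tiny robots.txt in memory and test the URL against it.
    rp = robotparser.RobotFileParser()
    rp.parse(["User-agent: *", f"Disallow: {disallow_value}"])
    return rp.can_fetch("*", url)

url = "https://example.com/?filter=red"
print(allowed("/?filter=", url))  # False: with the slash, the rule matches the URL
print(allowed("?filter=", url))   # True: without the slash, the prefix never matches

-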
HTTP to HTTPS Issue
Hey there Mozzers, I have a site that a few months ago went from http to https. All the links redirect perfectly, but after scanning my site with Screaming Frog I get a bunch of 503 errors. Looking into my website, I see that a lot of links in my content and menu still use the http URL. For example, my homepage has content that interlinks to the http version of the site. And even though it redirects correctly when I test it, after scanning with Screaming Frog it reports back as 503. Any ideas what's going on? Thanks in advance
Intermediate & Advanced SEO | Angelos_Savvaidis
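A quick way to surface the hard-coded http:// links described above is to scan a page for them. A rough sketch using only Python's standard library (a real audit would parse the HTML properly instead of using a regex, and www.example.com stands in for the migrated site):

import re
import urllib.request

html = urllib.request.urlopen("https://www.example.com/").read().decode("utf-8", "ignore")
# List each distinct link that still points at the insecure http:// version.
for link in sorted(set(re.findall(r'href="(http://[^"]+)"', html))):
    print(link)

-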
Duplicate Content Issues :(
I am wondering how we can solve our duplicate content issues. Here is the thing: there are only so many ways you can write a description of a used watch. http://beckertime.com/product/mens-rolex-air-king-no-date-stainless-steel-watch-wsilver-dial-5500/ http://beckertime.com/product/mens-rolex-air-king-stainless-steel-date-watch-wblue-dial-5500/ What's different between these two? The dial color. We have a lot of the same model numbers but with different conditions, dial colors, and bands. What ideas do you have?
Intermediate & Advanced SEO | KingRosales
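If one variant can be treated as the primary version, one commonly used option (an assumption here, not something the thread settles) is a rel=canonical element on the near-duplicates pointing at it:

<link rel="canonical" href="http://beckertime.com/product/mens-rolex-air-king-no-date-stainless-steel-watch-wsilver-dial-5500/" />

Whether that fits depends on whether each dial-color variant needs to rank on its own; if it does, unique descriptions are the safer route.

-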
Pages getting into the Google index despite being blocked by robots.txt?
Hi all, Yesterday we set out to remove URLs that got into the Google index when they were not supposed to be there, due to faceted navigation. We searched for the URLs by using this in Google Search:

site:www.sekretza.com inurl:price=
site:www.sekretza.com inurl:artists=

That brings up a list of "duplicate" pages, each with the usual: "A description for this result is not available because of this site's robots.txt – learn more." So we removed them all, and Google removed them all, every single one. This morning I did a check and found that more are creeping in. If I take one of the suspected dupes to the robots.txt tester, Google tells me it's blocked, and yet it's appearing in their index. I'm confused as to why a path that is blocked is able to get into the index. I'm thinking of lifting the robots.txt block so that Google can see that these pages also have a meta NOINDEX,FOLLOW tag on them, but surely that will waste my crawl budget on unnecessary pages? Any ideas? Thanks.
Intermediate & Advanced SEO | bjs2010
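For reference, the meta tag referred to above is:

<meta name="robots" content="noindex,follow">

The usual explanation for what's described here is that a robots.txt block stops Google from crawling the page at all, so it never sees the noindex; lifting the block is what allows the tag to take effect.

-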
What are the SEO issues we should consider for a plug-in that creates a custom home page based on zip code or GPS location?
We are developing a plug-in that changes the home page relative to a user's location or zip code. We believe this will provide users with a more personalized experience, but we are concerned about how it might affect SEO. We are also wondering if we should partner with one of the SEO plug-in developers; we were thinking about Yoast. Is there another partner that might be better? I would appreciate any feedback people can give.
Intermediate & Advanced SEO | Ron_McCabe
-
Webmaster Tools - Structured Data 100% drop. Many people have the same issue; nobody seems to understand what caused it.
WMT shows a significant drop in structured data markup on June 7th, then a steep incline by June 21st. Now the same thing has happened on August 9th, with no signs of recovery, and we've lost 45% of our search traffic. There are many people with the same problem, and nobody seems to know what caused it. Here are a few links to some forums: #1 Google Groups, #2 Google Groups, #3 Google Groups, #4 70% drop on GWT on June 7 Google SEO News and Discussion forum at WebmasterWorld. On our end we see a 100% drop in breadcrumbs and a 100% drop in hCards, leading to a 45% search traffic drop. Any ideas why this might have happened and how to fix it?
Intermediate & Advanced SEO | PhilippGreitsch
-
Duplicate Content Issue
Why are URLs ending in .html or index.php a problem for search engines? I heard they can create duplicate content, but I have no idea why. Could someone explain why that is? Thank you
Intermediate & Advanced SEO | Ideas-Money-Art
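The usual reason is that example.com/, example.com/index.php, and example.com/index.html can all return the same page at different URLs, which search engines may treat as duplicate content. A minimal sketch of one common fix, a 301 from the duplicate entry points to the root URL (Flask is just an illustrative choice here, not anything the question implies):

from flask import Flask, redirect

app = Flask(__name__)

@app.route("/index.php")
@app.route("/index.html")
def duplicate_entry_points():
    # Consolidate the duplicate URLs onto the canonical root with a 301.
    return redirect("/", code=301)

@app.route("/")
def home():
    return "Home page"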