Robots.txt Question
-
In the past, I blocked a section of my site (i.e. domain.com/store/) by placing the following in my robots.txt file: "Disallow: /store/". Now I would like the store to be indexed and included in the search results. I removed the "Disallow: /store/" line from the robots.txt file, but roughly a week later a Google search for the URL still produces the following meta description in the search results: "A description for this result is not available because of this site's robots.txt – learn more"
Is there anything else I need to do to speed up the process of getting this section of the site indexed?
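For reference, you can sanity-check the before/after rules yourself. This is a minimal sketch using Python's standard urllib.robotparser; the domain.com URL is a placeholder standing in for the real store:

```python
import urllib.robotparser

def can_googlebot_fetch(robots_txt, url):
    """Check whether Googlebot may fetch a URL under the given robots.txt rules."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch("Googlebot", url)

store_url = "http://domain.com/store/index.html"  # placeholder URL

# Old rules: the store section is blocked.
print(can_googlebot_fetch("User-agent: *\nDisallow: /store/", store_url))  # False

# New rules: the Disallow line is gone (an empty Disallow allows everything),
# so the store is crawlable again.
print(can_googlebot_fetch("User-agent: *\nDisallow:", store_url))  # True
```

If the second check prints True, the robots.txt itself is no longer the blocker and the stale snippet is just waiting on a recrawl.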
-
Thanks for the "Good Answer" flag, David! I reformatted and added a bit of extra info to make the process clearer.
Paul
-
To help speed up the process of getting re-included, use the "Fetch as Googlebot" and "Fetch as Bingbot" tools in Webmaster Tools on a page in the previously blocked section - this significantly helps jumpstart indexing. Once you see a successful Fetch status, click "Submit to index" and choose to submit the URL and all linked pages.
In addition:
- make certain your new pages are listed in your sitemap.xml file, then resubmit the sitemap to the search engines using Google and Bing Webmaster Tools
- make sure your own internal pages (especially a few strong ones) link to the newly unblocked content
- see if you can get a couple of good new incoming links to some of the pages in the new section - even if they're nofollow, they can help guide the crawlers to the newly available pages
Essentially, you're trying to give the search engines as many hints as possible that there are new pages to crawl and, hopefully, index.
Paul
[edited for additional clarity]
-
Thanks. I figured this was the case, but was not sure if I was missing any "best practices" about getting the previously blocked URL included faster.
-
David, if I am correct, this is an old entry sitting in the index. Give it another week or so and I am sure this message will vanish. I had the same thing with one of my sites that I took live but forgot to allow in the robots.txt file.
shivun