Robots.txt Help
-
I need help creating a robots.txt file.
Please let me know what to add to the file. Any real or working examples?
-
Michael, from what I can tell, your website is built with WordPress. We typically recommend installing the Yoast SEO plugin and using that, which will help with your robots.txt file. If you need more information, take a look here: https://yoast.com/wordpress-robots-txt-example/
Generally, most of your site won't need to be disallowed in the robots.txt file unless you're using tags and categories on your site. Yoast typically disallows the proper directories for you.
One thing you need to be aware of is that you don't want to disallow your .css or .js files. Many themes nowadays will put those files in your wp-admin folder, which by default typically gets disallowed.
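A common pattern that addresses this (a generic sketch of typical WordPress defaults, not rules generated for your specific site) is to disallow the admin area while explicitly allowing the one endpoint crawlers still need from it:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
```

Note that `Allow` is honored by Google and Bing but isn't part of the original robots.txt standard, so very old crawlers may ignore it.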
-
This is the site I used to really get a good understanding of how to create a robots.txt file: http://www.robotstxt.org/
-
A very basic robots.txt file would look something like the below (note that Disallow takes a path relative to your domain root, not a full URL):
User-agent: *
Sitemap: http://www.yourwebsite.com/sitemap.xml
Disallow: /url-you-dont-want-indexed
Disallow: /another-url-you-dont-want-indexed
Hope that helps!
-
Include your sitemaps, and disallow the pages that you don't want indexed: search pages, login pages, and core admin files.
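One way to sanity-check a finished file before deploying it is Python's standard-library robots.txt parser. This is a sketch only; the example.com URLs and the rules themselves are placeholders mirroring the advice above, not rules from anyone's actual site:

```python
from urllib import robotparser

# Placeholder rules: block search and login pages, allow everything else.
rules = """\
User-agent: *
Disallow: /search/
Disallow: /wp-login.php
Sitemap: http://www.example.com/sitemap.xml
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Blocked paths return False, normal pages return True.
print(parser.can_fetch("*", "http://www.example.com/search/widgets"))   # False
print(parser.can_fetch("*", "http://www.example.com/products/widget"))  # True
```

Running a handful of your real URLs through `can_fetch` is a quick way to catch a rule that blocks more than you intended.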
Related Questions
-
Application & understanding of robots.txt
Hello Moz World! I have been reading up on robots.txt files, and I understand the basics. I am looking for a deeper understanding of when to deploy particular tags, and when a page should be disallowed because it will affect SEO.
I have been working with a software company that has a News & Events page which I don't think should be indexed. It changes every week and is only relevant to potential customers who want to book a demo or attend an event, not so much to search engines. My initial thinking was that I should use a noindex/follow tag on that page, so the page would not be indexed but all the links would be crawled.
I decided to look at some of our competitors' robots.txt files: Smartbear (https://smartbear.com/robots.txt), b2wsoftware (http://www.b2wsoftware.com/robots.txt) and labtech (http://www.labtechsoftware.com/robots.txt). I am still confused about what type of tags I should use, and how to gauge which set of tags is best for certain pages. I figured a static page is pretty much always good to index and follow, as long as it's public, and I should always include a sitemap file. But what about a dynamic page? What about pages that are out of date? Will this help with soft 404s?
This is a long one, but I appreciate all of the expert insight. Thanks ahead of time for all of the awesome responses. Best Regards, Will H.
Intermediate & Advanced SEO | MarketingChimp100 -
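For the News & Events page described above, the noindex/follow idea would be implemented with a meta robots tag in the page head (a generic snippet, not taken from any of the sites mentioned), rather than a robots.txt Disallow. The distinction matters: a page disallowed in robots.txt can't be crawled, so a noindex directive on it would never be seen.

```html
<!-- Keep the page out of the index but let its links be crawled -->
<meta name="robots" content="noindex, follow">
```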
Should all pages on a site be included in either your sitemap or robots.txt?
I don't have any specific scenario here, but I'm just curious, as I fairly often come across sites that have, for example, 20,000 pages but only 1,000 in their sitemap. If only 1,000 of their URLs are ones they want included in their sitemap and indexed, should the others be excluded using robots.txt or a page-level exclusion? Is there a point to having pages that are included in neither and leaving it up to Google to decide?
Intermediate & Advanced SEO | RossFruin1 -
Robots.txt error message in Google Webmaster from a later date than the page was cached, how is that?
I have error messages in Google Webmaster that state that Googlebot encountered errors while attempting to access the robots.txt. The last date that this was reported was on December 25, 2012 (Merry Christmas), but the last cache date was November 16, 2012 (http://webcache.googleusercontent.com/search?q=cache%3Awww.etundra.com/robots.txt&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a). How could I get this error if the page hasn't been cached since November 16, 2012?
Intermediate & Advanced SEO | eTundra0 -
Google Recon Request 4 Failed - This is crazy. HELP!
We run a niche website selling sunglasses at www.aluminumeyewear.com. I've been trying to resolve a 'Failed Quality Guidelines' message since May. My 4th recon request has just failed and I've exhausted all the changes that I believe I need to make. I rely on this site to pay my bills, so obviously I really need to get this resolved. I would be grateful if someone from Google could actually point out what's wrong instead of sending an unhelpful auto response.
Steps taken:
1. Rewrote content as it was a bit thin. Recon failed.
2. Removed old products that couldn't be reached from every page. Recon failed.
3. Submitted a backlink audit and added a 'sitemap' link to the footer. Recon failed.
4. Removed 40+ old URLs that existed from the old Yahoo! store (didn't realize they still existed). Recon failed.
I felt sure #4 would resolve the issue, so I'm feeling pretty low right now that it didn't. That being said, doing a site:aluminumeyewear.com search it looks like I missed one of them, http://www.aluminumeyewear.com/demora/black/, however it just returns a 404, which would seem harsh to penalize me for.
The only other pages that I can think of are some dynamic pages that the store uses to create reviews, such as www.aluminumeyewear.com/product-reviews-add.aspx?product=2 and www.aluminumeyewear.com/resize.aspx. I'm pretty sure that the reviews page is blocked via robots.txt. The resize.aspx is a blank page with JavaScript, as it's needed by the PowerReviews Express system to work, and many merchants use that platform, so it would be hard to think it's that.
Thanks in advance.
Intermediate & Advanced SEO | smckenzie750 -
I can't help but think something is wrong with my SEO
So we re-launched our site about a month ago, and ever since we've seen a dramatic drop in search results, probably due to some errors that were made when changing servers and permalink structure. But I can't help but think something else is at play here. When we write something, I can check 24 hours later by searching the title verbatim, but we don't always show up in the SERPs. In fact, I looked at a post today, and the meta description showing is not the same, but when I check the source code, it's right. What shows up in Google: http://d.pr/i/jGJg What's actually in the source code: http://d.pr/i/p4s8 Why is this happening? Website is The Tech Block
Intermediate & Advanced SEO | ttb0 -
Will an RSS feed help new product get indexed? How to create one for product?
Hi, I've read that creating an RSS feed for one of our ecommerce sites will help the products get indexed faster. Currently it takes Google 4-5 days to index our new products, and we want to speed that up. Will an RSS feed of our new products help? How do you create an RSS feed for this? Our blog gets indexed within minutes, but our main website takes 4 days. Help!
Intermediate & Advanced SEO | xoffie0 -
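On the "how to create one" part of the question above: an RSS 2.0 feed is just XML, so a minimal one can be generated with Python's standard library. This is a sketch; the store name, URLs, and product data are made-up placeholders:

```python
import xml.etree.ElementTree as ET

def build_product_feed(title, link, products):
    """Build a minimal RSS 2.0 feed from a list of product dicts."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = "Newest products"
    for product in products:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = product["title"]
        ET.SubElement(item, "link").text = product["link"]
    return ET.tostring(rss, encoding="unicode")

# Placeholder data, not a real store.
feed = build_product_feed(
    "Example Store - New Products",
    "http://www.example.com/",
    [{"title": "New Widget", "link": "http://www.example.com/new-widget"}],
)
print(feed)
```

Google accepts RSS and Atom feeds as sitemap submissions in Search Console, which is one common way to surface new URLs quickly, though whether it actually speeds indexing varies by site.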
Corporate pages and SEO help
We own and operate more than two dozen educational related sites. The business team is attempting to standardize some parts of our site hierarchy so that our sitemap.php, about.php, privacy.php and contact.php are all at the root directory. Our sitemap.php is generated by our sitemap.xml files, which are generated from our URLlist.txt files. I need to provide some feedback on this initiative. I'm worried about adding more stand-alone pages to our root directory and as part of a separate optimization in the future I was planning to suggest we group the "privacy", "about" and "contact" pages in a separate folder. We generally try to put our most important pages/directories for SEO in the root as our homepages pass a lot of link juice and have high authority. We do not invest SEO time into optimizing these pages as they're not pages we're trying to rank for, and I've already been looking into even no-following all links to them from our footer, sitemap, etc. I know that adding these "corporate" pages to a site are usually a standard part of the design process but is there any SEO benefit to having them at the root? And along the same lines, is there any SEO harm to having unimportant pages at the root? What do you guys think out there in Moz land?
Intermediate & Advanced SEO | Eric_edvisors0 -
Does It Really Matter to Restrict Dynamic URLs by Robots.txt?
Today I was checking Google Webmaster Tools and found that 117 dynamic URLs are restricted by robots.txt. I have added the following syntax to my robots.txt (you can get more of an idea from the Excel sheet):
#Dynamic URLs
Disallow: /?osCsid
Disallow: /?q=
Disallow: /?dir=
Disallow: /?p=
Disallow: /*?limit=
Disallow: /*review-form
I have concerns about the following kinds of pages.
Sorting by specification: http://www.vistastores.com/table-lamps?dir=asc&order=name
Items per page: http://www.vistastores.com/table-lamps?dir=asc&limit=60&order=name
Numbered product pages: http://www.vistastores.com/table-lamps?p=2
Will it hurt the organic performance of my category pages?
Intermediate & Advanced SEO | CommercePundit0
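Worth noting for the question above: wildcard rules like `Disallow: /*?limit=` are a Googlebot extension, not part of the original robots.txt standard. A rough sketch of the matching semantics (my own illustration; only the patterns are taken from the rules in the question):

```python
import re

def robots_match(pattern, path):
    """Googlebot-style robots.txt matching: '*' matches any run of
    characters, a trailing '$' anchors the pattern to the end of the
    URL; everything else is a literal prefix match from the root."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None

# The wildcard rule matches '?limit=' anywhere after the leading '/':
print(robots_match("/*?limit=", "/table-lamps?limit=60"))  # True
# Without a '*', a rule is a plain prefix match from the root:
print(robots_match("/?q=", "/?q=table+lamps"))             # True
print(robots_match("/?q=", "/table-lamps?q=black"))        # False
```

The last case shows why prefix rules like `Disallow: /?q=` only catch query strings on the home page; category URLs such as `/table-lamps?q=...` would need a wildcard rule to be blocked.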