Robots.txt Help
-
I need help creating a robots.txt file.
Please let me know what to add to the file. Do you have any real, working example?
-
Michael, from what I can tell, your website is built on WordPress. We typically recommend installing the Yoast SEO plugin and using that, which will help with your robots.txt file. If you need more information, take a look here: https://yoast.com/wordpress-robots-txt-example/
Generally, most of your site won't need to be disallowed in the robots.txt file unless you're using tags and categories on your site. Yoast typically disallows the proper directories for you.
One thing you need to be aware of is that you don't want to disallow your .css or .js files. Many themes nowadays put those files in your wp-admin folder, which by default typically gets disallowed.
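As a rough sketch, a WordPress-oriented robots.txt handling that case might look like the below, blocking the admin area while still letting crawlers fetch the files they need from it (the Allow directive and the wildcards are extensions supported by the major engines, and the sitemap URL is a placeholder):
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Allow: /wp-admin/*.css
Allow: /wp-admin/*.js
Sitemap: http://www.yourwebsite.com/sitemap.xml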
-
This is the site I used to really get a good understanding of how to create a robots.txt file: http://www.robotstxt.org/
-
A very basic robots.txt file would look something like the below. Note that Disallow rules take paths relative to the site root; a full URL only belongs in the Sitemap line:
User-agent: *
Disallow: /url-you-dont-want-indexed
Disallow: /another-url-you-dont-want-indexed
Sitemap: http://www.yourwebsite.com/sitemap.xml
Hope that helps
-
Include your sitemaps, and disallow pages that you don't want indexed: search pages, login pages, and core admin files.
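Put together, a minimal sketch of that advice (all paths here are placeholders for whatever your site actually uses):
User-agent: *
Disallow: /search
Disallow: /login
Disallow: /wp-admin/
Sitemap: http://www.yourwebsite.com/sitemap.xml
Keep in mind that robots.txt controls crawling rather than indexing; a blocked URL can still show up in results if other sites link to it.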
Related Questions
-
Site moved. Unable to index page: Noindex detected in robots meta tag?!
Intermediate & Advanced SEO | bgvsiteadmin
Hope someone can shed some light on this: We moved our smaller site into the main site (different domains). The smaller site that was moved: https://www.bluegreenrentals.com. Directory where the site was moved: https://www.bluegreenvacations.com/rentals. Each page from the old site was 301 redirected to the appropriate page under /rentals, but we are seeing a significant drop in rankings and traffic, and I am unable to request a change of address in Google Search Console (a separate issue that I can elaborate on). Lots of the new (301 redirect) destination pages are not indexed. When inspected, I get the message: "Indexing allowed? No: 'noindex' detected in 'robots' meta tag". All pages are set to index/follow and there are no restrictions in robots.txt. Here is an example URL: https://www.bluegreenvacations.com/rentals/resorts/colorado/innsbruck-aspen/. Can someone take a look and share an opinion on this issue? Thank you!
#! (hashbang) check help needed
Intermediate & Advanced SEO | raido
Does anybody have experience using hashbangs? We tried one to solve an indexation problem, and I'm not fully sure we are using the right solution now (the developers implemented it using the AJAX crawling FAQ and guide as their information source). One of our clients has a problem: in their e-shop categories, search engines aren't able to index all the products. In this example category, there is a "Näita kõiki (38)" ("Show all (38)") link that shows all of the category's products to users, but as I understand it, search engines weren't able to index it as /et#/activeTab=tab02 because of the #. Now #! (hashbang) is used, so it is /et#!/activeTab=tab02. Is this the correct solution? The example category URL also now differs for better indexation:
/et#!/
../et
And when the tabs "TOP ja uued" ("TOP and new") and "Näita kõiki" are activated/clicked:
/et#/activeTab=tab01
/et#/activeTab=tab02
I tried to fetch it in Google Webmaster Tools but it seems it didn't work. I would appreciate it if anybody could check this solution.
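For reference, under Google's old AJAX crawling scheme (since deprecated), a #! URL was mapped to an _escaped_fragment_ URL that the server had to answer with a crawlable HTML snapshot, roughly like this:
Pretty URL:  /et#!/activeTab=tab02
Crawled as:  /et?_escaped_fragment_=/activeTab=tab02
If the server doesn't return the full product list at that _escaped_fragment_ address, switching from # to #! by itself won't get the products indexed.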
Should all pages on a site be included in either your sitemap or robots.txt?
Intermediate & Advanced SEO | RossFruin
I don't have a specific scenario here, but I'm curious, as I fairly often come across sites that have, for example, 20,000 pages but only 1,000 in their sitemap. If they only want 1,000 of their URLs included in their sitemap and indexed, should the others be excluded using robots.txt or a page-level exclusion? Is there a point to having pages that are included in neither and leaving it up to Google to decide?
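As an aside, the page-level exclusion mentioned above is typically a robots meta tag in the page head, or the equivalent HTTP response header, along these lines:
<meta name="robots" content="noindex, follow">
X-Robots-Tag: noindex
Unlike a robots.txt block, these let the page be crawled but keep it out of the index.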
Is my landing page "over-optimized"? Please help
Intermediate & Advanced SEO | jarik
Hello out there. My websites www.painterdublin.com and www.tilers-dublin.com were heavily hit by the Google Panda update on 27.9.2012 and by the EMD update a few days after. I lost about 70% of the traffic, mainly from combinations of the keywords in my domain names (painter dublin and tilers dublin), and never managed to recover. I am wondering if I should also concentrate on rewriting the content of both home landing pages in terms of keyword density. Do you think my content is over-optimized for my main keywords (painter dublin, tilers dublin)? What is the correct usage? Is there any tool to guide me? I am aware I use those terms quite often, but I don't want to start deleting them before I know the right way to do it. Is there anybody willing to have a look at my sites and give me advice, please?
Kind regards, Jaro
How to Disallow Tag Pages With Robots.txt
Intermediate & Advanced SEO | monster99
Hi, I have a site I'm dealing with that has tag pages, for instance http://www.domain.com/news/?tag=choice. How can I exclude these tag pages (about 20+ are being crawled and indexed by the search engines) with robots.txt? They're also sometimes created dynamically, so I want something that automatically excludes tag pages from being crawled and indexed. Any suggestions? Cheers, Mark
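For what it's worth, the major engines support wildcards in robots.txt, so a pattern along these lines would catch dynamically created tag URLs (the first rule is prefix-matched to the example path above; the second catches a tag parameter anywhere on the site):
User-agent: *
Disallow: /news/?tag=
Disallow: /*?tag=
Note that this blocks crawling going forward; pages already indexed may need a noindex tag or a removal request to drop out.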
Adding Meta Language tag to XHTML site - coding help needed
Intermediate & Advanced SEO | mlm12
I've had my site dinged by Google and feel it's likely down to several quality issues, which I'm hunting down. One of Bing's webmaster SEO tools said my XHTML pages (which were built in 2007) are missing a meta language declaration and suggested adding a tag in the head or on the html tag. Wanting to "not mess anything up" and validate correctly, I read on W3C's site: "Always add a lang attribute to the html tag to set the default language of your page. If this is XHTML 1.x you should also use the xml:lang attribute (with the same value). Do not use the meta element with http-equiv set to Content-Language." My current html leads like:
QUESTION: I'm confused about how to add the meta language to my website given my current coding, as I'm not a coder. Can you suggest whether I should add this content-language info, and if so, what is the best way to do it, considering valid W3C markup for my document type? Thank you!
Michelle
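Following the W3C guidance quoted above, the fix is a pair of attributes on the opening html tag rather than a meta element; for an English-language XHTML 1.x page it would look something like this (the xmlns value is the standard XHTML namespace):
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">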
Why specify robots instead of googlebot for a Panda affected site?
Intermediate & Advanced SEO | nicole.healthline
Daniweb is the poster child for sites that have recovered from Panda. I know one strategy she mentioned was de-indexing all of her tagged content, for example: http://www.daniweb.com/tags/database. Why do you think more Panda-affected sites aren't specifying 'googlebot' rather than 'robots', so they can keep capturing traffic from Bing & Yahoo?
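For context, the difference in question is just the name attribute on the robots meta tag; the first line below applies to all crawlers, while the second applies only to Google:
<meta name="robots" content="noindex">
<meta name="googlebot" content="noindex">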
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
Intermediate & Advanced SEO | kurus
I run a quality vertical search engine. About six months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place on our search results to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc.). The idea was to preserve crawl budget and speed the rate at which Google could get our millions of pages back into the index by focusing attention and resources on the right pages. The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since then. But, doing a little SEOmoz reading this morning, I came to wonder whether that approach may now be harming us:
http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Specifically, I'm concerned that a) we're blocking the flow of link juice, and b) by preventing Google from crawling the full depth of our search results (i.e. pages beyond page 1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement and eliminate inadvertently low-quality pages, but we have yet to find 'the fix'. Thoughts?
Kurus
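To make the setup concrete, restrictions like the ones described usually look something like the below in robots.txt (the parameter names here are hypothetical, since the actual ones weren't given):
User-agent: *
Disallow: /*?page=
Disallow: /*?sort=
One trade-off worth noting: blocked URLs can still accumulate links, but since the crawler never fetches them, any equity they hold doesn't flow onward through their internal links.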