Robots.txt question
-
What is this robots.txt telling the search engines?
User-agent: *
Disallow: /stats/
-
Oh - and it won't affect the domain negatively when cleaning up your site directories via robots.txt. It's actually better, as I explained below.
-
Hey Mark,
It's good practice to disallow access to any folder or content you don't want indexed, as well as anything with security implications (logins, databases, etc.).
It will also keep the domain's most important pages in front of the search spiders' eyes while keeping poor content out of the index. This helps the domain, at a site-authority level, provide valuable content and information to users.
Lower-ranking pages can drag the domain down in search results (Google and Bing have both attested to this), as the engines want businesses to focus on high-value content, which leads to a better user experience.
Cheers!
-
Thanks- wanted to make sure all was copacetic there. I'm assuming that it's good practice to disallow access to stats and won't impact the site negatively?
-
Assuming that this is the entire contents of this file: It says that no robot (search engine spider, other crawler, etc.) should visit or index anything in the /stats/ directory or any directories inside of it.
More info available here: http://www.robotstxt.org/robotstxt.html
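If you want to double-check how a standards-following crawler would interpret those two lines, Python's standard-library robot parser can evaluate specific paths against them (example.com is just a placeholder domain here):

```python
from urllib.robotparser import RobotFileParser

# Feed the two-line robots.txt from the question directly to the
# parser instead of fetching it over HTTP.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /stats/",
])

# Anything under /stats/ is off-limits to all user agents...
print(rp.can_fetch("*", "http://example.com/stats/report.html"))  # False
# ...while the rest of the site remains crawlable.
print(rp.can_fetch("*", "http://example.com/index.html"))         # True
```

Note that this parser implements the original exclusion standard, so it's a good sanity check for simple prefix rules like this one, but not for Google-specific extensions such as `*` wildcards inside paths.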
Related Questions
-
Question about a Screaming Frog crawling issue
Hello, I have a very peculiar question about an issue I'm having when working on a website. It's a WordPress site and I'm using a generic plugin for title and meta updates. When I go to crawl the site through Screaming Frog, however, there seems to be a hard-coded title tag that I can't find anywhere, and the plugin updates don't get crawled. If anyone has any suggestions, that'd be great. Thanks!
Technical SEO | KyleSennikoff
-
Canonical question for cross-listed product listings
We have products that are listed across multiple categories. This results in multiple URLs for the PDP, for example: mystore.com/shirts/shirt-101.html mystore.com/shirts/pink-shirts/shirt-101.html They make use of the canonical tag and point back to only one product listing URL, however Google has indexed both URLs in some cases. Has anyone else run up against this, and does anyone have advice on how this should be handled?
Technical SEO | LivDetrick
-
Question re: spammy internal links on site
Hi all, I have a blog (managed via WordPress) that seems to have built spammy internal links that were not created by us on our end. See "site:blog.execu-search.com" in Google search results. It seems to be a pharma hack that's creating spammy links on our blog to random offers re: viagra, paxil, xenical, etc. When viewing "Security Issues", GSC doesn't state that the site has been infected, and it seems like the site is in good health according to Google. Will anyone be able to provide any insight on the best necessary steps to take to remove these links and to run a check on my blog to see if it is in fact infected? Should all spammy internal links be disavowed? Here are a couple of my findings: When looking at "internal links" in GSC, I see a few mentions of these spammy links. When running a site crawl in Moz, I don't see any mention of these spammy links. The spammy links are leading to a 404 page. However, it appears some of the cached versions in Google are still displaying the page. Please lmk. Any insight would be much appreciated. Thanks all! Best,
Sung
Technical SEO | hdeg
-
301 redirect file question
Hi Everyone, I am creating a list of 301 redirects to give to a developer to put into Magento. I used Screaming Frog to crawl the site, but I have noticed that all of their URLs 302 to another page. I am wondering if I should 301 the first URL to the URL on the new site, or the second. I am thinking the first, but would love some confirmation. Thank you!
Technical SEO | mrbobland
-
Robots.txt - What is the correct syntax?
Hello everyone, I have the following link: http://mywebshop.dk/index.php?option=com_redshop&view=send_friend&pid=39&tmpl=component&Itemid=167 I want to prevent Google from indexing everything that is related to "view=send_friend". The problem is that it's giving me duplicate content, and the content of the links has no SEO value of any sort. My problem is how I disallow it correctly via robots.txt. I tried this syntax: Disallow: /view=send_friend/ However, after doing a crawl on request, the 200+ duplicate links that contain view=send_friend are still present in the CSV crawl report. What is the correct syntax if I want to prevent Google from indexing everything that is related to this kind of link?
Technical SEO | teleman
-
RegEx help needed for robots.txt potential conflict
I've created a robots.txt file for a new Magento install and used an existing sitemap that was on the Magento help forums, but the trouble is I can't decipher something. It seems that I am allowing and disallowing access to the same expression for pagination. My robots.txt file (and a lot of other Magento sitemaps, it seems) includes both: Allow: /*?p= and Disallow: /?p=& I've searched for help on RegEx and I can't see what "&" does, but it seems to me that I'm allowing crawler access to all pagination URLs, but then possibly disallowing access to all pagination URLs that include anything other than just the page number? I've looked at several resources and there is practically no reference to what "&" does... Can anyone shed any light on this, to ensure I am allowing suitable access to a shop? Thanks in advance for any assistance
Technical SEO | MSTJames
-
Should I block robots from URLs containing query strings?
I'm about to block off all URLs that have a query string using robots.txt. They're mostly URLs with coremetrics tags and other referrer info. I figured that search engines don't need to see these as they're always better off with the original URL. Might there be any downside to this that I need to consider? Appreciate your help / experiences on this one. Thanks Jenni
Technical SEO | ShearingsGroup
-
Blocking other engines in robots.txt
If your primary target of business is not in China, is there any benefit to blocking Chinese search robots in robots.txt?
Technical SEO | Romancing