Robots.txt file question? Never seen this directive before
-
Hey Everyone!
Perhaps someone can help me. I came across this directive in the robots.txt file of our Canadian corporate domain. I looked around online but can't seem to find a definitive answer, only slightly relevant ones.
The line is as follows:
Disallow: /*?*
I'm guessing this might have something to do with blocking PHP query-string searches on the site. It might also have something to do with blocking subdomains, but the "?" puzzles me.
Any help would be greatly appreciated!
Thanks, Rob
-
I don't think this is correct.
The * and ? here are an attempt at using regex-style matching in the robots file, which I don't think works.
Further, if it were a properly formed regex, the ? would need to be escaped (\?).
* is a special character on the User-agent line, meaning all crawlers. For the Disallow line, I believe you have to use a specific directory or page.
http://www.robotstxt.org/robotstxt.html
I could be wrong, but the info on that site matches my understanding from the past too.
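To illustrate that point: under the original robotstxt.org standard, a Disallow value is just a plain path prefix, with no wildcard support at all. A minimal sketch in Python (the function name and example paths are just for illustration):

```python
def original_spec_blocks(disallow_path: str, url_path: str) -> bool:
    """Original 1994 robots.txt semantics: Disallow is a plain
    path prefix, with no wildcard support at all."""
    return url_path.startswith(disallow_path)

# Under the original rules, "/*?*" only blocks paths that literally
# begin with the characters /*?* -- which real URLs never do.
print(original_spec_blocks("/*?*", "/products.php?type=widgets"))  # False
print(original_spec_blocks("/private/", "/private/report.html"))   # True
```

Major engines like Google later added * and $ as extensions, which is the only reason a rule like Disallow: /*?* does anything in practice.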
-
It depends on how your site is structured.
For example if you have a page at
http://www.yourdomain.com/products.php
and this shows different things based on the parameter, like:
http://www.yourdomain.com/products.php?type=widgets
you will want to get rid of this line in your robots.txt.
However, if the parameters don't change the content on the page, you can leave it in.
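Under the wildcard extension that major crawlers support, a pattern like /*?* is matched roughly like this. This is an illustrative approximation, not Googlebot's actual implementation; the helper name and test URLs are made up:

```python
import re

def robots_pattern_to_regex(pattern: str) -> re.Pattern:
    """Convert a robots.txt path pattern (wildcard extension) into an
    anchored regex. '*' matches any run of characters; a trailing '$'
    anchors the match at the end of the URL path."""
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile("^" + regex + ("$" if anchored else ""))

rule = robots_pattern_to_regex("/*?*")

print(bool(rule.match("/products.php")))               # False: no query string
print(bool(rule.match("/products.php?type=widgets")))  # True: has a '?'
```

So the rule blocks crawling of any URL containing a ?, i.e. every parameterized version of products.php, while leaving the bare page crawlable.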
-
Thanks Ryan and Ryan! I'm just unfamiliar with this directive in the robots file, and I'm still getting settled into the company (5 weeks), so I am still learning the site's structure and architecture. With it all being new to me, and with the limitations I'm seeing on the CMS side, I was wondering if this might have been causing crawl issues for Bing and/or Yahoo. I'm trying to gauge where we might be experiencing problems with the site's crawling.
-
It's not a bad idea in the robots.txt, but unless you are 100% confident that you won't block something that you really want, I would consider just handling unwanted parameters and pages through the new Google Webmaster Tools URL parameter handling toolset. That way you have more control over which ones do and don't get blocked.
-
So, for this parameter, should I keep it in the robots file?
-
It's preventing spiders from crawling pages with parameters in the URL. For example, when you search on Google you'll see a URL like so:
http://www.google.com/search?q=seo
This passes the parameter of q with a value of 'seo' to the page at google.com for it to work its magic with. This is almost definitely a good thing, unless the only way to access some content on your site is via URL parameters.
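As a concrete illustration of what a crawler sees, Python's standard library can split that Google URL into its path and parameters:

```python
from urllib.parse import urlsplit, parse_qs

url = "http://www.google.com/search?q=seo"
parts = urlsplit(url)

print(parts.path)             # /search
print(parse_qs(parts.query))  # {'q': ['seo']}

# Disallow: /*?* blocks any URL whose path is followed by a '?',
# i.e. anything where the query component is non-empty.
print(bool(parts.query))      # True
```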