Google robots.txt test - not picking up syntax errors?
-
I just ran a robots.txt file through the "Google robots.txt Tester" as there was some unusual syntax in the file that didn't make any sense to me, e.g.:
/url/?*
/url/?
/url/*
and so on. I would use /url/? and not /url/?*, for example, and what is the trailing * even for?
Yet "Google robots.txt Tester" did not highlight the issues...
I then fed the robots.txt file through http://www.searchenginepromotionhelp.com/m/robots-text-tester/robots-checker.php and that tool actually picked up my concerns.
Can anybody explain why Google didn't - or perhaps it isn't supposed to pick up such errors?
Thanks, Luke
-
Many thanks Beau - much appreciated.
-
Hey Luke,
It appears there's a plausible use case for each of the three examples. Let's cover each:
- For /url/?* , this matches a URL with a trailing slash followed by a query string.
- With /url/? , this covers the same ground as the above (the trailing * is redundant, since a robots.txt rule already matches anything that follows it) and would plausibly block product pages that generate query strings, similar to the H&M example. In essence, only allowing the clean product page to be seen.
- /url/* , well, that's just anything and everything after the trailing slash.
I guess the question you should ask yourself is "Is this the best approach for the issue?"
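If you want to sanity-check patterns like these yourself, here is a rough sketch of Google-style wildcard matching in Python. This is my own illustration, not Google's actual implementation: per Google's documented rules, * matches any run of characters, a trailing $ anchors the end, and everything else is a literal prefix match. It shows why the tester raised no error: /url/?* and /url/? behave identically.

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Match a robots.txt path pattern against a URL path using
    Google-style wildcard rules: '*' matches any run of characters,
    a trailing '$' anchors the end, everything else is literal,
    and matching always starts at the beginning of the path."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.match(regex + ("$" if anchored else ""), path) is not None

# The three rules from the question:
print(robots_pattern_matches("/url/?*", "/url/?colour=red"))     # True
print(robots_pattern_matches("/url/?", "/url/?colour=red"))      # True: the trailing * adds nothing
print(robots_pattern_matches("/url/?", "/url/"))                 # False: no query string
print(robots_pattern_matches("/url/*", "/url/anything-at-all"))  # True: everything under /url/
```

Since both /url/?* and /url/? are syntactically valid, a validator has nothing to flag; the redundancy is a style issue, not an error.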
-
Related Questions
-
Google Detecting Real Page as Soft 404 Error
We migrated our site from HTTP to HTTPS in September 2017, but after the migration I noticed soft 404 errors gradually increasing. Example of a soft 404 page: https://bit.ly/2xBjy4J. These soft 404 pages are real pages, yet Google still detects them as soft 404s. When I check the Google cache, it shows me a cached copy of the HTTP page. We've tried all the solutions we can think of but are unable to figure out why Google is still indexing the HTTP pages and detecting the HTTPS pages as soft 404 errors. Can someone please suggest a solution or a possible cause for this issue, or has anyone had the same issue in the past?
Intermediate & Advanced SEO | bheard
-
Google has penalized me for a keyword and removed it from Google. Does anyone know how long the penalty lasts?
I bought some links from Fiverr. I was ranking 9th for this keyword, which has 1,200 searches; after the Fiverr order, it disappeared from Google, and it's been more than 10 days. I guess this is a penalty. Does anyone know how long a penalty like this lasts, days or months? I didn't get any messages in Webmaster Tools. This is the gig: https://www.fiverr.com/carissa30/do-20-unique-domains-high-tf-and-cf-flow-backlinks-high-da?source=Order+page+gig+link&funnel=a7b5fa4f-8c0a-4c3e-98a3-74112b658c7f
Intermediate & Advanced SEO | alexmuller87
-
Is this Schema? Did Test, and Shows Nothing
Hi Everyone, If you look at the first listing on https://www.google.com/search?q=tanzanite+minerals&ie=utf-8&oe=utf-8, you will see this under the listing:
Mineral Class: Zoisite
Luster: Vitreous
Chemical Formula: Ca2Al3(SiO4)3(OH)
Crystal System: Orthorhombic
I tried to run a test against the page and it shows no schema. How are they doing this? We would love to provide the same information. Thank you!
Intermediate & Advanced SEO | vetofunk
-
Robots.txt Blocking - Best Practices
Hi All, We have a web provider who's not willing to remove the wildcard line of code blocking all agents from crawling our client's site (User-agent: *, Disallow: /). They have other lines allowing certain bots to crawl the site, but we're wondering if they're missing out on organic traffic by having this main blocking line. It's also a pain because we're unable to set up Moz Pro, potentially because of this first line. We've researched and haven't found a ton of best practices regarding blocking all bots, then allowing certain ones. What do you think is a best practice for these files? Thanks!

User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
Crawl-delay: 5

User-agent: Yahoo-slurp
Disallow:

User-agent: bingbot
Disallow:

User-agent: rogerbot
Disallow:

User-agent: *
Crawl-delay: 5
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp
Intermediate & Advanced SEO | ReunionMarketing
-
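On the precedence point in the question above: crawlers pick the single most specific user-agent group and ignore the rest, so a blanket "User-agent: *" block does not apply to bots that have their own group. A quick way to sanity-check this is Python's stdlib parser, whose group selection approximates what real crawlers do (a rough sketch with a simplified version of that file, not a guarantee of how every bot behaves):

```python
from urllib.robotparser import RobotFileParser

# Simplified version of the robots.txt in question: a blanket block,
# plus a specific group that allows Googlebot everything.
robots_txt = """\
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot matches its own group, so the blanket block is ignored.
print(rp.can_fetch("Googlebot", "https://example.com/any-page"))     # True
# A bot with no specific group falls back to the 'User-agent: *' rules.
print(rp.can_fetch("SomeOtherBot", "https://example.com/any-page"))  # False
```

So named bots like Googlebot and bingbot can still crawl the site, but every crawler not listed by name is locked out, which is likely why Moz Pro's rogerbot only works because it has its own group.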
Google News sitemap keywords
My company is a theater news and reviews site. We're building a Google News sitemap, and Google suggests some recommended keywords we can use with their <keywords> tag: https://support.google.com/news/publisher/answer/116037. Our writers also tag their stories with relevant keywords. What should we populate the <keywords> tag with? We were thinking we'd automatically populate it with author-added tags, in addition to one or more of the recommended ones suggested by Google, such as Theater, Arts, and Culture (all of our articles are related to these topics). Finally, many of our articles are about, say, celebrities. An author may tag an article with 'Bryan Cranston,' and when this is the case we're considering also tagging it with the 'Celebrities' tag. Are all or any of these worthwhile?
Intermediate & Advanced SEO | TheaterMania
-
Google Hummingbird Update - Any Changes?
Google has updated with the new Hummingbird algorithm. Did you see any effects? And since they are not revealing technically how it works, what's your opinion?
Intermediate & Advanced SEO | Esaky
-
Why are these results being showed as blocked by robots.txt?
If you perform this search, you'll see all m. results are blocked by robots.txt: http://goo.gl/PRrlI, but when I reviewed the robots.txt file: http://goo.gl/Hly28, I didn't see anything specifying to block crawlers from these pages. Any ideas why these are showing as blocked?
Intermediate & Advanced SEO | nicole.healthline
-
New server update + wrong robots.txt = lost SERP rankings
Over the weekend, we moved our store to a new server. Before the switch, we had a robots.txt file on the new server that disallowed its contents from being indexed (we didn't want duplicate pages from both old and new servers). When we finally made the switch, we somehow forgot to remove that robots.txt file, so the new pages weren't indexed. We quickly put our good robots.txt in place and submitted a request for a re-crawl of the site. The problem is that many of our search rankings have changed. We were ranking #2 for some keywords, and now we're not showing up at all. Is there anything we can do? Google Webmaster Tools says that the next crawl could take weeks! Any suggestions will be much appreciated.
Intermediate & Advanced SEO | 9Studios