Is robots.txt case sensitive? Please advise
-
Hi, I have seen a few URLs listed under HTML Improvements as duplicate titles.
Can I disallow one of the URLs below in robots.txt?
/store/Solar-Home-UPS-1KV-System/75652
/store/solar-home-ups-1kv-system/75652
If I add this:
Disallow: /store/Solar-Home-UPS-1KV-System/75652
will search engines still crawl /store/solar-home-ups-1kv-system/75652?
I'm a little confused about case sensitivity. Please advise whether to go ahead with the robots.txt change or not.
-
Hi, there is already some equity on the duplicate links — what is going to happen to it?
-
Actually, you have just one option to keep them out of the index - the second one. The first will still keep them in the index if Google can find them. I currently have roughly 27k URLs indexed that were blocked via robots.txt from the start (generated with a time-based parameter; yeah: ouch).
Those results do not usually appear in "normal" search but can be forced (currently you may try site:grimoires.de inurl:fakechecknr and show skipped results to see the effect). So basically I'd advise against using robots.txt - it does not prevent indexing, only the visiting/reading of the page.
Regards
Nico
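For reference, the on-page noindex option Nico is pointing at is a robots meta tag placed in each duplicate page's head. A minimal, generic sketch:

```html
<!-- In the <head> of the duplicate page: ask crawlers to drop it from the
     index but still follow its links -->
<meta name="robots" content="noindex, follow">
```

Note that crawlers must be able to fetch the page to see this tag, so the page must not also be blocked in robots.txt.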
-
Hi Abdul,
Yes, it is case sensitive.
Remember that you should not have many pages like that.
The first thing to do is eliminate those duplicate pages. If you can't eliminate them, you have two ways to ask Googlebot not to index them:
1- By robots.txt with a 'Disallow:' instruction
2- By a robots meta tag with a 'noindex' value in the page's head section.
Hope it helps.
GR
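To see the case sensitivity concretely, here is a small sketch using Python's standard-library robots.txt parser with the two URLs from the question (example.com stands in for the real host):

```python
from urllib import robotparser

# Parse a robots.txt containing only the mixed-case Disallow rule
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /store/Solar-Home-UPS-1KV-System/75652",
])

# The mixed-case URL matches the rule, so crawling it is disallowed...
blocked = rp.can_fetch("*", "https://example.com/store/Solar-Home-UPS-1KV-System/75652")
# ...but the lower-case variant does NOT match the rule, so it stays crawlable
allowed = rp.can_fetch("*", "https://example.com/store/solar-home-ups-1kv-system/75652")

print(blocked)  # False
print(allowed)  # True
```

So a Disallow rule written in one case leaves the other-case URL free to be crawled — which is exactly why robots.txt alone won't solve the duplicate-title problem here.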
Related Questions
-
Upper and lower case URLs coming up as duplicate content
Hey guys and gals, I'm having a frustrating time with an issue. Our site has around 10 pages that are coming up as duplicate content/duplicate titles. I'm not sure what I can do to fix this. I was going to attempt to 301 redirect the upper case to lower, but I'm worried how this will affect our SEO. Can anyone offer some insight on what I should be doing? Update: What I'm trying to figure out is what I should do for our URLs. For example, when I run an audit I'm getting two different pages: aaa.com/BusinessAgreement.com and also aaa.com/businessagreement.com. We don't have two pages, but for some reason Google thinks we do.
Intermediate & Advanced SEO | | davidmac1 -
Pages blocked by robots in Webmaster Tools
A mistake was made in the software. How can I solve the problem quickly? Help me.
Intermediate & Advanced SEO | | mihoreis0 -
Our parent company has included their sitemap links in our robots.txt file - will that have an impact on the way our site is crawled?
Our parent company has included their sitemap links in our robots.txt file. All of their sitemap links are on a different domain and I'm wondering if this will have any impact on our searchability or potential rankings.
Intermediate & Advanced SEO | | tsmith1310 -
95% of organic traffic lands on my homepage, despite having a 250-page website with an "SEO optimized" hierarchical structure. Any suggestion as to what might be happening?
Challenging issue. All the "usual suspects" have been ruled out: all pages are included in Google's index, no Google penalties, metas optimized, keywords segregated by pages/clusters of pages to avoid cannibalization... BUT we know we are missing something. The website is www.e-florex.com, an e-commerce site based on Magento. Any ideas you think are worth exploring? Thanks in advance for your help. Juan
Intermediate & Advanced SEO | | juanmarn0 -
Recovering from robots.txt error
Hello, A client of mine is going through a bit of a crisis. A developer (at their end) added Disallow: / to the robots.txt file. Luckily the SEOMoz crawl ran a couple of days after this happened and alerted me to the error. The robots.txt file was quickly updated, but the client has found the vast majority of their rankings have gone. It took a further 5 days for GWMT to register that the robots.txt file had been updated, and since then we have used "Fetch as Google" and "Submit URL and linked pages" in GWMT. In GWMT it is still showing that the vast majority of pages are blocked in the "Blocked URLs" section, although the robots.txt file below it is now OK. I guess what I want to ask is: What else can we do to recover these rankings quickly? What time scales can we expect for recovery? More importantly, has anyone had any experience with this sort of situation, and is full recovery normal? Thanks in advance!
Intermediate & Advanced SEO | | RikkiD220 -
Issue with Robots.txt file blocking meta description
Hi, Can you please tell me why the following error is showing up in the SERPs for a website that was just re-launched 7 days ago with new pages (301 redirects are built in)? "A description for this result is not available because of this site's robots.txt." Once we noticed it yesterday, we made some changes to the file and reduced the number of items in the disallow list. Here is the current robots.txt file:

```
# XML Sitemap & Google News Feeds version 4.2 - http://status301.net/wordpress-plugins/xml-sitemap-feed/
Sitemap: http://www.website.com/sitemap.xml
Sitemap: http://www.website.com/sitemap-news.xml
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
```

Other notes: the site was developed in WordPress and uses the following plugins: WooCommerce, All-in-One SEO Pack, Google Analytics for WordPress, and XML Sitemap & Google News Feeds. Currently, in the SERPs, it keeps jumping back and forth between showing the meta description for the www domain and showing the error message above. Originally, WP Super Cache was installed; it has since been deactivated, removed from wp-config.php, and deleted permanently. One other thing to note: we noticed yesterday that there was an old XML sitemap still on file, which we have since removed and replaced with a new one resubmitted via WMT. Also, the old pages are still showing up in the SERPs. Could it just be that this will take time to review the new sitemap and re-index the new site? If so, what kind of timeframes are you seeing these days for new pages to show up in the SERPs? Days, weeks? Thanks, Erin
Intermediate & Advanced SEO | | HiddenPeak0 -
MOZ crawl report says category pages blocked by meta robots, but they're not?
I've just run a SEOMoz crawl report and it tells me that the category pages on my site, such as http://www.top-10-dating-reviews.com/category/online-dating/, are blocked by meta robots and have the meta robots tag noindex,follow. This was the case a couple of days ago, as I run WordPress and am using the SEO Category Updater plugin. By default it appears it makes categories noindex,follow. I therefore edited the plugin so that the default was index,follow, as I want Google to index the category pages so that I can build links to them. When I open the page in a browser and view source, the tags show as index,follow, which adds up. Why then is the SEOMoz report telling me they are still noindex,follow? Presumably the crawl is in real time and should pick up the new tag, or is it perhaps because it's using data from an old crawl? As yet these pages aren't indexed by Google. Any help is much appreciated! Thanks, Sam.
Intermediate & Advanced SEO | | SamCUK0 -
Do you lose Link Equity when using RanDom CasE?
I've seen a site linking internally using caps from the home page to sub-pages, while the rest of the site links in lower-case. Are there any disadvantages in terms of link juice or duplication in doing this? Example link from the homepage: /blah/Doctors.aspx Example link from another internal page: /blah/doctors.aspx The site is on a Windows-based server, not Linux. Thanks in advance
Intermediate & Advanced SEO | | 3wh0