Question about robots.txt
-
Solved!
-
Just a friendly reminder. Please don't delete your question after it's been answered. It's very likely that someone in the future will have the same question, and they would have been able to find the answer if you hadn't deleted it.
-
Consider deleting all of this:
Disallow: /&limit
Disallow: /?limit
Disallow: /&sort
Disallow: /?sort
Disallow: /?route=checkout/
Disallow: /?route=account/
Disallow: /?route=product/search
Disallow: /?route=affiliate/
Disallow: /?marca
Disallow: /&manufacturer
Disallow: /?manufacturer
Disallow: /?filter
Disallow: /&filter
Disallow: /?order
Disallow: /&order
Disallow: /?price
Disallow: /&price
Disallow: /?filter_tag
Disallow: /&filter_tag
Disallow: /?mode
Disallow: /&mode
Disallow: /?cat
Disallow: /&cat
Disallow: /?product_id
Disallow: /&product_id
Disallow: /?route=affiliate/
Disallow: /*?keyword

Those rules are telling Google not to crawl domain.com/ANYTHING (followed by the URL parameter). This could be where the issue stems from. If you're worried about URLs with these parameters ranking, consider implementing canonical tags instead to point to the proper pages.
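To see why those wildcard rules are so sweeping, here's a rough sketch of Google-style robots.txt pattern matching, where * matches any run of characters and a plain rule matches any path that starts with it. The rules and URLs below are just illustrative, not taken from the site in question:

```python
import re

def rule_to_regex(rule):
    # Google-style matching: '*' matches any sequence of characters,
    # '$' anchors the end of the URL; otherwise rules are prefix matches.
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.compile("^" + pattern)

def is_blocked(path, disallow_rules):
    # A path is blocked if any Disallow pattern matches it.
    return any(rule_to_regex(r).search(path) for r in disallow_rules)

rules = ["/?limit", "/*?keyword"]
print(is_blocked("/?limit=25", rules))          # True: prefix match on /?limit
print(is_blocked("/shoes?keyword=red", rules))  # True: /*?keyword matches any path
print(is_blocked("/shoes", rules))              # False: no rule matches
```

Because /*?keyword expands to "any path at all, followed by ?keyword", a single rule like that can wall off huge swathes of a site, which is why canonical tags are usually the safer tool for parameter URLs.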
Related Questions
-
Home page optimisation question - Expanding box
Hi guys, I was wondering if anyone can help me understand how Google looks at expanding boxes now. What I am referring to is on our home page, orderblinds.co.uk: we have an article which shows a taster of the information about the company, and the user then has to click "read more" to expand the box and see the rest of the content. Is this bad for SEO? When you view the HTML all the content is there, but I'm sure Google can work out that this text isn't visible until you click "read more". Any feedback on this subject would be great.
On-Page Optimization | OrderBlinds
Need suggestion: Should the user profile link be disallowed in robots.txt
I maintain a myBB based forum here. The user profile links look something like this: http://www.learnqtp.com/forums/User-Ankur. Now in my GWT, I can see many 404 errors for user profile links. This is primarily because we have tight control over spam and auto-generated profiles created by bots. Either our moderators or our spam control software delete such spammy member profiles on a periodic basis, but by then Google has indexed those profiles. I am wondering, would it be a good idea to disallow user profile links using robots.txt? Something like Disallow: /forums/User-*
On-Page Optimization | AnkurJ
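If you do decide to block those profiles, note that standard robots.txt matching is prefix-based, so the trailing wildcard isn't strictly needed. A minimal sketch, assuming the /forums/User- prefix from the question:

```
User-agent: *
Disallow: /forums/User-
```

Keep in mind that robots.txt stops crawling, not indexing: URLs Google already knows about can remain in the index, and blocked URLs won't return the 404 signal that would get them dropped.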
Help! A couple of basic questions on dup. content, pagination and tumblr blogs.
Hi, and many thanks in advance for any assistance. According to our GWMT we currently have over a thousand duplicated title tags and meta descriptions. These stem from tabs that we have located beneath the body copy, which when clicked display offers or itineraries (we're a travel company). So the URLs change to having "?st=Offer" or "?st=Itinerary" at the end, and are considered to be duplicating the original page's title and meta description. Sometimes the original page is also paginated, and shows the same duplication errors. What would be the best way to ensure we're not duplicating anything? Also, we have a Tumblr blog with a single page displaying all the blog content, but also links to each post on a separate individual page. We would like to keep the individual pages as we can optimise them to target specific keywords, but want to avoid any duplication issues again. Any advice would be greatly appreciated.
On-Page Optimization | LV7
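One common fix for parameter variants like these is a canonical tag on each variant pointing at the base page. A hedged sketch, with example.com and the /tour path standing in for the real URLs:

```html
<!-- placed in the <head> of /tour, /tour?st=Offer and /tour?st=Itinerary alike -->
<link rel="canonical" href="https://www.example.com/tour" />
```

The parameter pages stay crawlable, but Google is told which version should carry the title, description, and ranking signals.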
Question about Multi-national Websites
I am about to work on a multi-national site and need some more information about what I should consider regarding: content keyword research anything else My biggest question is regarding content. The company would like a UK version of the site with a different URL, but plan to keep the content essentially the same, with the exception of a few minor details. In this case, would duplicate content still be an issue? If so, any suggestions for working around this? Any strategy information on multi-national sites would be really helpful. Thank you! Erin
On-Page Optimization | HiddenPeak
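For same-language regional sites like a .com/.co.uk pair, hreflang annotations are the usual way to tell Google the pages are regional variants rather than duplicates. A sketch with placeholder domains:

```html
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/" />
<link rel="alternate" hreflang="en-us" href="https://www.example.com/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
```

Each version should carry the full set of annotations, including a self-reference, and the tags must be reciprocal across both sites.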
Newbie with a few questions
Hi! First post here. Would be great if someone could help me out with a few questions: 1. When I search a brand name, there are 6 pages in 2 columns listed right below the brand in the SERPs. Is it possible to choose which pages in the "category list" Google shows? 2. From what I've understood, including the keywords early in the content is of much higher importance than using them in a perfectly structured tag hierarchy. Instead of using a strict hierarchy, I could use something that reads much better. Would this make any difference? 3. My category pages show up in the search listings. Is this a bad thing? Should I nofollow or noindex them? 4. Category and author pages trigger duplicate content warnings in SEOmoz. Should I do anything about it? Should I make all the excerpts unique to avoid this? 5. Is the title tag recommendation of 66 characters with or without the brand name? Am I good as long as the post part of the title is less than 66 characters, or should I remove the brand name from the title altogether?
On-Page Optimization | mathiasppc
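On the category-page question above: if you'd rather keep those pages out of the index while still letting crawlers follow their links, a meta robots tag is one option. A minimal sketch, placed in each category page's head:

```html
<meta name="robots" content="noindex,follow" />
```

Unlike a robots.txt block, this lets Google crawl the page (and pass link equity through it) while dropping it from the search results.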
New CMS system - 100,000 old urls - use robots.txt to block?
Hello. My website has recently switched to a new CMS system. Over the last 10 years or so, we've used 3 different CMS systems on our current domain. As expected, this has resulted in lots of URLs. Up until this most recent iteration, we were unable to 301 redirect or use any page-level indexation techniques like rel="canonical". Using SEOmoz's tools and GWMT, I've been able to locate and redirect all pertinent, page-rank bearing, "older" URLs to their new counterparts. However, according to Google Webmaster Tools' 'Not Found' report, there are literally over 100,000 additional URLs out there it's trying to find. My question is, is there an advantage to using robots.txt to stop search engines from looking for some of these older directories? Currently, we allow everything, only using page-level robots tags to disallow where necessary. Thanks!
On-Page Optimization | Blenny
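If the old CMSs kept their URLs under distinct directories, a directory-level block is a compact way to stop crawlers from requesting those 100,000 dead URLs. A sketch with hypothetical directory names:

```
User-agent: *
Disallow: /old-cms/
Disallow: /cgi-bin/shop/
```

One caveat: robots.txt blocking prevents crawlers from ever seeing a 301 or 410 on those URLs, so it's best reserved for sections where there are no redirects worth preserving.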
Robots.txt file
Does it serve any purpose if we omit the robots.txt file? If the spider has to read all the pages anyway, why do we insert a robots.txt file?
On-Page Optimization | seoug_2005
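To the question above: robots.txt exists precisely so a crawler doesn't have to read every page; it fetches the file once and skips disallowed paths. Omitting it is harmless (crawlers then assume everything is allowed), but it's your only crawl-level control. A small sketch with Python's standard-library parser; the rules are made up:

```python
from urllib import robotparser

# Feed the parser the lines a crawler would fetch from /robots.txt.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "/private/page.html"))  # False: crawler should skip it
print(rp.can_fetch("*", "/public/page.html"))   # True: crawling allowed
```

A well-behaved spider runs exactly this kind of check before every request, which is why the file saves crawl budget rather than wasting it.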