Can I use a "no index, follow" command in a robot.txt file for a certain parameter on a domain?
-
I have a site that produces thousands of pages via file uploads. These pages are then linked to by users for others to download what they have uploaded.
Naturally, the client has blocked the parameter that precedes these pages in an attempt to keep them from being indexed. What they did not consider is that these pages are attracting hundreds of thousands of links that are not passing any authority to the main domain, because they're being blocked in robots.txt.
Can I allow Google to follow, but NOT index, these pages via a robots.txt file, or would this have to be done on a page-by-page basis?
-
Since you have those pages blocked via robots.txt, the bots would, in theory, never even crawl them, which means a "noindex, follow" directive is not helping: a crawler can't see a directive on a page it's forbidden from fetching.
Also, if you run a report on the domain in Open Site Explorer and dig in, you should be able to find tons of those links already showing up. So if my site is linking to a page on that site, that page may not be cached or indexed because of the robots.txt exclusion, but as long as my link is followed, your domain is still getting the credit for the link.
Does that make sense?
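For illustration, the standard way to get "follow the links but don't index the page" is to stop blocking the pages in robots.txt and put a robots meta tag on the pages themselves. A minimal sketch, assuming the upload pages hang off a hypothetical ?file= parameter:

    # robots.txt -- remove the rule that blocks the upload pages,
    # i.e. delete a line like this (the parameter name is hypothetical):
    # Disallow: /*?file=

    <!-- Then, in the <head> of each upload page, forbid indexing while
         still letting crawlers follow the links on the page: -->
    <meta name="robots" content="noindex, follow">

Once the pages are crawlable again, the meta tag keeps them out of the index while letting the link equity they attract flow through to the rest of the domain.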
-
Answered my own question.
Related Questions
-
If my website uses a CDN, can thousands of 301 redirects harm its performance?
Hi, if my website uses a CDN, can thousands of 301 redirects harm the website's performance? Thanks, Roy
-
I have implemented rel = "next" and rel = "prev" but google console is picking up pages as being duplicate. Can anyone tell me what is going on?
I have implemented rel="next" and rel = "prev" across our site but google console is picking it up as duplications. Also individual pages show up in search result too. Here is an example linkhttp://www.empowher.com/mental-health/content/sizeismweightism-how-cope-it-and-how-it-affects-mental-healthhttp://www.empowher.com/mental-health/content/sizeismweightism-how-cope-it-and-how-it-affects-mental-health?page=0,3The second link shows up as duplicate. What can i do to fix this issue?
-
If robots.txt has blocked an image (image URL), but another page that can be indexed uses this image, how is the image treated?
Hi Mozzers, this probably is a dumb question, but I have a case where robots.txt blocks an image URL, yet that image is used on a page (let's call it Page A) that can be indexed. If the image on Page A has an alt tag, how is this information digested by crawlers? A) Would Google totally ignore the image and the alt tag information, or B) would Google consider the alt tag information? I am asking because all the images on the website are blocked by robots.txt at the moment, but I would really like crawlers to pick up the alt tag information. Chances are that I will ask the webmaster to allow indexing of images too, but I would like to understand what's happening currently. Looking forward to all your responses 🙂 Malika
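For context, the alt text lives in Page A's HTML rather than in the image file itself, so a crawler that is allowed to fetch Page A can still read it while the image URL stays blocked. A hypothetical sketch (the folder and file names are placeholders):

    # robots.txt -- keeps crawlers from fetching the image files:
    User-agent: *
    Disallow: /images/

    <!-- Page A (crawlable) -- the alt attribute below is part of this
         page's own HTML, which robots.txt does not block: -->
    <img src="/images/blue-table-lamp.jpg" alt="Blue ceramic table lamp">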
-
Can I migrate to a new domain without losing rankings?
We are looking at migrating to a new domain name but are worried about our current rankings. Can we do this and keep our rankings if we 301? If we can expect a dip, how long will that generally last? Thanks
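For reference, a domain migration is usually implemented with a server-level rule that 301s every old URL to the same path on the new domain. A minimal Apache sketch, assuming an .htaccess file on the old domain (both domain names are placeholders):

    # Permanently (301) redirect every request on the old domain
    # to the same path on the new domain:
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
    RewriteRule ^(.*)$ https://www.new-domain.com/$1 [R=301,L]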
-
Pages with rel="next"/"prev" still crawling as duplicates?
Howdy! I have a site whose pagination is being crawled as "duplicate content pages", even though the rel="next"/"prev" markup is in place and done correctly; RogerBot and Google are reporting duplicate content and duplicate page titles/metas, respectively. The only thing I can think of is that we have a canonical pointing back at the URL you are on. We do not have a view-all option right now and would not feel comfortable recommending one, given the speed implications and the size of their catalog. Any experience or recommendations here? Something to be worried about? The canonical looks like: <link rel="canonical" href="/collections/all?page=15"/>
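For comparison, the commonly recommended pattern for a paginated series is exactly the one described: a self-referencing canonical on each page plus prev/next tags, roughly like this sketch (page numbers 14 and 16 are assumptions):

    <!-- In the <head> of /collections/all?page=15 -->
    <link rel="canonical" href="/collections/all?page=15"/>
    <link rel="prev" href="/collections/all?page=14"/>
    <link rel="next" href="/collections/all?page=16"/>

If the markup already matches this, the duplicate-title and duplicate-meta warnings are usually just a symptom of near-identical titles and descriptions across the series, not a sign that the tags are wrong.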
-
Changing hyperlinks from follow to nofollow
I have a blog on my website and we publish guest posts from time to time. I was contacted by a blogger asking me to change the backlinks in his article from followed to nofollow. His article has been live for five months. When I asked him the reason, his answer was that he needs to comply with Google's new algorithm! Has anyone had a similar situation, or can anyone shed more light on this issue?
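For reference, the change being requested is a single attribute on each link. A generic sketch (the URL is a placeholder):

    <!-- Before: a followed guest-post link -->
    <a href="https://guest-author-site.example/">Author's site</a>

    <!-- After: the same link marked nofollow -->
    <a href="https://guest-author-site.example/" rel="nofollow">Author's site</a>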
-
Can you canonical to a URL in a different folder under the same domain?
I want to know if it's possible to add a canonical tag to a URL that points to a URL under a different folder. The content is just about the same. Here's an example (fake URLs and product, but the structure and parameters are similar to my client's website):
domain.com/toy-ducks-results.aspx?color=Purple&model=Elvis
domain.com/toy-ducks-details.aspx?color=Purple&model=Elvis&style=Sparkly
Let's say that my purple Elvis ducks are really popular. Is there any harm in putting a rel=canonical on the sparkly Elvis ducks page that points to the purple Elvis ducks page, even though they live in two different folders, /toy-ducks-details and /toy-ducks-results? So, in effect, the preferred folder is /toy-ducks-results. Thanks in advance for any help.
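For reference, rel=canonical is a page-level hint and doesn't care about folder boundaries, so the tag itself would look like this sketch, built from the example URLs in the question:

    <!-- In the <head> of the details page
         (domain.com/toy-ducks-details.aspx?color=Purple&model=Elvis&style=Sparkly): -->
    <link rel="canonical" href="http://domain.com/toy-ducks-results.aspx?color=Purple&model=Elvis">

The usual caveat is that the two pages should be near-duplicates: Google treats the tag as a hint and may ignore it when the content differs substantially.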
-
"Search engine blocked by robots.txt" warnings for dynamic URLs
Today, I was checking crawl diagnostics for my website and found "search engine blocked by robots.txt" warnings. I have added the following syntax to the robots.txt file for all dynamic URLs:

    Disallow: /*?osCsid
    Disallow: /*?q=
    Disallow: /*?dir=
    Disallow: /*?p=
    Disallow: /*?limit=
    Disallow: /*review-form

The dynamic URLs look like this:

http://www.vistastores.com/bar-stools?dir=desc&order=position
http://www.vistastores.com/bathroom-lighting?p=2

and many more. So why does it show me these warnings? Does it really matter, or is there another solution for these kinds of dynamic URLs?
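To illustrate why the warnings appear, here is how the wildcard patterns above match the example URLs (the annotations are mine, not from the original post):

    # "*" matches any sequence of characters, so:
    Disallow: /*?dir=
    #   blocks http://www.vistastores.com/bar-stools?dir=desc&order=position
    Disallow: /*?p=
    #   blocks http://www.vistastores.com/bathroom-lighting?p=2
    # The crawl-diagnostics warning simply reports that these URLs are
    # excluded, which is exactly what the rules were added to do.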