Restricted by robots.txt: does this cause problems?
-
According to Webmaster Tools, I have restricted around 1,500 links, which are links to retailers' websites and affiliate links.
Is this the right approach, as I thought it would affect the link juice? Or should I take the nofollow off the links that are already restricted by the robots.txt file?
-
Hello Ocelot,
I am assuming you have a site that has affiliate links and you want to keep Google from crawling those affiliate links. If I am wrong, please let me know. Going forward with that assumption then...
That is one way to do it. So perhaps you first send all of those links through a redirect via a folder called /out/ or /links/ or whatever, and you have blocked that folder in the robots.txt file. Correct? If so, this is how many affiliate sites handle the situation.
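To make that concrete, here is a minimal sketch of such a setup. The /out/ folder name, the ID scheme, and the retailer URL are all hypothetical, not taken from your site:

```
# robots.txt - keep compliant crawlers out of the redirect folder
User-agent: *
Disallow: /out/
```

And a bare-bones version of the redirect script itself might look something like this in PHP (a sketch only; a real version would look the target up in a database):

```php
<?php
// Hypothetical /out/index.php: maps an internal link ID to the
// retailer's URL and issues a redirect.
$links = array(
    '123' => 'https://www.retailer-example.com/product',
);
$id = isset($_GET['id']) ? $_GET['id'] : '';
if (isset($links[$id])) {
    header('Location: ' . $links[$id], true, 302); // temporary redirect
} else {
    header('HTTP/1.1 404 Not Found'); // unknown ID
}
exit;
```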
I would not rely on rel nofollow alone, though I would use that in addition to the robots.txt block.
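On the page itself, each affiliate link could then carry both protections at once; something like this hypothetical markup:

```html
<!-- Points into the robots.txt-blocked /out/ folder AND carries nofollow -->
<a href="/out/?id=123" rel="nofollow">Visit retailer</a>
```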
There are many other ways to handle this. For instance, you could make all affiliate links JavaScript links instead of href links. Then you could put the JavaScript into a folder called /js/ or something like that, and block that in the robots.txt file. This works less and less now, though, as the Google Preview bot seems to be ignoring the disallow statement in those situations.
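A rough sketch of that JavaScript approach, with every name made up for illustration, and with the caveat just mentioned that it works less reliably than it used to:

```html
<!-- Hypothetical sketch: in practice goTo() would live in a file such as
     /js/outlinks.js, and robots.txt would contain "Disallow: /js/" -->
<script>
function goTo(id) {
    // Resolve the ID server-side via the blocked redirect folder
    window.location.href = '/out/?id=' + id;
}
</script>
<span style="cursor:pointer" onclick="goTo(123)">Visit retailer</span>
```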
You could make it all the same URL, with a unique identifier of some sort that tells your database where to redirect the click. For example:
www.yoursite.com/outlink/mylink#123
or
www.yoursite.com/mylink?link-id=123
In which case you could then block /mylink in the robots.txt file and tell Google to ignore the link-ID parameter via Webmaster Tools.
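For the second example URL above, the robots.txt entry might look like this (a sketch; /mylink and link-id are just the placeholder names from the examples):

```
# robots.txt - block the shared redirect URL
User-agent: *
Disallow: /mylink
```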
As you can see, there is more than one way to skin this cat. The problem is always going to be doing it without looking like you're trying to "fool" Google - because they WILL catch up with any tactic like that eventually.
Good luck!
Everett
-
From a coding perspective, applying the nofollow to the links is the best way to go.
With the robots.txt file, only the top-tier search engines respect the information contained within. Lesser-known bots and spammers might check your robots.txt file to see what you don't want listed, and that information gives them a starting point to dig deeper into your site.
Related Questions
-
My WordPress website is facing indexing problems
Hello, I am facing indexing issues on one of my sites, which is about budget bushcraft knives. Four months have passed since I built my site and published almost 8 articles so far, but I am worried that not a single keyword has ranked on Google yet, as I checked through the Moz Site Explorer. Can anyone guide me on what I should do now? Thanks
Technical SEO | Ewerurt
-
Query Strings causing Duplicate Content
I am working with a client that has multiple locations across the nation, and they recently merged all of the location sites into one site. To allow the lead capture forms to pre-populate the locations, they are using the query string /?location=cityname on every page. For example:
www.example.com/product
www.example.com/product/?location=nashville
www.example.com/product/?location=chicago
There are thirty locations across the nation, so every page x 30 is being flagged as duplicate content, at least in the crawl through Moz. Does using that query string actually cause a duplicate content problem?
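A common remedy for exactly this pattern (not something proposed in the thread itself, so treat it as an assumption) is a canonical tag on every ?location= variant pointing back at the clean URL:

```html
<!-- Served on /product/?location=nashville, /product/?location=chicago,
     etc., telling search engines which version to index -->
<link rel="canonical" href="http://www.example.com/product/" />
```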
Technical SEO | Rooted
-
Problem with Markup (Price, MaxPrice, MinPrice)
Hi, I've inserted some markup on my homepage for a vacation flat. As there is not just one price for the whole season, I used minPrice and maxPrice to define it. Unfortunately the Google testing tool (https://developers.google.com/webmasters/structured-data/testing-tool/) says that the value "price" also has to be in the code, but I have no clue what value I should give it. Does someone have any advice? Thank you very much! Best regards André
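The original markup sample did not survive in the thread, so purely as a hedged illustration: Google's structured data documentation handles price ranges with an AggregateOffer, which uses lowPrice and highPrice instead of a single price value. All values below are invented:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Vacation flat",
  "offers": {
    "@type": "AggregateOffer",
    "lowPrice": "80.00",
    "highPrice": "140.00",
    "priceCurrency": "EUR"
  }
}
</script>
```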
Technical SEO | Andre-S
-
Long title problem
I'm getting an incredible number of 4xx errors and long titles from a small website (northstarpad.com): over 13k 4xx errors and almost 20k "title element is too long" warnings. The number keeps climbing, but the site shouldn't have more than a couple hundred pages. When I look at the 4xx errors, they are clearly being generated by some program, since they have multiple repeating keywords. Here's an example:
http://northstarpad.com/category/wedding-photographer-farmington-michigan/pet-photography/wedding-photography/pet-photography/wedding-photography/wedding-photography/wedding-photography/pet-photography/wedding-photography/wedding-photography/pet-photography/
I looked at the FTP files and plugins and couldn't see anything that could cause it, but I'm a beginner, so no surprise there. Any suggestions where to look or how to fix this?
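One common cause of compounding URLs like that, offered here only as a guess, is a relative link in the theme or a plugin that is missing its leading slash, so each crawled page resolves the path against its own URL:

```html
<!-- Hypothetical buggy template link: note the missing leading slash -->
<a href="wedding-photography/">Wedding photography</a>
<!-- Crawled from /category/pet-photography/, this resolves to
     /category/pet-photography/wedding-photography/, where the same
     link then compounds again on every pass -->
```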
Technical SEO | dwerkema
-
Determining the Cause of a Penalty
I received a link removal request from a site who said that they were penalized. I confirmed that they were #1 for the competitive keyword phrase that is also their domain name, and now they are #10. Here are some things I noticed about the site:
Over 2,500 linking domains.
Dozens of high-quality linking domains like Huffington Post and Mashable.
Some off-topic guest post links, e.g., on an SEO site.
Guest post anchor text was usually their site name, which is an exact-match domain.
Lots of top-100 resource pages that received good organic links.
Infographics with links using their domain name as the anchor text.
Relatively few spammy links according to Open Site Explorer.
Overall, their site's links were engineered, but using tactics that most would consider "white hat." I don't think they violated any Google Webmaster Guidelines. Why were they penalized? What do you think?
Technical SEO | ProjectLabs
-
Correct Indexing problem
I recently redirected an old site to a new site. All the URLs were the same except the domain. When I redirected them, I failed to realize the new site had HTTPS enabled on all pages. I have noticed that Google is now indexing both the http and https versions of pages in the results. How can I fix this? I am going to submit a sitemap, but I don't know if there is more I can do to get this fixed faster.
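Beyond resubmitting a sitemap, the usual fix (an assumption here, not something stated in the question) is a sitewide 301 redirect so that only one protocol stays indexed. A sketch for Apache's .htaccess, forcing HTTPS:

```
# .htaccess (Apache): 301-redirect every http:// URL to its https:// twin
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```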
Technical SEO | kicksetc
-
Slash at end of URL causing Google crawler problems
Hello, We are having some problems with a few of our pages being crawled by Google, and it looks like the slash at the end of the URL is causing the problem. Would appreciate any pointers on this. We have a redirect in place that redirects the "no slash" URL to the "slash" URL for all pages. The obvious solution would be to try turning this off; however, we're unable to figure out where this redirect is coming from. There doesn't appear to be an instruction in our .htaccess file doing this, and we've also tried using "DirectorySlash Off" in the .htaccess file, but that doesn't work either. (If it makes a difference, it is a 302 redirect doing this, not a 301.) If we can't get the above to work, then the other solution would be to somehow reconfigure the page so that Google recognizes it with the slash at the end. However, we're not sure how this would be done. I think the quickest solution would be to turn off the "add slash" redirect. Any ideas on where this command might be hiding, and how to turn it off, would be greatly appreciated. Or any tips from people who have had similar crawl problems with Google and any workarounds would be great! Thanks!
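For reference while hunting, an "add trailing slash" rule in .htaccess typically looks something like the sketch below; if nothing similar is present, the redirect is more likely coming from the CMS or application code than from Apache:

```
# A typical "add trailing slash" rule (hypothetical - yours may differ).
# Note the R=301 flag; a 302, as described above, points toward
# application code rather than a rule like this.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [R=301,L]
```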
Technical SEO | onetwentysix
-
Google causing Magento Errors
I have an online shop run using Magento. I have recently upgraded to version 1.4, and I installed an extension called Lightspeed, a caching module which makes tremendous improvements to Magento's performance. Unfortunately, a configuration problem meant that I had to disable the module, because it was generating errors relating to the session if you entered the site from any page other than the home page. The site is now working as expected. I have Magento's error notification set to email; I've not received emails for errors generated by visitors. However, over a 72-hour period, I received a deluge of error emails which were being caused by Googlebot. It was generating an error in a file called lightspeed.php. Here is an example:
URL: http://www.jacksgardenstore.com/tahiti-vulcano-hammock
IP Address: 66.249.66.186
Time: 2011-06-11 17:02:26 GMT
Error: Cannot send headers; headers already sent in /home/jack/jacksgardenstore.com/user/jack_1.4/htdocs/lightspeed.php, line 444
So several things of note:
I deleted lightspeed.php from the server before any of these error messages began to arrive.
lightspeed.php was never exposed in the URL at any time. It was referred to in a mod_rewrite rule in .htaccess, which I also commented out.
If you clicked on the URL in the error message, it loaded in the browser as expected, with no error messages.
It appears that Google has cached a version of the page which briefly existed whilst Lightspeed was enabled. But I thought that Google cached generated HTML. Since when does Google cache a server-side PHP file?
I've just used the Fetch as Googlebot facility on Webmaster Tools for the URL in the above error message, and it returns the page as expected. No errors. I've had no errors at all in the last 48 hours, so I'm hoping it's just sorted itself out. However, I'm concerned about any Google-related implications. Any insights would be greatly appreciated. Thanks Ben
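For context on the error message itself: "headers already sent" is PHP's standard complaint when a script produces any output before calling header(). A minimal, self-contained illustration (not the Lightspeed code):

```php
<?php
// Minimal reproduction of this class of error - NOT the Lightspeed code.
echo "some output";           // any output (even whitespace before the
                              // opening PHP tag) sends the HTTP headers...
header('Location: /example'); // ...so this later header() call fails with
                              // a "headers already sent" warning
```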
Technical SEO | atticus7