Photo Gallery and Robots.txt
-
Hey everyone
SEOmoz is telling us that there are too many on-page links on the following page:
http://www.surfcampinportugal.com/photos-of-the-camp/
Should we stop it from being indexed via robots.txt?
Best regards and thanks in advance,
Simon
-
Hey Ryan
Thanks a lot for your help and suggestions. I will try to get more links from rapturecamps.com to this domain. Also, your idea about adding a link is not bad; I don't know why I didn't come up with that one.
Thanks again anyway...
-
Hi Joshua. Since the domain is so new, the tool is basically telling you that you don't have much "link juice" to go around, so you're easily going to have more links on page than Google will consider important. This is natural and as your new domain gains links from around the web you'll be fine. I noticed that www.rapturecamps.com is well established so sending a few more relevant links directly from there will help with the situation.
Also, this is a clever offer that you could post to surfcampinportugal.com as well:
Add a Link and Get Discount
Got your own website, blog, forum?
If you add a link to Rapture Camps website you will receive a discount for your next booking.
Please contact us for further information.
-
Hey Aran
Thanks for your fast reply, and nice to hear you like the design.
Best regards
-
Personally, I wouldn't stop it from being indexed. It's not like you're being spammy with the on-page links.
P.S. Awesome website; I really love the photography on the background images.
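One technical note worth adding here: a robots.txt Disallow only stops crawling, so a page that is already indexed or linked from elsewhere can still appear in results. For reference only (since the advice above is not to block it), the rule would look like this, with the path taken from the URL in the question:

```
User-agent: *
Disallow: /photos-of-the-camp/
```

If you ever did want the gallery kept out of the index, a `<meta name="robots" content="noindex">` tag in the page's head is the more reliable route, because it works even when the page is crawled.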
Related Questions
-
Robots.txt Question for E-Commerce Sites
Hi All, I have a couple of e-commerce clients and have a question about URLs. When you perform a search on the website, all URLs contain a question mark, for example: /filter.aspx?search=blackout. I'm not sure that I want these indexed. Could I be causing any harm/danger if I add this to the robots.txt file? /*? Any suggestions welcome! Gavin
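For illustration, Google's crawler honors the `*` wildcard in robots.txt paths, so the pattern in the question would block the crawling of any URL containing a question mark. The broad rule can catch legitimate parameterized pages, so a rule scoped to the filter script (name taken from the example URL above) is a safer sketch:

```
User-agent: *
# Broad: block crawling of any URL containing a query string
Disallow: /*?

# Narrower alternative: block only the search/filter handler
# Disallow: /filter.aspx?
```

Note that blocked URLs can still be indexed without a snippet if they are linked externally; a noindex meta tag or canonical tag on the filtered pages is the stronger option when that matters.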
On-Page Optimization | IcanAgency
-
Robots.txt file issue on WordPress site.
I'm facing an issue with the robots.txt file on my blog. Two weeks ago I did some development work on my blog and added a few pages to the robots file. Now my complete site seems to be blocked. I have checked and updated the file and am still having the issue. The search result shows "A description for this result is not available because of this site's robots.txt – learn more." Any suggestions to overcome this issue?
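The snippet message quoted above is exactly what Google shows when robots.txt blocks a page. A common cause is a blanket Disallow left in place (WordPress's "Discourage search engines" setting writes one), or a new rule whose path accidentally matches everything. A hedged sketch, with hypothetical paths standing in for the "few pages" mentioned:

```
# Broken: this blocks the entire site and produces that snippet message
User-agent: *
Disallow: /

# Intended: block only the specific new pages, allow everything else
User-agent: *
Disallow: /draft-page/
Disallow: /test-page/
```

Also worth remembering that `Disallow: /` matches every URL because robots.txt rules are prefix matches, so a single stray slash is enough to block the whole site.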
On-Page Optimization | Mustansar
-
Site Maps / Robots.txt etc
Hi everyone, I have set up a site map using a WordPress plugin: http://lockcity.co.uk/site-map/ Can you please tell me if this is sufficient for the search engines? I am trying to understand the difference between this and having a robots.txt, or do I need both? Many thanks, Abi
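The two files do opposite jobs: a sitemap lists the URLs you want crawled, while robots.txt lists paths you want excluded (and can advertise the sitemap's location). Neither is strictly required, but a minimal robots.txt that allows everything and points at an XML sitemap is a common setup. A sketch, assuming the plugin also generates an XML sitemap at the conventional path (check the plugin's settings for the real one):

```
User-agent: *
Disallow:

# Advertise the XML sitemap to crawlers (path assumed, not confirmed)
Sitemap: http://lockcity.co.uk/sitemap.xml
```

Note the page at /site-map/ is an HTML sitemap for human visitors; search engines want the XML variant submitted via robots.txt or Webmaster Tools.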
On-Page Optimization | LockCity
-
Do I need a robots meta tag on the homepage of my site?
Is it recommended to include a robots meta tag on the homepage of your site? I would like Google to index and follow my site. I am using WordPress and noticed my homepage does not include this meta tag, so I am wondering if I should add it.
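For reference, "index, follow" is the default behavior, so omitting the tag is equivalent to including it; the tag only matters when you want to override that default. The two forms, for illustration:

```html
<!-- Redundant: this is already the default, so the tag can be omitted -->
<meta name="robots" content="index, follow">

<!-- Only needed when overriding the default, e.g. to keep a page out of the index -->
<meta name="robots" content="noindex, follow">
```

So the absence of the tag on the homepage is nothing to worry about; Google will index and follow it unless told otherwise.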
On-Page Optimization | asc76
-
New CMS system - 100,000 old URLs - use robots.txt to block?
Hello. My website has recently switched to a new CMS system. Over the last 10 years or so, we've used 3 different CMS systems on our current domain. As expected, this has resulted in lots of URLs. Up until this most recent iteration, we were unable to 301 redirect or use any page-level indexation techniques like rel='canonical'. Using SEOmoz's tools and GWMT, I've been able to locate and redirect all pertinent, PageRank-bearing, "older" URLs to their new counterparts. However, according to Google Webmaster Tools' 'Not Found' report, there are literally over 100,000 additional URLs out there it's trying to find. My question is: is there an advantage to using robots.txt to stop search engines from looking for some of these older directories? Currently, we allow everything, only using page-level robots tags to disallow where necessary. Thanks!
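If the retired CMSs kept their URLs under recognizable directories, directory-level Disallow rules would stop crawlers from repeatedly requesting the dead URLs and clean up crawl activity. The trade-off: a blocked URL can't pass any remaining link equity, so 301s stay preferable for URLs with inbound links, and 404s for the rest are harmless. A sketch, with directory names that are pure assumptions standing in for the old installs:

```
User-agent: *
# Hypothetical path prefixes used by the two retired CMS installs
Disallow: /old-cms/
Disallow: /legacy-site/
```

Because robots.txt rules are prefix matches, one rule per retired directory covers every URL underneath it, which is why this scales to 100,000 URLs without listing them individually.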
On-Page Optimization | Blenny
-
Can duplicate content issues be solved with a noindex robot metatag?
Hi all, I have a number of duplicate content issues arising from a recent crawl diagnostics report. Would using a 'noindex' robots meta tag on the pages I don't necessarily mind not being indexed be an effective way to solve the problem? Thanks for any / all replies
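For illustration, the tag in question would sit in the head of each duplicate page; using "follow" alongside "noindex" keeps link equity flowing even though the page drops out of the index. A rel='canonical' pointing at the preferred URL is the other common fix for duplicates (the href below is a hypothetical example):

```html
<meta name="robots" content="noindex, follow">

<!-- Alternative: consolidate ranking signals onto the preferred version -->
<link rel="canonical" href="http://www.example.com/preferred-page/">
```

Either approach resolves the duplicate-content warning; canonical tags are usually preferred when the duplicate pages still earn links you want credited to the main version.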
On-Page Optimization | joeprice
-
How do you block development servers with robots.txt?
When we create client websites, the URLs are client.oursite.com. Google is indexing these sites and attaching them to our domain. How can we stop it with robots.txt? I've heard you need to have the robots file on both the main site and the dev sites... A code sample would be groovy. Thanks, TR
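Since robots.txt is fetched per host, each dev subdomain needs its own file at its root (e.g. http://client.oursite.com/robots.txt) blocking everything, while the main site's file stays permissive; one file cannot cover both hosts. A sketch of the dev-host rules, checked with Python's stdlib parser (hostnames taken from the question):

```python
from urllib import robotparser

# Rules served at http://client.oursite.com/robots.txt (dev host only);
# the production robots.txt at www.oursite.com is a separate file.
dev_rules = [
    "User-agent: *",
    "Disallow: /",
]

parser = robotparser.RobotFileParser()
parser.parse(dev_rules)

# Under these rules, no crawler may fetch anything on the dev host.
print(parser.can_fetch("Googlebot", "http://client.oursite.com/"))          # False
print(parser.can_fetch("Googlebot", "http://client.oursite.com/any/page"))  # False
```

One caveat: robots.txt stops crawling, but an externally linked dev URL can still show up as a bare listing. HTTP authentication (or a noindex header) on the dev hosts is the airtight way to keep client sites out of Google entirely.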
On-Page Optimization | DisMedia