A suggestion to help with Linkscape crawling and data processing
-
Since you guys are understandably struggling with crawling and processing the sheer number of URLs and links, I came up with this idea:
In a similar way to how SETI@home works (is that still a thing? Google says yes: http://setiathome.ssl.berkeley.edu/), could SEOmoz use distributed computing amongst SEOmoz users to help with the data processing? Would people be happy to offer up their idle processor time and (optionally) internet connections to get more accurate, broader data?
Are there enough users of the data to make distributed computing worthwhile?
Perhaps those who crunched the most data each month could receive moz points or a free month of Pro.
I have submitted this as a suggestion here:
http://seomoz.zendesk.com/entries/20458998-crowd-source-linkscape-data-processing-and-crawling-in-a-similar-way-to-seti-home -
Sean - I share Rand's sentiments; thanks so much for the suggestion!
We have considered distributed crawling in the past (and even distributed rank checking, since the checks would then run in each user's locale), but it comes with a whole different set of challenges. For example, you have to handle all the edge cases: what if a user's computer isn't on, or loses connectivity? What if we crawl too fast and the user gets blocked from a site? How do you write all that data back securely?
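The offline/lost-connectivity edge case mentioned above is usually handled by leasing work rather than assigning it permanently: a URL handed to a volunteer machine is reclaimed and given to someone else if no result comes back before the lease expires. A minimal sketch of that idea (class and method names are hypothetical, not SEOmoz's actual design):

```python
import time

class LeaseQueue:
    """Toy work queue for a distributed crawler: a URL handed to a
    volunteer client is leased, not removed, so the work is reassigned
    if the client goes offline before reporting back."""

    def __init__(self, lease_seconds=300):
        self.lease_seconds = lease_seconds
        self.pending = []   # URLs nobody is currently working on
        self.leased = {}    # url -> timestamp when its lease expires

    def add(self, url):
        self.pending.append(url)

    def checkout(self, now=None):
        """Hand out the next URL, reclaiming any expired leases first."""
        now = time.time() if now is None else now
        expired = [u for u, expiry in self.leased.items() if expiry <= now]
        for u in expired:               # client vanished: put the URL back
            del self.leased[u]
            self.pending.append(u)
        if not self.pending:
            return None
        url = self.pending.pop(0)
        self.leased[url] = now + self.lease_seconds
        return url

    def complete(self, url):
        """Client reported its results; retire the lease for good."""
        self.leased.pop(url, None)
```

If a volunteer never calls `complete`, the URL simply reappears in `pending` once the lease runs out, so no work is lost to a powered-off machine.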
Of course, all of these concerns can be overcome, but right now we feel like we have a good handle on the problems, and it will be much faster for us to just fix what we have.
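The "crawl too fast and the user gets blocked" edge case above is the classic crawler politeness problem: enforce a minimum delay between fetches to the same host so a volunteer's IP doesn't get banned. A toy sketch of such a gate (all names hypothetical, not SEOmoz's actual code):

```python
import time
from urllib.parse import urlparse

class PolitenessGate:
    """Toy per-host rate limiter: a crawler client asks before each
    fetch, and the same host is never hit more than once per interval."""

    def __init__(self, min_delay_seconds=5.0):
        self.min_delay = min_delay_seconds
        self.last_hit = {}  # hostname -> timestamp of the last fetch

    def may_fetch(self, url, now=None):
        now = time.time() if now is None else now
        host = urlparse(url).netloc
        last = self.last_hit.get(host)
        if last is not None and now - last < self.min_delay:
            return False    # too soon: wait, or pick a URL on another host
        self.last_hit[host] = now
        return True
```

A real system would also honor robots.txt crawl delays, but the per-host bookkeeping is the core of it.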
That said, I know all of us are so appreciative of the ideas and support, and we will have something really great soon!
-
Thanks a ton, Sean! We have considered distributed computing as a way to help crawl, index, process, etc. It's so flattering and humbling to hear that you'd be willing to help out, and that the community would, too.
For now, we believe we can get to the index size, quality, and freshness we want using our hosted system, but the engineering team will certainly be encouraged to hear that folks in our community might contribute to this. Distributed systems present their own challenges, and we'd have to write that code from scratch, but if we find that we can't do what we want with our existing network, we might reach out.
BTW - I wanted to let folks know that the team here does feel very confident that come December/January, we're going to be producing indices that reach exceptional quality bars. The problems we face are largely known, and we now have the team and the solutions to tackle them, so we're pretty excited.
Related Questions
-
Moz crawl has stopped?
Has Moz stopped indexing links due to some updates? Can someone confirm? Thanks.
Moz Pro | | 42409300125323700 -
How to crawl specific subfolders
I tried to create a campaign to crawl the subfolders of my site, but it stops at just one folder. Basically, what I want to do is crawl everything after folder1: www.domain.com/web/folder1/*
I tried to create two campaigns:
Subfolder Campaign 1: www.domain.com/web/folder1/*
Subfolder Campaign 2: www.domain.com/web/folder1/
In both cases, it did not crawl any folders after the last /. Can you help me?
Moz Pro | | gofluent
SEOmoz crawler not crawling my site
We set up a new campaign in SEOmoz on Friday. It is my understanding that the preliminary crawl can cover up to 250 pages, and this has been our experience in the past. However, the preliminary crawl only went through 2 pages. This is a larger eCommerce site with many pages. Any ideas why more pages weren't crawled? We set up the campaign to track at the root domain level.
Moz Pro | | IMM0 -
'Not provided' data in SEOmoz reports
Hi. How do SEOmoz reports deal with 'not provided' data? I see my total visits from organic search for a month are the same as the total of my branded and non-branded keyword traffic combined, yet GA is reporting 157 visits from 'not provided' data. So is SEOmoz being very clever and finding a way to decipher this 'not provided' data and allocate it accordingly in the reports? Or if not, what? Many thanks, Dan
Moz Pro | | Dan-Lawrence0 -
Linkscape update 2013
After the last Linkscape update of 2012 (December), I noticed that it had missed a couple of high-quality followed links to other sites that I'd built. I just assumed that the crawler had missed them but would pick them up next time. It hasn't, though (I've built some more since then as well), and my DA has stayed the same. Help...
Moz Pro | | EmpofMan0 -
WordPress malware help
Hi guys, I've noticed in my crawl reports that some URLs seem to have inline JavaScript in them... The JS doesn't work, but it does cause the link to 404. I'm not sure where the links have come from - it's only affecting really old blog posts made before my time here. I'm contemplating deleting them... Here's an example: | http://www.evoenergy.co.uk/blog/author/aaron/page/76/ window.open('http%3A/www.lime.com/redirect/pubs.acs.org/'); void(0) | Any help would be appreciated! Thanks
Moz Pro | | tomcraig860 -
Campaign crawl re-schedule
Hello, On the last crawl of a website of mine, SEOmoz pointed out about 1,500 errors (ouch!) on my site. I have made some corrections, and I just want to check that they were done the right way, but the next crawl is in a week. Is there any way I can force a crawl before the scheduled date? Thanks!
Moz Pro | | Tz_Seo0 -
Crawl Diagnostics Report
I'm a bit concerned about the results I'm getting from the Crawl Diagnostics Report. I've updated the site with canonical URLs to remove duplicate content, and when I check the site it all displays the right values, but the report, which has just finished crawling, is still showing a lot of pages as duplicate content. Simple example:
http://www.domain.com
http://www.domain.com/
Both of them are in the duplicate content section, although both have the canonical URL set as:
Does each crawl check the entire site from the beginning, or just the pages it didn't have a chance to crawl the last time? This is just one of 333 duplicate content pages which have the canonical URL pointing to the right page. Can someone please explain?
Moz Pro | | coremediadesign0
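On the duplicate pair in that last question: http://www.domain.com and http://www.domain.com/ are different strings, so a crawler that doesn't normalize URLs (or hasn't yet applied the canonical tag) will count them as two pages. The kind of normalization involved might look like this simplified sketch (function name hypothetical, not Moz's actual pipeline):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Normalize a URL so trivially equivalent forms compare equal:
    lowercase the scheme and host, treat an empty path as '/',
    and drop any fragment."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    if path == "":
        path = "/"   # http://example.com -> http://example.com/
    return urlunsplit((scheme.lower(), netloc.lower(), path, query, ""))
```

With this, the pair from the question normalizes to the same URL, which is exactly why most crawlers dedupe the slashless form rather than flag it.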