Do you trust SEOMoz with your Google Analytics data?
-
This data is so so valuable...
-
You need to look at the bigger picture and not rely solely on Google Analytics, SEOmoz or any one tool in particular. In the case of Google Analytics, some users may not be tracked (e.g. they don't have JavaScript enabled), or you may be counting your own clicks through your website if you don't exclude your IP address from Google Analytics.
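A hypothetical sketch of the "bigger picture" advice above: cross-check GA's pageview count against raw server logs, excluding your own IP, since GA misses visitors without JavaScript while server logs do not. The log format is Apache-style; the IPs and paths are made up for illustration.

```python
# Compare server-log pageviews against GA's numbers to estimate how many
# visitors GA misses (e.g. users with JavaScript disabled). Your own IPs
# are excluded so you don't count your own clicks.

OWN_IPS = {"203.0.113.7"}  # replace with your own office/home IPs

def count_pageviews(log_lines, own_ips=OWN_IPS):
    """Count GET requests in access-log lines, skipping your own traffic."""
    hits = 0
    for line in log_lines:
        parts = line.split()
        if len(parts) < 7:
            continue
        ip, method = parts[0], parts[4].strip('"')
        if ip in own_ips or method != "GET":
            continue
        hits += 1
    return hits

logs = [
    '203.0.113.7 - - [01/May/2012] "GET /index.html HTTP/1.1" 200 512',
    '198.51.100.2 - - [01/May/2012] "GET /index.html HTTP/1.1" 200 512',
    '198.51.100.3 - - [01/May/2012] "GET /about.html HTTP/1.1" 200 512',
]
print(count_pageviews(logs))  # the first line is your own IP -> 2
```

If this count is noticeably higher than GA's pageview figure for the same period, you know a slice of your traffic isn't being tracked.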
In regards to trusting SEOmoz with your Google Analytics data, I think you can. As previously mentioned, SEOmoz is a community of SEOs, and the tools that SEOmoz provides can be enhanced by linking your accounts.
-
Thanks Rand, that's put my mind a little more at ease - I just couldn't find any information about confidentiality.
-
Rand is mostly correct. The GA data you see in the product is the only data we request from GA. In fact, we only store a subset of that data. Some data, such as the data on the Find New Keywords page, is freshly pulled from GA on each page request and never stored on our servers. When a campaign or an account is deleted, all of our stored GA data is deleted at that moment. Don't worry, though: if you recreate that campaign or a similar one, we will re-fetch all your historical traffic data.
The method with which we access your GA data is protected by an authorization protocol called OAuth2. OAuth2 allows us to access your data without having to ever see or know your password. Basically, we get a token from Google that says this user gives SEOmoz, and only SEOmoz, access to a specific set of data (called a scope). The scope we use is the read-only Google Analytics scope, meaning we can only read your GA data and that's it. Further, this token from Google is only good for data requests from specific servers of ours.
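To illustrate the flow described above: the scope below is Google's actual read-only Analytics scope, while the client ID and redirect URI are placeholders; this is only a sketch of the authorization request, not SEOmoz's implementation.

```python
# Build the Google OAuth2 consent URL that asks the user to grant
# read-only access to their Analytics data. The resulting token can
# fetch GA data but can never modify it, and the password is never seen.
from urllib.parse import urlencode

GA_READONLY_SCOPE = "https://www.googleapis.com/auth/analytics.readonly"

def build_auth_url(client_id, redirect_uri):
    """Construct the OAuth2 authorization URL for the read-only GA scope."""
    params = {
        "response_type": "code",     # authorization-code flow
        "client_id": client_id,      # placeholder; issued by Google
        "redirect_uri": redirect_uri,
        "scope": GA_READONLY_SCOPE,  # read-only: the scope limits the token
        "access_type": "offline",    # also request a refresh token
    }
    return "https://accounts.google.com/o/oauth2/auth?" + urlencode(params)

url = build_auth_url("example-client-id", "https://app.example.com/oauth/callback")
print(url)
```

The user approves this request on Google's own consent page, Google hands back a token bound to that scope, and the application never handles the password at any point.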
-
Totally fair question - I'll add a few thoughts from our end:
- Right now, the only GA data we pull for accounts is what you see in the product. We're not taking out or using anything else behind the scenes, nor storing anything other than what you see. Being totally TAGFEE, I will say that in the future, we probably should start using some anonymous aggregations of data to help improve the product, run some testing, and possibly, long term, offer the ability to share your data anonymously in exchange for some sort of benchmarking/comparison (we'd obviously talk about this a lot more and you'd need to opt in - we'd never do it without permission).
- Once an account is deleted, we remove its data within 6 months (sometimes sooner - only reason we keep it is in case of account re-activation, where folks don't want to lose stuff).
- We have network admins on call 24/7, so if anything unusual should happen, we can quickly address the problem.
- To date, we've had no intrusion attempts other than to the main WWW site (for injections of URLs - ugh to link spammers making the name "SEO" look bad).
- We have never sold ANY customer data ever to anyone for any reason, nor have we ever attempted or offered to do so. We do, obviously, make our link graph available via OSE, but that's public on the web (just hard to access in a scalable format).
I will ask one of our engineering folks to jump on this thread and provide some information about our security and encryption (probably not details, as that would be counter-productive, but at least a broad explanation).
My final note would be that traffic data via GA, while certainly important and private, hasn't typically been a target of hackers/malware/phishing schemes/etc. The value to outsiders is pretty minimal, even direct competitors (with a few rare exceptions).
-
I trust them. I highly doubt they will be trying to sell my Analytics information to my competitors. Besides, they are gathering the information using the GA API (I think).
Do you trust Google with your Analytics, knowing they benchmark your performance and show it anonymously to others? Many large companies don't trust Google and choose Omniture SiteCatalyst instead.
Don't worry, you are fine trusting SEOMoz with your analytics. They don't even have your GA password.
-
Truth be told, you do not want to trust anyone with anything. Movies, television and stories are filled with tales of trust and betrayal.
With that said, if you are going to trust a company, then SEOmoz has all the right indicators.
1. It is the largest SEO-focused site in the world in terms of traffic.
2. The CEO, Rand, has been trusted to represent the US in international matters related to SEO.
3. SEOmoz began in 2004. 8 years is a long time in today's internet world. SEOs from around the world have entrusted SEOmoz since that time.
You can always go with the TRUST NO ONE approach. But if you own or perform work for any small business type of site, you are trusting a lot more people than you realize.
-
Do you own your own server? If not, you are paying for hosting with a provider. My hosting provider has 200 employees, many of whom are network engineers or various level techs. A large number of these people have full access to your server and logs.
-
Do you have employees? You are trusting them with your data.
-
How secure is your site, really? Many of the world's largest companies have had serious security breaches. What type of security testing have you performed on your site? I am not speaking of running some sort of free or inexpensive scan, but rather having a security expert examine your site's code line by line looking for vulnerabilities.
-
How secure is your office network? Your laptop or pc's? Your internet network? Your internet browser?
I take security very seriously, more so than most of my colleagues. I have custom-built Dell laptops with Intel vPro chips for encrypted hard drives and Computrace Complete for the ability to disable and recover lost or stolen laptops. I have a paper shredder in the office, and I use it. I maintain my own dedicated server with custom scripts for added protection. These are just a few examples of basic security practices.
I choose to trust SEOmoz with my data. I would suggest that if you perform a full security audit on your business practices, SEOmoz data sharing is not likely to be even a blip on the radar compared to the numerous gaping holes many businesses share. With that said, this would be a great time for an SEOmoz staff member to share the practices used to secure our data.
-
I do... I hope they are on the up and up! I don't mind them selling it in aggregate, but hopefully things that are supposed to be confidential are confidential. I doubt they could have gotten this big by doing bad stuff.
Greg