What is the quickest way to get OSE data for many URLs all at once?
-
I have over 400 URLs in a spreadsheet, and I would like to get Open Site Explorer data (Domain Authority, Page Authority, MozTrust, etc.) for each URL.
Would I use the Linkscape API to do this quickly (i.e., not manually entering every single site into OSE)? Or is there something in OSE, or another tool, that I am overlooking?
And whatever the best process is, can you give a brief overview?
Thanks!!
-Dan
-
Thanks John!
FYI, for anyone else looking for a solution, Mike King also pointed me towards this: http://www.tomanthony.co.uk/blog/seomoz-linkscape-api-with-google-docs/
-
As I just told Dan on Twitter, I built out a spreadsheet a few months ago that is linked to in this post: http://bit.ly/mc0Q9v.
You'll have to use your own free Moz API key and hack the sheet a little bit, but it uses the Moz and Twitter APIs to pull the Moz metrics (DA, PA, etc.) into a Google Doc.
Good luck Dan and anyone else who reads this!
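For anyone who'd rather script this directly, here is a rough sketch of batching URLs against the legacy Linkscape/Mozscape URL Metrics endpoint with nothing but the Python standard library. The endpoint URL, the `Cols` bit flags for DA/PA, the batch size, and the signed-authentication scheme are taken from the old API docs as best I recall them, so treat them as assumptions and check the current documentation before relying on this:

```python
import base64
import hashlib
import hmac
import json
import time
import urllib.parse
import urllib.request

# Legacy Mozscape endpoint and Cols bit flags (assumptions; verify against current docs)
API_ENDPOINT = "https://lsapi.seomoz.com/linkscape/url-metrics/"
COL_PAGE_AUTHORITY = 34359738368
COL_DOMAIN_AUTHORITY = 68719476736


def mozscape_signature(access_id: str, secret_key: str, expires: int) -> str:
    """Base64-encoded HMAC-SHA1 over 'accessID\\nexpires' (the legacy auth scheme)."""
    message = f"{access_id}\n{expires}".encode()
    digest = hmac.new(secret_key.encode(), message, hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


def batch_url_metrics(urls, access_id, secret_key, batch_size=10):
    """POST the URL list in batches and yield (url, metrics-dict) pairs."""
    for start in range(0, len(urls), batch_size):
        batch = urls[start:start + batch_size]
        expires = int(time.time()) + 300  # signature valid for five minutes
        query = urllib.parse.urlencode({
            "Cols": COL_PAGE_AUTHORITY + COL_DOMAIN_AUTHORITY,
            "AccessID": access_id,
            "Expires": expires,
            "Signature": mozscape_signature(access_id, secret_key, expires),
        })
        request = urllib.request.Request(
            f"{API_ENDPOINT}?{query}",
            data=json.dumps(batch).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            for url, metrics in zip(batch, json.load(response)):
                yield url, metrics
```

With 400+ URLs you would loop `batch_url_metrics` over the spreadsheet column and write the results back out; the free tier has historically been rate-limited (on the order of one request every ten seconds), so add a `time.sleep` between batches rather than hammering the endpoint.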
Related Questions
-
Moz vs. Google data conflict?
Hi there, I am doing an SEO site audit for a client (I can't give away the domain), and here is the problem: when performing site:domain.com on Google, 13,800 pages were found. This seems far too high compared with the count from Integrity (a broken-link checker), which gave me 1,291 links. I dug deeper into the Google results and saw hundreds (maybe thousands) of pages that are blocked by robots.txt. So I thought, OK, that explains it: thousands of pages can't be crawled by the search engines. Here is the big BUT, though: when I check my Moz crawl (see attachment), no pages are reported as blocked from the search engines, and only 23 duplicates are recorded. Is Moz not properly crawling the 13,800 results that Google finds, or is some magical phenomenon happening here? I am really confused, which is why I need some help! Thank you, guys!
Moz Pro | Ideas-Money-Art
-
Am I the only one getting misleading titles in OSE?
I am trying to locate directories among my competitor's links using OSE. Here is the workflow I am using: filter all results to external sites only, grouped by site, linking to any page on the domain; then export the results to CSV. My competitor is in the web design industry, so I filter the titles of the linking pages for titles containing "directory". But when I click on the link for "Windsor Internet Web Design Hosting Ontario Canada Directory", I get a page with the title "Kitchen & Bathroom Showroom | London Ontario | Bathroom Vanity Showroom". Are the results really this misleading, or am I doing something wrong here? Any insight or help would be greatly appreciated.
Moz Pro | tdlabs
-
How Old is OSE link data?
I ran an anchor text report for my client today, which shows that their site has some incoming comment-spam links using totally unrelated phrases (pharma products). However, when looking for the live links, the linking pages no longer contain them. Maybe the webmasters removed these, but I can't track down a single one... how old is this data? Thanks!
Moz Pro | JMagary
-
Difference between SEOmoz Competitive Domain Analysis and OSE link data?
Any reason these numbers are so different?

SEOmoz Competitive Domain Analysis:
Followed Linking Root Domains: 288
Total Linking Root Domains: 338

Open Site Explorer:
Linking Root Domains: 1270

Moz Pro | fibers
-
URLs getting redirected to double http:// URLs
The "Notices" section under "Crawl Diagnostics" shows 435 issues on my website. I checked a few of the URLs to verify this and found that most of these pages are working perfectly. For instance, the report says that http://policycomplaints.com/about redirects to http://http://policycomplaints.com/about/, and that http://policycomplaints.com/aegon-religare/mis-selling-of-policy-by-aegon-religare/ redirects to http://http://policycomplaints.com/aegon-religare/mis-selling-of-policy-by-aegon-religare/. However, when I open these pages they seem to work perfectly; I didn't find them being redirected anywhere else. So, per the report, all 435 http:// URLs are being redirected to http://http:// versions, which in reality is not true, because all the http:// URLs work fine. Is this a problem with the SEOmoz software? If not, what is the reason for these issues and how can I address them? Do let me know if any further information is required. Thanks.
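As a quick sanity check on reports like this, you can fetch each flagged URL without following redirects and inspect the raw Location header the server actually sends. This is a stdlib-only sketch (the doubled-scheme pattern it looks for is the symptom described above, typically caused by a rewrite rule that prepends a scheme to an already absolute URL):

```python
import urllib.error
import urllib.request


class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following redirects so the raw Location header is visible."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None


def fetch_redirect_target(url):
    """Return (status_code, Location header or None) for a single request."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        response = opener.open(url, timeout=10)
        return response.getcode(), None  # no redirect occurred
    except urllib.error.HTTPError as err:
        # With redirects suppressed, a 3xx surfaces here as an HTTPError
        return err.code, err.headers.get("Location")


def is_malformed(location):
    # A doubled scheme ("http://http://...") points at a broken rewrite rule
    return location is not None and location.startswith(
        ("http://http://", "https://http://", "https://https://")
    )
```

Running `fetch_redirect_target` over the 435 flagged URLs and filtering with `is_malformed` would show whether the server really emits doubled-scheme Location headers (some browsers silently repair them, which could explain why the pages appear to work) or whether the crawler is misreporting.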
Moz Pro | unknownID1
-
Where do these URLs come from?! (Indexation issues)
We have an international webshop with languages in the URLs. Our URLs are set up as follows: http://thermalunderwear.eu/eng/category/product. We know there is some kind of strange redirect problem causing issues with our indexation; that technical issue should be fixed soon. But whether it is the cause of these other strange problems, I do not know. I'd be happy with any help/advice/tips.

1. The SEOmoz site crawler starts at http://thermalunderwear.eu. This does not yet redirect to http://thermalunderwear.eu/eng like we want it to, but all the links on the page do include the default language code, i.e. they are of the form http://thermalunderwear.eu/eng/category. However, apart from those URLs, the crawler finds many URLs of the form http://thermalunderwear.eu/category/product, without the language variable. Where it gets these I do not know, and since these URLs don't exist, the webshop simply shows the homepage, so they all trigger 50+ duplicate titles/content warnings. Why oh why?

2. If I do a Google search for indexed URLs with English as the language, I get many results formatted like this:

Coldpruf Enthusiast mens thermal shirt - Thermal wear for men ...
thermalunderwear.eu/eng/men/coldpruf-enthusiast-mens-thermal-shirt
170+ items – Fine-ribbed longsleeve thermal shirt men from Enthusiast ... {$SCRIPT_NAME} eng/men/coldpruf-enthusiast-mens-the {$ajax_url} http://thermalunderwear.eu/ajax

What are those variables doing there? It looks like Google is picking up something from our Smarty debug console, which is hidden but still active in the source code, and also the AJAX URL, which is in a completely different location. What is Google trying to show here?
Moz Pro | DocdataCommerce
-
Getting PA & DA off of a list of links
I have a list of links, and I want to get PA and DA for each individual link. Can this be done some way other than one at a time? I've heard it can be done with Excel using the API, but I don't know the specifics. Help would be appreciated!
Moz Pro | Fergclaw
-
Getting SEOMoz reports to ignore certain parameters
I want the SEOmoz reports to ignore duplicate content caused by link-specific parameters being added to URLs (the same page is reachable from different pages, with marker parameters identifying the source page appended to the URL). I can get Google and Bing Webmaster Tools to ignore parameters I specify; I need the SEOmoz tools to do it too!
Moz Pro | SEO-Enlighten