Best way to submit multiple, simultaneous URLs to SEOmoz
-
I have a Pro membership and am looking to get inbound-link data on multiple URLs. Is there a way to submit multiple URLs at once?
-
Thanks Jesse and Joel – I definitely think the API route is what I need. That said, I have no programming skills whatsoever. Before I find someone to put something together for me, though, I came across this Google Docs tool: http://moz.com/ugc/updated-tool-seomoz-api-data-for-google-docs. It does what I need (bulk URL submission, with results retrieved in spreadsheet format), but it seems limited to pulling only the data available under the free plan. I'm looking to retrieve the wider range of data available with my Pro membership – specifically, the domain and page metrics (number of linking domains, Facebook and Twitter shares, etc.). Do you know of a modification of this tool that would allow me to access that data? Or would I have to (that is, get someone to) modify the existing code to access what I need?
Thanks again for your help and patience!
Charlton
-
Hey Charlton,
Thanks for the question. Jesse's suggestion will definitely work if you're looking just for overview metrics. If you need the actual backlinks, you'd need to look at setting up an automated process using the API. It does require a good bit of programming, but it could definitely give you what you're after.
You can read more about the API here: http://apiwiki.moz.com.
I hope that helps.
Cheers,
Joel.
-
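For anyone who does end up hiring a developer, here is a minimal sketch of what Joel's API route might look like in Python. It assumes the legacy Mozscape URL Metrics endpoint and its signed-authentication scheme; the endpoint URL, the batch-POST behaviour, and the `Cols` bit-flag values are from memory and should be verified against the API wiki linked above before use:

```python
import base64
import hashlib
import hmac
import json
import time
from urllib.parse import urlencode
from urllib.request import Request, urlopen

ACCESS_ID = "member-xxxxxxxxxx"   # hypothetical placeholder credentials
SECRET_KEY = "your-secret-key"


def signed_params(access_id, secret_key, expires_in=300):
    """Build the AccessID/Expires/Signature query parameters the Mozscape
    API expects: an HMAC-SHA1 over AccessID, a newline, then Expires."""
    expires = int(time.time()) + expires_in
    message = f"{access_id}\n{expires}".encode("utf-8")
    digest = hmac.new(secret_key.encode("utf-8"), message, hashlib.sha1).digest()
    return {
        "AccessID": access_id,
        "Expires": expires,
        "Signature": base64.b64encode(digest).decode("utf-8"),
    }


def batch_url_metrics(urls, cols):
    """POST a JSON array of URLs to the url-metrics endpoint and return
    the decoded JSON response (one metrics object per URL)."""
    params = signed_params(ACCESS_ID, SECRET_KEY)
    params["Cols"] = cols
    endpoint = "http://lsapi.seomoz.com/linkscape/url-metrics/?" + urlencode(params)
    request = Request(
        endpoint,
        data=json.dumps(urls).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(request) as response:
        return json.loads(response.read())


# Illustrative bit flags (Domain Authority + Page Authority) -- verify the
# exact values in the Cols reference on the API wiki before relying on them.
COLS_DA_PA = 68719476736 + 34359738368
```

The `Cols` value is a sum of bit flags selecting which metrics come back; the Pro-level fields Charlton mentions (linking domains, social shares, and so on) are selected the same way, provided the credentials belong to a paid plan.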
Go to Open Site Explorer and directly under the URL input box is a button that says "+ Compare up to 5 sites."
That's your ticket, right there.
Related Questions
-
Looking for a Tool to Find Referring Pages of Specific URLs
Hello everyone, we are looking for a tool to find the referring pages of specific URLs. Please let me know if you know of a Moz tool, or another tool, that meets this need. Thanks.
-
Is there a way to view difficulty for related keyword suggestions?
I'd really like to view the difficulty/organic CTR/priority of suggested related keywords without having to click on each individually - is there a way to do this?
-
On Page Grader - URL not accessible
We tried to use the On Page Grader today and it came back with "URL not accessible" for all pages on our website. We previously used the On Page Grader on Friday 10th Nov for a couple of product pages with no issues. Since then, the only change we have made on the website is updating some downloadable documents. We have done this several times before and it has never affected Moz. We have not changed the page URLs, and therefore do not know why it is no longer working. The pages are working fine on the website with no issues. A link to one of the pages is below. http://www.processinstruments.co.uk/products/dissolved-oxygen-monitor/ Any help would be greatly appreciated.
-
Is there a way to export all your crawl errors for multiple Moz campaigns at once?
We're looking for a simple way to export all crawl errors for our Moz campaigns. More than likely we could use the API, but we were wondering whether there is any functionality already built into Moz for exporting all crawl errors.
-
Duplicate Content on Website with Multiple Locations
Hi there, I've spent hours reading posts on duplicate content and googling this, but I'm still not sure what to do. We created a site that has two WP installs for a company with two different locations – the landing page is website.com and links to WP install 1 (website.com/city1) and WP install 2 (website.com/city2). They specifically wanted two different sites so each could be managed by staff at either location. However, some of the pages have the same content – i.e. services, policies, etc. – so all of those are showing errors for duplicate content. All pages have different city-specific URLs and meta descriptions, but that clearly doesn't help. We can't redirect the "duplicate" pages because then it would take the user to the other city's site. Is there anything we can do? Is this going to significantly damage rankings? Thanks kindly for any help you can provide.
-
Rank Tracker - URLs are Different when Exporting to CSV
When exporting to CSV in Rank Tracker, many of the URLs are reduced to the root domain instead of the full, ranking URL as seen within the tool. Right now the URLs must be copied/pasted or manually edited afterwards in the CSV. However, it doesn't happen to every item. A few of them do show the correct URL after being exported. Any idea if this is a bug or just an odd thing the export does?
-
Ajax #! URL support?
Hi Moz, my site currently follows the convention outlined here: https://support.google.com/webmasters/answer/174992?hl=en. Since our pages are generated via Ajax, we are set up so that bots replace the #! in a URL with ?_escaped_fragment_= and are directed to cached versions of the Ajax-generated content. For example, if the bot sees this URL: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 it will instead access this page: http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73 in which case my server serves the cached HTML instead of the live page. This is all per Google's direction and is indexing fine. However, the Moz bot does not do this. It seems like a fairly straightforward feature to support: rather than ignoring the hash, you check whether it is a #! and then spider the URL with the fragment replaced by ?_escaped_fragment_=. Our server does the rest. If this is something Moz plans on supporting in the future, I would love to know; if there is other information available, that would be great. Also, pushState is not practical for everyone due to limited browser support, etc. Thanks, Dustin
Update: I am editing my question because it won't let me respond to my own question – it says I need to sign up for Moz Analytics. I was signed up for Moz Analytics; now I am not? I responded to my invitation weeks ago. Anyway, you are misunderstanding how this process works. There is no sitemap involved. The bot reads this URL on the page: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 and when it is ready to spider the page for content, it spiders this URL instead: http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73. The server does the rest; we are simply asking Roger to recognize the #! format and replace it with ?_escaped_fragment_=. I obviously do not know how Roger is coded, but it is a simple string replacement. Thanks.
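The rewrite being requested of Roger really is a one-line string replacement. A minimal sketch in Python (hypothetical function name; note that Google's scheme spells the parameter `_escaped_fragment_`, and strictly speaking the fragment value should also be percent-encoded):

```python
def escaped_fragment_url(url):
    """Map an AJAX-crawling "#!" URL to the form a crawler should fetch,
    per Google's (now-deprecated) AJAX crawling scheme."""
    if "#!" not in url:
        return url  # nothing to rewrite
    base, fragment = url.split("#!", 1)
    # Append with "&" if the base URL already has a query string.
    separator = "&" if "?" in base else "?"
    # Strictly, `fragment` should also be percent-encoded; omitted for brevity.
    return f"{base}{separator}_escaped_fragment_={fragment}"


# Example from the question:
# escaped_fragment_url("http://www.discoverymap.com/#!/California/Map-of-Carmel/73")
# → "http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73"
```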