Mozscape API Batching URLs Limit
-
Hi all, I found an example of batching URLs using PHP.
What is the maximum number of URLs I can add to a single batch?
-
Yes, it's weird. I'm currently on the Pro plan, and I'm querying 200 URLs at a time with no issues. The only limitation is time: I have to wait 10 seconds after each query before performing another, or the API responds with:
"This request exceeds the limit allowed by your current plan."
Thank you Zach, have a good day!
-
As far as I know, the limit was 10, and the article on the API Wiki says the same. I do know, as a premium subscriber, that the number of batch requests per second is 200, however.
Quote from the API Wiki:
"You can submit up to 10 URLs for every batch request. Larger batch requests will return an HTTP 400 response."
I'd just be careful, because even if you're not getting a 400 response, they may end up throttling you.
Hope this helps!
Zach -
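To make the limits discussed in this thread concrete (at most 10 URLs per batch, with a pause between requests), here is a minimal client-side sketch in Python. The pause length and the `fetch_batch` callback are placeholders based on this thread, not official Moz guidance:

```python
import time

BATCH_SIZE = 10       # per the API Wiki: more than 10 URLs returns HTTP 400
PAUSE_SECONDS = 10    # the wait between queries mentioned in the thread

def chunk(urls, size=BATCH_SIZE):
    """Split a URL list into batches no larger than `size`."""
    return [urls[i:i + size] for i in range(0, len(urls), size)]

def fetch_all(urls, fetch_batch, pause=PAUSE_SECONDS):
    """Run fetch_batch() over each chunk, pausing between requests.

    `fetch_batch` stands in for whatever function performs the actual
    signed POST to the Mozscape url-metrics endpoint.
    """
    results = []
    batches = chunk(urls)
    for i, batch in enumerate(batches):
        results.extend(fetch_batch(batch))
        if i < len(batches) - 1:   # no need to wait after the last batch
            time.sleep(pause)
    return results
```

The same chunk-and-pause structure applies whatever batch size your plan actually allows.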
Thanks Zachary. I ran a test adding a lot of URLs, and SEOmoz accepted 200 or fewer URLs at a time. So which should I use: the 10-URL limit or the 200?
Currently I'm able to get data for 200 URLs at a time, which is great for me!
-
SEOmoz recommends batch requests of 10 URLs. According to their API wiki (http://apiwiki.seomoz.org/url-metrics), any batch request larger than this will return a 400 error from the server.
Hope that helps!
Zach
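Before any batch can be sent, each request to the url-metrics endpoint has to carry signed-authentication parameters. Here is a rough sketch of that step, based on the legacy Mozscape signed-auth scheme (an HMAC-SHA1 over the access ID and an expiry timestamp, base64-encoded); the parameter names and scheme are assumptions from the old docs, so verify against the current API documentation before relying on it:

```python
import base64
import hashlib
import hmac
import time

def mozscape_signature(access_id: str, secret_key: str, expires: int) -> str:
    """Base64 HMAC-SHA1 over "accessID\\nexpires" (assumed legacy scheme)."""
    to_sign = f"{access_id}\n{expires}".encode("utf-8")
    digest = hmac.new(secret_key.encode("utf-8"), to_sign, hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

def signed_params(access_id: str, secret_key: str, ttl: int = 300) -> dict:
    """Query parameters to append to a url-metrics request."""
    expires = int(time.time()) + ttl
    return {
        "AccessID": access_id,
        "Expires": expires,
        "Signature": mozscape_signature(access_id, secret_key, expires),
    }
```

The returned dict would be URL-encoded onto the request; the 10-URL batch itself goes in the POST body as a JSON array.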
Related Questions
-
Facebook URLs, Anchor Text
I have a client that is considering a Facebook URL change. For ease of explanation, let's say their current URL is facebook.com/Company123. I've Googled their current Facebook URL and found a dozen or so websites that include the text "facebook.com/Company123". But these results don't include websites that have anchor text of, for example, "Facebook" with a link pointing to facebook.com/Company123. Has anybody had success tracking down any/all websites that point to a specific Facebook URL? I've tried Open Site Explorer, OpenLinkProfiler, RankSignals, and SEO SpyGlass to no avail. Thank you!
Moz Pro | OMTAnno
-
What to do with a site of >50,000 pages vs. the crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites; I want to see how these agencies are doing compared to what the public searches for on the technical topics and social issues the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting -- I am a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all on the same day, making sure I've set them to include www or not for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what the options are for getting good data in light of the 50,000-page crawl limit.

Here is an example of what I mean. To see how the EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence. www.epa.gov has 560,000 pages -- if I put in www.epa.gov for a campaign, what happens with the site having so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1,450 pages, so if I choose this for a campaign, the crawl will cover that sub-folder completely and I get a complete picture of this air-focused sub-folder... but (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have so much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true -- I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air-quality sites -- and maybe those competitors count towards the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?)

Any opinions on what I should do in general in this kind of situation? The small sub-folder vs. the full humongous site, or is there some other way to go here that I'm not thinking of?
Moz Pro | scienceisrad
-
Getting a "URL Unaccessible" error in the On-Page Grader
I'm optimizing a site for a financial advisor; here is the site: http://www.mattkeenancfp.com. I am getting the message "that URL is unaccessible" when I try to use the On-Page Grader. This is an emerald website too; I'm not sure if that has any effect on anything, though.
Moz Pro | ryanbilak
-
URL, Subdomain and Root Domain Structure
Various URL structures:
mydomain.co.uk
www.mydomain.co.uk
http://www.mydomain.co.uk
http://mydomain.co.uk
mydomain.co.uk/index.html
www.mydomain.co.uk/index.html
http://www.mydomain.co.uk/index.html
http://mydomain.co.uk/index.html

.htaccess file (index rewrite):
RewriteRule ^index\.(htm|html|php)$ http://www.mydomain.co.uk/ [R=301,L]
RewriteRule ^(.*)/index\.(htm|html|php)$ http://www.mydomain.co.uk/$1/ [R=301,L]
RewriteCond %{HTTP_HOST} ^mydomain\.co\.uk$
RewriteRule ^(.*)$ http://www.mydomain.co.uk/$1 [R=301,L]

Google WMT setting (Configuration | Settings):
Preferred domain: radio check on "don't set a preferred domain"

SEOmoz Open Site Explorer:
mydomain.co.uk - (301 Redirect) [No Data] PA 38, DA 30
http://www.mydomain.co.uk/index.html - (301 Redirect) [No Data] PA 23, DA 30

Majestic Site Explorer:
The number of referring domains and external backlinks varies between the following instances:
URL: http://www.mydomain.co.uk
Subdomain: www.mydomain.co.uk
Root domain: mydomain.co.uk

Question:
I have set up my .htaccess file to rewrite the various URL structures above to www.mydomain.co.uk. However, when I view metrics in Majestic SEO, the URL / subdomain / root domain all differ. Why is this happening? Is this harming my site? What is common practice when defining URL structure? Any other quality advice and implementation guidance would be much appreciated.
Regards, Mark
Moz Pro | Mark_Ch
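The rewrite rules in the question collapse every URL variant to a single canonical form. The same normalization can be sketched in Python, purely for illustration (the host name is the hypothetical one from the question):

```python
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "www.mydomain.co.uk"  # hypothetical host from the question

def canonicalize(url: str) -> str:
    """Mirror the .htaccess rules: force the www host and
    strip a trailing index.htm/.html/.php segment."""
    if "://" not in url:
        url = "http://" + url
    scheme, host, path, query, frag = urlsplit(url)
    if host == CANONICAL_HOST.replace("www.", "", 1):
        host = CANONICAL_HOST
    parts = path.split("/")
    if parts and parts[-1] in ("index.htm", "index.html", "index.php"):
        parts[-1] = ""
        path = "/".join(parts) or "/"
    if not path:
        path = "/"
    return urlunsplit((scheme or "http", host, path, query, frag))
```

For example, `canonicalize("mydomain.co.uk/index.html")` yields `http://www.mydomain.co.uk/`, the same target the 301 rules produce.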
Canonical URLs and Duplicate Page Content
My website (a doctor directory) is getting a lot of duplicate page content and duplicate page title warnings from SEOmoz. The pages getting the warnings are doctor profiles, which can be accessed at three different URLs. The problem is that this should be handled by the canonical tag on the pages. For example, all three of these open the same page:
https://www.arzttermine.de/arzt/dr-sara-danesh/
https://www.arzttermine.de/arzt/dr-sara-danesh/gkv
https://www.arzttermine.de/arzt/dr-sara-danesh/pkv
Here's our canonical tag (on line 34):
<link rel="canonical" href="http://www.arzttermine.de/arzt/dr-sara-danesh" />
So why is SEOmoz crawling the page? We are getting hundreds of errors from this -- and yet Google doesn't have any of the duplicate URLs indexed...
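One observation the thread itself doesn't confirm: the canonical `href` in the question uses `http` and no trailing slash, while the duplicate URLs are `https` with a trailing slash. A canonical tag that exactly matches the preferred variant would look like this (shown only as a hypothetical fix):

```html
<!-- Hypothetical: canonical pointing at the exact https URL, trailing slash included -->
<link rel="canonical" href="https://www.arzttermine.de/arzt/dr-sara-danesh/" />
```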
Moz Pro | thomashillard
-
Garbled URLs in Private Messages
Every time I try to put a URL in a private message, it gets garbled with extra characters and then won't go to the right place. For example:
http://www.facebook.com/pages/Mariah-Carle-Photography
becomes:
http://www.facebook.com/pages/Mariah58973jhsdfui-Carle%8594743Photography
OK, after that test I deliberately garbled a URL and it STILL worked in the open forums...
Moz Pro | Mcarle
-
Crawl report flags URLs with duplicate content, but that's not the case
Hi guys!
Some hours ago I received my crawl report and noticed several records listed under "URLs with Duplicate Page Content (up to 5)", so I went and opened those URLs one by one. None of them really had duplicate content, but I have a concern: the website is a product showcase, and many articles are just images with an href behind them. Many of those articles use the same images, so maybe that's why the SEOmoz crawler raises the duplicate-content flag. I wonder if Google has a problem with that too. See for yourself how it looks:
http://by.vg/NJ97y
http://by.vg/BQypE
Those two URLs are flagged as duplicates... please mind the language (Greek) and try to focus on the URLs and content. PS: my example is simplified just for the purpose of my question.
Moz Pro | MakMour
-
SEOmoz API? – "Limited access is included ... PRO membership"?
Can someone expand on what you actually get with a Pro membership on the Site Intelligence API (see the API page)? Thanks
Moz Pro | josey