API Limit
-
I have some problems with API access. Any time I send more than 10 requests per second I get an HTTP error:
{
  "status" : "503",
  "error_message" : "This request exceeds the limit allowed by your current plan. To increase your request limit, see: http://www.seomoz.org/api/pricing"
}
My plan is Low Volume and I continually get this 503 error. How can I solve this?
-
Hi Walter,
Thanks for writing in and sorry for any confusion.
It looks like you actually have PRO-level API access; there is no Low Volume subscription on your account. The PRO-level API is rate limited to one request every 5 seconds, and it does look like you are getting the correct access for that plan level on your account.
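Staying under a one-request-every-5-seconds limit is easiest to enforce on the client side. A minimal sketch of such a throttle follows; the 5-second interval comes from the answer above, and everything else (class name, the `fetch` placeholder) is illustrative:

```python
import time


class Throttle:
    """Spaces out API calls so no two start less than `interval` seconds apart."""

    def __init__(self, interval=5.0):
        self.interval = interval
        self._last = 0.0  # monotonic timestamp of the previous call

    def wait(self):
        """Block until at least `interval` seconds have passed since the last call."""
        elapsed = time.monotonic() - self._last
        if elapsed < self.interval:
            time.sleep(self.interval - elapsed)
        self._last = time.monotonic()


throttle = Throttle(interval=5.0)  # PRO level: one request every 5 seconds
# Usage sketch:
# for url in urls:
#     throttle.wait()
#     response = fetch(url)  # your actual API call here
```

Calling `throttle.wait()` before every request guarantees the spacing regardless of how fast the surrounding loop runs.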
If you would like to sign up for the Low Volume API subscription, you can fill out our Order Form at http://goo.gl/BTbyy and we will get you set up right away.
Let me know if you have any other questions.
Chiaryn
Related Questions
-
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites. I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting. I am a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all on the same day, making sure I've set them to include www or not for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what the options are for getting good data in light of the 50,000-page crawl limit.

Here is an example of what I mean. To see how the EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence. www.epa.gov has 560,000 pages; if I put www.epa.gov into a campaign, what happens with the site having so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1,450 pages, so if I choose this for what I track in a campaign, the crawl will cover that sub-folder completely, and I get a complete picture of this air-focused sub-folder. But (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have so much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true: I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air quality sites. Maybe those competitors count towards the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?)

Any opinions on which I should do in general in this kind of situation? The small sub-folder vs. the full humongous site, or is there some other way to go here that I'm not thinking of?
Moz Pro | | scienceisrad0 -
Mozscape API Problem With Credentials
Hello everyone 🙂

Yesterday I generated an Access ID and Secret Key to try the API, but it didn't work. The docs said it might need 5 minutes, so I gave it something like 8 hours, but it still didn't work afterwards. I used the PHP sample and checked it against the Sample Expires value: it produces the same Signature parameter as the Sample Valid API Signature, so the PHP code sample and the Sample Valid API Signature are in sync. In case you wonder: yes, I changed the PHP sample by adding the valid strings for $accessID and $secretKey. I'm a long-time programmer and I know what I'm doing with it.

I also tried the example URL from the Sample Valid API Signature a couple of times, refreshing each time to get a new Expires parameter, but it's not working either. Both the PHP sample and the Sample Valid API Signature give me this:

**{ "status" : "401", "error_message" : "Your authentication failed. Check your authentication details and try again. For more information on signed authentication, see: http://apiwiki.seomoz.org/w/page/29574176/SignedAuthentication" }**

I am new here, and if there is another place to look for support please just point it out and excuse me for wasting your time; I just couldn't see it and I really tried. Thank you for your time to read this!

Calvin
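For reference, the signed-authentication scheme that the 401 error links to builds the Signature parameter from an HMAC-SHA1 of the Access ID and expiry timestamp. A minimal Python sketch follows, assuming that scheme; the endpoint URL and the credential values are placeholders, not working examples:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote


def mozscape_signature(access_id, secret_key, expires):
    """Signature parameter: base64-encoded HMAC-SHA1 of "<AccessID>\n<Expires>",
    keyed with the secret key, then URL-encoded."""
    string_to_sign = f"{access_id}\n{expires}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return quote(base64.b64encode(digest))


# Placeholder credentials for illustration only
access_id = "member-xxxxxxxxxx"
secret_key = "your-secret-key"
expires = int(time.time()) + 300  # signature valid for the next 5 minutes

url = (
    "http://lsapi.seomoz.com/linkscape/url-metrics/example.com"
    f"?AccessID={access_id}&Expires={expires}"
    f"&Signature={mozscape_signature(access_id, secret_key, expires)}"
)
```

A 401 with a correct signature usually comes down to a mismatch in one of these three inputs, so it is worth comparing the exact bytes of the Access ID, the Expires value, and the secret key against what the sample code uses.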
Moz Pro | | Calvin50 -
Why does the CSV report on OSE limit the number of links I can download to less than it should?
I run an inbound link report for the domain www.acornstairlifts.co.uk on OpenSiteExplorer.org using the following filters: Show: all links; from: only external pages; to: pages on this root domain; show links: ungrouped. It says there are around 30,678 links for that query. I then go to the Advanced tab and run a report using the same filters, but when I click Download Report I only get 2,874. I've run this for the past 2 days and get the same result. Why?
Moz Pro | | Brett-Harland0 -
API Request Rate Problem
Hi there, A Java app of mine worked perfectly until recently, but now it gets "rate exceeds your current plan" errors from the API, even though it makes only 1 request every 10 seconds. Any idea what's wrong? Cheers, Chris
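Even with generous spacing, a client can occasionally trip a server-side limiter (clock drift, retries, overlapping responses), so a common defense is to back off and retry when a rate-limit status comes back. A minimal sketch, where `request_fn` is a stand-in for the app's actual HTTP call:

```python
import random
import time


def call_with_backoff(request_fn, max_retries=5, base_delay=5.0):
    """Retry a rate-limited call, doubling the wait (plus jitter) after each
    429/503 response; returns the last (status, body) pair."""
    for attempt in range(max_retries):
        status, body = request_fn()
        if status not in (429, 503):
            return status, body  # success or a non-rate-limit error
        delay = base_delay * (2 ** attempt) + random.uniform(0, 1)  # jitter
        time.sleep(delay)
    return status, body
```

The same pattern translates directly to Java; the key idea is doubling the delay on each rate-limit response rather than retrying at a fixed interval.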
Moz Pro | | Diderino0 -
Why does a Linkscape API request hang while extracting data?
Hi, I am using the Linkscape API to get followed and nofollowed links. I use cron to fetch data for each URL in sitemap.xml. However, while the cron job is running, the extraction hangs on some pages, which I later need to delete manually before restarting the run. Does anyone have any idea why this happens? How can I skip such pages?
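A hung cron job is usually a request with no timeout: if the server never responds, the process waits forever. Setting a per-request timeout and skipping failed pages lets the job finish without manual cleanup. A minimal sketch, assuming a plain HTTP fetch per sitemap URL (`api_url_for` and `sitemap_urls` are hypothetical names for the job's own pieces):

```python
import urllib.error
import urllib.request


def fetch_metrics(url, timeout=30):
    """Fetch one API response; return None instead of hanging or crashing."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (urllib.error.URLError, TimeoutError):
        return None  # log and skip this page; the cron job keeps going


# Usage sketch:
# for page_url in sitemap_urls:
#     data = fetch_metrics(api_url_for(page_url))
#     if data is None:
#         continue  # skipped pages can be retried on the next run
```

Recording the skipped URLs to a file makes it easy to retry just those pages later instead of re-running the whole sitemap.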
Moz Pro | | Ravi_Pathak0 -
Pages Crawled: 250 | Limit: 250
One of my campaigns says: Pages Crawled: 250 | Limit: 250. Is this because it's new, and the limit will go up to 10,000 after the crawl is complete? I have a Pro account with 4 other campaigns running, and should be allowed 50,000 pages in total.
Moz Pro | | MirandaP0 -
SEOmoz API – "Limited access is included ... PRO membership"?
Can someone expand on what you actually get with your Pro membership for the Site Intelligence API? Thanks
Moz Pro | | josey0 -
API Key
I am a Pro user and I am trying to find a way to create an SEOmoz API key but cannot find how to do...
Moz Pro | | netbuilder0