How do you guys/gals define a 'row'?
-
I have a question about calls to the API and how these are measured. I noticed that the URL Metrics calls allow a batch of multiple URLs.
We're in a position where we need link data for multiple websites; can we request a single row of data with link information for multiple URLs, or do we need to request a unique row for each URL?
-
Hi Stephen,
If you imported the information you received from a request to our API into a spreadsheet, you would have rows of information. The number of rows depends on the request you make. If you ask for 200 links from our Top Back Links API, then you’ll get 200 rows of information about backlinks. If you submit a single URL to our Page Metrics API, then you’ll get one row of information back from the API. That row of information would include page metrics about the URL. If you do a batch request to the Page Metrics API and submit 5,000 URLs, then you’ll receive 5,000 rows of information about the URLs you submitted. The number of rows you get back from your request depends on which API you’re using and the amount of information you ask for in your request. You only pay for rows of information you actually receive.
Thanks,
Joel.
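To make the row accounting concrete, here is a minimal sketch of how a batch response maps to rows. The JSON field names below are illustrative assumptions, not the exact Mozscape response schema: the point is only that a batch Page Metrics request returns one result object per URL submitted, so rows received (and billed) equals URLs submitted.

```python
import json

# Hypothetical batch response for three URLs submitted to a Page Metrics
# style endpoint: one object comes back per URL, i.e. one "row" each.
# (Field names are illustrative, not the exact Mozscape schema.)
raw = json.dumps([
    {"url": "example.com/a", "page_authority": 41},
    {"url": "example.com/b", "page_authority": 37},
    {"url": "example.com/c", "page_authority": 52},
])

rows = json.loads(raw)  # each list element becomes one spreadsheet row
print(len(rows))        # rows received == URLs submitted in the batch
```

Importing `rows` into a spreadsheet, one list element per line, gives exactly the row count described above.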
Related Questions
-
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing. We have a massive (5 million) content silo that is the basis for our long tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross link to other content silos using the same parent/field categorizations). We don't anticipate, nor expect, top level category pages to receive organic traffic - most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X"; others are competing and spending a lot in that area (head). The intent/purpose of the site structure/taxonomy is to more easily enable bots/crawlers to get deeper into our content silos. We've built the pages for humans, but included link structure/taxonomy to assist crawlers. So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. With our most popular product categories, there might be 100,000s of products in one category. My top level hub page for a category looks like www.mysite/categoryA, and the page shows 50 products and then pagination from 1-1,000+. Currently we're using rel=next for pagination, and for pages like www.mysite/categoryA?page=6 we make each page reference itself as canonical (not the first/top page www.mysite/categoryA). Our goal is deep crawl/indexation of our silo. I use Screaming Frog and the SEOmoz campaign crawl to sample (the site takes a week+ to fully crawl), and with each of these tools it "looks" like crawlers have gotten a bit "bogged down" in large categories with tons of pagination. For example, rather than crawl multiple categories or fields to get to multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category.
I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories. I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all" - there's just too much, so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep into pagination within one category versus getting to more top level categories? Thanks!
Moz Pro | DrewProZ
How-to question: getting ranked on page one for a keyword when you compete with bigger websites/companies/stores
Can David beat Goliath? I work with small businesses with top products that are up against big brands and their online presence. If I am working with them to create content that meets the needs of all their stakeholders/customers/prospects to generate revenue, I wonder if keyword targeting with content can really pay off to get them a page one, #1 position ranking. So I ask you this: how do you create a story for a small online store that can get ranked on page one for a keyword when you compete with bigger websites (or sites with higher domain authority)? I don't need all the basics; I'm just looking for a key insight or tip that you have found or heard is working for a David to beat a Goliath (and hold their position once they get highly ranked). We are up against sites - for viable keywords - who have higher domain authority and in some cases more content or backlinks. Also, I've noticed that in situations when I do get to page one and I'm in position 7, Moz Analytics shows low to no traffic coming from it. Yikes, what do I do to improve that? These are top keywords.
Moz Pro | brandawakening
It's been over a month and rogerbot hasn't crawled the entire website yet. Any ideas?
Rogerbot stopped crawling the website at 308 pages this past week and has not crawled the full site of over 1,000 pages. Any ideas on what I can do to get this fixed and crawling again?
Moz Pro | TejaswiNaidu
Issue Using Mozscape with SEOGadget's Links Excel Extension
Howdy everyone! Since I saw the MozCon presentation by Richard Baxter of SEOGadget, I've been completely amazed by the Links extension for Excel. I tried this morning to give it a try. Unfortunately, I cannot get it working, and I was hoping that someone here could help me out. I know it's out of the usual "realm" of questions, but I figure it's worth a try 🙂 I successfully installed the add-on and entered my Access ID (as "member-xxxxxxxxxx"; "member" is in my ID) and the secret key. I then downloaded the "OSE" Excel spreadsheet just to make sure I get all the calls right (as I know the doc works). Once I do this and enter anything in, I get: "an error occurred: the remote server returned an error: 401 Unauthorized". I then went into the config file (as the setup doc suggests) and disabled run-time caching (or at least set "SEOMOZ_API_use_cache_YN":"N") and raised the SEOMOZ_API_timeout to 100000 from 60000. I have also tried uninstalling and reinstalling the add-in, along with regenerating the Mozscape API key. Anyway, I'm not an Excel wiz and would appreciate any help that I could get on this. I'm also about to experiment with SEO Tools for Excel if anyone wants to check that out. Thanks in advance, Zach Russell
Moz Pro | Zachary_Russell
When's the next Webinar?
I really love the webinars! I listen to them on long walks occasionally, but I haven't seen one in May or June. When will the next one be? Thanks! I apologize if this should have gone to the help desk...
Moz Pro | WilliamBay
I have a Rel Canonical "notice" in my Crawl Diagnostics report. I'm presuming that means that the spider has detected a rel canonical tag and it is working as opposed to warning about an issue, is this correct?
I know this seems like a really dumb question, but the site I'm working on is a BigCommerce one and I've been concerned about canonicalisation issues prior to receiving this report (I'm an SEOmoz Pro newbie also!), and I just want to be clear that I am reading this notice correctly. I presume this means that the site crawl has detected the rel canonical tag on these pages and it is working correctly. Is this correct? Any input is much appreciated. Thanks
Moz Pro | seanpearse
How can I pull OSE data for multiple URLs at once?
I'm putting together a link prospecting CSV (very basic/simple). I'm doing my own manual hunting for link prospects and compiling them in a list in that Excel doc. Once I'm done with that, I want to pull OSE data on a larger scale (MozRank, PA, DA, etc.). I know Niels Bosma's SEO Tools for Excel exists, but I have a Mac, and it's not available for that. And I can't really pay for any of the big tools right now (e.g. BuzzStream). Does anybody know of a good tool or way of going about pulling this data in a way that will save time, as opposed to pulling data for each URL one by one? ANY tips would be GREATLY appreciated.
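Since the URL Metrics API accepts batches (as discussed in the main answer above), one approach is to split the prospect list into chunks and send each chunk as a single request. This is a sketch under assumptions: the batch size, endpoint path, and auth parameters in the comment are taken from the classic Mozscape API shape and should be verified against the current docs before use.

```python
def chunk(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Hypothetical prospect list compiled in the spreadsheet.
prospects = ["site-a.com", "site-b.com", "site-c.com",
             "site-d.com", "site-e.com"]

for batch in chunk(prospects, 2):
    # Each batch would be POSTed as a JSON array to the URL Metrics
    # endpoint; the path and auth params below are assumptions, so
    # check the current Mozscape/Links API docs before relying on them:
    #   requests.post("https://lsapi.seomoz.com/linkscape/url-metrics/",
    #                 params={"AccessID": "...", "Expires": "...",
    #                         "Signature": "..."},
    #                 data=json.dumps(batch))
    print(batch)
```

Each response then comes back with one row of metrics per URL in the batch, which can be appended straight into the CSV.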
Moz Pro | MichaelWeisbaum
What's the best practice for tracking broad match traffic?
We use the Pro Web App for our keyword/ranking reporting. First, am I correct in assuming that this uses exact match to report the traffic? That being the case, is there a way in Moz (beyond entering endless permutations of our targeted keywords) to track traffic we're getting out of broad match searches? For example, say we optimized for and are tracking the keyword "lambada dancing llamas" for a company called Llarry's Llamas. Let's say that Moz reports little to no traffic for that keyword, but Google Analytics indicates that we've gotten traffic from "larrys dancing llamas", "dancing llamas", "llarry llamas" and so on. So, obviously we're getting broad match traffic out of the lambada llamas, just not a lot of exact match, which is all that the Moz app shows. How do other people track this kind of traffic?
Moz Pro | MackenzieFogelson