Metrics from Linkscape - DJ Passed, URL mozRank Passed and funny numbers
-
Hello,
Hoping someone can help me understand the difference between Domain Juice Passed and some interesting numbers found in the exported CSV file.
I ran the Advanced Link Intelligence Report and am focusing on the Links to Domain metrics. The report appears to be sorted by mozRank Passed, but next to each link we are shown the DJ Passed instead. Why is that?
My confusion is compounded by the fact that when I export this report to CSV, it no longer includes the DJ Passed numbers but shows URL mozRank Passed instead.
For Example, on the web version of the Advanced Link Intelligence Report the top link is:
http://www.holdenouterwear.com/shop.php with mozRank: 5.56 mozTrust: 5.95 and DJ Passed: 4.49
In the CSV file we don't get DJ Passed, but instead get a URL mozRank Passed of 0.00051.
Looking further through the CSV file, some links have a URL mozRank Passed of 4.00E-05.
Does anyone have a clear explanation of why DJ Passed is not in the CSV file, how mozRank Passed is calculated, and what 4.00E-05 means?
Thank you.
-
You are operating under the assumption of the random surfer model, which weights all links equally. Under the reasonable surfer model, links are weighted by the likelihood that they will be clicked, so internal links can absorb a large share of that mozRank. There is also no guarantee that all 5.56 mozRank is passed in some form or another: each link passes a portion of it, but adding up every link's share won't necessarily total 5.56. That's just the most you could get.
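A rough sketch of the difference between the two models. The click weights here are entirely made up for illustration; the actual weighting Moz (or Google) uses is not public.

```python
def random_surfer_share(num_links):
    """Random surfer model: every link on the page gets an equal share."""
    return 1.0 / num_links

def reasonable_surfer_shares(click_weights):
    """Reasonable surfer model: each link's share is proportional to its
    estimated likelihood of being clicked."""
    total = sum(click_weights)
    return [w / total for w in click_weights]

# A hypothetical page with 4 links: a prominent internal nav link,
# two body links, and a footer link.
weights = [0.6, 0.25, 0.1, 0.05]
equal = random_surfer_share(len(weights))     # 0.25 for every link
weighted = reasonable_surfer_shares(weights)  # prominent links get far more
```

Under the weighted model, a buried external link can end up with a tiny fraction of the page's juice even when the page itself has a high mozRank.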
-
I'm having a hard time making this math add up.
Take the link from http://www.holdenouterwear.com/shop.php as an example.
It has a mozRank of 5.56, and the URL mozRank Passed to my site is 0.00051.
So does this mean there are 10,901 links on this page? Clearly there are not.
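Spelled out, that estimate comes from assuming an equal split among all links, which is exactly the assumption the answer above says doesn't hold:

```python
# Back-of-the-envelope check: if mozRank were split equally among N links,
# how many links would explain a 0.00051 pass from a 5.56 page?
moz_rank = 5.56
passed = 0.00051
implied_links = moz_rank / passed  # roughly 10,900 links
```

Under unequal weighting, a small pass per link doesn't imply a huge link count.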
-
Thanks Daniel. Great explanation.
-
I'm not sure about the DJ Passed issue, but 4.00E-05 is scientific notation: the E-05 means the decimal point is shifted five places to the left, giving 0.00004. The number is too small to display comfortably, so it's notated as 4.00E-05. Likewise, 4.67E-08 would be 0.0000000467. Hopefully that makes sense.
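You can check the conversion yourself; Python (like most tools, including spreadsheets) parses the E-notation directly:

```python
# Scientific notation: "E-05" means "times ten to the minus five".
for s in ["4.00E-05", "4.67E-08", "5.1E-04"]:
    # Expand to fixed-point and trim trailing zeros for readability.
    print(s, "=", format(float(s), ".10f").rstrip("0"))
```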
How mozRank Passed is calculated is based on the PageRank algorithm, which says each link on a page passes a share of that page's juice. So if the mozRank of the page is 5.56, you look at how many links are on the page, and each one passes a portion of that 5.56.
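A toy version of that equal-split idea. The damping factor follows the PageRank convention; whether and how Moz applies one is an assumption here, and the real Linkscape calculation weights links unequally, as discussed above:

```python
def moz_rank_passed(page_moz_rank, num_links, damping=0.85):
    """Juice passed per link if every link got an equal share.
    Hypothetical model only; the actual Linkscape formula is proprietary."""
    return damping * page_moz_rank / num_links

# A 5.56 page with 200 links would pass roughly 0.024 per link in this model.
per_link = moz_rank_passed(5.56, 200)
```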