Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12 December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.
Unsolved Different DR on Moz vs SEMrush
-
My domain shows a different backlink profile on Moz than on SEMrush. I don't understand which one is accurate. My domain is an AI jobs portal.
-
-
Different Methodologies: Moz and SEMrush use different algorithms to calculate DR.
-
Data Sources: Moz and SEMrush may pull data from different sources for backlink analysis.
-
Frequency of Updates: Variations in how often Moz and SEMrush update their databases can lead to differences in DR scores.
-
Scope of Analysis: The scope of websites analyzed by Moz and SEMrush may vary, impacting DR scores.
-
Algorithm Changes: Updates to algorithms used by Moz or SEMrush can result in changes to DR scores.
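As a toy illustration of the first point (this is my own sketch with made-up data, not either vendor's actual formula), two different scoring functions applied to the same hypothetical backlink data yield different "authority" numbers, which is the core reason the two tools disagree:

```python
import math

# Hypothetical backlink data: referring domain -> number of links from it
backlinks = {"example-blog.com": 12, "news-site.com": 3, "directory.com": 40}

# Toy metric A: log-scale the total link count onto a 0-100 range
total_links = sum(backlinks.values())
score_a = min(100, round(25 * math.log10(1 + total_links)))

# Toy metric B: count only unique referring domains, weighted heavily
score_b = min(100, len(backlinks) * 10)

print(score_a, score_b)  # two different scores from identical data
```

Neither number is "wrong"; they simply measure different things, which is why comparing a Moz score directly against a SEMrush score is not meaningful.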
-
-
Hey,
You're right on this point. I also see stats on Moz for my website, and they're totally different from SEMrush. I want to ask about DA stats: which platform is more reliable as a DA checker, Moz or SEMrush, for my CapCut APK website? Please let me know soon!
-
@mandoalanhukam, thank you. How can I increase the DA and traffic of my website?
-
Discrepancies in backlink profiles between SEO tools like Moz and SEMrush are not uncommon. Each tool may use different algorithms and sources to crawl and index backlinks, leading to variations in the reported data. Working through the factors listed above (methodology, data sources, update frequency, crawl scope) is the best way to understand and address the differences in your domain's backlink profile.
-
@LucyEmma Because Moz and SEMrush each have their own criteria.
-
@mcafeeonline thank you
-
@mcafeeonline Hi!
Yes, I face the same problem: Moz shows one DA for my website, and when I check on SEMrush, it shows a different value.
Related Questions
-
Unsolved about directlink google
-
Zero '0' Total Visits
Hi. One of the properties in our account has been reporting zero ('0') total visits for the past few weeks. The other properties aren't affected. Is there a reason for this, or is it an issue on the Moz side of things? Thanks! [Attachment: Moz Zero Visits.PNG]
Reporting & Analytics | rh-digi
-
Unsolved How can I get my DA up?
-
Unsolved Why does Moz shut down on a separate website?
My Moz shuts down on some websites, like my website:
https://shotsblastingmachine.com/
Moz Tools | yuvrajsindhal
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs that have specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate-content pages, and to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. Among these 380 pages there are some other pages with no parameters (or with different parameters) that I need to take care of; basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions:
1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only the pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact?
2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter?

I think this would help many people, as there is no clear answer on how to block crawling of only the pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
-
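As a quick sanity check on wildcard rules like the one above, here is a small Python sketch (my own illustration, not an official robots.txt validator) that translates a Googlebot-style Disallow pattern into a regex and tests it against sample URL paths:

```python
import re

def robots_rule_matches(rule: str, path: str) -> bool:
    """Return True if a robots.txt Disallow rule matches the URL path.

    Implements the common crawler extension: '*' matches any character
    run, a trailing '$' anchors the end, and anything else is a
    prefix match against the path (including the query string).
    """
    # Escape regex metacharacters, then turn the escaped '*' into '.*'
    pattern = re.escape(rule).replace(r"\*", ".*")
    # A trailing '$' in the rule anchors the match at the end of the path
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    # re.match anchors at the start, giving prefix semantics otherwise
    return re.match(pattern, path) is not None

rule = "/*numberOfStars=0"
print(robots_rule_matches(rule, "/products?numberOfStars=0"))  # True
print(robots_rule_matches(rule, "/products?numberOfStars=5"))  # False
print(robots_rule_matches(rule, "/products"))                  # False
```

Under these semantics the rule would indeed block only paths containing numberOfStars=0 and leave everything else crawlable, though actual behavior depends on each crawler honoring the wildcard extension.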
What to do with a site of >50,000 pages vs. the crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I'm tracking a few large government websites to see how they are faring in rankings and SEO. They are not my own websites: I'm an academic looking at science communication, and I want to see how these agencies are doing compared to what the public searches for on the technical topics and social issues the agencies manage. I am in the process of re-setting up my campaigns to get better data than I have been getting. I am an SEO newbie, and the campaigns I slapped together a few months ago need to be set up better: all on the same day, making sure I've set whether www counts for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what the options are for getting good data in light of the 50,000-page crawl limit.

Here is an example of what I mean. To see how the EPA is doing in searches related to air quality, ideally I'd track all of the EPA's web presence. www.epa.gov has 560,000 pages; if I put www.epa.gov into a campaign, what happens with the site having so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1,450 pages, so if I track that in a campaign, the crawl will cover the sub-folder completely and I get a complete picture of this air-focused sub-folder. But (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have a lot of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true: I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air-quality sites, and maybe those competitors count towards the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?)

Any opinions on which way to go in this kind of situation? The small sub-folder vs. the full humongous site, or is there some other option I'm not thinking of?
Moz Pro | scienceisrad
-
TLD vs Sub-Domain in Regards to Domain Authority
I have always been under the impression that a top-level (or root) domain can hold different domain authority than its sub-domains, meaning that a sub-domain and its root domain can have different rank and strength in search engine results pages. Is this correct, or just an assumption? When I add a root domain and a sub-domain into the campaign manager, I get back the same link information and domain authority:
www.datalogic.com
www.automation.datalogic.com
Have I made an incorrect assumption, or is this an issue with the SEOmoz campaign manager?
Moz Pro | kchandler
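To make the root-vs-sub-domain distinction in the question concrete, here is a minimal sketch (my own illustration; it naively treats the last two host labels as the root domain and ignores public suffixes like .co.uk, which real tools handle via the Public Suffix List):

```python
from urllib.parse import urlsplit

def split_host(url: str):
    """Naively split a URL's hostname into (root domain, sub-domain part)."""
    host = urlsplit(url).hostname
    parts = host.split(".")
    # Assumption: the registrable (root) domain is the last two labels
    root = ".".join(parts[-2:])
    sub = ".".join(parts[:-2]) or None
    return root, sub

print(split_host("http://www.automation.datalogic.com"))
# ('datalogic.com', 'www.automation')
print(split_host("https://datalogic.com"))
# ('datalogic.com', None)
```

Both hosts in the question resolve to the same root domain under this split, which is consistent with a tool reporting root-domain-level metrics for either input while still being able to score the sub-domain separately.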