Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
What are Linking C-Blocks?
-
Currently I am using the Moz Pro tool; under Moz Analytics >> Moz Competitive Link Metrics >> History there is a graph called "Linking C-Blocks". Please help me understand Linking C-Blocks - what they are, how to build them, and how to define them ...
-
Thanks for quoting me in the answer. I didn't realize the original answer was so popular, either. Glad it's one that's easily understood.
-
Thank you, Tom Roberts, I understand this.
-
To lift a quote from Keri Morgret in this thread from a few years ago:
"It refers to the part of the IP address that's different. The same class C address means something has the same third octect in the address. In the following, the first three IPs are in the same class C, and the fourth address is not.
192.168.1.1
192.168.1.2
192.168.1.3
192.168.100.4...it's a hint to Google that the sites are all related to each other and on the same server, and that the links may not be very natural since there is the good possibility that the same person set them up."
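If it helps to see the idea concretely, here is a minimal Python sketch (my own illustration, not anything from Moz's tooling) that groups a list of linking IP addresses by their first three octets and counts the distinct C-blocks, which is essentially what a linking C-block count measures:

from collections import defaultdict

def c_block(ip):
    """Return the class C block of an IPv4 address (its first three octets)."""
    return ".".join(ip.split(".")[:3])

def count_linking_c_blocks(linking_ips):
    """Group linking IP addresses by C-block and count the unique blocks."""
    blocks = defaultdict(set)
    for ip in linking_ips:
        blocks[c_block(ip)].add(ip)
    return len(blocks), dict(blocks)

# The four addresses from the quote above: three share 192.168.1, one does not
ips = ["192.168.1.1", "192.168.1.2", "192.168.1.3", "192.168.100.4"]
unique_blocks, grouped = count_linking_c_blocks(ips)
print(unique_blocks)  # 2 distinct C-blocks
print(grouped)        # {'192.168.1': {... three IPs ...}, '192.168.100': {'192.168.100.4'}}

Many links coming from the same C-block therefore add fewer distinct blocks than the same number of links spread across unrelated networks.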
Related Questions
-
Unsolved: Rogerbot blocked by Cloudflare and not displaying the full user agent string
Hi, we're trying to get Moz to crawl our site, but when we use Create Your Campaign we get the error: "Oops. Our crawlers are unable to access that URL - please check to make sure it is correct. If the issue persists, check out this article for further help." Our robots.txt is fine, and we can actually see that Cloudflare is blocking the crawl with Bot Fight Mode. We've added rules to allow rogerbot, but these seem to be getting ignored. If we use a robots.txt testing tool (https://technicalseo.com/tools/robots-txt/) with rogerbot as the user agent, it gets through fine and we can see our rule has allowed it. When viewing the Cloudflare activity log (attached, showing the differences in user agent strings), it seems Create Your Campaign is trying to crawl the site with the user agent set simply to "rogerbot 1.2", whereas the robots.txt testing tool uses the full user agent string "rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, rogerbot-crawler+shiny@moz.com)", albeit version 1.0. So it seems Cloudflare doesn't like the simple user agent. Is it correct that when Moz tries to crawl the site it now uses the simple string "rogerbot 1.2"? Thanks,
Ben
Moz Pro | BB_NPG
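As an aside on the mismatch described above: a rule that keys on the full "rogerbot/<version>" token will not match the shorter "rogerbot 1.2" form. A tiny Python sketch (the patterns are hypothetical, not Cloudflare's actual rule syntax) makes the difference visible:

import re

# Hypothetical patterns - just an illustration of strict vs. loose matching
strict_rule = re.compile(r"rogerbot/\d+\.\d+")            # expects "rogerbot/<version>"
loose_rule = re.compile(r"\brogerbot\b", re.IGNORECASE)   # any UA containing "rogerbot"

user_agents = [
    "rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, rogerbot-crawler+shiny@moz.com)",
    "rogerbot 1.2",
]

for ua in user_agents:
    print(ua)
    print("  strict match:", bool(strict_rule.search(ua)))
    print("  loose match: ", bool(loose_rule.search(ua)))
# The strict pattern matches only the first string, which is consistent with an
# allow rule keyed to "rogerbot/" letting the testing tool through but not the
# shorter "rogerbot 1.2" string seen in the activity log.
-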
How Long Do The Link Tracking Lists Take To Update?
Hi, how long do the link tracking lists take to update? It's been over a week and each is still showing a red cross. The reason I created it was because I migrated to a new domain name, and Moz is still showing the backlinks on the old property and not the new one (the domain swap happened in December 2020). I can see that Ahrefs has picked up all of the links, both new and redirected, but Moz has not. When will this be reflected in Moz, as it has already been over three months? Is there a reason for the above? I appreciate any response here. 🙂
Moz Pro | Smarter_Finances
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate content pages. Most of them come from dynamically generated URLs that have specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:
User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0
My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need to have an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
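On question 1, the pattern logic can be sanity-checked with a short script. This is only an illustration of how major crawlers generally interpret the * wildcard (not an official parser), run against a few hypothetical URLs:

import re

def wildcard_to_regex(pattern):
    """Convert a robots.txt-style wildcard pattern ('*' matches anything) to a regex."""
    return re.compile(".*".join(re.escape(part) for part in pattern.split("*")))

disallow = wildcard_to_regex("/*numberOfStars=0")

urls = [
    "/products?sort=price&numberOfStars=0",  # contains the parameter -> blocked
    "/products?sort=price&numberOfStars=4",  # different value -> allowed
    "/products",                             # no parameters -> allowed
]

for url in urls:
    print(url, "->", "blocked" if disallow.match(url) else "allowed")

On question 2, a blank line between groups is the conventional layout, but in practice most parsers treat each new User-agent line as the start of a new group, with or without the blank line.
-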
Automatically Check List of Sites For Links To Specific Domain
Hi all, can anyone recommend a tool that will allow me to put in a list of about 200 domains that are then checked for a link back to a specific domain? I know I can do various link searches and use the Google site: command on a site-by-site basis, but it would be much quicker if there was a tool that could take the list of domains I am expecting a link on and then find whether that link exists and, if so, on what page. Hope this makes sense; otherwise I have to spend a day doing it by hand - not fun! Thanks,
Charles
Moz Pro | MrFrisbee
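In the absence of a dedicated tool, a short script can do a first pass. The sketch below is only a rough illustration (the target domain, domain list, and output file name are hypothetical): it fetches each domain's homepage and does a crude substring check for the target domain, writing the results to a CSV. It only looks at homepages and does not parse anchors, so it is a starting point rather than a proper backlink audit.

import csv
import urllib.request

TARGET = "example.com"                       # hypothetical domain the links should point to
DOMAINS = ["site-one.com", "site-two.com"]   # hypothetical list of ~200 referring domains

def homepage_mentions_target(domain):
    """Crude check: fetch the homepage and look for the target domain in the HTML."""
    try:
        with urllib.request.urlopen("http://" + domain, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="ignore")
    except Exception:
        return False
    return TARGET in html

with open("link_check.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["domain", "links_to_target"])
    for domain in DOMAINS:
        writer.writerow([domain, homepage_mentions_target(domain)])
-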
How do Infographics provide links? (example inside)
Hi, I don't get it...
I've encountered the following infographic that seemed to do well: http://www.brilliance.com/4cs-of-diamonds-infograph As you can see, the page has thousands of social shares, from tweets to Facebook and obviously Pinterest. However, when I placed this page in Open Site Explorer I see only 9 external links and a page authority of 28/100. And this seems to me like a successful infographic. Any explanations? Thanks
Moz Pro | BeytzNet
-
Moz & Xenu Link Sleuth unable to crawl a website (403 error)
It could be that I am missing something really obvious, but we are getting the following error when we try to use the Moz tool on a client website. (I have read through a few posts on 403 errors, but none that appear to describe the same problem as this.)
Moz result:
Title: 403 : Error
Meta Description: 403 Forbidden
Meta Robots: not present/empty
Meta Refresh: not present/empty
Xenu Link Sleuth result:
Broken links, ordered by link: error code: 403 (forbidden request), linked from page(s):
Thanks in advance!
Moz Pro | ZaddleMarketing
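A 403 for crawlers but not for browsers usually means the server or a firewall is filtering on the User-Agent or applying bot-protection rules. One quick way to test that theory, sketched below with a placeholder URL, is to request the page with a browser-like and a crawler-like User-Agent and compare the status codes:

import urllib.error
import urllib.request

URL = "https://www.example.com/"  # placeholder for the client site

def status_for(user_agent):
    """Return the HTTP status code the site serves to the given User-Agent."""
    req = urllib.request.Request(URL, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

for ua in [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",                 # browser-like
    "rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-)",  # crawler-like
]:
    print(ua, "->", status_for(ua))
# A 200 for the browser-like agent and a 403 for the crawler-like agent would
# point to a user-agent or bot-protection rule on the server or CDN.
-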
What analysis exists for outbound links (OBL) from your site?
Hi, maybe I am missing this, but I can't seem to see it. I am doing some analysis on a client's site and want to get a CSV list of links from the client's site to external sites - in other words, a list of outbound links (OBL) from the client's site. I want to run these past a blacklist / bad link neighborhood checking script I have. This would actually be a nice feature in SEO Moz Pro, unless it already does this and I am just missing it or not setting the filters correctly. Thanks,
Trevor
Moz Pro | tstolber1
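Outbound-link exports like this usually come from a crawler or a short script rather than from link-index tools. As a rough sketch under assumed inputs (the site URL, page list, and output file name are hypothetical), the following collects external hrefs from a set of pages into a CSV that a blacklist-checking script could then consume:

import csv
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

SITE = "https://www.example.com"      # hypothetical client site
PAGES = ["/", "/about/", "/blog/"]    # hypothetical list of pages to scan

class AnchorParser(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

site_host = urlparse(SITE).netloc

with open("outbound_links.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["source_page", "outbound_url"])
    for path in PAGES:
        page_url = urljoin(SITE, path)
        try:
            with urllib.request.urlopen(page_url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="ignore")
        except Exception:
            continue
        parser = AnchorParser()
        parser.feed(html)
        for href in parser.hrefs:
            absolute = urljoin(page_url, href)
            if urlparse(absolute).netloc not in ("", site_host):
                writer.writerow([page_url, absolute])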