Facebook URLs, Anchor Text
-
I have a client that is considering a Facebook URL change. For ease of explanation, let's say their current URL is facebook.com/Company123. I've googled this URL and found a dozen or so websites that include the text "facebook.com/Company123".
But these results don't include websites that use anchor text such as "Facebook" with a link pointing to facebook.com/Company123. Has anybody had success tracking down any/all websites that point to a specific Facebook URL? I've tried Open Site Explorer, OpenLinkProfiler, RankSignals, and SEO SpyGlass to no avail. Thank you!
-
Perfect. I heard something back from Ahrefs that was very similar. Thanks again!
Summary (as it relates to Ahrefs, at least): if a website is crawled by a backlink research tool, any applicable backlink to facebook.com/Company123 should show up.
-
I assume that there are not that many results being returned?
Yeah, that would be the bad part. I believe there's not really anything that can be done about that.
Maybe before changing the URL you can post some type of announcement on the page, saying something like "hey guys, we are about to change our FB URL, here is what it's going to be". Also send out a similar email blast.
-
Thank you! I think my main issue when using the tools described is that the URL is facebook.com/Company123 (meaning, it's on Facebook's domain). I am/was hoping there was a tool out there I wasn't familiar with.
-
Hi there.
I would say use all the tools you mentioned, plus Ahrefs, Screaming Frog, etc., plus simply googling exact-match links. Build up a combined list of the returned results and extract the unique links. This is really the best way to accumulate all those links.
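The "combine and dedupe" step is a few lines of scripting once each tool's results are exported. A minimal sketch, with made-up placeholder lists standing in for the exports (in practice each list would be read from a tool's CSV/TXT download):

```python
# Merge backlink lists exported from several tools and keep unique links.
def merge_link_exports(*url_lists):
    seen = set()
    unique = []
    for urls in url_lists:
        for url in urls:
            # Light normalization so trivial variants (trailing slash,
            # letter case) don't count as two different links.
            normalized = url.strip().rstrip("/").lower()
            if normalized and normalized not in seen:
                seen.add(normalized)
                unique.append(url.strip())
    return unique

# Hypothetical exports from two tools:
ose_export = ["http://example.com/page", "http://example.com/page/"]
ahrefs_export = ["http://EXAMPLE.com/page", "http://other.example/post"]
print(merge_link_exports(ose_export, ahrefs_export))
# ['http://example.com/page', 'http://other.example/post']
```

The normalization here is deliberately crude; for a real audit you might also strip URL fragments and tracking parameters before comparing.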
Related Questions
-
I have 2 linking root domains on my URL. But I don't get the whole Root domain thing. So I don't understand how I can improve it?
I copied and pasted this from the Links page in my campaign because I can't seem to grasp what a root domain is: "A higher number of good quality linking root domains improves a page's ranking potential". Can someone explain to me what this is, as simply as possible? Here's my site: www.Thumannagency.com. Thanks in advance :)
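Put simply, a linking root domain is the registered domain behind a backlink: ten links from ten different pages of blog.example.com still count as one root domain, so the metric rewards links from many different sites rather than many links from one site. A rough illustration of the counting idea (this naive version just keeps the last two labels of the hostname, which breaks on suffixes like .co.uk; real tools consult the Public Suffix List):

```python
from urllib.parse import urlparse

def root_domain(url):
    # Naive: keep the last two labels of the hostname.
    # Real tools use the Public Suffix List to handle e.g. .co.uk.
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

backlinks = [
    "http://blog.example.com/post-1",   # same root domain as the next one...
    "http://www.example.com/about",
    "http://another-site.org/links",    # ...but this is a second root domain
]
print(len({root_domain(u) for u in backlinks}))  # 3 links, 2 root domains
```

So "2 linking root domains" means two distinct websites link to the page; improving it means earning links from additional sites, not more links from the same ones.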
Moz Pro | MissThumann
-
OSE for Facebook
Hi, I recall being able to use OSE for Facebook. Take https://www.facebook.com/VICE/, which we know as a URL would have many backlinks. It's not registering any. Has this always been the case?
Moz Pro | wearehappymedia
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate content pages. Most of them come from dynamically generated URLs that have specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block crawling of only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need to have an empty line between the two groups (I mean between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
-
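For what it's worth, wildcard rules like these can be sanity-checked offline. Below is a small sketch that translates a robots.txt pattern into a regex the way most major crawlers interpret it (* matches any run of characters, a trailing $ anchors the end of the path). This is an approximation of crawler behavior, and note that Python's built-in urllib.robotparser only does prefix matching, so it won't evaluate the * wildcard for you:

```python
import re

def robots_rule_matches(pattern, path):
    """Approximate wildcard robots.txt matching: '*' matches any run
    of characters, and a trailing '$' anchors the end of the path."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.search("^" + regex + ("$" if anchored else ""), path) is not None

# The rule from the question, tested against two hypothetical paths:
print(robots_rule_matches("/*numberOfStars=0", "/shop/item.html?numberOfStars=0"))  # True
print(robots_rule_matches("/*numberOfStars=0", "/shop/item.html"))                  # False
```

Under this interpretation the rule blocks exactly the URLs containing numberOfStars=0 and leaves everything else crawlable, which is what the question is after.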
5XX (Server Error) on all URLs
Hi, I created a couple of new campaigns a few days back and waited for the initial crawl to be completed. I have just checked and both are reporting 5XX (Server Error) on all the pages the crawler tried to look at (on one site I have 110 of these; on the other it only crawled the homepage). This is very odd. I have checked both sites on my local PC, an alternative PC, and via my Windows VPS browser which is located in the US (I am in the UK), and it all works fine. Any idea what could be the cause of this failure to crawl? I have pasted a few examples from the report:

500 : TimeoutError http://everythingforthegirl.co.uk/index.php/accessories.html
500 : Error http://everythingforthegirl.co.uk/index.php/accessories/bags.html
500 : Error http://everythingforthegirl.co.uk/index.php/accessories/gloves.html
500 : Error http://everythingforthegirl.co.uk/index.php/accessories/purses.html
500 : TimeoutError http://everythingforthegirl.co.uk/index.php/accessories/sunglasses.html

I am extra puzzled why the messages say timeout. The dedicated server is 8-core with 32 GB of RAM, and the pages ping for me in about 1.2 seconds. What is the rogerbot crawler timeout? Many thanks, Carl
Moz Pro | GrumpyCarl
-
How do I fix the problem of having 2 URLs splitting my rankings?
Please excuse my noobness. I have a nice site, www.soundsenglish.com, which I built from scratch and learned by doing. It has lots of nice content and it does OK; my rankings are woeful, mostly because of all the mistakes I made building it... I'll fix that stuff. This stuff I don't know about. From my AdSense I get two listings, www.soundsenglish.com and soundsenglish.com. Weirdly, the second one gets consistently higher-paying ads, although most of the visitors come through the first, but they are both the same landing page with the same content, as far as I can tell. When I try to find rankings, use the SEO tools, etc., I get different scores, so whatever it is, it is splitting the site; that can't be a good thing. I have no idea why this happens, and I have some inkling that maybe I need something to do with canonical tags or maybe a 301 redirect, both of which I have little idea how to do. If that isn't enough naive blundering about for you, I have a little more... it occurs to me that this problem is probably happening with every page on my site, i.e. the 'juice' is not getting credited to any one page. This surely means canonical redirects, but even after reading up on them I don't quite get it; or rather, I do, but I don't get how to apply it to my context.
Moz Pro | soundsenglish
-
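The www/non-www split described here is usually fixed with a sitewide 301 redirect onto one preferred hostname. A hedged sketch for an Apache server via an .htaccess file, assuming mod_rewrite is available (the exact mechanism depends on the host; other web servers use their own configuration):

```apache
# .htaccess: permanently (301) redirect non-www requests to the www host
RewriteEngine On
RewriteCond %{HTTP_HOST} ^soundsenglish\.com$ [NC]
RewriteRule ^(.*)$ http://www.soundsenglish.com/$1 [R=301,L]
```

With something like this in place, soundsenglish.com/page permanently redirects to www.soundsenglish.com/page, so links to either hostname consolidate onto one URL; a rel="canonical" tag on each page is a complementary, not alternative, step.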
Does SEOmoz realize duplicated URLs are blocked in robots.txt?
Hi there: just a newbie question... I found some duplicated URLs in the "SEOmoz Crawl Diagnostics reports" that should not be there. They are intended to be blocked by the site's robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking content in the robots.txt file:

User-agent: *
Disallow: /components/

Question is: will this kind of duplicated URL error be removed from the error list automatically in the future? Should I remember which errors should not really be in the error list? What is the best way to handle this kind of error? Thanks and best regards, Franky
Moz Pro | Viada
-
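One way to check offline what a rule actually blocks is Python's built-in robotparser, which implements the plain prefix matching used here (no wildcards involved). Reproducing the exact strings from the question, note that the example URL path starts with /component/ while the rule disallows /components/, so the parser reports that URL as fetchable:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# parse() takes the robots.txt content as a list of lines,
# so rules can be tested without fetching anything.
rp.parse([
    "User-agent: *",
    "Disallow: /components/",
])
print(rp.can_fetch("*", "http://www.domain.com/component/users/?view=registration"))
# True: '/component/...' is NOT under the disallowed '/components/' prefix
print(rp.can_fetch("*", "http://www.domain.com/components/anything"))
# False: this one is blocked
```

A check like this makes it easy to see whether a URL showing up in a crawl report is genuinely covered by the robots.txt rule that was meant to exclude it.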
Crawl test tool from SEOmoz - which URLs does it actually crawl?
I am using the crawl test tool from SEOmoz for the first time and I do not really understand which URLs the tool is going to crawl. First, it says "enter any subdomain" --> why can't I do the crawl for the root domain? Second, it says "we'll crawl up to 3,000 linked-to pages" --> does that mean that the tool crawls all internal links that it can find on the given domain? Thanks for your help!
Moz Pro | Elke.GetApp
-
Looking for a tool that can pull OSE stats for a bulk amount of URLs
I know that people have developed in-house tools with the OSE API that can analyze thousands of URLs and pull metrics like PA, inbound links, etc. I need to analyze about 80k URLs and sort them by authority, and I was hoping that someone could point me to a tool that can do this or let me use their tool. I'm willing to pay for access to it. We could build it in-house; I imagine that it would be pretty easy, but our IT resources are stretched too thin right now.
Moz Pro | Business.com
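Whatever API ends up supplying the metrics, the in-house part is mostly batching and sorting. A sketch with the API call stubbed out (fetch_authority here is hypothetical; a real wrapper would call whichever link-metrics API you license and respect its batch-size and rate limits):

```python
def sort_urls_by_authority(urls, fetch_authority, batch_size=50):
    """Score URLs in batches and return them sorted, highest authority first.

    fetch_authority is a hypothetical callable: it takes a list of URLs
    and returns a parallel list of numeric scores.
    """
    scores = {}
    for i in range(0, len(urls), batch_size):
        batch = urls[i:i + batch_size]
        for url, score in zip(batch, fetch_authority(batch)):
            scores[url] = score
    return sorted(urls, key=lambda u: scores[u], reverse=True)

# Stub standing in for a real API call:
fake = {"http://a.example": 30, "http://b.example": 72, "http://c.example": 55}
ranked = sort_urls_by_authority(list(fake), lambda batch: [fake[u] for u in batch])
print(ranked)  # ['http://b.example', 'http://c.example', 'http://a.example']
```

At 80k URLs and a batch size of 50, that is 1,600 API calls, so caching responses to disk between runs is worth adding before pointing this at a paid endpoint.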