Self-Referencing Links - Good or Bad?
-
As an agency, we get quite a few of our clients coming to us saying, "Ooh, this company just contacted me saying they've run an SEO report on my site and we need to improve on the following things."
We had one come through the other day that reported on something we had not seen in any of the others before.
They called them self-referencing links and flagged them as a point where action should be taken, stating that 100% of the pages on our client's website had self-referencing links.
A self-referencing link is a link on a page that points to the very page you are currently on. So, for example, you're on the home page and the nav bar at the top contains a "Home" link that points back to the home page - the page you are already on.
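To make that concrete, here is a minimal sketch with placeholder URLs (not the client's actual markup). When this navigation is rendered on the home page, the first link points at the page the visitor is already viewing, which is all the report is flagging:

```html
<!-- Rendered on https://www.example.com/ - the "Home" link points to the
     page the visitor is already on, i.e. it is a self-referencing link. -->
<nav>
  <a href="https://www.example.com/">Home</a>
  <a href="https://www.example.com/services/">Services</a>
  <a href="https://www.example.com/contact/">Contact</a>
</nav>
```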
Is it bad practice? And if so, can we do anything about it? It would seem strange from a UI point of view not to have a consistent navigation. I have not heard anything about this before, but I wanted to get confirmation before going back to our client and explaining.
Thanks Mozzers!
-
Great, as we thought!
Thanks for the explanation, makes even more sense now!
-
Well said. And on that note, I wouldn't trust the SEO advice of an email spammer.
-
"it would seem strange from a UI point of view not to have a consistent navigation."
You hit the nail on the head. Imagine if a website like Amazon removed the link on their logo to the homepage from the homepage itself. Some people click it on the homepage just because they're confused, and if they don't see a page refresh they may be upset. It's also a LOT more work to implement.
I would argue a consistent navigation is good, as Google likes to follow a good structure. Removing self-referencing links would just cause you potential problems: what if you're using a CMS and you're on the blog, then you go to a blog article - do you remove the blog menu link because you're within the blog? There's just no real reason I can personally see for doing it.
Related Questions
-
Broken canonical link errors
Hello, several tools I'm using are returning errors due to "broken canonical links". However, I'm not too sure why that is. E.g.:
Page URL: domain.com/page.html?xxxx
Canonical link URL: domain.com/page.html
Returns an error. Any idea why? Am I doing it wrong? Thanks, G
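As a point of reference, a minimal sketch of the setup being described (using the placeholder domain from the question), assuming the canonical is declared in the page head:

```html
<!-- Served at http://domain.com/page.html?xxxx -->
<head>
  <link rel="canonical" href="http://domain.com/page.html">
</head>
```

Tools typically flag a canonical as "broken" when the canonical target itself does not return a 200 OK, so checking what domain.com/page.html actually returns (and whether it matches the protocol and host the site is served on) is a reasonable first step.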
Technical SEO | GhillC -
CSS background image links bad for SEO?
On one of the websites I manage SEO for, the developers are changing how our graphical links are coded. They're basically coding them in such a way that there is no anchor text and no alt tag, so there's no anchor nor alt context for Google's crawler. How badly will this affect SEO, or is it extremely minimal and I shouldn't worry about it? Thanks in advance.
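A hypothetical sketch of the kind of markup being described (not the poster's actual code): an empty anchor styled with a CSS background image, compared with a link that exposes an image alt attribute for context:

```html
<!-- Link rendered purely via a CSS background image: no anchor text, no alt text. -->
<a href="/sale" class="sale-banner"></a>
<style>
  .sale-banner {
    display: inline-block;
    width: 300px;
    height: 100px;
    background-image: url("/images/sale-banner.png");
  }
</style>

<!-- The same link with an img element, giving the crawler alt text as context. -->
<a href="/sale"><img src="/images/sale-banner.png" alt="Summer sale banner"></a>
```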
Technical SEO | JimLynch -
Referencing links in Articles and Blogs
Hi, I am wondering if the <sup> tag in HTML is picked up by Google as a reference point. I.e. when you put a superscript in Word, it puts a small number next to your sentence, and then you have a list of references at the end of the blog/article. Does Google recognise this?
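For illustration, a minimal sketch of the footnote pattern being asked about (the URLs and ids are placeholders):

```html
<!-- Superscript reference marker pointing at an entry in the reference list below. -->
<p>Links from topically relevant pages tend to carry more weight.<sup><a href="#ref-1">1</a></sup></p>

<h2>References</h2>
<ol>
  <li id="ref-1"><a href="https://example.com/link-relevance-study">Example study on link relevance</a></li>
</ol>
```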
Technical SEO | Cocoonfxmedia -
Good alternatives to Xenu's Link Sleuth and AuditMyPc.com Sitemap Generator
I am working on scraping title tags from websites with 1-5 million pages. Xenu's Link Sleuth seems to be the best option for this at this point. Sitemap Generator from AuditMyPc.com seems to be working too, but it starts hanging when the sitemap file the tool is working on becomes too large, so basically it looks like it won't be good for websites of this size. I know that Scrapebox can scrape title tags from a list of URLs, but this is not needed, since this comes with both of the above-mentioned tools. I also know about DeepCrawl.com, but it is paid, and it would be very expensive with this number of pages and websites (5 million URLs is $1,750 per month; I could get a better deal on multiple websites, but that obviously does not make sense to me - it needs to be free, more or less). SEO Spider from Screaming Frog is not good for large websites. So, in general, what is the best way to work on something like this that is also time-efficient? Are there any other options? Thanks.
Technical SEO | blrs12 -
How can I block incoming links from a bad web site?
Hello all, We got a new client recently who had a warning in Google Webmaster Tools for a manual soft penalty. I did a lot of searching and found one particular site that sends roughly 100k links to one page and is potentially a high-risk site. I wish to block those links from coming in to my site, but their webmaster is nowhere to be seen and I do not want to use the disavow tool. Is there a way I can use code in our htaccess file, or any other method? Would appreciate anyone's immediate response. Kind Regards
Technical SEO | artdivision -
Are 404 Errors a bad thing?
Good Morning... I am trying to clean up my e-commerce site, and I created a lot of new categories for my parts. I've made the old category pages (which have had their content removed) "hidden" to anyone who visits the site and starts browsing. The only way you could get to those "hidden" pages is either by knowing the URLs that I used to use, or if for some reason one of them still shows up in Google. Since I'm trying to clean up the site and get rid of any duplicate content issues, would I be better served by adding those "hidden" pages that don't have much or any content to the robots.txt file, or should I just deactivate them, so that even if you type the old URL you will get a 404 page? In this case, are 404 pages bad? You're typically not going to find those pages in the SERPs, so the only way you'd land on these 404 pages is to know the old URL I was using that has been disabled. Please let me know if you guys think I should be 404'ing them or adding them to robots.txt. Thanks
Technical SEO | Prime85 -
International Site Links In Footer
We have several international sites, and we have them linked in the footer of our main .com site. Should we add "nofollow" to these links? Our concern is that Google could see these sites as a network.
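For clarity, a minimal sketch of the option being asked about (the domains are placeholders, not the poster's actual sites): international links in the footer marked rel="nofollow":

```html
<footer>
  <!-- Footer links to international versions, each marked nofollow. -->
  <a href="https://www.example.de/" rel="nofollow">Deutschland</a>
  <a href="https://www.example.fr/" rel="nofollow">France</a>
  <a href="https://www.example.com.au/" rel="nofollow">Australia</a>
</footer>
```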
Technical SEO | EwanFisher -
Does Yelp pass link juice?
This is probably a profoundly obvious question, but I can't seem to find an explicit answer on the internet, so I'll ask it here: Yelp's links out to local business websites are not nofollow'd, but they go through a JavaScript-based redirect. My understanding is that JavaScript-redirected links do not pass link juice, so a link from a Yelp profile will not directly impact my page authority; however, it looks like Yelp does use nofollow judiciously for internal links, so I don't understand why they would allow follow for these "useless" outbound links. Do Yelp's JavaScript-redirected links pass link juice?
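A generic illustration of the two patterns being contrasted (this is not Yelp's actual markup):

```html
<!-- Direct link: the destination is in the href, so crawlers can see and follow it. -->
<a href="https://local-business.example.com/">Visit website</a>

<!-- JavaScript-based redirect: the href gives crawlers nothing useful, and the real
     destination only exists in a data attribute that a script turns into navigation. -->
<a href="#" data-url="https%3A%2F%2Flocal-business.example.com%2F"
   onclick="window.location.href = decodeURIComponent(this.dataset.url); return false;">
  Visit website
</a>
```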
Technical SEO | tvkiley