Rel="Follow"? What the &#@? does that mean?
-
I've written a guest blog post for a site. In the link back to my site they've put a rel="follow" attribute. Is that valid HTML?
I've Googled it but the answers are inconclusive, to say the least.
-
I don't think so either, but you never know. It's a simple enough test to run to see whether Google recognizes a "follow" or "dofollow" tag. If it's hardcoded in the link code, it would override any external nofollow tag.
-
Hi, what I meant was whether I should be looking for robots.txt at the top of the page, or some such.
-
Hi Irving
Thanks for the response but the issue of adding tags doesn't apply as it's not my site.
-
AFAIK, there is no way to "sneakily" no-follow a link. You no-follow a link by adding rel=nofollow. If rel=nofollow isn't there, the link is followed.
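For reference, a minimal sketch of the markup in question (placeholder URLs; "nofollow" is the only standard rel value of the three, and unrecognized values are generally just ignored):

```html
<!-- Plain link: no rel attribute, so it is followed by default -->
<a href="https://example.com/">My site</a>

<!-- Nofollowed link: the recognized way to withhold link credit -->
<a href="https://example.com/" rel="nofollow">My site</a>

<!-- What the blog owner added: "follow" (or "dofollow") is not a standard rel value,
     so the expectation is that it is ignored and the link stays followed anyway -->
<a href="https://example.com/" rel="follow">My site</a>
```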
-
Test it to see if, for some reason, it is recognized, just for fun.
If something on a site is nofollowed by default and the nofollow doesn't show up in the source code of that link (meaning it is declared in another piece of code, such as a page-level directive), add a rel="follow" and a rel="dofollow" tag and see if either one overrides the nofollow. You can check using a Firefox plugin that highlights nofollowed links for you (you should already have one installed if you are an SEO).
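A rough sketch of what that test page might look like, assuming the site-wide nofollow comes from a robots meta tag (the URLs are placeholders, and "follow"/"dofollow" are undocumented values, so the expectation is that they change nothing):

```html
<head>
  <!-- Page-level directive: applies nofollow to every link on the page -->
  <meta name="robots" content="nofollow">
</head>
<body>
  <!-- Control link: inherits the page-level nofollow -->
  <a href="https://example.com/control/">Control link</a>

  <!-- Test links: do the undocumented rel values override the page-level nofollow? -->
  <a href="https://example.com/test-a/" rel="follow">Test link A</a>
  <a href="https://example.com/test-b/" rel="dofollow">Test link B</a>
</body>
```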
-
"The only other place I've seen that is in spam blog comments (as a desperate attempt to override the blog's default 'no-follow')..."
Yep, that's what I've read as well.
Now he's changed it to rel="dofollow" (no, me neither), which strikes me as even more gobbledegook.
Obviously I'm going to ask him to leave the attribute out altogether. But what other attributes should I be looking for in the page source (Ctrl+U) to make sure he hasn't sneakily no-followed all the links on the page?
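For anyone scanning a page source for this, the places a nofollow can hide look roughly like the following sketch; note that an X-Robots-Tag HTTP header can also apply nofollow but will not appear in the HTML source at all, so it has to be checked in the response headers:

```html
<head>
  <!-- 1. Page-level: a robots meta tag that nofollows every link on the page -->
  <meta name="robots" content="nofollow">
</head>
<body>
  <!-- 2. Link-level: a rel="nofollow" on the individual link back to your site -->
  <a href="https://example.com/your-site/" rel="nofollow">Your site</a>

  <!-- 3. Not visible here: an "X-Robots-Tag: nofollow" HTTP response header
          applies the same page-level directive without anything in the markup -->
</body>
```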
-
Googlebot does obey the rel="nofollow" attribute. As for rel="follow", I don't think so. The only other place I've seen it is in spam blog comments (as a desperate attempt to override the blog's default "no-follow").
-
It's a way of controlling how link equity flows from a site. In this case, they're passing the link juice on to you.
If you want the search engines to follow that link on the external blog, then what they have done is a good thing, although they could also have just left the attribute out altogether.
People can add rel="nofollow", which means "don't pass link juice". You could read it as a signal to the world that, whilst you are providing the link to the site, you don't endorse it.
From Google:
"Nofollow" provides a way for webmasters to tell search engines "Don't follow links on this page" or "Don't follow this specific link."
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=96569