Site Wide Link Situation
-
Hi-
We have clients using an e-commerce cart that sits on a separate domain, and it appears to be creating sitewide links to our clients' websites. Would you recommend disallowing bots from crawling/indexing these via a robots.txt file, adding a nofollow meta tag on the specific pages where the shopping cart links are implemented, or adding nofollow to every individual shopping cart link? Thanks!
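For reference, the two nofollow mechanisms mentioned above look like this in markup (the cart URL below is a placeholder, not a real domain):

```html
<!-- Page-level: a robots meta tag in <head> tells crawlers not to follow ANY link on the page -->
<meta name="robots" content="nofollow">

<!-- Link-level: rel="nofollow" applies only to this one shopping cart link -->
<a href="https://cart.example.com/checkout" rel="nofollow">View cart</a>
```

The robots.txt option works differently: it is a plain-text file at the site root that blocks crawling of URL paths on the host that serves it, rather than annotating individual links.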
-
Hi! Thanks! I completely understand. We would never want to prevent URLs on the client's domain from being crawled; that could clearly put our client's online presence at risk. However, we're more concerned with Google noticing that the shopping cart's domain is pointing to every page of the client's website, which could appear unnatural and potentially put the client's site at risk. What we're hoping to achieve is to prevent Google from crawling the third-party URL on every page, to avoid any penalty.
-
Rez, you gotta consider a few things.
When looking at the site structure and information architecture (IA) of your site, you have to think about link juice flow as a funnel: more juice at the top, with less juice distributed toward the bottom. So for shopping cart or product pages (depending on how deep they are), I usually incorporate long-tail, targeted keywords (e.g., "Mimi Juie baby sippy cups") where the search volume isn't high, but the term is targeted enough that even with limited juice flow you can rank.
My initial suggestion was to contact the person or company that built the shopping cart and have the link removed (THAT IS MY FIRST OPTION). I would not nofollow the product page (don't do anything crazy like that), especially if you have share bar options, reviews, etc. for your products (you will lose all of that).
Your LAST option should be a robots.txt rule targeting ONLY that link's URL, NOT the whole page.
Again, please understand that you should not DEVALUE your page like that.
Hope this helps.
Let me know how it turns out
Hampig M
BizDetox
-
Hi-
Thanks for the feedback! So robots.txt is the best way?
The shopping cart's URL does not have much authority, so it's not important for us to get link juice from the separate domain, which is why we're debating how to implement a nofollow. Do you see any harm in doing so?
Thanks,
Rez
-
Rez.
You should be able to remove that sitewide link from your shopping cart. I had a similar situation with a Joomla site I built that had a sitewide link on the JoomShopping product pages, and you can pay to have it removed. Unfortunately, that's the way it is. Take a look at the help files or forums for the shopping cart. What shopping cart is it?
If you cannot remove it, then robots.txt is the best way. I would NOT add a nofollow to that page, unless you don't care about the data or about getting those pages ranked. But you are saying it's sitewide,
so I am a little confused on that.
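One caveat worth flagging: a robots.txt file only controls crawling of URL paths on the host that serves it, so a rule on the client's site cannot stop Google from fetching pages on the cart's own domain. If you do add a Disallow rule, Python's standard `urllib.robotparser` is a quick way to sanity-check which URLs it actually blocks (the rules and paths below are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules -- adjust the path to match the cart URLs' pattern
rules = """\
User-agent: *
Disallow: /cart/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# URLs under /cart/ are blocked for all user agents; everything else stays crawlable
print(parser.can_fetch("Googlebot", "https://example.com/cart/checkout"))   # False
print(parser.can_fetch("Googlebot", "https://example.com/products/mugs"))   # True
```

To check a live file instead, use `parser.set_url("https://example.com/robots.txt")` followed by `parser.read()` before calling `can_fetch`.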
Hope it helps.
Best Wishes,
Hampig M
BizDetox