Main menu duplication
-
I am working on a site that went through a migration to Shopify at the same time as Google rolled out an update in October, so there have been problems from day one.
Over the past six weeks, rankings for all of the main menu categories have fallen off a cliff. The site has been reviewed from every angle (technical, link profile and on-page) and is in better shape than several ranking competitors.
One issue that I'd like some feedback on is the main menu, which appears in four iterations in the source:
- desktop
- desktop (sticky)
- mobile
- mobile (sticky - appears in the source as a second desktop sticky, but I assume it is for mobile)
These "duplicated" menus contain the top-level menu items only. The rest of the nested menu items are included only within the last (mobile) menu.
So
- the desktop menu in the source doesn't include any of the sub-menu items; the mobile version carries all of these
- there are four versions of the top-level main menu items in the source
Should I be concerned? Given that we already have significant issues, should this be cleaned up?
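A quick way to quantify the duplication described above is to count how often each menu link appears in the raw page source. This is a minimal stdlib-only sketch against hypothetical markup (the nav IDs and URLs are invented for illustration; substitute your own store's source) - repeated hrefs across the desktop/sticky/mobile nav blocks show up as counts greater than 1:

```python
from collections import Counter
from html.parser import HTMLParser

# Hypothetical source mimicking the four menu iterations described above.
SAMPLE_SOURCE = """
<nav id="desktop-menu"><a href="/collections/shoes">Shoes</a></nav>
<nav id="desktop-sticky"><a href="/collections/shoes">Shoes</a></nav>
<nav id="mobile-menu"><a href="/collections/shoes">Shoes</a>
  <a href="/collections/shoes/trainers">Trainers</a></nav>
<nav id="mobile-sticky"><a href="/collections/shoes">Shoes</a></nav>
"""

class NavLinkCounter(HTMLParser):
    """Counts hrefs of <a> tags that appear inside <nav> elements."""
    def __init__(self):
        super().__init__()
        self.nav_depth = 0
        self.hrefs = Counter()

    def handle_starttag(self, tag, attrs):
        if tag == "nav":
            self.nav_depth += 1
        elif tag == "a" and self.nav_depth > 0:
            href = dict(attrs).get("href")
            if href:
                self.hrefs[href] += 1

    def handle_endtag(self, tag):
        if tag == "nav":
            self.nav_depth -= 1

parser = NavLinkCounter()
parser.feed(SAMPLE_SOURCE)
for href, count in parser.hrefs.most_common():
    print(f"{count}x  {href}")  # top-level items appear 4x, nested items 1x
```

Run against a real page (e.g. the output of `curl` on a collection URL), this makes it easy to see exactly which links are repeated and in which nav blocks.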
-
A couple of other issues were also uncovered with browser rendering of certain collections. We have cleaned up the menu duplication and those rendering issues, and are now monitoring.
-
You are right to be concerned, and many in the SEO community don't feel that Shopify has 'nailed' SEO yet. It started as a slightly nicer version of Wix where you could build your own site pretty easily, but it also handles a lot of the eCommerce aspects - which makes it very attractive to business owners; sadly, it's not great for SEO.
The community is expanding and the number of plugins and add-ons for Shopify is broadening. The problem is that many developers working on the Shopify platform don't have much SEO experience (at least, that has been my experience of the Shopify community).
If you are finding that certain items are missing from the 'base' (non-modified) source code, that is a concern. Google can technically crawl generated content and links (which are rendered client-side), but that requires headless browsers and client-side rendering. On average that takes around 10x longer than basic source-scraping. Google's mission is to 'index the web', so although they have this technology and functionality, they wouldn't arbitrarily decide to take a 10x efficiency hit across all indexation (that would be nutty and would go against their prime directive).
Rendered crawling is deployed by Google for popular web pages, and even then it is not used with the same frequency as basic crawling - not everyone gets that special treatment!
If you're not Santander or Coca Cola, you should be thinking about how you can help Google rather than how Google will "certainly use their latest technologies to help me, a small to medium business owner - at any expense!" - it just won't happen (sorry!)
The Shopify community is commerce- and design-led. One thing they are really bad at is latching onto one-off, isolated comments from Google (such as "we can crawl JavaScript now!") and then applying that to everything without testing it first in iterations. The fact is, sites that do more server-side rendering still perform better than sites which rely too heavily on client-side rendering (especially as the latter drastically impacts page-loading speed and burdens the end user).
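To make the server-side vs client-side point concrete, here is a hedged sketch (Python stdlib, entirely hypothetical markup) of why links injected client-side are invisible to a basic, non-rendering crawl: the anchor below only ever exists inside a JavaScript string, so a plain parse of the delivered HTML finds no menu links at all.

```python
from html.parser import HTMLParser

# Hypothetical theme markup: the menu is injected by JavaScript after load.
CLIENT_RENDERED = """
<nav id="main-menu"></nav>
<script>
  document.getElementById('main-menu').innerHTML =
    '<a href="/collections/shoes">Shoes</a>';
</script>
"""

# The same menu delivered server-side, already present in the base source.
SERVER_RENDERED = '<nav id="main-menu"><a href="/collections/shoes">Shoes</a></nav>'

class AnchorCollector(HTMLParser):
    """Records hrefs of real <a> elements. html.parser treats <script>
    bodies as raw text, just as a non-rendering crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def crawlable_links(source):
    collector = AnchorCollector()
    collector.feed(source)
    return collector.links

print(crawlable_links(CLIENT_RENDERED))   # [] - nothing for a basic crawl
print(crawlable_links(SERVER_RENDERED))   # ['/collections/shoes']
```

The client-rendered version only becomes crawlable if Google elects to fully render the page - which, as above, is neither guaranteed nor frequent.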
If I were finding lots of critical content that didn't appear in the base (non-modified) source code and my site wasn't a household name, I'd be really, really concerned!
I am sure that the right Shopify designers and developers could sort it out for you, but it may be costly - especially as devs in that community won't believe that it's necessary, and will fire loads of posts at you (from Google) stating that what they have already done is fine. Comments from the horse's mouth are useful, but not without greater context.