Altering Breadcrumbs based on User Path to Product URL
-
Hi,
Our products are listed in multiple categories, and as the URLs are path-dependent (example.com/fruit/apples/granny-smith/, example.com/fruit/green-fruit/granny-smith/ and so forth), we canonicalise to the 'default' URL (in this case example.com/fruit/apples/granny-smith/).
Mainly for crawl-bandwidth reasons, I'm looking to change all product URLs to be path-neutral, so there is only ever one URL per product (example.com/granny-smith/), while still listing the product in multiple categories.
If a user comes directly to example.com/granny-smith/ then the breadcrumbs will use the default path "Fruit > Apples"; however, if the user navigated to the product via another category then I'd like the breadcrumbs to reflect this. I'm not worried about cloaking, as it's not based on user-agent and there's a clear, logical reason for doing it, so I don't expect a penalty.
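To make the idea concrete, the breadcrumb decision itself is simple to express. This is only a sketch: the `categoryTrails` map and category keys below are invented examples, not FashionLux's real data — the point is just "use the arrival category if we recognise it, otherwise fall back to the canonical default".

```javascript
// Hypothetical lookup of breadcrumb trails per category key.
const categoryTrails = {
  'apples': ['Fruit', 'Apples'],           // default trail for granny-smith
  'green-fruit': ['Fruit', 'Green Fruit']  // alternative path a user might take
};

// Pick the trail to render: the category the user arrived from if known,
// otherwise the product's canonical default category.
function breadcrumbTrail(arrivalCategory, defaultCategory) {
  const key = categoryTrails[arrivalCategory] ? arrivalCategory : defaultCategory;
  return categoryTrails[key];
}

// In the browser you'd render this on page load; guarded so the sketch
// also runs outside a browser.
if (typeof document !== 'undefined') {
  const crumbs = breadcrumbTrail(sessionStorage.getItem('lastCategory'), 'apples');
  const el = document.querySelector('.breadcrumbs');
  if (el) el.textContent = crumbs.join(' > ');
}
```

Where the arrival category comes from (cookie, hash, session storage) is exactly the question below.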
My question is: how do you recommend this is achieved from a technical standpoint? Many sites use path-neutral product URLs (Ikea, PCWorld, etc.) but none alter the breadcrumbs depending upon path.
Our site is mostly behind a CDN, so it has to be a client-side solution. I currently view the options as:
- Store the path to the product in a cookie and/or the browser's local storage
- Attach the path details after a # in the URL and use JavaScript (jQuery) to alter the breadcrumbs on load
- When a user clicks through to a product from a listing page, use AJAX to pull in the product info but leave the rest of the page (including the breadcrumbs) as-is, updating the URL accordingly
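The second option can be sketched in a few lines. This assumes a hypothetical hash format like example.com/granny-smith/#path=fruit/green-fruit; crawlers never see the fragment, and it is never sent to the server, so only the clean URL gets indexed.

```javascript
// Parse a hypothetical "#path=fruit/green-fruit" fragment into trail segments.
// Returns null when there's no hint, so the caller can keep the default trail.
function pathFromHash(hash) {
  const match = /^#path=(.+)$/.exec(hash || '');
  if (!match) return null;
  return match[1].split('/').filter(Boolean);
}

// Browser-only: rewrite the breadcrumb element once the DOM is ready.
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', function () {
    const segments = pathFromHash(location.hash);
    const el = document.querySelector('.breadcrumbs');
    if (segments && el) {
      el.textContent = segments.map(s => s.replace(/-/g, ' ')).join(' > ');
    }
  });
}
```

The `.breadcrumbs` selector and the `path=` key are placeholders; any fragment scheme the server-rendered page agrees on would do.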
Do you think any of these wouldn't work? Do you have a preference on which one is best? Is there another method you'd recommend?
We also have "Next/Previous" functionality (links to the previous and next product URLs) on the page, so I suspect we'd need to attach the path after a # and make another round trip to the server on load to update the previous and next links.
Finally, does anyone know of any sites that do update the breadcrumbs depending upon path?
Thanks in advance for your time
FashionLux
-
Further update to this. Ran into a problem with option 3... this solution works really well when navigating the site internally; however, a user landing on one of these URLs directly (bookmark, social share, etc.) would see a slow-loading page, as (for non-default product variations) the page loads after the first request and then a second request to the server is needed to pull in the image via AJAX.
Loading the other images, stock information, prices, copy, etc. into an array and doing the work client-side wasn't an option, as the page would get too heavy. So option 3 is ruled out.
Ultimately the goal was to reduce duplicate content on product pages, and none of the three options above does this without affecting page load times. I did look at falling back on canonical tags, but I've just now found that Facebook uses this tag, so if a user wanted to share a 'red apple' when the canonical is 'green apple', Facebook would show an image of the 'green apple'... so at the moment that is ruled out also.
I'll start a new thread on product page duplicates and the best solution - but if anyone has any ideas then please do let me know.
Thanks
Dean
-
Thanks for the response Dana. Option 3 did feel like the best option and that is the one I'm choosing to go with.
Point 2 (with the hash) provides the desired result: search engines only ever see the clean URL, as the parameters after the hash are never sent to the server, but the browser can still use them to power the breadcrumbs. In the end it was a toss-up between 2 and 3, but 3 is the most maintainable and quickest for users.
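For anyone weighing the same trade-off, option 3 can be sketched roughly as below. The selectors, `data-product-slug` attribute, and `?fragment=1` endpoint are all hypothetical — the idea is just: intercept the listing-page click, swap in the product markup via AJAX, update the address bar with history.pushState, and leave the breadcrumbs already on the page untouched.

```javascript
// Build the path-neutral product URL, e.g. '/granny-smith/'.
function productUrl(slug) {
  return '/' + slug + '/';
}

// Browser-only wiring; guarded so the sketch runs outside a browser too.
if (typeof document !== 'undefined') {
  document.addEventListener('click', function (e) {
    const link = e.target.closest('a[data-product-slug]');
    if (!link) return;
    e.preventDefault();
    const url = productUrl(link.dataset.productSlug);
    fetch(url + '?fragment=1')            // hypothetical "product area only" response
      .then(res => res.text())
      .then(html => {
        document.querySelector('#product-area').innerHTML = html;
        history.pushState({}, '', url);   // clean URL; breadcrumbs left as-is
      });
  });
}
```

A user landing directly on the clean URL would still get the server-rendered default breadcrumbs, which is the behaviour described in the original question.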
Thanks again
Dean
-
Dean,
This is a great, great question and I am eager to find out what my fellow technical SEOs think because I have faced very similar situations on one of my sites. Thanks for asking this question.
My gut instinct is to select #3 of your options. But not really being a developer, it's hard for me to articulate why I think this is the best option. I'm really only thinking of it from a user standpoint, in that I want to know where, in the hierarchy of the site, this page lives, so that if I need to find it again, I can.
I disagree with your option #2 from an SEO standpoint because anything after a "#" (the URL fragment) is ignored by search engines... so putting the path there isn't going to benefit your SEO in any way.
Interested to hear what others think,
Dana