Should I Implement 301 Redirects vs. 410s When Removing Product Pages?
-
I manage an ecommerce site and have a question about 301 redirects vs. 410s when removing product pages. What we have in place now is a 410, because the removed products are no longer available and the content itself no longer exists. In all my research, though, the stated SEO best practice is to use a 301. Should we replace our 410 with a 301 redirect, or keep it?
-
Hi there!
I want to add a bit more difficulty to this topic. What would you do if your products are renewed every year, so that product A in 2014 becomes product B (almost the same as the previous one) in 2015, and B eventually becomes C in 2016?
How would you avoid the 301 chain?
Any suggestions?
Thanks!
Rafa
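For illustration, one common answer to Rafa's question is to keep a single flattened redirect map: whenever a product is renewed, repoint every historical URL directly at the newest version instead of at its immediate successor. This is only a minimal sketch; the slugs, the `renew_product` helper, and the in-memory dict are hypothetical stand-ins for however your platform actually stores redirects.

```python
# Sketch of keeping a 301 map flat (all names hypothetical). When
# "product-b-2015" is renewed into "product-c-2016", every slug that
# currently points at B is repointed at C, and B itself is added,
# so no request ever has to hop through two redirects.

def renew_product(redirects: dict[str, str], old: str, new: str) -> None:
    """Collapse any would-be chain: everything that pointed at `old`
    now points at `new`, and `old` itself redirects to `new`."""
    for source, target in redirects.items():
        if target == old:
            redirects[source] = new
    redirects[old] = new

redirects = {"product-a-2014": "product-b-2015"}  # A currently 301s to B
renew_product(redirects, "product-b-2015", "product-c-2016")
print(redirects)
# {'product-a-2014': 'product-c-2016', 'product-b-2015': 'product-c-2016'}
```

Each old URL then answers with a single 301 straight to the current product, no matter how many renewals have happened since.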
-
Thanks Wissam for your prompt feedback! We do have similar pages we can redirect them to, and right now, when a visitor gets the 410 page, we give them the option to search our site or go to our home page. I will check into the number of inbound links.
-
Thanks Peter for your prompt feedback! I will look into your Moz link further. Right now, when a visitor gets the 410 page, we give them the option to search our site or go to our home page.
-
If the product has a relevant substitute, then you can use a 301. But if the product doesn't have a relevant substitute, then a 404/410 is the right way to do it.
I might rethink that if the product has a lot of inbound links pointing to it, but that's a case-by-case situation.
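As a concrete sketch of that rule (the product slugs, the substitute map, and the Flask framework choice are purely illustrative assumptions, not anything from the thread):

```python
# Flask sketch of the rule above: 301 when a relevant substitute exists,
# 410 when the product is gone for good. Slugs and data are hypothetical.
from flask import Flask, redirect

app = Flask(__name__)

SUBSTITUTES = {"blue-widget": "/products/blue-widget-v2"}  # close match exists
DISCONTINUED = {"old-gadget"}  # removed, no relevant substitute

@app.route("/products/<slug>")
def product(slug):
    if slug in SUBSTITUTES:
        return redirect(SUBSTITUTES[slug], code=301)  # pass equity along
    if slug in DISCONTINUED:
        return "This product is no longer available.", 410  # let it drop out
    return f"Product page for {slug}"  # normal rendering would go here
```

The case-by-case exception Wissam mentions would slot in before the 410 branch: if a discontinued product has many inbound links, you might 301 it to the closest category page instead.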
-
Hi Lori,
A 410 code is technically correct in the context you are using it, but for SEO purposes you are better off doing something with the links that point to the pages currently returning 410s.
If you have a page that relates in some way to the page you have removed, create a 301 redirect and send those requests to the similar page.
If no similar page exists, you could simply redirect to the home page. In some cases, though, it can be worthwhile to send the visitor to a custom 404 page like the one Moz uses, as you can see for my invalid page query: http://moz.com/404. Rather than a 301 redirect, this type of response can be a good opportunity to engage with the visitor and guide them to the information they are looking for. The Moz 404 page is a good example of that.
I hope that helps,
Peter
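For anyone who wants to try the custom error page Peter describes, here is a minimal sketch (the copy, the links, and the Flask framework are placeholder assumptions):

```python
# Minimal Flask sketch of a custom 404 that guides the visitor onward
# instead of dead-ending them. Copy and URLs are placeholders.
from flask import Flask

app = Flask(__name__)

@app.errorhandler(404)
def page_not_found(error):
    body = (
        "<h1>We couldn't find that page.</h1>"
        '<p>Try <a href="/search">searching the site</a> '
        'or head back to the <a href="/">home page</a>.</p>'
    )
    return body, 404  # keep the error status so engines drop the URL
```

The same handler pattern works for 410s if you want the friendlier page without sacrificing the correct status code.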