Should I keep adding 301s, use a noindex,follow/canonical, or a 404 in this situation?
-
Hi Mozzers,
I feel I am facing a double-edged-sword situation. I am migrating 4 domains into one and am in the process of creating the URL redirect mapping.
The pages I am having the most issues with are the event pages that are past their dates but still carry some value, as they generally each have one external followed link.
www.example.com/event-2008 301 redirect to www.newdomain.com/event-2016
www.example.com/event-2007 301 redirect to www.newdomain.com/event-2016
www.example.com/event-2006 301 redirect to www.newdomain.com/event-2016
Again, these old events aren't especially important in terms of link equity, but they do carry some. At the same time, piling up multiple 301s that point to the same page may not be a good idea, since a long redirect list can slow page load times and hurt the new site's performance. If I serve a 404, I lose the bit of equity those pages have. noindex,follow may work, since it would keep the old pages out of the index while still letting their links be followed, but I'm still not 100% sure about it. I am not sure how a canonical would work either, since it would mean keeping the old domain live. At this point I am not sure which direction to take.
Thanks for your answers!
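For reference, the noindex,follow option raised above would be implemented as a robots meta tag on each old event page (a sketch, assuming the old pages stay live on the old domain):

```html
<!-- Hypothetical sketch: a robots meta tag in the head of each old
     event page kept live on the old domain. The page drops out of the
     index, but its outbound links can still be crawled. -->
<meta name="robots" content="noindex, follow">
```

Note, though, that Google has said pages left in long-term noindex may eventually be treated as noindex,nofollow, so any equity preserved this way is likely temporary.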
-
Before deciding not to do a 301 redirect, you may want to check how much traffic volume these pages get. If it's not significant and for some reason you're unwilling to do a 301 redirect, I would suggest trying to get the actual links pointing at those pages changed to your new events page. You could also show your new events page to those who linked to your old events pages, to see if you can get link equity flowing to the new page.
-
Thanks, everyone!
If I decide not to 301, what would be the best alternative for these old events?
-
Regarding the speed issue: a single rewrite rule using a regex wildcard could redirect all of those old event URLs to the new events page, as it appears you wish to do. That saves a huge amount of work and cuts way down on the number of 301 redirects that have to be parsed on each page load.
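As a sketch of that wildcard approach (assuming Apache with mod_alias, and the hypothetical URL patterns from the question), a single RedirectMatch rule in the old domain's .htaccess could replace the whole list of per-event 301s:

```apache
# Hypothetical sketch (Apache, mod_alias). One regex rule replaces the
# whole list of per-event 301s: any old /event-YYYY URL is permanently
# redirected to the single new events page.
RedirectMatch 301 ^/event-[0-9]{4}/?$ http://www.newdomain.com/event-2016
```

One rule like this is evaluated per request, rather than one stored redirect per old event, which is where the maintenance and parsing savings come from.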
Paul
-
If the pages are worth the effort of 301'ing, I wouldn't worry about page speed for them. Besides link authority from those old pages, you should also look at traffic, since 301s are really more about a seamless experience for the people coming to your site.
-
The first thing that comes to my mind is: how much link equity do these pages actually bring in? I know we SEO people hate to throw away any kind of link equity, but at the end of the day we're not here to make SEO awesome for its own sake. We want results! We want to drive those heavenly KPIs we look at every day. If these pages have really been a thorn in your side and are taking up your time, I would suggest analyzing how much you'd lose if you just left them out of your new domain. I'd probably cut them loose and keep your life simple. If they're worth it, though, do the 301 redirect and see what kind of link equity gets passed on.
Another option is to just change the source link: if you can get in contact with the website that's linking and let them know what's going on, that might be a good option. That being said, these events are years old, so the request might be met with a "That's not worth our time; the event is already past."
Again, unless these pages are bringing in link equity that's vital for ranking on keywords that drive results... forget about them and spend your time working on something more valuable.
-Jacob
Related Questions
-
Should I noindex my categories?
Hello! I have created a directory website with a pretty active blog. I probably messed this up, but I pretty much have categories (for my blog) and custom taxonomies (for different categories of services) that are very similar. For example, I have the blog category "anxiety therapists" and the custom taxonomy "anxiety". 1. Is this a problem for Google? Can it tell the difference between archive pages in these different categories even though the names are similar? 2. Should I noindex my blog categories, since the main purpose of my site is to help people find therapists, i.e. my custom taxonomy?
Intermediate & Advanced SEO | | angelamaemae0 -
Marking Ads As Ads
In marking paid ads as "advertisement" for the sake of Google organic, if you have a block of small ads, do you have to mark each and every one as an advertisement? For instance, let's say you have a block of small ads in the right column... mark each one or just at the top or what? Thanks!
Intermediate & Advanced SEO | | 945010 -
Should I use tags or h1/h2 tags for article titles on my homepage
I recently had an SEO consultant recommend using tags instead of h1/h2 tags for article titles on the homepage of my news website and on category landing pages. I've only seen this done a handful of times on news/editorial websites. For example: http://www.muscleandfitness.com/ Can anyone weigh in on this?
Intermediate & Advanced SEO | | blankslatedumbo0 -
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: the page where the user applies various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and ranking. These pages have additional content besides the vehicle listings themselves, those results are randomized or sliced/diced in different and unique ways, and they're updated twice per day. We do not want #2, the Vehicle Details pages, indexed, as these pages appear and disappear all the time based on dealer inventory and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query. We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would leave 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?)
Noindex advantages:
- Does prevent vehicle details pages from being indexed
- Allows ALL pages to be crawled (advantage?)
Noindex disadvantages:
- Difficult to implement: the vehicle details pages are served via Ajax, so they have no tag. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex directive based on querystring variables, similar to this Stack Overflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages. I say "forces" because of the crawl budget required; the crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if blocked by robots.txt.
Hash (#) URL advantages:
- By using for links on Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindex pages, and the internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?)
- Does not require complex Apache stuff
Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be as if these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow, in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed; it could easily get stuck/lost, and it seems like a waste of resources and in some shadowy way bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality self-contained in the plugin (unlike noindex), and it conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these (). Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | | browndoginteractive0 -
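For what it's worth, the X-Robots-Tag approach described in the question could be sketched in a few lines of Apache config. This is a hypothetical example: the "/vehicle-details/" path is an assumption, not the plugin's actual URL structure, so the expression would need adjusting to the real URLs or querystring.

```apache
# Hypothetical sketch (Apache 2.4+, mod_headers enabled). Sends a
# noindex header on vehicle details responses matched by URL path;
# adjust the regex to the plugin's real URL or querystring pattern.
<If "%{REQUEST_URI} =~ m#^/vehicle-details/#">
    Header set X-Robots-Tag "noindex, follow"
</If>
```

As the question itself notes, this only works if the pages are not also disallowed in robots.txt, since headers of a blocked URL are never fetched.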
Website.com/blog/post vs website.com/post
I have clients with full WordPress sites and clients with just a WordPress blog on the back of a website. The clients with entire WordPress sites seem to rank better. Do you think the URL structure could have anything to do with it? Does having that extra /blog folder decrease SEO effectiveness? Setting up a few new blogs now...
Intermediate & Advanced SEO | | PortlandGuy0 -
Canonical tag
Hi all, I have an ecommerce client, and on their pages they have a drop-down so customers can sort the view by price, list, etc. Naturally I want a canonical tag on these pages; here's the question. As they have different pages of products, the canonical tag on http://www.thegreatgiftcompany.com/occassion/christmas#items-/occassion/christmas/page=7/?sort=price_asc,searchterm=,layout=grid,page=1 points to http://www.thegreatgiftcompany.com/occassion/christmas#items-/occassion/christmas/page=7. Now, because page=7 is a duplicate of the main page, shouldn't the canonical just point to the main page rather than to page=7, even when there is a canonical tag on /christmas/page=7 pointing to the /christmas page? Hope that makes sense to everyone!
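In case it helps to visualize, here is a hypothetical markup sketch (not taken from the site) of pointing every sorted/paginated variant at the main category page:

```html
<!-- Hypothetical sketch: in the head of each paginated/sorted variant
     (page=7, sort=price_asc, etc.), canonicalize to the main page -->
<link rel="canonical" href="http://www.thegreatgiftcompany.com/occassion/christmas">
```

Also note that everything after the # in the URLs above is a fragment, which isn't sent to the server and which search engines generally ignore, so those fragment variants tend to resolve to the same URL in any case.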
Intermediate & Advanced SEO | | KarlBantleman0 -
Canonical tag usage.
Hi all, I have added canonical tags to all my pages, yet I just don't know if I have used them correctly; do you have any ideas on this? My URL is http://www.waspkilluk.co.uk
Intermediate & Advanced SEO | | simonberenyi0 -
Do you use your own Blog networks?
Do you use a network of sites you own for links to your clients in your SEO efforts? I see so many SEO companies doing this from such junk sites, with all their clients in the blogroll; it seems totally crazy. But it seems this stuff works. Do any of you do this, and if so, how do you keep it white hat?
Intermediate & Advanced SEO | | DavidKonigsberg0