Looking to remove dates from URL permalink structure. What do you think of this idea?
-
I know most people who remove dates from their URL structure do so and then set up 301 redirects, and I believe that's typically the right way to go about it. My biggest fear with a global 301 implementation like that across an entire site is that I've seen cases where it seemed to shock Google and the site took a pretty bad hit in organic traffic.
Here's what I'm thinking would be a safer approach, and I'd like to hear others' thoughts. What if we...
- Changed the permalink structure moving forward so future posts don't include the date.
- Left all current URLs as they are, dates included.
- Then went back and optimized past posts in waves (including proper 301 redirects and the better URL structure). This way we avoid potentially shocking Google with a global change across all URLs.
Do you know of a way to do this on a large WordPress website? Do you see any complications that could come up in the process? I'd like to hear any other thoughts on this, please.
Thanks!
-
Hey Jeff,
Thank you for your input. So you just globally changed the permalink structure, put global redirects in place, and didn't see a permanent loss in traffic? And you did that on multiple sites?
If so, I'll most probably follow your path.
Thanks again,
Julien
-
Hey Julien -
I wouldn't go this route. Since asking this question I have had dates removed from 30+ domains, many with 5-10 million+ pageviews per month. We haven't seen it as a risk and are now very much in favor of removing dates from URLs on most sites we work with. We work with sites that have very evergreen content, and republishing is a very strong SEO strategy.
The process is very similar to moving your site from HTTP to HTTPS. Since Google started recommending HTTPS we haven't seen any issues with removing dates either.
Hope that helps
-
Hey Thomas,
Interesting thought! Could you go into a little more detail on how that regex would work? Would that apply the redirects to only a portion of the posts?
Thanks!
Julien
-
I'd only do 10% of the pages first. Watch them, and if you like what you see, do the next 20%:
# Strip /YYYY/MM/ from the start of a URL and 301 it to the bare slug
RedirectMatch 301 ^/([0-9]{4})/([0-9]{2})/(.*)$ http://yourwebsite.com/$3
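To be clear, that rule as written would 301 every dated URL at once, not a random sample. If you want to stage it, one sketch is to narrow the pattern so only a slice of the archive redirects per wave, for example one publication year at a time (yourwebsite.com and the year are placeholders):

# Hypothetical wave 1: redirect only 2012 posts; other years stay untouched
RedirectMatch 301 ^/2012/([0-9]{2})/(.*)$ http://yourwebsite.com/$2

When that wave holds steady in analytics, add the next year's rule, and finish with the catch-all above.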
-
Garrett -
I never got a clear answer, but I have since gone ahead and made the change on 20+ WordPress blogs without any ill effects. The changes were made only on sites that had dates in the permalink structure, and 301 redirects were put in place (on the server, not through a plugin). Trying to change the permalink structure going forward but not back was too much of a hassle. Google appears to see this as a positive change for users because it cleans up the permalink structure and allows site owners to keep their content updated and continue sharing it.
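As a rough sketch of the kind of server-level rule I mean (not our exact config; adjust the pattern if your permalink structure includes the day):

# Illustrative only: strip a /YYYY/MM/DD/ prefix and 301 to the bare slug
RedirectMatch 301 ^/([0-9]{4})/([0-9]{2})/([0-9]{2})/(.*)$ /$4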
I'm not sure how this applies to other scenarios, such as removing folder structure (categories and tags) from the permalink, but I've had only positive results removing dates. I work with some very high-profile mom and food blogs, so I have some pretty solid evidence and data supporting my decisions now.
I hope that helps. Cheers!
-
Hi Jeff,
Did you end up making these changes? How is it going? I found your post as I was researching and rethinking how to structure WordPress blog permalinks.
I have a few e-commerce clients with blog posts that are several years old and still popular in organic search. I'd like to turn some of them into evergreen content that is regularly updated, but I feel like we should do something about the permalinks first.
There are some great insights here. Thank you to all who contributed.
Garrett
-
No problem, glad to help! Best of luck with whichever route you go with!
-
It was worth a shot. Thanks for sharing your thoughts. Cheers!
-
Unfortunately, I don't have any examples for ya. I've never come across this particular situation with a client.
-
Do you know of any site that has used the canonical to do anything like this? It seems like the safest option; I just haven't seen it done at this scale, is all.
-
Yes, I'm saying you should keep URLs as they are. I'm always an advocate for not changing URL structure unless there's a really good, highly beneficial reason for doing so. I don't know of a way to change only new URL structures while keeping old ones the same, but I'm no WP expert.
-
Although I haven't strongly considered that approach, it did cross my mind to utilize the canonical. Do you know of any way to change WordPress permalink structure going forward but not backwards? Or are you suggesting we keep the dates in the URL going forward? I just think that eventually we'll have to think about updating that URL structure.
-
OK, now that I understand the reasoning...
I believe there's a better, less risky approach. What I would do is write a completely new post based on information from the old post. At the same time you publish the new post, go back to the old version and add two things: a canonical tag pointing to the new version, and a bit of very readable text at the top linking to the new post. Something like: "Hey, thanks for your interest in our content. Feel free to read on, but we thought you should know we've updated this post, which can be found here: link"
This accomplishes a few important things. It eliminates the need for a risky project that could affect your entire site just for the ability to update posts (which I'm guessing doesn't happen too often; what percent of posts get updated?). The canonical tag removes the duplicate content risk so you're not cannibalizing your own content. And leaving the old post up gives people the opportunity to discover older content that, while possibly not relevant anymore, still demonstrates you've been a trustworthy source of information for a long time.
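If it helps to see it concretely, the two additions to the old post would look something like this (URLs invented for illustration):

<!-- In the old post's <head>: canonical pointing at the updated version -->
<link rel="canonical" href="https://example.com/updated-post/" />

<!-- At the top of the old post's body: the human-readable pointer -->
<p>Hey, thanks for your interest in our content. Feel free to read on, but we
thought you should know <a href="https://example.com/updated-post/">we've
updated this post here</a>.</p>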
-
Logan,
By not being able to remove the dates, we're not able to go back to a 5-year-old post, make updates, and then republish the content. This is a "mom blog" and the topics can be recycled, but if we create a new post on something we also covered 5 years ago, we would be competing with ourselves instead of using something that already has some authority and rank to it.
That's why we were thinking of somehow making it possible (in WordPress) to keep all current URLs as they are, change the permalink structure moving forward so that future posts don't have dates, and then update posts as we go and 301 them manually over time (see the sketch below). Does that make sense?
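To illustrate what I mean by manual 301s over time (the slugs here are made up):

# Hypothetical wave 1: hand-picked 301s for the first batch of refreshed posts
Redirect 301 /2012/03/easy-snack-ideas/ /easy-snack-ideas/
Redirect 301 /2013/07/diy-summer-crafts/ /diy-summer-crafts/

Each wave would just add a few lines like these as posts get updated, instead of one site-wide rule.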
I agree with your last two statements; it is a HUGE risk to 301 this entire site just to do away with those dates. Even though redirects supposedly pass all link juice, we all know that a big change like that across an entire site could have ill effects with search engines.
I'd like to know if anyone has gone about a URL structure change the way I'm outlining here. Am I crazy to think that's a logical way to go about it? I haven't been able to find anywhere that someone has done this, though.
-
Jeff,
Based on the traffic you say this blog gets, I'm assuming it's rather large and has hundreds, if not thousands, of posts. Which leads me to one simple question:
Why? This seems like a HUGE amount of risk and a pretty decent amount of work for something that's really not going to provide any benefit.
Edit: It should also be noted that just because Google has recently stated that redirects now pass all link juice doesn't mean you should needlessly add a massive number of redirects. Redirects have other implications, like load time: if you have 1,000 redirect rules, every single one of them gets evaluated before any page on your site loads, which adds up.
-
Thanks for your response. I actually agree with most, if not all of what you are saying.
The problem is that this is a larger blog with 5-7 million page views per month on average, 1 million+ of those from organic. I agree with your point about postponing and never getting it done. With a large blog I still think it would be easier (less stressful, if not actually easier) to manage it in waves, so we can pause or correct if there's a larger-than-normal dip that doesn't come back up. For a business it makes sense to bite the bullet, but with these bloggers' sites it seems like too big a risk when the site brings in almost all their income. Does that make sense?
As for the tweet you're referring to, I thought that was mainly in regard to HTTP-to-HTTPS migrations. I guess I need to look into that more.
Thanks!
-
I'm not a fan of your plan.
There can be many reasons why a site might "take a hit": for example, if page-to-page redirects were not implemented, or if the sitemap was not updated, was updated incorrectly, or was not resubmitted to search engines. I wouldn't assume that will happen in your case. In my experience, if the transition is done correctly and there is a hit, it's short-lived.
If you're thinking the redirects will cause you to lose SEO equity, that is no longer the case. Gary Illyes, a Google webmaster trends analyst, tweeted on July 26, 2016: "30x redirects don't lose PageRank anymore."
One of the biggest risks (in my mind) of staging the migration the way you suggest is that the "waves" never happen. I see that a lot: an organization agrees to postpone work to a future date that never arrives, because new and competing priorities take precedence, resulting in endless postponement. If you have the management commitment, funding, and resources to do the work now, I say bite the bullet and go for it. Make a plan. Stick to it. Check and double-check your work.