Change of URLs: "little by little" vs. "all at once"
-
Hi guys,
We're planning to change the URL structure of our product pages (to make them more SEO-friendly), and it's obviously something very sensitive given the 301 redirects we'll have to put in place...
I have a doubt about Mister Google: if we roll out the change slowly (area by area, to minimize the risk of problems from a bad 301 redirect), would we lose rankings in the search engine? (I'm wondering if Google might consider our website not "coherent", i.e. not having the same product-page URL structure across all product pages for some time.)
Thanks for your kind opinion
-
Hi Nakul,
Maybe the initial post was not explicit enough: we will obviously redirect (301) all the old URLs. And to make sure we don't mess up the redirects, we want to update to the new product URLs little by little, product area by product area.
Which means that during this "transition" period, some product URLs will have the old structure and others will have the new one (both are given above). The question is: does Google care about the consistency of (product page) URLs within the same website?
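Since the rollout is staged area by area, one practical safeguard is to validate each area's redirect map before deploying it. Here's a minimal sketch (purely illustrative; the dict-based mapping, the example URLs, and the new-URL regex are assumptions, not your actual setup) that catches redirect chains, self-redirects, and targets that don't match the new pattern:

```python
import re

# Hypothetical pattern for the new structure:
# /[category name]/[product name]-price-p[product ID]_[category ID]
NEW_URL_RE = re.compile(r"^/[\w-]+/[\w-]+-price-p\d+_\d+$")

def validate_redirects(redirect_map):
    """Return a list of problems found in an old->new 301 map."""
    problems = []
    for old, new in redirect_map.items():
        if new in redirect_map:
            # Target is itself an old URL: this creates a 301 chain.
            problems.append(f"chain: {old} -> {new} -> {redirect_map[new]}")
        if not NEW_URL_RE.match(new):
            problems.append(f"bad target format: {new}")
        if old == new:
            problems.append(f"self-redirect: {old}")
    return problems

# Hypothetical mapping for one product area:
area_map = {
    "/123/456/garmin-gps": "/gps/garmin-gps-price-p456_123",
    "/123/789/tomtom-gps": "/123/456/garmin-gps",  # mistake: points at an old URL
}
print(validate_redirects(area_map))
```

Running a check like this per area before flipping it over means each batch can be verified independently, which is the main upside of the gradual approach.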
-
Will the old URLs continue to work, or will they redirect? If you can share the URL here in public or via PM, that might help.
-
Hi Nakul,
A product can't be in more than one category on our website so that won't be a problem.
-
Hi Keri,
Yes, the second one will be the new one. It's the word "price" that will be in the URL, not its value. We are a price comparison website, so the keyword "price" is core for us.
-
I agree with Keri. You don't want to do that. Also, what happens if your product is in multiple categories?
Do you have multiple URLs for the same product then? Would you have a canonical tag?
-
Is the second URL your new URL? You're including your price in your URL? What happens if your price changes?
-
Hi Nakul,
Our domain is quite strong; we are talking about more than 450K product pages.
Here is an example of URL change that we'll do:
domain/[category ID]/[product ID]/[product name]
-> domain/[category name]/[product name]-price-p[product ID]_[category ID]
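To make that mapping concrete, here is a small sketch of how the new URL could be built from the fields already present in the old one (illustrative only: the slug rules and the example category/product names are assumptions, since the real values would come from your database):

```python
def new_product_url(category_id, category_name, product_id, product_name):
    """Build the new-style URL from the same fields used in the old one.

    Old:  /[category ID]/[product ID]/[product name]
    New:  /[category name]/[product name]-price-p[product ID]_[category ID]
    """
    def slug(s):
        # Assumed slug rule: lowercase, spaces to hyphens.
        return s.lower().replace(" ", "-")
    return f"/{slug(category_name)}/{slug(product_name)}-price-p{product_id}_{category_id}"

# Hypothetical example, old URL: /12/3456/garmin-nuvi-250
print(new_product_url(12, "GPS Devices", 3456, "Garmin Nuvi 250"))
# -> /gps-devices/garmin-nuvi-250-price-p3456_12
```

Since both IDs survive in the new URL, each old URL can be 301-redirected deterministically without any lookup table, which lowers the risk of a bad redirect during the area-by-area rollout.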
-
Pedro
How strong is your domain/website? Can you give examples of what you are doing? How many product pages are you talking about?