Wrong URLs Indexed, Failing to Rank Anywhere
-
I’m struggling with a client website that's massively failing to rank.
It was published in Nov/Dec last year - not optimised or ranking for anything - and it's about 20 pages. I came on board recently, and 5-6 weeks ago we added new content, did the on-page optimisation, and finally changed from the non-www to the www version in .htaccess and the WP settings (while setting www as preferred in Search Console). We then did a press release and have since acquired about 4 partial-match contextual links on good websites (before this, it had virtually none, save for social profiles etc.).
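In case it helps with diagnosis, the .htaccess change was essentially the standard non-www to www rewrite - sketched here with a placeholder domain rather than our exact file:

    # 301 redirect every non-www request to the www hostname
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^oursite\.com$ [NC]
    RewriteRule ^ http://www.oursite.com%{REQUEST_URI} [R=301,L]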
I should note that just before we added the (roughly 50%) new content and optimised, my developer accidentally published the dev site of the old version and it got indexed. He immediately blocked it in robots.txt, and I assumed it would therefore drop out of the index fairly quickly and we need not be concerned.
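For completeness, what he added was the stock disallow-everything robots.txt on the dev host (a sketch, not his exact file):

    # robots.txt at the root of the dev host only - never on the live site
    User-agent: *
    Disallow: /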
Now it's about 6 weeks later, and we’re still not ranking anywhere for our chosen keywords. The keywords are around “egg freezing,” so only moderate competition. We’re not even ranking for our brand name, which is 4 words long and pretty unique. We were ranking in the top 30 for this until yesterday, but it was the press release page on the old (non-www) URL!
I was convinced we must have a duplicate content issue after realising the dev site was still indexed, so last week we went into Search Console to remove all of the dev URLs manually from the index. The next day they were all removed, and we suddenly began ranking (position ~83) for "freezing your eggs," one of our keywords! This seemed unlikely to be a coincidence, but once again the positive sign was dampened by the fact that it was a non-www page that was ranking, which made me wonder why the non-www pages were even still indexed. When I do site:oursite.com, for example, both non-www and www URLs are still showing up.
Can someone with more experience than me tell me whether I need to give up on this site, or what I could do to find out if I do?
I feel like I may be wasting the client's money here by building links to a site that could be under a very weird penalty.
-
Thanks, we'll check that all of the old URLs are redirecting correctly (though I'd assume, given the .htaccess and WP settings changes, they are).
Will also perform the other check you mentioned and report back if anything is amiss... Thank you, Lynn.
-
It should sort itself out if the technical setup is OK, so yes, keep doing what you are doing!
I would not use the removal request tool to try to get rid of the non-www URLs; it is not really intended for this kind of usage and might bring unexpected results. Usually your 301s will bring about the desired effect faster than most other methods. You can use a redirect checker just to 100% confirm that the non-www version is 301 redirecting to the www version on all pages (you probably already have, but I mention it again to be sure).
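If you would rather script the check yourself, something like this rough Python sketch (using the requests library, with placeholder paths you would swap for your real pages) will print the status code and redirect target for each non-www URL:

    import requests

    # Hypothetical sample paths - replace with the pages you care about
    paths = ["/", "/about/", "/egg-freezing/", "/press-release/"]

    for path in paths:
        url = "http://oursite.com" + path  # the non-www version
        resp = requests.get(url, allow_redirects=False, timeout=10)
        location = resp.headers.get("Location", "(no Location header)")
        print(url, "->", resp.status_code, location)

Every line should show a 301 with a Location pointing at the matching www page; anything returning 200, a 302, or a chain of hops is worth fixing.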
Are the www URLs in your sitemap showing as all (or mostly) indexed in Search Console? If yes, then really you should be OK and it might just need a bit of patience.
-
Firstly, thank you both very much for your responses - they were both really helpful. It sounds, then, like the only solution is to keep waiting while continuing our link building and hoping that might help (Lynn, sadly we have already taken care of most of the technical suggestions you made).
Would it be worth also submitting removal requests via Search Console for the non-www URLs? I had assumed these would drop out quickly after setting the preferred domain, but that didn't happen, so perhaps forcing it like we did for the development URLs could do the trick?
-
Hi,
As Chris mentions, it sounds like you have done the basics and you might just need to be a bit patient. Especially with only a few incoming links, it might take Google a little while to fully crawl and index the site and any changes.
It is certainly worth double-checking the main technical bits:
1. The dev site is fully removed from the index (there are slightly different ways to remove complete subdomains vs. subfolders, but in my experience removal via Search Console is usually pretty quick; after that, make sure the dev site is permanently gone from its current location and returns a 404, or that it is password protected).
2. Double-check the www vs. non-www 301 logic and make sure it is all working as expected.
3. Submit a sitemap with the latest URLs and confirm indexing of the pages in Search Console (important in order to quickly identify any hidden indexing issues; a sketch of what the sitemap should look like follows this list).
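On point 3, the sitemap should contain only the canonical www URLs - a minimal sketch with made-up paths to show the shape, not your actual file:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url><loc>http://www.oursite.com/</loc></url>
      <url><loc>http://www.oursite.com/egg-freezing/</loc></url>
      <url><loc>http://www.oursite.com/about/</loc></url>
    </urlset>

If any non-www URLs have crept into the file, that alone can keep the old versions circulating in the index.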
Then it is a case of waiting for Google to incorporate all the updates into the index. A mixture of www and non-www results for a period is not unusual in such situations. As long as the 301s are working correctly, the www versions should eventually be the only ones you see.
Perhaps important to note that this does not sound like a 'penalty' as such, but a technical issue - so it needs a technical fix in the first instance, and it should not hold you back in the medium to long term the way a penalty might. That being said, if your keywords are based on egg freezing of the human variety (i.e. IVF services etc.), then I think that is usually a pretty competitive area, often with a lot of high-authority informational domains floating around in the mix in addition to the commercial ones. So, if the technical stuff is all good, I would start looking at competition/content again - maybe your keywords are more competitive than you think (just a thought!).
-
We've experienced almost exactly the same thing in the past, when a dev accidentally left staging.domain.com open for indexation... The really bad news is that despite noticing it, blocking via robots.txt, and going through the same process of removing the wrong URLs via Search Console etc., getting the correct domain ranking in the top 50 positions took almost 6 infuriating months!
Just like you, we saw the non-www version and the staging.domain version of the pages indexed for a couple of months after we fixed everything up; then, all of a sudden, one day the two wrong versions of the site disappeared from the index and the correct one started grabbing some traction.
All this to say that, to my knowledge, there are no active tasks you can really perform beyond what you've already done to speed this process up. Maybe building a good volume of strong links will push a positive signal that the correct version should be recrawled. We did spend a considerable amount of time looking into it, and the answer kept coming back the same: "it just takes time for Google to recrawl the three versions of the site and figure it out".
This is essentially educated speculation, but I believe the reason this happens is that the wrong versions were crawled first, at different points, and were treated as the original, so the correct version was seen as a 100% duplicate and ignored. This would explain why you're seeing what you are, and also why, in a magical 24-hour window that could come at any point, everything sorted itself out - it seems that once the "original" versions of the domain no longer existed, the truly correct one was finally unique.
If my understanding of all this is correct, it would also mean that moving your site to yet another domain wouldn't help, since according to Google's cache/index the wrong versions of your current domain are still live and still the "original" - putting the same site/content on a different domain would just create yet another version of the same site.
Apologies for not being able to offer actionable tasks or good news, but I'm all ears for future reference if anyone else has a solution!