Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Wrong URLs indexed, Failing To Rank Anywhere
-
I'm struggling with a client website that's massively failing to rank.
It was published in Nov/Dec last year, not optimised or ranking for anything, and it's about 20 pages. I came on board recently, and 5-6 weeks ago we added new content, did the on-page optimisation, and finally changed from the non-www to the www version in .htaccess and the WordPress settings (while setting www as preferred in Search Console). We then did a press release and have since acquired about four partial-match contextual links on good websites (before this, it had virtually none, save for social profiles etc.).
I should note that just before we added the (roughly 50%) new content and optimised, my developer accidentally published the dev site of the old version of the site, and it got indexed. He immediately blocked it in robots.txt, and I assumed it would therefore drop out of the index fairly quickly and we need not be concerned.
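For reference, a robots.txt block like the one presumably added to the dev site only stops Googlebot from crawling; it does not remove URLs that are already in the index (a sketch, with hypothetical contents):

```
# robots.txt on the dev host (hypothetical)
# Disallow blocks crawling of these URLs, but pages that are already
# indexed can linger in the index for weeks afterwards.
User-agent: *
Disallow: /
```

To actually drop indexed dev URLs, the usual options are a noindex header or meta tag (which requires the page to stay crawlable), password protection, or returning a 404/410, as the replies below suggest.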
Now it's about 6 weeks later, and we’re still not ranking anywhere for our chosen keywords. The keywords are around “egg freezing,” so only moderate competition. We’re not even ranking for our brand name, which is 4 words long and pretty unique. We were ranking in the top 30 for this until yesterday, but it was the press release page on the old (non-www) URL!
I was convinced we must have a duplicate content issue after realising the dev site was still indexed, so last week we went into Search Console to remove all of the dev URLs manually from the index. The next day they were all removed, and we suddenly began ranking (~83) for "freezing your eggs," one of our keywords! This seemed unlikely to be a coincidence, but once again the positive sign was dampened by the fact that it was a non-www page that was ranking, which made me wonder why the non-www pages were even still indexed. When I do site:oursite.com, for example, both non-www and www URLs still show up.
Can someone with more experience than me tell me whether I need to give up on this site, or what I could do to find out if I do?
I feel like I may be wasting the client's money here by building links to a site that could be under a very weird penalty.

-
Thanks, we'll check that all of the old URLs are redirecting correctly (though I'd assume, given the .htaccess and WP settings changes, they would).
Will also perform the other check you mentioned and report back if anything is amiss... Thank you, Lynn.
-
It should sort itself out if the technical setup is OK, so yes, keep doing what you are doing!
I would not use the removal request tool to try to get rid of the non-www URLs; it is not really intended for this kind of usage and might bring unexpected results. Usually your 301s will bring about the desired effect faster than most other methods. You can use a tool like this one just to confirm 100% that the non-www version is 301-redirecting to the www version on all pages (you probably already have, but I mention it again to be sure).
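If you'd rather script that check than use an online tool, a small sketch of the logic follows. It only inspects the status code and Location header you would get back from a request to a non-www URL; the URLs shown are placeholders, not the poster's real site:

```python
from urllib.parse import urlsplit

def is_www_301(status_code: int, requested_url: str, location: str) -> bool:
    """True when a response is a permanent (301) redirect from the
    non-www host to the same scheme and path on the www host."""
    if status_code != 301:
        return False
    req, loc = urlsplit(requested_url), urlsplit(location)
    return (loc.netloc == "www." + req.netloc
            and loc.path == req.path
            and loc.scheme == req.scheme)

# Example with placeholder URLs:
print(is_www_301(301, "http://oursite.com/press-release",
                 "http://www.oursite.com/press-release"))  # True
print(is_www_301(302, "http://oursite.com/",
                 "http://www.oursite.com/"))               # False: temporary, not 301
```

In practice you would fetch each non-www URL with redirects disabled (e.g. `requests.get(url, allow_redirects=False)`) and feed the status code and Location header into a check like this for every page in the sitemap.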
Are the www URLs in your sitemap showing as all (or mostly) indexed in Search Console? If yes, then you should really be OK, and it might just need a bit of patience.
-
Firstly, thank you both very much for your responses; they were both really helpful. It sounds, then, like the only solution is to keep waiting while continuing our link-building and hoping that might help (Lynn, sadly we have already taken care of most of the technical suggestions you made).
Would it be worth also submitting removal requests via Search Console for the non-www URLs? I had assumed these would drop out quickly after setting the preferred domain, but that didn't happen, so perhaps forcing it like we did for the development URLs could do the trick?
-
Hi,
As Chris mentions, it sounds like you have done the basics and you might just need to be a bit patient. Especially with only a few incoming links, it might take Google a little while to fully crawl and index the site and any changes.
It is certainly worth double checking the main technical bits:
1. Confirm the dev site is fully removed from the index (there are slightly different ways to remove complete subdomains vs subfolders, but in my experience removal via Search Console is usually pretty quick). After that, make sure the dev site is permanently removed from its current location and returns a 404, or that it is password protected.
2. Double-check the www vs non-www 301 logic and make sure it is all working as expected.
3. Submit a sitemap with the latest URLs and confirm indexing of the pages in Search Console (important in order to quickly identify any hidden indexing issues).
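On point 2, the usual Apache rule for forcing the www host looks something like the sketch below (a generic example, not the poster's actual configuration; the domain is a placeholder, mod_rewrite must be enabled, and WordPress's own rewrite rules should come after it):

```apache
# .htaccess — 301 every non-www request to the www host,
# preserving the requested path and query string
RewriteEngine On
RewriteCond %{HTTP_HOST} ^oursite\.com$ [NC]
RewriteRule ^(.*)$ http://www.oursite.com/$1 [R=301,L]
```

The `R=301` flag is what makes the redirect permanent; a plain `R` would issue a 302, which does not consolidate signals onto the www version in the same way.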
Then it is a case of waiting for Google to incorporate all the updates into the index. A mixture of www and non-www URLs for a period is not unusual in such situations. As long as the 301s are working correctly, the www versions should eventually be the only ones you see.
Perhaps it's important to note that this does not sound like a 'penalty' as such but a technical issue, so it needs a technical fix in the first instance and should not hold you back in the medium to long term as a penalty might. That being said, if your keywords are based on egg freezing of the human variety (i.e. IVF services etc.), then I think that is usually a pretty competitive area, often with a lot of high-authority informational domains floating around in the mix in addition to the commercial ones. So, if the technical stuff is all good, I would start looking at competition/content again; maybe your keywords are more competitive than you think (just a thought!).
-
We've experienced almost exactly the same thing in the past when a dev accidentally left staging.domain.com open for indexation... the really bad news is that despite noticing this, blocking via robots.txt and going through the same process to remove the wrong URLs via Search Console etc., getting the correct domain ranking in the top 50 positions took almost six infuriating months!
Just like you, we saw the non-www version and the staging.domain version of the pages indexed for a couple of months after we fixed everything up; then, all of a sudden, one day the two wrong versions of the site disappeared from the index and the correct one started grabbing some traction.
All this to say that, to my knowledge, there are no active tasks you can really perform beyond what you've already done to speed this process up. Maybe building a good volume of strong links will send a positive signal that the correct version should be recrawled. We did spend a considerable amount of time looking into it, and the answer kept coming back the same: "it just takes time for Google to recrawl the three versions of the site and figure it out".
This is essentially educated speculation, but I believe the reason this happens is that the wrong versions were, for whatever reason, crawled first and treated as the original, so the correct one was seen as a 100% duplicate and ignored. This would explain why you're seeing what you are, and also why, in a magical 24-hour window that could come at any point, everything sorted itself out: once the "original" versions of the domain no longer exist, the truly correct one is finally unique.
If my understanding of all this is correct, it would also mean that moving your site to yet another domain wouldn't help either since according to Google's cache/index, the wrong versions of your current domain are still live and the "original" so putting that same site/content on a different domain would just be yet another version of the same site.
Apologies for not being able to offer actionable tasks or good news but I'm all ears for future reference if anyone else has a solution!