Wrong URLs Indexed, Failing to Rank Anywhere
-
I’m struggling with a client website that's massively failing to rank.
It was published in Nov/Dec last year - not optimised or ranking for anything - and it's about 20 pages. I came on board recently, and 5-6 weeks ago we added new content, did the on-page optimisation and finally changed from the non-www to the www version in htaccess and the WP settings (while setting www as preferred in Search Console). We then did a press release and since then have acquired about 4 partial-match contextual links on good websites (before this, it had virtually none, save for social profiles etc.).
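For reference, the htaccess change was the standard force-www 301 - roughly this form (hostname shown as a placeholder rather than our real domain):

```apache
# Redirect every non-www request to the www hostname with a single 301
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```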
I should note that just before we added the (about 50%) new content and optimised, my developer accidentally published the dev version of the old site and it got indexed. He immediately blocked it in robots.txt, and I assumed it would therefore drop out of the index fairly quickly and we need not be concerned.
Now it's about 6 weeks later, and we’re still not ranking anywhere for our chosen keywords. The keywords are around “egg freezing,” so only moderate competition. We’re not even ranking for our brand name, which is 4 words long and pretty unique. We were ranking in the top 30 for this until yesterday, but it was the press release page on the old (non-www) URL!
I was convinced we must have a duplicate content issue after realising the dev site was still indexed, so last week we went into Search Console and manually removed all of the dev URLs from the index. The next day they were all gone, and we suddenly began ranking (~position 83) for “freezing your eggs,” one of our keywords! This seemed unlikely to be a coincidence, but once again the positive sign was dampened by the fact that it was the non-www page that was ranking, which made me wonder why the non-www pages were even still indexed. When I do site:oursite.com, for example, both non-www and www URLs are still showing up...
Can someone with more experience than me tell me whether I need to give up on this site, or what I could do to find out if I do?
I feel like I may be wasting the client’s money here by building links to a site that could be under a very weird penalty.
-
Thanks, we'll check that all of the old URLs are redirecting correctly (though I'd assume, given the htaccess and WP settings changes, they would).
Will also perform the other check you mentioned and report back if anything is amiss... Thank you, Lynn.
-
It should sort itself out if the technical setup is OK, so yes, keep doing what you are doing!
I would not use the removal request tool to try to get rid of the non-www URLs; it is not really intended for this kind of usage and might bring unexpected results. Usually your 301s should bring about the desired effect faster than most other methods. You can use a redirect-checking tool just to 100% confirm that the non-www version is 301 redirecting to the www version on all pages (you probably already have, but I mention it again to be sure).
Are the www URLs in your sitemap showing as all (or mostly) indexed in Search Console? If yes, then really you should be OK and it might just need a bit of patience.
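If it helps anyone else, that confirmation can also be scripted. Below is a rough sketch using only the Python standard library; the hostnames and paths are placeholders, and the pure check of the first response is split out so it can be reasoned about separately from the live fetch:

```python
import urllib.request
import urllib.error
from urllib.parse import urlparse

def redirect_ok(status, location, path, www_host="www.example.com"):
    """True if a non-www URL answers with a single 301 straight to the
    same path on the www host (no 302s, no redirect chains)."""
    if status != 301:
        return False
    loc = urlparse(location)
    return loc.netloc == www_host and loc.path == path

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None makes urllib raise HTTPError instead of following
    # the hop, so we can inspect the very first response.
    def redirect_request(self, *args, **kwargs):
        return None

def first_response(url):
    """Fetch one URL without following redirects; return (status, Location)."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open(urllib.request.Request(url, method="HEAD"))
        return resp.status, ""
    except urllib.error.HTTPError as e:
        return e.code, e.headers.get("Location", "")

# e.g.:
# for path in ("/", "/about/", "/press-release/"):
#     status, loc = first_response("http://example.com" + path)
#     print(path, redirect_ok(status, loc, path))
```

Run it over every path in the sitemap and any non-301, or any 301 pointing somewhere other than the matching www URL, is worth a look.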
-
Firstly, thank you both very much for your responses - they were both really helpful. It sounds, then, like the only solution is to keep waiting while continuing our link-building and hoping that might help (Lynn, sadly we have already taken care of most of the technical suggestions you made).
Would it be worth also submitting removal requests via Search Console for the non-www URLs? I had assumed these would drop out quickly after setting the preferred domain, but that didn't happen, so perhaps forcing it like we did for the development URLs could do the trick?
-
Hi,
As Chris mentions, it sounds like you have done the basics and you might just need to be a bit patient. Especially with only a few incoming links, it might take Google a little while to fully crawl and index the site and any changes.
It is certainly worth double checking the main technical bits:
1. The dev site is fully removed from the index (there are slightly different ways to remove complete subdomains vs. subfolders, but in my experience removal via Search Console is usually pretty quick). After that, make sure the dev site is permanently removed from its current location and returns a 404, or that it is password protected.
2. Double-check the www vs. non-www 301 logic and make sure it is all working as expected.
3. Submit a sitemap with the latest URLs and confirm indexing of the pages in Search Console (important in order to quickly identify any hidden indexing issues).
Then it is a case of waiting for Google to incorporate all the updates into the index. A mixture of www and non-www for a period is not unusual in such situations. As long as the 301s are working correctly, the www versions should eventually be the only ones you see.
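To make point 1 easy to verify once the removal has gone through, here is a small sketch (Python standard library only; the dev hostname is a placeholder). The idea is simply that every dev URL should now answer with a status that keeps it out of the index - gone or locked - rather than a 200:

```python
import urllib.request
import urllib.error

# Statuses that signal a dev page is genuinely gone or locked away:
# 404/410 mean "gone", 401/403 mean password protected / forbidden.
GONE_OR_LOCKED = {401, 403, 404, 410}

def page_status(url):
    """Return the HTTP status code for a URL."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def safely_deindexed(status):
    return status in GONE_OR_LOCKED

# e.g.:
# for url in ("http://dev.example.com/", "http://dev.example.com/about/"):
#     print(url, safely_deindexed(page_status(url)))
```

If any dev URL still returns a 200, it can keep getting re-indexed no matter how many removal requests you submit.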
Perhaps it's important to note that this does not sound like a 'penalty' as such, but a technical issue, so it needs a technical fix in the first instance and should not hold you back in the medium to long term as a penalty might. That being said, if your keywords are based on egg freezing of the human variety (i.e. IVF services etc.) then I think that is usually a pretty competitive area, often with a lot of high-authority informational domains floating around in the mix in addition to the commercial ones. So, if the technical stuff is all good, then I would start looking at competition/content again - maybe your keywords are more competitive than you think (just a thought!).
-
We've experienced almost exactly the same thing in the past, when a dev accidentally left staging.domain.com open for indexation... the really bad news is that despite noticing this, blocking via robots.txt and going through the same process to remove the wrong URLs via Search Console etc., getting the correct domain ranking in the top 50 positions took almost 6 infuriating months!
Just like you, we saw the non-www version and the staging.domain version of the pages indexed for a couple of months after we fixed everything up; then all of a sudden, one day, the two wrong versions of the site disappeared from the index and the correct one started grabbing some traction.
All this to say that to my knowledge, there are no active tasks you can really perform beyond what you've already done to speed this process up. Maybe building a good volume of strong links will push a positive signal that the correct one should be recrawled. We did spend a considerable amount of time looking into it and the answer kept coming back the same - "it just takes time for Google to recrawl the three versions of the site and figure it out".
This is essentially educated speculation, but I believe the reason this happens is that the wrong versions were, for whatever reason, crawled first at different points and treated as the original, so the correct one was seen as a 100% duplicate and ignored. This would explain why you're seeing what you are, and also why, in a magical 24-hour window that could come at any point, everything sorted itself out - once the "original" versions of the domain no longer exist, the truly correct one is finally unique.
If my understanding of all this is correct, it would also mean that moving your site to yet another domain wouldn't help either since according to Google's cache/index, the wrong versions of your current domain are still live and the "original" so putting that same site/content on a different domain would just be yet another version of the same site.
Apologies for not being able to offer actionable tasks or good news but I'm all ears for future reference if anyone else has a solution!