Development/Test Ecommerce Website Mistakenly Indexed
-
My question is - relatively speaking, how damaging to SEO is it to have BOTH your development/testing site and your live version indexed/crawled by Google and appearing in the SERPs?
We just launched about a month ago, and made a change to the robots.txt on the development site without noticing ... which led to it being indexed too. So now the ecommerce website is duplicated in Google ... each copy under different URLs of course (and on different servers, DNS, etc.).
We'll fix it right away ... and block crawlers from the development site. But again, my general question is: what is the general damage to SEO ... if any ... created by this kind of mistake? My feeling is nothing significant.
-
No my friend, no! I'm saying we'll point the existing staging/testing environment to the production version and stop using it for staging, rather than closing it completely like I mentioned earlier. And we'll launch a fresh instance for the staging/testing use case.
This will help us transfer the majority of the link juice of the already-indexed staging/testing instance.
-
Why would you want to 301 a staging/dev environment to a production site? Unless you plan on making live changes to the production server (not safe), you'd want to keep them separate. Especially for eCommerce it would be important to have different environments to test and QA before pushing a change live. Making any change that impacts a number of pages could damage your ability to generate revenue from the site. You don't take down the development/testing site, because that's your safe environment to test changes before pushing updates to production.
I'm not sure I follow your recommendation. Am I missing a critical point?
-
Hi Eric,
Well, that's a valid point that bots might have treated your staging instance as the main website, and that could end up giving you nothing but a facepalm.
The solution you suggested is similar to the one I suggested, in that neither gets any benefit from the existing instance: we're just removing it or putting noindex everywhere.
My bad! I assumed your staging/testing instance(s) got indexed only recently and aren't very strong from a domain and page authority perspective. Being a developer, I should have planned for the worst case from the start.
Thanks for pointing out the worst case, Eric: when your staging/testing instances are decently old and you don't want to lose their SEO value while fixing this issue. Here's my proposed solution for it: don't remove the instance, and don't put a noindex everywhere either. The better solution is to establish a 301 redirect bridge from your staging/testing instance to your original website. That way, roughly 90% of the link juice that your staging/testing instances have earned will be passed on. Make sure each and every URL of the staging/testing instance properly 301-redirects to the corresponding URL on the original instance.
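For reference, a blanket host-to-host 301 bridge like the one described above can be sketched in an Apache `.htaccess` rule. This is only a sketch: the hostnames are hypothetical placeholders, and it assumes the staging server runs Apache with mod_rewrite enabled.

```apache
# If the request arrived on the staging hostname, permanently redirect it
# to the same path on the production hostname, keeping the query string.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^staging\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

Because the rule maps each path one-to-one onto production, every indexed staging URL 301s to its production counterpart, which is what lets the accumulated equity transfer.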
Hope this helps!
-
It could hurt you in the long run (Google may decide the dev site is more relevant than your live site), but this is an easy fix: noindex your dev site. Just slap a site-wide noindex meta tag across all the pages, and when you're ready to move that code to the production site, remove that instance of the tag.
Disallowing the site in robots.txt will help, but that's a soft request: it blocks crawling, not indexing, so pages that are already indexed or linked from elsewhere can linger in the results. The most reliable way to keep the dev site out of the index is the noindex tag. Since it sounds like you want to QA in a live environment, that would prevent search engines from indexing the site while still letting you test in a production-like scenario.
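One way to make the "remove it before go-live" step foolproof is to drive the directive off a deploy-environment flag instead of hand-editing templates. A minimal sketch in Python (the helper name and environment labels are hypothetical, not from the thread); the same idea delivered as an `X-Robots-Tag` response header also covers non-HTML assets like PDFs and images, which a meta tag cannot:

```python
def robots_header(environment: str) -> dict:
    """Extra response headers to attach for a given deploy target.

    Anything that is not explicitly 'production' gets a blanket
    noindex, nofollow via the X-Robots-Tag header, so a staging or
    dev deploy can never be indexed by accident.
    """
    if environment == "production":
        return {}  # production: let search engines index normally
    return {"X-Robots-Tag": "noindex, nofollow"}
```

Wiring this into the response pipeline means the directive flips with the deploy configuration, so nobody has to remember to strip a meta tag before pushing to production.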
-
Hey,
I recently faced the same issue when our staging instances got indexed accidentally and we were exposed to duplicate-content problems (well, that's not cool). After a decent bit of research, I took the following steps and got rid of the issue:
- I removed the staging instances, i.e. staging1.mysite.com, staging2.mysite.com and so on. Removing the instances helps you deindex already-indexed pages faster than just blocking the whole website in robots.txt.
- I relaunched the staging instances under slightly different names, like new-staging1.mysite.com and new-staging2.mysite.com, and disallowed bots on these instances from day zero to avoid this mess happening again.
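The day-zero block on the relaunched instances is just a two-line robots.txt served at the root of each new staging hostname (hostnames here follow the hypothetical new-staging1.mysite.com example above):

```
User-agent: *
Disallow: /
```

Since the renamed hostnames start with no inbound links, disallowing crawling from day zero is enough to keep them out of the index. On an already-indexed host you would want noindex instead, because a robots.txt block alone leaves existing URLs in the index.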
This helped me fix the issue quickly. Hope this helps!
Related Questions
-
Specific page does not index
Hi, First question: working on the indexation of all pages for a specific client, there's one page that refuses to index. Google Search Console says there's a robots.txt file, but I can't find any trace of it in the backend, nor in the code itself. Could someone reach out to me and tell me why this is happening? The page: https://www.brody.be/nl/assistentiewoningen/ Second question: Google is showing a different meta description than the one our client entered in the Yoast Premium snippet. Could it be that another plugin is overwriting this description? Or do we have to wait for it to change after a specific period of time? Hope you guys can help
Intermediate & Advanced SEO | | conversal0 -
Google not indexing images
Hi there, We have a strange issue on a client website (www.rubbermagazijn.nl). Webpages are indexed by Google but images are not, and never have been since the site went live in '12 (we recently started SEO work on this client). Similar sites like www.damenrubber.nl are being indexed correctly. We have a correct robots.txt and sitemap set up. Fetch as Google (Search Console) shows all images displayed correctly (despite a scripted mouseover on the page). The client doesn't use a CDN. Search Console shows 2k images indexed (out of 18k+), but a site:rubbermagazijn.nl query shows a couple of images from PDF files and some of the thumbnails, but no product images or category images from the homepage. (Product page example: http://www.rubbermagazijn.nl/collectie/slangen/olie-benzineslangen/7703_zwart_nbr-oliebestendig-6mm-l-1000mm.html) We've changed the filenames from non-descriptive to descriptive names, without any result. Descriptive alt texts were added. We're at a loss. Has anyone encountered a similar issue before, and do you have any advice? I'd be happy to provide more information if needed.
Intermediate & Advanced SEO | | Adriaan.Multiply0 -
Duplicate content on .com .au and .de/europe/en. Would it be wise to move to .com?
This is the scenario: a webstore has evolved into 7 sites in 3 shops: example.com/northamerica example.de/europe example.de/europe/en example.de/europe/fr example.de/europe/es example.de/europe/it example.com.au. The .com/northamerica, .de/europe/en and .com.au sites all have mostly the same content on them (all 3 are in English). What would be the best way to avoid duplicate content? An answer would be very much appreciated!
Intermediate & Advanced SEO | | SEO-Bas0 -
Getting Your Website Listed
Do you have any suggestions? I do not know local websites where I can get some easy backlinks. I guess a record in Google Places would be great as well. Any sound suggestion will be appreciated. Thanks!
Intermediate & Advanced SEO | | stradiji0 -
URL Redirect: http://www.example.net/ vs. http://www.example.net
I currently have a website set up so that http://www.example.net/ redirects to http://www.example.net, but http://www.example.net/ has more links and a higher page authority. Should I switch the redirect around? Here are the Open Site Explorer metrics for both: http://www.example.net/ Domain Authority: 38/100 Page Authority: 48/100 Linking Root Domains: 112 Total Links: 235 http://www.example.net Domain Authority: 38/100 Page Authority: 45/100 Linking Root Domains: 18 Total Links: 39
Intermediate & Advanced SEO | | kbrake0 -
How does a competing website with clearly black hat style SEO tactics, have a far higher domain authority than our website that only uses legitimate link building tactics?
Through SEO Moz link analysis tools, we looked at a competing website's external followed links and discovered a large number of links going to blog pages with domain authorities in the 90s (their blog page authorities were between 40 and 60); however, the single blog post written by this website was exactly the same in every instance and had been posted in August 2011. Some of these blog sites had 160 or so links linking back to this competing website, whose domain authority is 49 while ours is 28; their MozTrust is 5.43 while ours is 5.18. An example of some of the blogs that link to the competing website: http://advocacy.mit.edu/coulter/blog/?p=13 http://pest-control-termite-inspection.posterous.com/ However, many of these links are "nofollow" and yet still show up on Open Site Explorer as some of this competing website's top linking pages. Admittedly, they have 584 linking root domains while we have only 35, but if most of them are the kind of websites posted above, we don't understand how Google is rewarding them with a higher domain authority. Our website is www.anteater.com.au Are these tactics now the only way to get ahead?
Intermediate & Advanced SEO | | Peter.Huxley590 -
Does anyone know if certain DMOZ categories are blocked/never get indexed on google?
Hi all, After waiting many months I was happy to see a certain site listed on DMOZ, then months later I still hadn't seen the DMOZ category indexed in Google. It makes me wonder if certain categories don't get indexed, are blocked, or were previously penalized by Google. The category in question is a regional one: http://www.dmoz.org/Regional/North_America/United_States/New_Jersey/Localities/G/Garfield/Business_and_Economy/ Anyone come across this before? Dave
Intermediate & Advanced SEO | | davebrown19750