Site moved. Unable to index page: Noindex detected in robots meta tag?!
-
Hope someone can shed some light on this:
We moved our smaller site into our main site (different domains).
The smaller site that was moved: https://www.bluegreenrentals.com
The directory it was moved to: https://www.bluegreenvacations.com/rentals
Each page from the old site was 301 redirected to the appropriate page under .com/rentals, but we are seeing a significant drop in rankings and traffic. I am also unable to request a change of address in Google Search Console (a separate issue that I can elaborate on).
Many of the new destination pages (the 301 targets) are not indexed. When I inspected one, I got this message:
Indexing allowed? No: 'noindex' detected in 'robots' meta tag
All pages are set to index/follow and there are no restrictions in robots.txt. Here is an example URL: https://www.bluegreenvacations.com/rentals/resorts/colorado/innsbruck-aspen/
Can someone take a look and share an opinion on this issue? Thank you!
-
That's hugely likely to have had an impact. No-indexing pages before they were ready was a mistake, but the much bigger mistake was releasing the site before it was ready. The site should only have been set live and released once ALL the pages had been ported to the new staging environment.
Also, if all the pages weren't yet live in the staging environment, how could the person comparing staging against the old site have set up all the 301 redirects properly?
When you no-index URLs, you kill their SEO authority dead. Often it never fully recovers and has to be rebuilt from scratch. In essence, a 301 to a no-indexed URL moves the SEO authority from the old page into 'nowhere' (cyber oblivion).
The key lesson: don't set a half-ready site live and finish development there. WAIT until you are ready, then perform your SEO / architectural / redirect maneuvering.
Even if you hadn't no-indexed those new URLs, Google checks whether the content on the old and new URLs is similar (think string similarity, in machine terms) before 'allowing' the SEO authority from the old URL to flow to the new one. If the content isn't basically the same, Google expects the pages to 'start over' and 're-prove' themselves. Why? Well, you tell me why a new page with different content should benefit from the links of an old URL which was different, when the webmasters who linked to that old URL may well not choose to link to the new one.
And because those new URLs were incomplete, their content was probably placeholder material, radically different from the content of the old URLs on the old site. So it's extremely likely that even without the no-index tags, the migration would still have fallen flat on its face.
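If you ever want to eyeball that dissimilarity yourself, here's a minimal sketch of the idea in Python. The URL pair is a hypothetical placeholder, and difflib's ratio is only a crude stand-in for whatever Google actually computes:

```python
import difflib

import requests
from bs4 import BeautifulSoup


def visible_text(url: str) -> str:
    """Fetch a page and return its visible text with markup stripped."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())


# Hypothetical URL pair. The old URLs 301 now, so their pre-migration text
# would have to come from an archived copy (e.g. the Wayback Machine).
old_text = visible_text("https://web.archive.org/web/2018/https://www.bluegreenrentals.com/some-old-page.aspx")
new_text = visible_text("https://www.bluegreenvacations.com/rentals/some-new-page/")

# ratio() returns 0.0-1.0; a low score means the pages share little text,
# i.e. the 301 points at substantially different content.
print(f"Content similarity: {difflib.SequenceMatcher(None, old_text, new_text).ratio():.2f}")
```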
In the end, your best course of action is to finish all the content, make sure the 301s are actually accurate (which, by the sound of it, many of them won't be), lift the no-index tags, and request re-indexation. If you are very, very lucky, some of the SEO juice from the old URLs will still exist and the new URLs will get some shreds of authority through (which is better than nothing). In reality, though, the pooch is already screwed by this point.
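For what it's worth, here's a rough Python sketch of the kind of audit I'd run before requesting re-indexation. The redirect map is a hypothetical placeholder; swap in your real old-to-new mapping:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical old-to-new mapping; replace with the real migration map.
redirect_map = {
    "https://www.bluegreenrentals.com/some-old-page.aspx":
        "https://www.bluegreenvacations.com/rentals/some-new-page/",
}

for old_url, expected in redirect_map.items():
    # First hop only, so we can see the actual status code being served.
    hop = requests.get(old_url, allow_redirects=False, timeout=10)
    print(old_url)
    print(f"  first hop: {hop.status_code} -> {hop.headers.get('Location', '(none)')}")

    # Follow the full chain to the final destination.
    final = requests.get(old_url, timeout=10)
    print(f"  final URL: {final.url}  (expected: {expected})")

    # Check the destination for noindex in both places Google honours:
    # the robots meta tag and the X-Robots-Tag HTTP header.
    soup = BeautifulSoup(final.text, "html.parser")
    meta = soup.find("meta", attrs={"name": "robots"})
    meta_noindex = bool(meta and "noindex" in (meta.get("content") or "").lower())
    header_noindex = "noindex" in final.headers.get("X-Robots-Tag", "").lower()
    print(f"  noindex in meta tag: {meta_noindex}, in X-Robots-Tag header: {header_noindex}")
```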
-
Thank you for the quick reply.
Yes, that's right: the URLs and the page's look are from 2017. The site was old and neglected. We decided to give it a facelift, sunset the domain in a few months, and bring the site under our main site.
While the pages were still in development (but already migrated from staging to the live site), we needed to protect them from accidental indexation, so we flagged every page "noindex, nofollow". Is it possible that Google crawled the pages in the past, saw the noindex (as it was set at the time), and never came back? If that's the case, should I manually request indexing?
-
I love these kinds of questions. You have shared the moved page's URL; can you give us the URL it resided at before it was moved, which 'should' be redirecting now? That would help massively.
Edit: found this one:
https://www.bluegreenrentals.com/searchresults.aspx?s=CO&sl=COLORADO
(this is what the page apparently used to look like before it was redirected, but the image is a little old, from 2017; OP, can you confirm whether it looked like this directly prior to the redirect?)
... which 301 redirects to:
https://www.bluegreenvacations.com/rentals/resorts/colorado/innsbruck-aspen
... gonna carry on looking but this example of the full chain may help any other Mozzers looking to answer this Q
Suspected issue at this juncture, which could be wrong (not loads to go on right now): content dissimilarity between the old and new URLs, leading Google to deny the 301s.
FYI, some info to help the OP: the no-index issue may relate more to this:
https://developers.google.com/search/reference/robots_meta_tag (the directive may be deployed in the HTML as a meta tag, but it can also be fired through the HTTP header, which is another kettle of fish...)
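For the OP, a quick way to see which of those two delivery mechanisms is in play on a given URL. This is only a hedged Python sketch; the 'googlebot' meta variant is checked too, since Google honours crawler-specific tags as well as the generic one:

```python
import requests
from bs4 import BeautifulSoup

url = "https://www.bluegreenvacations.com/rentals/resorts/colorado/innsbruck-aspen/"
resp = requests.get(url, timeout=10)

# 1. Directives in the HTML head: both the generic "robots" meta tag and
#    crawler-specific variants such as "googlebot" count.
soup = BeautifulSoup(resp.text, "html.parser")
for name in ("robots", "googlebot"):
    for meta in soup.find_all("meta", attrs={"name": name}):
        print(f'<meta name="{name}">: {meta.get("content")}')

# 2. The same directives can arrive via the X-Robots-Tag response header,
#    which never appears in "view source" and is easy to miss.
print("X-Robots-Tag:", resp.headers.get("X-Robots-Tag", "(not set)"))
```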