HTTPS pages still in the SERPs
-
Hi all,
My problem is the following: our self-developed CMS produces HTTPS versions of our "normal" (HTTP) web pages, which means duplicate content.
Our IT department added a robots meta tag with noindex,nofollow to the HTTPS pages about six weeks ago.
I check the number of indexed pages once a week and still see a lot of these HTTPS pages in the Google index. I know I may hit different data centers and that these numbers aren't 100% reliable, but still... sometimes the number of indexed HTTPS pages even goes up.
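For reference, what we added is the standard robots meta tag, something like this in the head of every HTTPS page (exact markup may differ in our CMS templates):

```html
<!-- Placed in the <head> of each HTTPS page; tells crawlers
     not to index the page and not to follow its links. -->
<meta name="robots" content="noindex,nofollow">
```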
Any ideas/suggestions? Wait longer? Or take the time and use Webmaster Tools to kick them out of the index?
Another question: for a nice query, one HTTPS page ranks No. 1. If I kick that page out of the index, do you think the HTTP page will take over the No. 1 position? Or will the ranking be lost? (It sends some nice traffic :-))...
Thanks in advance
-
Hi Irving,
Yes, you are right. The HTTPS login page is the "problem": pages I visit afterwards stay on HTTPS, because all the links on those pages are HTTPS links. So you could browse every page on the domain over HTTPS if you visited the login page first.
I spoke to our IT department about this and they told me it would take time to reprogram the CMS. My boss then told me to find another, cheaper solution - so I came up with the noindex,nofollow.
So, do you see another solution that doesn't require asking our IT department again? They are always very busy and hardly have time for anything.
-
Hi Malcolm,
Thanks for the help. Before we put the noindex,nofollow on these pages, I did think about using rel=canonical.
To be honest, I didn't choose rel=canonical because I think noindex,nofollow is a stronger signal for Google, and rel=canonical is more of a hint, which Google doesn't always follow... but sure, I could be wrong!
You are saying the noindex could end up being worse. The HTTPS pages only contain links to other HTTPS pages; think of them as "normal" pages with the same content, link structure, etc. Every URL, internal and external, is just HTTPS.
So I thought the noindex,nofollow would not hurt the HTTP pages, because the HTTP URLs cannot be found on the HTTPS ones - what do you think?
-
Is there a reason you're supporting both HTTP and HTTPS versions of every page? If not, 301 redirect each page to either HTTP or HTTPS. I'd leave only pages that need to be secure, e.g. purchase pages, on HTTPS. Non-secure pages are generally a better user experience in terms of load time, since the browser can reuse cached files from previous pages and unencrypted pages are more lightweight.
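The 301 can be handled at the server level rather than in the CMS. A rough .htaccess sketch for Apache with mod_rewrite, assuming the secure area lives under /checkout/ and www.example.com is a placeholder for your domain:

```apache
# Redirect HTTPS requests back to HTTP, except for the secure
# section (here assumed to be /checkout/ - adjust to your site).
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteCond %{REQUEST_URI} !^/checkout/
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

This way the HTTPS duplicates drop out of the index on their own as Google recrawls them, and any link equity they've gathered passes to the HTTP versions.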
If you want to support both for those secure-minded users who like HTTPS everywhere, I'd go with Malcolm's solution and rel=canonical to the version you'd like indexed, rather than using noindex,nofollow.
-
Do you have absolute links on your site that are keeping visitors on HTTPS?
For example, if you go to the secure login page and then click a homepage navigation link on that secure HTTPS page, does the homepage link go back to HTTP or stay on HTTPS?
That is usually the cause of this problem, so you should look into it. I would not manually request removal of the pages in WMT; I would just fix the problem and let Google update the index itself.
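If that's what is happening, one fix that only touches the shared navigation template (rather than the CMS's whole URL generation) is to hard-code the protocol in those links. A hypothetical sketch, with example.com standing in for your domain:

```html
<!-- Absolute HTTP links in the shared navigation send visitors
     (and crawlers) coming from the HTTPS login page back to the
     HTTP versions of the site. -->
<nav>
  <a href="http://www.example.com/">Home</a>
  <a href="http://www.example.com/products/">Products</a>
  <!-- Only the login itself needs to stay on HTTPS: -->
  <a href="https://www.example.com/login/">Login</a>
</nav>
```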
-
Have you tried canonicalising to the HTTP version?
Using a noindex,nofollow rule could end up being worse: you are telling Google not to index those pages and not to follow any of the links on them, which cuts off the link paths between the HTTP and HTTPS versions.
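With the canonical approach, each HTTPS page would point at its HTTP twin, something like this (example.com is a placeholder):

```html
<!-- In the <head> of https://www.example.com/some-page/ -->
<link rel="canonical" href="http://www.example.com/some-page/">
```

Google then consolidates the duplicates onto the HTTP URLs instead of dropping the HTTPS pages outright, so rankings like your No. 1 result are more likely to carry over.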