HTTPS pages still in the SERPs
-
Hi all,
My problem is the following: our CMS (self-developed) produces HTTPS versions of our "normal" web pages, which means duplicate content.
Our IT department added a robots meta tag with noindex,nofollow to the HTTPS pages about six weeks ago.
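For reference, the tag on the HTTPS pages looks roughly like this (a sketch, not our exact markup):

```html
<!-- In the <head> of every https page; tells crawlers not to index the page
     or follow its links -->
<meta name="robots" content="noindex,nofollow">
```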
I check the number of indexed pages once a week and still see a lot of these HTTPS pages in the Google index. I know I may be hitting different data centers and that these numbers aren't 100% reliable, but still... sometimes the number of indexed HTTPS pages even goes up.
Any ideas/suggestions? Should I wait longer? Or take the time and remove them via Webmaster Tools?
Another question: for a nice query, one HTTPS page ranks No. 1. If I kick that page out of the index, do you think the HTTP page will take over the No. 1 position? Or will the ranking be lost? (It sends some nice traffic :-))...
Thanks in advance
-
Hi Irving,
Yes, you are right. The HTTPS login page is the "problem": pages I visit afterwards stay on HTTPS, as all the links on those pages are HTTPS links. So you could browse every page on the domain in HTTPS mode if you visited the login page first.
I spoke to our IT department about this, and they told me it would take time to reprogram our CMS. My boss then told me to find another, cheaper solution, so I came up with the noindex,nofollow.
So, do you see another solution that doesn't involve going back to our IT department? They are always very busy and hardly have time for anything.
-
Hi Malcolm,
Thanks for the help. Before we put the noindex,nofollow on these pages, I did think about using rel=canonical.
To be honest, I didn't choose rel=canonical because I think noindex,nofollow is a stronger signal for Google, while rel=canonical is more of a hint that Google doesn't always follow... but sure, I could be wrong!
You are saying the noindex could end up making things worse. The HTTPS pages only contain links to other HTTPS pages; think of them as "normal" pages with the same content, link structure, etc. Every URL on them, internal or external, is simply HTTPS.
So I thought the noindex,nofollow would not hurt the HTTP pages, because they can't be found from the HTTPS ones. What do you think?
-
Is there a reason you're supporting both HTTP and HTTPS versions of every page? If not, 301 redirect each page to either the HTTP or the HTTPS version. I'd only leave pages that genuinely need to be secure, e.g. purchase pages, on HTTPS. Non-secure pages are generally a better user experience in terms of load time, since the user can reuse cached files from previous pages and non-encrypted pages are more lightweight.
If you do want to support both for those users who prefer HTTPS everywhere, I'd go with Malcolm's solution and rel=canonical to the version you'd like indexed rather than using noindex,nofollow.
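For example, if the site runs on Apache with mod_rewrite, the redirect could look roughly like this in .htaccess (www.example.com and /checkout/ are just placeholders; treat it as a sketch, not a drop-in config):

```apache
RewriteEngine On
# If the request came in over https...
RewriteCond %{HTTPS} on
# ...and it isn't a section that actually needs to stay secure...
RewriteCond %{REQUEST_URI} !^/checkout/
# ...send a permanent (301) redirect to the http version of the same URL.
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```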
-
Do you have absolute links on your site that are keeping visitors on HTTPS?
For example, if you go to a secure login page and then click a homepage navigation link on that secure HTTPS page, does the link take you back to HTTP or keep you on HTTPS?
That is usually the cause of this problem, so I would look into that. I would not manually request removal of the pages in WMT; I would just fix the problem and let Google update the index itself.
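To illustrate (www.example.com is just a placeholder), the difference looks something like this:

```html
<!-- An absolute link pins the protocol, so visitors are sent back to http: -->
<a href="http://www.example.com/">Home</a>

<!-- A relative link keeps whatever protocol the visitor is currently on,
     so anyone coming from the https login page stays on https: -->
<a href="/">Home</a>
```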
-
Have you tried canonicalising to the HTTP version?
Using a noindex,nofollow rule could end up being worse, because you are telling Google not to index those pages or follow their links, and that can affect both the HTTP and HTTPS versions.
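As a rough sketch (www.example.com is a placeholder), each page, HTTP and HTTPS alike, would carry a canonical link in its head pointing at the HTTP URL you want indexed:

```html
<!-- In the <head> of both the http and https version of the page -->
<link rel="canonical" href="http://www.example.com/some-page/">
```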