Website Displayed by Google as Https: When All Secure Content Is Blocked, Causing an Indexing Problem
-
Basically, I have no inbound links going to https://www.mysite.com , but Google is indexing the homepage only as https://www.mysite.com
In June, I was re-included in the Google index after receiving a penalty... Most of my site's links recovered fairly well. However, my homepage did not recover for its top keywords.
Today I noticed that when I search for my site, it's displayed as https://
Robots.txt blocks all content on any secure page, which leaves me sort of clueless about what I need to do to fix this. Not only does it pose a problem for some users who click, but I think it's causing the homepage to have an indexing problem.
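To illustrate what I mean by blocking all secure content: robots.txt is fetched per protocol and host, so the https site can serve its own file that blocks everything. A minimal sketch (the blanket disallow is assumed here):

```
# robots.txt as served at https://www.mysite.com/robots.txt
# (illustrative sketch; the http site serves a separate,
#  normal robots.txt of its own)
User-agent: *
Disallow: /
```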
Any ideas? Redirect Googlebot only? Will a canonical tag fix this?
Thx
-
Yeah, I have all of that in place. I found one external link from an https page, and one on my blog that was just an error one of my employees made. Two links total, at least that's what I found. Robots.txt is blocking everything you mentioned. My header uses absolute paths.
I do agree with you on one thing: once kicked, the little things that may not have mattered over the past 15 years all of a sudden pop up as problems... At the same time, I have heard the complete opposite: people are kicked and then they are right back where they used to be a few weeks after being re-included.
Competitive sabotage is definitely happening, unless a random person who happens to live in the same city as my competitor just went AWOL and decided they wanted to spam my offsite forums, attempt to hack the website multiple times, and add me to a spam link ring.
Anyway, a webmaster says he has changed the canonical on his end to http, although it hasn't changed yet. I'm sure this could take a few days or longer to take effect. Hopefully that is the fix; we'll see, though. Thanks for the advice!
-
Someone could probably have an answer for you within minutes if they had the domain URL available.
RE: Competitive sabotage, I very highly doubt it.
RE: Having just occurred - That is often a sticking point for no good reason. Don't be so concerned with why it wasn't an issue before; focus on how to fix it now. Google's algorithm changes all the time. Your standing in the algorithm changes all the time. Trust can be lost if you get a penalty, even if you get out of it. One external link too many going to https, or one change in the crawl path so Googlebot ends up on the https site via a relative-path link... Things can suddenly change for a variety of reasons. However, if you do what is being suggested, you are very likely to put this issue behind you.
Here is what I do with eCommerce sites, typically:
- Rel canonical both versions to the http version
- Add a robots.txt block and robots meta noindex tag to shopping cart pages
- Use absolute paths, if possible (e.g. http://www.domain.com/file/ instead of .../file/), especially in your primary navigation and footer links.
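The first two bullets can be sketched in the page `<head>` like this (a minimal sketch; the domain is a placeholder):

```html
<!-- On BOTH http://www.domain.com/ and https://www.domain.com/, -->
<!-- point the canonical at the http version -->
<link rel="canonical" href="http://www.domain.com/" />

<!-- On shopping cart pages only: keep them out of the index -->
<meta name="robots" content="noindex, nofollow" />
```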
If that doesn't work please let us know and we can evaluate the site for you.
Good luck!
-
Hmm, see, no major changes have been made to the cart. The website has ranked for 15 years, so the https thing just popped up after the penalty/re-inclusion.
I'm wondering, since the canonical tag was added fairly recently: do you think I should just fetch the homepage and submit it again? Or even add a new page and fetch/crawl/submit that?
Just to get a fresh crawl? Crawl stats show about 2,250 pages per day on average, so I was expecting this https thing to be gone by now... regardless of why they chose to index it over my normal link.
Thanks for the input.
-
How about changing all of your links from relative to absolute in the HTML? If bots are truly only getting there via internal navigation after visiting the shopping cart, this would solve that, yes? Just a thought.
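To sketch the difference (domain and paths are placeholders): a relative link inherits whatever protocol the visitor is currently on, while an absolute link pins it.

```html
<!-- Relative: a visitor (or bot) browsing under https:// stays on https:// -->
<a href="/category/widgets/">Widgets</a>

<!-- Absolute: always sends visitors and bots back to the http version -->
<a href="http://www.domain.com/category/widgets/">Widgets</a>
```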
-
If that is the case, then your shopping cart is not "acting right". Https will exist for every page on your site, and it shouldn't. What cart are you using? I would redirect everything outside of the payment, cart, and contact pages to non-secure. There is a disconnect between what robots files actually do and what people think they do. They are a suggestion: noindex means don't add the page to the index, but it does not mean don't visit the page. I see spiders on pages that are blocked from them all of the time.
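Assuming an Apache server with mod_rewrite, the selective redirect described above might look something like this (the /cart/, /checkout/, and /contact/ paths and the domain are placeholders for whatever your cart actually uses):

```apache
RewriteEngine On
# If the request came in over HTTPS...
RewriteCond %{HTTPS} on
# ...and it is NOT a cart, checkout, or contact page...
RewriteCond %{REQUEST_URI} !^/(cart|checkout|contact)/
# ...301 it to the http version.
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
```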
-
My only concern with doing a redirect is this: the shopping cart is https, so if you start the checkout process you will enter https.
If a person decides to continue shopping... they will stay on https. But since the checkout page is restricted to bots, essentially https doesn't exist and shouldn't show in any searches.
The sitemap is clean, and a canonical is in place...
I have been having some issues with a competitor. Is it possible they submitted the https://www.mysite.com/ version of my website, knowing that Google would prefer this version?
Thanks for the advice.
-
I would redirect the https version to http. Then I would make sure that there is a canonical tag in place. Next, I would go over the sitemap and make sure that there isn't a link to the https page in there. After that you should be set; I wouldn't put it in the robots.txt, though.