If I have an HTTPS page with an HTTP img that redirects to an HTTPS img, is it still considered by Google to be a mixed content page?
-
With Google starting to crack down on mixed content, I was wondering: if I have an HTTPS page with an HTTP img that redirects to an HTTPS img, is it still considered by Google to be a mixed content page?
E.g., in an old blog article there are images that weren't updated when the blog migrated to HTTPS but were just 301ed to the new HTTPS images. Is it still considered a mixed content page?
-
Thanks, I think I'm going to try to get it done, just because I like things neat and tidy, lol. Also, who knows when Google will switch it, might as well fix it now.
-
That is a leading cause of that error! If you have someone smart and confident who can write a script to rewrite all the links in like 30 minutes, it's worth it. If it sounds like more of a 3-hour thing, don't bother.
-
I also caught them in SEMRush and there are a lot of them. I assume that when they migrated the site they didn't bother with all the images, and just 301ed them in a big batch later when they saw an issue in Search Console.
The question is whether it's worth getting the developers to update all the images. I agree that ideally it should be done; it's just that, from a practical, time-cost perspective, I know they're going to ask me whether it really matters.
-
It comes up as an error in SEMRush a lot when you produce mixed content like that. Personally I'd play it safe; it's not much effort to just rewrite the links to HTTPS using a script or something. If it takes seconds to fix, it's probably not worth the risk of leaving it. If you think that for some reason your site would take much longer to patch, it may not be worth doing.
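A link-rewriting pass like the one described above can be very short. Here is a minimal sketch (not anyone's actual migration script); the hostnames are illustrative, and it assumes the same images are available over HTTPS on the same hosts, so rewriting the URL skips the 301 hop entirely:

```python
import re

# Hosts known to serve the same images over HTTPS (an assumption for
# this sketch; verify each host before rewriting in bulk).
MIGRATED_HOSTS = ("example.com", "www.example.com")

def upgrade_img_links(html: str) -> str:
    """Rewrite http:// resource URLs to https:// for known-migrated hosts."""
    hosts = "|".join(re.escape(h) for h in MIGRATED_HOSTS)
    pattern = re.compile(r'(src=["\'])http://(' + hosts + r')', re.IGNORECASE)
    return pattern.sub(r"\1https://\2", html)

page = '<img src="http://example.com/old/photo.jpg" alt="photo">'
print(upgrade_img_links(page))
# <img src="https://example.com/old/photo.jpg" alt="photo">
```

Run against exported post HTML (or the posts table via your CMS), this only touches `src` attributes on the listed hosts, so third-party embeds are left alone.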
-
Thanks, I thought so. I just wasn't sure whether, with a 301, Google follows the redirect to the end source or just evaluates the original URL relative to the current page. Also, I checked in the browser's developer tools, and a page I know has an HTTP img redirecting to an HTTPS img isn't showing any security issues.
-
Yes. If you are directing users or their browser away from the secure web in any way (loading HTTP resources on an HTTPS page), it counts as mixed content, regardless of where the URL redirects, and you should sort it out.
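Since it's the URL as written in the page that matters, auditing for mixed content is just a matter of scanning the HTML for `http://` subresource references. A rough sketch using Python's standard-library HTML parser (the attribute list and example URL are illustrative):

```python
from html.parser import HTMLParser

class MixedContentFinder(HTMLParser):
    """Collect http:// URLs in attributes that load subresources."""
    RESOURCE_ATTRS = {"src", "href", "data", "poster"}

    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in self.RESOURCE_ATTRS and value and value.startswith("http://"):
                # <a href="http://..."> is navigation, not mixed content
                if not (tag == "a" and name == "href"):
                    self.insecure.append((tag, value))

finder = MixedContentFinder()
finder.feed('<img src="http://example.com/pic.jpg"><a href="http://example.com/">x</a>')
print(finder.insecure)
# [('img', 'http://example.com/pic.jpg')]
```

Note that plain `<a href>` links are excluded, since navigating to an HTTP page is not the same thing as loading an insecure resource into a secure one.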