Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
HTTPS pages indexed but all web pages are HTTP - please can you offer some help?
-
Dear Moz Community,
Please could you take a look, see what you think, and offer some definite steps or advice.
I contacted the host provider, and his initial thought was that WordPress was causing the https problem: e.g. when an https version of a page is called, things like videos and media don't always show up. An SSL certificate attached to a website allows its pages to load over https. The host said that there is no active, configured SSL certificate - one is just waiting as part of the hosting package in case it's ever needed - but I found that the SSL certificate still shows up during a crawl.

It's important to eliminate the https problem before external backlinks link to any of the unwanted https pages that are currently indexed. Luckily I haven't started any intense backlinking work yet, and any links I have posted in search land have all been the http version.

I checked a few more URLs to see if it's necessary to create a permanent redirect from https to http. For example, I tried requesting domain.co.uk using the https:// prefix, and the https:// page loaded instead of redirecting automatically to the http version. I know that if I am automatically redirected to the http:// version of the page, then that is the way it should be: search engines and visitors stay on the http version of the site and don't get lost anywhere in https. This also helps to eliminate duplicate content and to preserve link juice. What are your thoughts regarding that?

As I understand it, most server configurations should redirect by default when https isn't configured, and from my experience I've seen cases where pages requested via https return the default server page, a 404 error, or duplicate content. So I'm confused as to where to take this.

One suggestion would be to disable all https, since there is no need to have any traces of SSL when the site is crawled. I don't want to enable https in the htaccess only to then create an https-to-http rewrite rule; https shouldn't even be a crawlable function of the site at all. I had started along these lines:

    RewriteEngine On
    RewriteCond %{HTTPS} off

Or should I disable the SSL completely for now, until it becomes a necessity for the website?

I would really welcome your thoughts, as I'm really stuck as to what to do for the best, short term and long term.

Kind Regards
-
You have a lot of questions in here. We are going to need to limit this thread to your main question of the https URLs being indexed.
Can you share the domain?
Have you claimed the https domain in Search Console yet to see if these indexed URLs are being shown in search results?
-
Hi Kate,
Thanks for your reply. Here is an update as to what is happening so far. Please excuse the length of this message.
-
According to the host, the database is fine (please see below), but WordPress is still calling https:
-
* In the WP database wp-actions, http is definitely being called
* All certificates are ok and SSL is not active
* The WordPress database is returning properly
* The WP database mechanics are ok
* The wp-config file is not doing https returns; it is calling http correctly
-
They said that the only other possibility could be one of the plugins causing the problem. But how can a plugin cause https problems? I can see 50 different https pages indexed in Google. Bing has been checked and there are no https pages indexed there. All internal URLs have always been http only, and that is still the case.
-
I have run Google Fetch on the website pages, and of the 50 https pages, most are images, which I think must have come from the Yoast sitemap that was originally submitted to the search engines. More recently, though, I have taken all media image URLs out of the Yoast sitemap and set noindex, follow on all image attachment files (the pages, and the images on those pages, will still be crawled and indexed by Google and the other search engines; it just means the image attachment URLs won't). What will happen to those unwanted https files, though? If I place rel canonical links on the pages that matter, will the https pages drop out of the index eventually? I just wish I could find what is causing it (analogy: best to fix the hole in the roof rather than keep using a bowl to catch the water each time it rains).
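(For reference, my understanding is that a canonical tag with a fixed http:// protocol would look something like this in the head of each page - example.co.uk and the path are placeholders for the real URLs:)

    <link rel="canonical" href="http://example.co.uk/sample-page/" />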
-
** I looked at Analytics today and saw something really interesting (see attached image) - there are 5 instances of the trailing-slash home page, and to my knowledge there should only be 1 for a website. The Moz crawl shows just 1 home domain, http://example.co.uk/, so I am somewhat confused. Google search results showed 256 results for https URL references, and there were 50 available to click on. So perhaps there are 50 https pages being referenced for each trailing slash (could there be 4 other trailing-slash duplicate pages indexed, and how would I fix it if that is the case?). This might sound naive, but I don't have the skill set to fix this at this time, so any help and advice would be appreciated.
-
Would the Search and Replace plugin help at all, or would it be a waste of time since the WordPress database mechanics seem to be ok?
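(If it's relevant, I gather that WP-CLI can do the same job from the command line, and that its search-replace command has a dry-run mode that only reports what would change without touching the database - the domain below is a placeholder:)

    # Dry run: report what would change without modifying the database
    wp search-replace 'https://example.co.uk' 'http://example.co.uk' --dry-run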
-
I can't place any https-to-http 301 redirects for the 50 https URLs that are indexed in Google, and I can't add any https rewrite rules in htaccess, since that type of redirect will only work if an SSL certificate is active. I already tried several redirect rules in htaccess and, as expected, they didn't work - which again probably means that SSL is not active for the site.
-
When https is entered instead of http, there should be an automatic resolve to http without me having to worry about it, but I tried again and the https version loaded with a red diagonal line through it instead. The problem is that once a web visitor lands on that page, they stay in that land of https (visually, the main nav bar contents stretch across the page and the images and videos don't appear), so the traffic drops off - a bad experience for the user, dropped traffic, decreasing income, and bad for SEO (split page juice, decreased rankings). There are no crawl errors in Google Search Console, and Google Fetch shows as completed for all pages - but when I request Fetch and Render for the home page, it shows as partial instead of completed.
-
I don't want to request any https URL removals through Google and the other search engines - it's not recommended, because Google states that the http version could be removed as well as the https.
-
I did look at this last week:
http://www.screamingfrog.co.uk/5-easy-steps-to-fix-secure-page-https-duplicate-content/
-
Do you think the https URLs are indexed because links pointing to the site use https? Perhaps most of the backlinks are https, but the preferred setting in Webmaster Tools / Search Console is already set to the non-www version instead of the www version, and there has never been an https version of the site.
-
This was one possibility re duplicate content. Here are two pages and the listed duplicates:
-
The first Moz crawl I ever requested came back with hundreds of duplicate errors, which I have since resolved. Google's crawl had not picked this up previously (so I figured everything had been ok), and it was only realised after that Moz crawl. So https links were seen to be indexed, and the goals are to stop the root cause of the problem and to fix the damage so that the https URLs drop out of the SERPs and the index.
-
I considered that the duplicate links in question might not be true duplicates as such - it is actually just that the duplicate pages (page attachments created by WordPress for each image uploaded to the site) have no real content, so the template elements outweighed the unique content elements, which flagged them as duplicates in the Moz tool. So I thought these were unlikely to hurt, as they were not duplicates as such, but they were indexed thin content. I did a content audit and tidied things up as much as I could (blank pages and weak ones), hence the recent sitemap submission and fetch to Google.
-
I have already redirected all attachments to the parent page in Yoast, removed all attachments from the Yoast sitemap, and set all media content (in Yoast) to 'noindex, follow'.
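(For anyone reading this without Yoast: as I understand it, the attachment-to-parent redirect can also be done manually with a small theme snippet along these lines - a sketch of the general WordPress approach, not the exact code Yoast uses:)

    // Sketch: redirect attachment pages to their parent post with a 301.
    add_action( 'template_redirect', function () {
        if ( is_attachment() ) {
            $parent_id = wp_get_post_parent_id( get_the_ID() );
            if ( $parent_id ) {
                wp_safe_redirect( get_permalink( $parent_id ), 301 );
                exit;
            }
        }
    } );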
-
Naturally, it's really important to eliminate the https problem before external backlinks link back to any of the unwanted https pages that are currently indexed. Luckily I haven't started any backlinking work yet, and any links I have posted in search land have all been the http version. As I understand it, most server configurations should redirect to http by default when https isn't configured, so I am confused as to where to take this, especially as the host has given the WP database the all-clear.
-
It could be taxonomies related to the theme, or a slider plugin, as I have learned these past few weeks. Disallowing and deindexing those unwanted https URLs would be amazing, since I have already spent weeks trying to get to the bottom of the problem.
-
Ideally I understand from previous weeks that these 2 things would be very important:
(1) 301 redirects from https to http (the host in this case cannot enable this directly through their servers, and I can only add these redirects in the htaccess file if there is an active SSL certificate in place).
(2) Have in place a canonical URL using http for both the http and https variations.

Both of those solutions might work on their own, and if the 301 redirect can't work with the host, then the canonical will fix it? I saw that I could just set a canonical with a fixed transport protocol of http:// - Google will then sort out the rest. Not preferred from a crawl perspective, but it would suffice? (Even so, I don't know how to put that in place.)
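(From what I can tell, since the site already runs Yoast SEO, the fixed http:// canonical could be sketched with Yoast's wpseo_canonical filter, something like the below - an illustration rather than a tested fix:)

    // Sketch: force Yoast's canonical URLs onto the http:// scheme.
    add_filter( 'wpseo_canonical', function ( $canonical ) {
        return str_replace( 'https://', 'http://', $canonical );
    } );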
-
There are around 180 W3C validation errors. Would it help matters to get these fixed? Do you know whether that would help fix the problem? The home page renders with critical errors and a couple of warnings.
-
The 907 Theme scores well for its concept and functionality, but its SEO reviews aren't that great.
-
The duplicate problems are not related to the W3 Total Cache plugin, which is one of the plugins in place.
-
Regarding addons (trailing slash): for example, http://domain.co.uk/events redirects to http://domain.co.uk/events/. The addon must only do it on active URLs - and even if it didn't, there were no reports of trailing-slash duplicate errors in the Moz crawl, so it's a different issue that would need looking at separately, I would think.
-
At the bottom of each duplicate page there is an option for noindex. There are page sections and parallax sections that make up the home page, and each has to be published to become a live part of the home page. This isn't great for SEO, I understand, because only the top page section is registered in Yoast as being the home page; the other sections are not crawled as part of the home page but are instead separate page sections. Is it ok to index those page sections? If I set them to noindex, follow, would that be good practice here? The theme does not automatically block the page sections from appearing in search engines.
-
Can noindex only be put on whole pages and not on specific page sections? I just want to make sure that the content on all the pages (media and text) and page sections is crawlable.
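(For what it's worth, my understanding is that noindex is a page-level directive: it sits in the head of a whole URL and can't target one section of a page, e.g.:)

    <meta name="robots" content="noindex, follow">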
-
To ultimately fix the https problem with the indexed pages, could this eventually be a case of having to add SSL to the site just because there is no better way - just so the https-to-http redirect rule can be added to the htaccess file? If so, I don't think that would fix the root cause of the problem; could the root cause be one of the plugins? Confused.
-
With canonical URLs, does that mean the https links that don't have canonicals will deindex eventually? Are the https links giving a 404? (I'm worried because, as you know, 404s normally need 301s, and I can't put a 301 on an https URL in this situation.) Do I have to set a canonical for every single page on the website because of the extent of the problem that has occurred?
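(One way I believe I could check what the https URLs actually return is a quick header request from the command line - curl -I fetches only the response headers, so the status code shows whether a URL gives a 200, a 404, or a redirect; the URL is a placeholder:)

    # Fetch only the response headers, including the HTTP status code
    curl -I https://example.co.uk/sample-page/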
-
Nearly all of the traffic drops off after visiting the home page, and I can't for the life of me see why. Is it because of all these https pages? Once canonicals are in place, how long will it take for everything to return to how it should be? Is it worthwhile starting a PPC campaign, or should I wait until everything has calmed down on the site?
-
Is this a case of setting the canonical URL and then the rest will sort itself out? (Please see the screenshot attached regarding the 5 home pages that each have a trailing slash.)
-
This is the entire current situation. I understand this might not be so straightforward, but I would really appreciate help, as the site continues to drop traffic and income. Others will be able to learn from this string of questions and responses too. Thank you for reading this far, and have a nice day. Kind Regards,
-
-
If the https is resolving, I would go ahead and use an htaccess rule to redirect any https request to the same URL over http. I don't see why it would be happening, but it is, and that's the best way to take care of any traffic issues.
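Something along these lines should work once an SSL certificate is active (note that the condition needs to be %{HTTPS} on, not off, because you want to catch the requests arriving over https - treat this as an untested sketch and adjust for your server setup):

    RewriteEngine On
    # Catch any request arriving over https and 301 it to the http equivalent
    RewriteCond %{HTTPS} on
    RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]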