One of my friend's websites is losing Domain Authority. What could be the reason?
-
Hello guys,

One of my friend's websites has seen its Domain Authority drop since he moved the domain from HTTP to HTTPS. There is another problem: his blog sits in a subfolder that is still served over HTTP. The site has also lost some rankings, roughly 2-5 positions down. Can you please tell me how to fix this?

Website URL: myfitfuel.in/
Blog URL: myfitfuel.in/mffblog/
-
Check the redirects with these tools (or with the short script below):

- http://www.redirect-checker.org/index.php
- http://www.contentforest.com/seo-tools/redirect-checker

See this screenshot: http://i.imgur.com/mIqqCla.png
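If you would rather check from a script than from a web tool, the redirect chain can also be printed locally. This is only a minimal sketch, assuming Python 3 with the third-party requests library installed; the myfitfuel.in URLs are the ones from the question.

```python
import requests  # third-party: pip install requests

def show_redirect_chain(url: str) -> None:
    """Print every hop in the redirect chain for `url`, ending at the final URL."""
    resp = requests.get(url, allow_redirects=True, timeout=10)
    for hop in resp.history:
        print(f"{hop.status_code}  {hop.url}  ->  {hop.headers.get('Location')}")
    print(f"{resp.status_code}  {resp.url}  (final)")

# Every HTTP / non-www variant should reach the canonical HTTPS URL in a single 301.
for variant in ("http://myfitfuel.in/",
                "http://www.myfitfuel.in/",
                "http://myfitfuel.in/mffblog/"):
    show_redirect_chain(variant)
    print()
```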
Redirecting all traffic to the www SSL domain

You can force all of your traffic to go to the www domain and to use SSL, even if visitors did not request it initially.

# Ensure www.
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Ensure https
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirecting all traffic to the bare SSL domain
Customers with dedicated load balancers, or who have purchased a slot on the UCC certificate on shared load balancers, have the option of redirecting all traffic to the bare domain using the HTTPS protocol:

# Redirecting http://www.domain.com and https://www.domain.com to https://domain.com
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ https://%1%{REQUEST_URI} [L,R=301]
Redirecting http://domain.com to https://domain.com

RewriteCond %{HTTPS} off
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

An example of how the requests work

The preceding examples of how and when you would use a rewrite are complex; here's a breakdown of the scenarios, which may help you determine what your website really needs.
A security warning will occur on a bare domain only if the request specifically includes the https protocol, like https://mysite.com, and there's no SSL certificate on the load balancer that covers the bare domain. A request for http://mysite.com using the http protocol, however, will not produce a security warning, because a secure connection to the bare domain has not been requested.

| Domain | DNS record type | IP/Hostname |
| --- | --- | --- |
| www.mysite.com | CNAME | dc-2459-906772057.us-east-1.elb.amazonaws.com |
| mysite.com | A | 123.45.67.89 |

For AWS ELB, www.mysite.com has a CNAME record that points to the hostname of the elastic load balancer (ELB), because that's where the SSL certificate is installed when it's uploaded using the self-service UI. But bare domains/non-FQDNs like mysite.com can't have CNAME records without something like Route 53, so the bare domain must point to the elastic IP address of the balancer pair behind the ELB.

If there's a redirect in the .htaccess file that takes all requests for the bare domain and redirects them to www, then, because of how the DNS records are set up, this is what happens when you request http://mysite.com:

1. The request for http://mysite.com hits the load balancers behind the ELB.
2. The .htaccess rule 301-redirects the request to https://www.mysite.com.
3. A new request for https://www.mysite.com hits the ELB where the certificate lives, and everything is happy, secure, and green.

But if a specific request is sent to https://mysite.com with the https protocol, here's what happens:

1. A request for https://mysite.com hits the load balancers behind the ELB.
2. Your browser displays the normal security warning.
3. You examine the certificate and decide to move ahead.
4. The .htaccess rule 301-redirects the request to https://www.mysite.com.
5. A new request for https://www.mysite.com hits the ELB where the cert lives, and everything is happy, secure, and green.

(The small script below shows one way to inspect which hostnames the served certificate actually covers.)
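To make the bare-domain scenario concrete, you can inspect which DNS names the certificate served on port 443 actually covers. This is only a sketch using the Python standard library; mysite.com and www.mysite.com are the hypothetical domains from the walkthrough above, not real values to copy.

```python
import socket
import ssl

def cert_hostnames(host: str, port: int = 443) -> list:
    """Connect to `host` over TLS and return the DNS names its certificate covers."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # don't abort on a hostname mismatch; we only want to inspect the cert
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

for domain in ("mysite.com", "www.mysite.com"):  # hypothetical domains from the example
    try:
        print(domain, "is covered by a cert for:", cert_hostnames(domain))
    except (ssl.SSLError, OSError) as exc:
        print(domain, "TLS/connection problem:", exc)
```

If the returned names include only www.mysite.com, a direct request for https://mysite.com will trigger exactly the browser warning described above.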
Redirecting all HTTP traffic to HTTPS

In the following example, the server variable HTTP_X_FORWARDED_PROTO is set to https when the website is accessed over HTTPS, so the following rule set will redirect HTTP to HTTPS:

RewriteCond %{HTTPS} off
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirecting all HTTPS traffic to HTTP
In addition, if visitors to a customer's website are receiving insecure-content warnings because Google has indexed documents using the HTTPS protocol, traffic may need to be redirected from HTTPS to HTTP.

The rule is basically the same as in the preceding example, but without the first Rewrite condition. If no SSL certificate is installed, the value of %{HTTPS} is always set to off, even when you are accessing the website using HTTPS. Use the following rule set in this case:

RewriteCond %{HTTP:X-Forwarded-Proto} =https
RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirecting from a bare domain to the www subdomain
SSL certificates cannot cover the bare domain for websites unless you are using Route 53 or a similar provider. This is because the SSL certificates for Acquia Cloud Professional websites are placed on an Elastic Load Balancer (ELB). ELBs require CNAME records for domain name resolution, but bare domains require an IP address in an A record for the DNS configuration and cannot have CNAME records. Therefore, it's not possible to terminate traffic to bare domains on the ELB where your SSL certificate is located without Route 53.
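You can confirm that record layout for your own domains with a quick lookup. This is only a sketch, assuming Python with the third-party dnspython package installed; the mysite.com names are the hypothetical ones from the table earlier.

```python
import dns.resolver  # third-party "dnspython" package: pip install dnspython

# The www host should resolve via a CNAME to the load balancer's hostname,
# while the bare domain can only carry an A record (a plain IP address).
for name, rdtype in (("www.mysite.com", "CNAME"), ("mysite.com", "A")):
    try:
        answers = dns.resolver.resolve(name, rdtype)
        print(name, rdtype, [str(rdata) for rdata in answers])
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN) as exc:
        print(name, f"no {rdtype} record:", exc)
```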
Even if all requests for the bare domain are redirected to www, visitors to ELB websites who explicitly request the bare domain using the HTTPS protocol, like https://mysite.com, will always receive a security warning in their browser before being redirected to https://www.mysite.com. For a more detailed explanation of why this happens, refer to the "An example of how the requests work" section above.

Redirect http://domain.com to http://www.domain.com

RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirecting all traffic to the www SSL domain (you want this!)
You can force all of your traffic to go to the www domain and to use SSL, even if visitors did not request it initially.

# Ensure www.
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Ensure https
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Redirecting all traffic to the bare SSL domain

Customers with AWS dedicated load balancers, or who have purchased a slot on the UCC certificate on shared load balancers, have the option of redirecting all traffic to the bare domain using the HTTPS protocol:

# Redirecting http://www.domain.com and https://www.domain.com to https://domain.com
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ https://%1%{REQUEST_URI} [L,R=301]

# Redirecting http://domain.com to https://domain.com
RewriteCond %{HTTPS} off
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

As an example, if you wanted to ensure that all domains were redirected to https://www. except for the Acquia domain acquia-sites.com, you would use something like this:

# Exclude Acquia domains
RewriteCond %{HTTP_HOST} !prod.acquia-sites.com [NC]
# Ensure www.
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Ensure https
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

elb 2.2.15 | intermediate profile | OpenSSL 1.0.1e | link
Oldest compatible clients : Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3, Java 7
This Amazon Web Services CloudFormation template will create an Elastic Load Balancer which terminates HTTPS connections using the Mozilla recommended ciphersuites and protocols.
{ "AWSTemplateFormatVersion": "2010-09-09", "Description": "Example ELB with Mozilla recommended ciphersuite", "Parameters": { "SSLCertificateId": { "Description": "The ARN of the SSL certificate to use", "Type": "String", "AllowedPattern": "^arn:[^:]*:[^:]*:[^:]*:[^:]*:.*$", "ConstraintDescription": "SSL Certificate ID must be a valid ARN. http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-arns" } }, "Resources": { "ExampleELB": { "Type": "AWS::ElasticLoadBalancing::LoadBalancer", "Properties": { "Listeners": [ { "LoadBalancerPort": "443", "InstancePort": "80", "PolicyNames": [ "Mozilla-intermediate-2015-03" ], "SSLCertificateId": { "Ref": "SSLCertificateId" }, "Protocol": "HTTPS" } ], "AvailabilityZones": { "Fn::GetAZs": "" }, "Policies": [ { "PolicyName": "Mozilla-intermediate-2015-03", "PolicyType": "SSLNegotiationPolicyType", "Attributes": [ { "Name": "Protocol-TLSv1", "Value": true }, { "Name": "Protocol-TLSv1.1", "Value": true }, { "Name": "Protocol-TLSv1.2", "Value": true }, { "Name": "Server-Defined-Cipher-Order", "Value": true }, { "Name": "ECDHE-ECDSA-CHACHA20-POLY1305", "Value": true }, { "Name": "ECDHE-RSA-CHACHA20-POLY1305", "Value": true }, { "Name": "ECDHE-ECDSA-AES128-GCM-SHA256", "Value": true }, { "Name": "ECDHE-RSA-AES128-GCM-SHA256", "Value": true }, { "Name": "ECDHE-ECDSA-AES256-GCM-SHA384", "Value": true }, { "Name": "ECDHE-RSA-AES256-GCM-SHA384", "Value": true }, { "Name": "DHE-RSA-AES128-GCM-SHA256", "Value": true }, { "Name": "DHE-RSA-AES256-GCM-SHA384", "Value": true }, { "Name": "ECDHE-ECDSA-AES128-SHA256", "Value": true }, { "Name": "ECDHE-RSA-AES128-SHA256", "Value": true }, { "Name": "ECDHE-ECDSA-AES128-SHA", "Value": true }, { "Name": "ECDHE-RSA-AES256-SHA384", "Value": true }, { "Name": "ECDHE-RSA-AES128-SHA", "Value": true }, { "Name": "ECDHE-ECDSA-AES256-SHA384", "Value": true }, { "Name": "ECDHE-ECDSA-AES256-SHA", "Value": true }, { "Name": "ECDHE-RSA-AES256-SHA", "Value": true }, { "Name": "DHE-RSA-AES128-SHA256", "Value": true }, { "Name": "DHE-RSA-AES128-SHA", "Value": true }, { "Name": "DHE-RSA-AES256-SHA256", "Value": true }, { "Name": "DHE-RSA-AES256-SHA", "Value": true }, { "Name": "ECDHE-ECDSA-DES-CBC3-SHA", "Value": true }, { "Name": "ECDHE-RSA-DES-CBC3-SHA", "Value": true }, { "Name": "EDH-RSA-DES-CBC3-SHA", "Value": true }, { "Name": "AES128-GCM-SHA256", "Value": true }, { "Name": "AES256-GCM-SHA384", "Value": true }, { "Name": "AES128-SHA256", "Value": true }, { "Name": "AES256-SHA256", "Value": true }, { "Name": "AES128-SHA", "Value": true }, { "Name": "AES256-SHA", "Value": true }, { "Name": "DES-CBC3-SHA", "Value": true } ] } ] } } }, "Outputs": { "ELBDNSName": { "Description": "DNS entry point to the stack (all ELBs)", "Value": { "Fn::GetAtt": [ "ExampleELB", "DNSName" ] } } } }
- You can get managed Magento hosting here.
- https://www.armor.com/security-solutions/armor-complete/
- https://www.mgt-commerce.com/
- https://www.rackspace.com/en-us/digital/magento
- https://www.cogecopeer1.com/en/services/managed-it/ecommerce/magento/
- https://www.cogecopeer1.com/en/services/cloud/mission-critical/
- https://www.engineyard.com/magento
- https://www.cloudways.com/en/magento-managed-cloud-hosting.php
- https://www.rochen.com/magento-hosting/
- http://www.tenzing.com/ecommerce-hosting-2/magento-optimized-hosting-on-aws/
- https://www.siteground.com/dedicated-hosting.htm#tab-3
- https://www.siteground.com/cloud-hosting.htm#tab-2
- https://www.siteground.com/speed
-
May I ask: did your friend modify any of the site structure aside from adding HTTPS?

Make sure you have followed all the steps in Google's list (linked below) as well as the checklist further down. There are more resources if needed. Read what Google's John Mueller has to say on the subject of redirects.

Official Google moving-to-HTTPS how-to:

https://support.google.com/webmasters/answer/6033049

**Tools you can use**

- https://www.screamingfrog.co.uk/log-file-analyser/
- https://www.deepcrawl.com
- https://www.screamingfrog.co.uk/seo-spider/

**A very important checklist: make sure you work through the one below.**

SEO checklist to preserve your rankings
- Make sure every element of your website uses HTTPS, including widgets, JavaScript, CSS files, images, and your content delivery network.
- Use 301 redirects to point all HTTP URLs to HTTPS. This is a no-brainer to most SEOs, but you'd be surprised how often a 302 (temporary) redirect finds its way to the homepage by accident.
- Make sure all canonical tags point to the HTTPS version of the URL.
- Use relative URLs whenever possible.
- Rewrite hard-coded internal links (as many as possible) to point to HTTPS. This is superior to pointing to the HTTP version and relying on 301 redirects.
- Register the HTTPS version in both Google and Bing Webmaster Tools.
- Use the Fetch and Render function in Webmaster Tools to ensure Google can properly crawl and render your site.
- Update your sitemaps to reflect the new URLs. Submit the new sitemaps to Webmaster Tools. Leave your old (HTTP) sitemaps in place for 30 days so search engines can crawl and "process" your 301 redirects.
- Update your robots.txt file. Add your new sitemaps to the file. Make sure your robots.txt doesn't block any important pages.
- If necessary, update your analytics tracking code. Most modern Google Analytics tracking snippets already handle HTTPS, but older code may need a second look.
- Implement HTTP Strict Transport Security (HSTS). This response header tells user agents to only access HTTPS pages, even when directed to an HTTP page. It eliminates redirects, speeds up response time, and provides extra security. (A quick way to spot-check this and a few other items is sketched after this checklist.)
- If you have a disavow file, be sure to transfer any disavowed URLs into a duplicate file in your new Webmaster Tools profile.
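Several of the items above (the 301-vs-302 check, the HSTS header, and the canonical tag) can be spot-checked per URL with a short script. This is only a rough sketch, assuming Python with the requests library installed; a full crawler such as Screaming Frog will do the same job far more thoroughly.

```python
import re
import requests  # third-party: pip install requests

def spot_check(http_url: str) -> None:
    """Report the first redirect status, the HSTS header, and the canonical URL for one page."""
    first = requests.get(http_url, allow_redirects=False, timeout=10)
    print(http_url)
    print("  first hop:", first.status_code, "->", first.headers.get("Location"))

    final = requests.get(http_url, allow_redirects=True, timeout=10)
    print("  HSTS header:", final.headers.get("Strict-Transport-Security", "missing"))

    # Rough regex check; assumes rel comes before href in the canonical tag.
    match = re.search(r'rel=["\']canonical["\'][^>]*href=["\']([^"\']+)', final.text, re.I)
    print("  canonical:", match.group(1) if match else "not found")

spot_check("http://myfitfuel.in/mffblog/")  # blog URL from the question
```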
NGINX

Add the following to your Nginx config:

server {
    listen 80;
    server_name domain.com www.domain.com;
    return 301 https://domain.com$request_uri;
}

Apache

Add the following to your .htaccess file:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
**Here are some more extremely helpful resources**

- https://www.seroundtable.com/google-seo-http-to-https-migration-checklist-19268.html
It is not abnormal for a site to see a dip in rankings or search visibility after a migration or a change of structure. I have a very regimented list that I stick to and have not seen anything dip for more than three days, but all sites are unique, and Google indexes every site differently.
Depending on your domain authority, you may or may not have a higher crawl budget, and whether or not you tell Google you are making these changes will make an enormous difference in whether your site recovers quickly or sees a dip in traffic.
I hope this is helpful, and remember: Google has to reindex everything.
Thomas
-
It makes no sense to keep your blog in a non-encrypted subfolder. Why did you choose to do this? I like a site to be 100% encrypted.
Please read the second post first.
http://www.myfitfuel.in/mffblog/ should be https://www.myfitfuel.in/mffblog/
Why not HTTPS?
If your hosting provider does not allow you to use HTTP/2, I suggest adding a WAF; for as little as $20 a month you can run your site on HTTP/2. (A quick way to check which protocol a site actually serves is sketched after the links below.)
Now, the cost of Akamai might scare people just from hearing the name, but I can assure you there are very good pricing options now that other companies are competing against them in the same area. One thing that, in my opinion, no other CDN/WAF company has is Akamai's number of points of presence (PoPs): more than 250.
https://community.akamai.com/community/web-performance/blog/2015/01/26/enabling-http2-h2-in-akamai
https://www.cloudflare.com/http2/
https://www.incapsula.com/cdn-guide/cdn-and-ssl-tls.html
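If you want to verify whether a site is actually being served over HTTP/2 once it sits behind such a CDN/WAF, one quick way is from Python. This sketch assumes the third-party httpx package installed with HTTP/2 support (pip install "httpx[http2]"); the URL is the www HTTPS variant of the site in the question.

```python
import httpx  # third-party: pip install "httpx[http2]"

def protocol_used(url: str) -> str:
    """Return the HTTP version negotiated for `url`, e.g. 'HTTP/2' or 'HTTP/1.1'."""
    with httpx.Client(http2=True, timeout=10) as client:
        return client.get(url).http_version

print(protocol_used("https://www.myfitfuel.in/"))
```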
Once you switch your entire site over to HTTPS, you can then use the Google Change of Address tool to migrate your site to HTTPS.
The blog should be encrypted too; you don't need another certificate for it, and ideally you want to encrypt the entire site. Add the site to Google Webmaster Tools four times:
- http://www.myfitfuel.in/
- http://myfitfuel.in/
- https://www.myfitfuel.in/
- https://myfitfuel.in/

Choose the canonical version in Webmaster Tools, i.e. the site you want traffic to go to:

https://support.google.com/webmasters/topic/6029673?hl=en&ref_topic=6001951
https://www.deepcrawl.com/knowledge/best-practice/the-zen-guide-to-https-configuration/
https://www.deepcrawl.com/knowledge/best-practice/hsts-a-tool-for-http-to-https-migration/
elb 2.2.15 | intermediate profile | OpenSSL 1.0.1e | link
Oldest compatible clients : Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3, Java 7
This Amazon Web Services CloudFormation template will create an Elastic Load Balancer which terminates HTTPS connections using the Mozilla recommended ciphersuites and protocols.
{ "AWSTemplateFormatVersion": "2010-09-09", "Description": "Example ELB with Mozilla recommended ciphersuite", "Parameters": { "SSLCertificateId": { "Description": "The ARN of the SSL certificate to use", "Type": "String", "AllowedPattern": "^arn:[^:]*:[^:]*:[^:]*:[^:]*:.*$", "ConstraintDescription": "SSL Certificate ID must be a valid ARN. http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-arns" } }, "Resources": { "ExampleELB": { "Type": "AWS::ElasticLoadBalancing::LoadBalancer", "Properties": { "Listeners": [ { "LoadBalancerPort": "443", "InstancePort": "80", "PolicyNames": [ "Mozilla-intermediate-2015-03" ], "SSLCertificateId": { "Ref": "SSLCertificateId" }, "Protocol": "HTTPS" } ], "AvailabilityZones": { "Fn::GetAZs": "" }, "Policies": [ { "PolicyName": "Mozilla-intermediate-2015-03", "PolicyType": "SSLNegotiationPolicyType", "Attributes": [ { "Name": "Protocol-TLSv1", "Value": true }, { "Name": "Protocol-TLSv1.1", "Value": true }, { "Name": "Protocol-TLSv1.2", "Value": true }, { "Name": "Server-Defined-Cipher-Order", "Value": true }, { "Name": "ECDHE-ECDSA-CHACHA20-POLY1305", "Value": true }, { "Name": "ECDHE-RSA-CHACHA20-POLY1305", "Value": true }, { "Name": "ECDHE-ECDSA-AES128-GCM-SHA256", "Value": true }, { "Name": "ECDHE-RSA-AES128-GCM-SHA256", "Value": true }, { "Name": "ECDHE-ECDSA-AES256-GCM-SHA384", "Value": true }, { "Name": "ECDHE-RSA-AES256-GCM-SHA384", "Value": true }, { "Name": "DHE-RSA-AES128-GCM-SHA256", "Value": true }, { "Name": "DHE-RSA-AES256-GCM-SHA384", "Value": true }, { "Name": "ECDHE-ECDSA-AES128-SHA256", "Value": true }, { "Name": "ECDHE-RSA-AES128-SHA256", "Value": true }, { "Name": "ECDHE-ECDSA-AES128-SHA", "Value": true }, { "Name": "ECDHE-RSA-AES256-SHA384", "Value": true }, { "Name": "ECDHE-RSA-AES128-SHA", "Value": true }, { "Name": "ECDHE-ECDSA-AES256-SHA384", "Value": true }, { "Name": "ECDHE-ECDSA-AES256-SHA", "Value": true }, { "Name": "ECDHE-RSA-AES256-SHA", "Value": true }, { "Name": "DHE-RSA-AES128-SHA256", "Value": true }, { "Name": "DHE-RSA-AES128-SHA", "Value": true }, { "Name": "DHE-RSA-AES256-SHA256", "Value": true }, { "Name": "DHE-RSA-AES256-SHA", "Value": true }, { "Name": "ECDHE-ECDSA-DES-CBC3-SHA", "Value": true }, { "Name": "ECDHE-RSA-DES-CBC3-SHA", "Value": true }, { "Name": "EDH-RSA-DES-CBC3-SHA", "Value": true }, { "Name": "AES128-GCM-SHA256", "Value": true }, { "Name": "AES256-GCM-SHA384", "Value": true }, { "Name": "AES128-SHA256", "Value": true }, { "Name": "AES256-SHA256", "Value": true }, { "Name": "AES128-SHA", "Value": true }, { "Name": "AES256-SHA", "Value": true }, { "Name": "DES-CBC3-SHA", "Value": true } ] } ] } } }, "Outputs": { "ELBDNSName": { "Description": "DNS entry point to the stack (all ELBs)", "Value": { "Fn::GetAtt": [ "ExampleELB", "DNSName" ] } } } }
**Here are some fantastic resources from https://mozilla.github.io/server-side-tls/ssl-config-generator/ for setting up your server. These things need to be put in place.**
Nginx 1.10.1 | intermediate profile | OpenSSL 1.0.1e | link
Oldest compatible clients : Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3, Java 7
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /path/to/signed_cert_plus_intermediates;
    ssl_certificate_key /path/to/private_key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /path/to/dhparam.pem;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;

    resolver <IP DNS resolver>;

    ....
}
Apache 2.4.18 | intermediate profile | OpenSSL 1.0.1e | link
Oldest compatible clients : Firefox 27, Chrome 30, IE 11 on Windows 7, Edge, Opera 17, Safari 9, Android 5.0, and Java 8
<VirtualHost *:443>
    ...
    SSLEngine on
    SSLCertificateFile      /path/to/signed_certificate_followed_by_intermediate_certs
    SSLCertificateKeyFile   /path/to/private/key

    # Uncomment the following directive when using client certificate authentication
    #SSLCACertificateFile   /path/to/ca_certs_for_client_authentication

    # HSTS (mod_headers is required) (15768000 seconds = 6 months)
    Header always set Strict-Transport-Security "max-age=15768000"
    ...
</VirtualHost>

# intermediate configuration, tweak to your needs
SSLProtocol             all -SSLv3
SSLCipherSuite          ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
SSLHonorCipherOrder     on
SSLCompression          off
SSLSessionTickets       off

# OCSP Stapling, only in httpd 2.3.3 and later
SSLUseStapling          on
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
SSLStaplingCache        shmcb:/var/run/ocsp(128000)
After you change the architecture of any website, it normally takes a little bit of a dive. John Mueller stated that Google would not be punishing people for redirecting to encrypted sites, and while that may be true, it doesn't mean Google has figured out what is going on with your site yet.
I think you need to get Google crawling your site, have it in Webmaster Tools with all of the pages redirected to HTTPS, and add things like HSTS and HTTP/2 to speed up your site.
Hope this helps,
Tom