Once on https should Moz still be picking up errors on http
-
Hello,
Should Moz still be picking up HTTP errors if the site's on HTTPS? Or has the HTTPS migration not been done properly? I'm getting duplicate errors among other things.
Cheers,
Ruth
-
Wow - sorry this question slipped past, Ruth.
As long as the proper HTTP-to-HTTPS redirect is in place, there's nothing that needs to be done with canonical tags.
The beauty of 301-redirects is that they are server directives - once in place, it's no longer even possible to reach the non-HTTPS URLs. The HTTPS URLs should of course still keep their own self-referential canonical tags, but that's handled automatically in most CMSs (Content Management Systems like WordPress).
Hope that covers what you were asking?
Paul
-
Hi Ruth,
I am more than happy to help. If you believe the migration has not been done correctly, please have a look at these resources - I think they will be immensely helpful. Note that the Google Doc referenced in the first link is also available directly via the third link, as well as via the "instructions" anchor text in the second.
- https://www.aleydasolis.com/en/search-engine-optimization/http-https-migration-checklist-google-docs/
- To easily make a copy, add it to your own Drive, download it, or print it, open the Google Doc, then choose "File" and select your preferred option (or use the link below):
- https://docs.google.com/spreadsheets/d/1XB26X_wFoBBlQEqecj7HB79hQ7DTLIPo97SS5irwsK8/edit?usp=sharing
Please let me know if I can be of any help,
Thomas
-
Thanks, Thomas - will have a look at those.
-
Hi Ruth,
Paul brought up a very good point.
If you're using Apache (and it sounds like you are), you can force the HTTP-to-HTTPS redirect with:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
See this generator for more: https://www.aleydasolis.com/htaccess-redirects-generator/https-vs-http/
Verify the 301 redirects are in place using a tool like https://httpstatus.io/ - you can drop in all the URLs you want to check at once.
Then make sure every HTTP URL returns a 301 and every HTTPS URL returns a 200.
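If you'd rather script that check than paste URLs into a web tool, here's a rough sketch in Python using only the standard library (the domain and paths are placeholders, not from this thread):

```python
import http.client
from urllib.parse import urlsplit

def to_https(url):
    """Map an http:// URL to its https:// twin."""
    return "https://" + url.split("://", 1)[1]

def status_and_location(url):
    """Fetch a URL WITHOUT following redirects; return (status, Location header)."""
    parts = urlsplit(url)
    conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parts.netloc, timeout=10)
    conn.request("HEAD", parts.path or "/")
    resp = conn.getresponse()
    return resp.status, resp.getheader("Location")

# Every HTTP URL should 301 to its HTTPS twin; the twin should 200:
# for url in ["http://mydomain.com/", "http://mydomain.com/about/"]:
#     code, location = status_and_location(url)
#     assert code == 301 and location == to_https(url), url
#     assert status_and_location(to_https(url))[0] == 200, url
```

The network loop is left commented out since the domain is made up - point it at your own URL list.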
Then run a search and replace:
http://mydomain.com to https://mydomain.com
http://www.mydomain.com to https://www.mydomain.com
If you are running a WordPress website, either of these is a very effective plugin (and free if you install it from wordpress.org's plugin repo):
https://bettersearchreplace.com/ or https://interconnectit.com/products/search-and-replace-for-wordpress-databases/
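As a toy illustration of that search-and-replace step (the function and filenames below are hypothetical): a naive string replace is fine for plain HTML or text, but note that WordPress stores PHP-serialized arrays whose encoded string lengths a blind replace can corrupt - handling that safely is exactly what the plugins above are for.

```python
def swap_protocol(text, domain="mydomain.com"):
    """Rewrite http:// links for one domain (and its www variant) to https://.
    Naive: safe for plain text, risky for PHP-serialized WordPress data."""
    for host in (domain, "www." + domain):
        text = text.replace("http://" + host, "https://" + host)
    return text

# e.g. run over a plain-text database dump:
# with open("dump.sql") as f:
#     fixed = swap_protocol(f.read())
# with open("dump-https.sql", "w") as f:
#     f.write(fixed)
```

Scoping the replace to your own domain (rather than replacing every "http://") keeps outbound links to third-party sites untouched.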
Your canonical tags should be self-referencing (unless you have a very good reason otherwise, like third-party content) - meaning the canonicals should point to the https:// URLs, not the http:// URLs. Moz will flag any problems here, as will https://www.screamingfrog.co.uk/redirect-checker/ and https://deepcrawl.com. Some references on rel-canonical tags:
https://moz.com/blog/rel-canonical
https://yoast.com/rel-canonical/
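To make the self-referential part concrete, here's a small sketch (standard library only; the example URL is hypothetical) that pulls the canonical URL out of a page's HTML so you can check it points at the https:// version:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of every <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and a.get("href"):
            self.canonicals.append(a["href"])

def canonical_of(html):
    """Return the first canonical URL in the page, or None if absent."""
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonicals[0] if finder.canonicals else None

# A self-referential canonical on an HTTPS page looks like:
page = '<head><link rel="canonical" href="https://mydomain.com/page/"></head>'
```

For a page served at https://mydomain.com/page/, `canonical_of` should return exactly that URL - anything http:// is the problem Moz is flagging.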
Last but not least, here is a wonderful tutorial that covers migrating from HTTP to HTTPS:
https://www.keycdn.com/blog/http-to-https/
I hope this was of help,
Tom
-
Hi Paul,
Yes thank you, that's brilliant and confirms what I've been thinking - that the HTTPS hasn't been done properly, as the pages aren't redirecting when you go to the HTTP version. I thought the client had sorted the .htaccess, so I'll look into that.
If the redirects are done properly do I need to add canonical tags or are the redirects enough? Just want to make sure I'm covering all bases.
Thanks so much for your advice.
Cheers,
Ruth
-
You need to ensure that the HTTP versions of the site's URLs are no longer reachable, Ruth. That means adding a 301-redirect to force all URLs to their HTTPS versions. This is the most likely cause of your issue.
To test, simply go to a page URL in the browser address bar and remove the s from the HTTPS and hit enter. Watch what happens. If the address bar shows the automatic change back to the HTTPS version of the URL, you're good. If it doesn't, you'll need to add the redirect.
You should also ensure that all the internal links within the site have been rewritten to use the HTTPS version of the URLs - like menus, sidebars, widgets, and in-content links to other pages.
Hope that helps?
Paul
-
Thanks, Thomas. The HTTPS URLs are in place, but I was concerned that if the HTTP URLs are still showing up, the move to HTTPS hasn't been done properly.
-
Do a search & replace on your site, then recheck it: search for the HTTP URLs and replace them with the HTTPS URLs.