GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2
Whole website moved to the https://www. HTTP/2 version three years ago. When we review the log files, it is clear that, for the home page, GoogleBot continues to access only via the HTTP/1.1 protocol.

- Robots file is correct (simply allowing all and referring to the https://www. sitemap).
- Sitemap references the https://www. pages, including the homepage.
- Hosting provider has confirmed the server is correctly configured to support HTTP/2 and has provided evidence of HTTP/2 access working.
- 301 redirects are set up for the non-secure and non-www versions of the website, all pointing to the https://www. version.
- Not using a CDN or proxy.
- GSC reports the home page as correctly indexed (with the https://www. version canonicalised) but still shows the non-secure version of the website as the referring page in the Discovery section. GSC also reports the homepage as being crawled every day or so.

Totally understand it can take time to update the index, but we are at a complete loss to understand why GoogleBot continues to go through HTTP/1.1 only, not HTTP/2.

A possibly related issue - and of course what is causing concern - is that new pages of the site seem to index and perform well in the SERPs... except the home page. It never makes it to page 1 (other than for the brand name), despite rating multiples higher in terms of content, speed, etc. than other pages, which still get indexed in preference to the home page.

Any thoughts, further tests, ideas, direction or anything will be much appreciated!
-
Quoting here to ask again: why is this happening with our pages too? Is Google going crazy, or what?
@James-Avery said in GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2:
First off, it's great that your entire website made the transition to HTTPS and HTTP/2 three years ago. That's definitely a step in the right direction for performance and security.
Since your hosting provider has confirmed that the server is configured correctly for HTTP/2 and you've got the 301 redirects set up properly, it's puzzling why GoogleBot is still sticking to HTTP/1.1 for the homepage. One thing you might want to double-check is whether any directives in your server configuration could be affecting how GoogleBot accesses your site; sometimes even seemingly minor settings have unintended consequences. It's also worth knowing that Googlebot decides for itself whether to crawl a site over HTTP/2: serving HTTP/2 makes a site eligible, but Google only switches when it sees a clear benefit to crawling over h2, you can't force it, and the protocol used for crawling has no effect on ranking.
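If you want a second, independent check of the HTTP/2 setup beyond the hosting provider's evidence, something like the following minimal Python sketch may help. It assumes the httpx library (`pip install httpx[http2]`), and the URL and User-Agent below are placeholders rather than the original poster's actual values:

```python
# Minimal sketch: verify which HTTP version the server negotiates.
# Assumes `pip install httpx[http2]`; URL and User-Agent are placeholders.
import httpx

URL = "https://www.example.com/"  # substitute the real homepage
UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

with httpx.Client(http2=True, headers={"User-Agent": UA}, follow_redirects=True) as client:
    response = client.get(URL)
    # httpx negotiates h2 via TLS ALPN, the same mechanism Googlebot relies on;
    # "HTTP/2" here means negotiation works, "HTTP/1.1" means it doesn't.
    print(response.http_version, response.status_code, response.url)
```

If this prints HTTP/2 (and the final URL shows the non-secure and non-www 301s resolving to the https://www. version), the server side is fine, and the HTTP/1.1 entries in the logs most likely reflect Googlebot's own choice of protocol rather than a configuration problem.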
Regarding the non-secure version of your website still showing up in the Discovery section of Google Search Console (GSC), despite the homepage being correctly indexed with the HTTPS version, it could be a matter of Google's index taking some time to catch up. However, it's worth investigating further to ensure there aren't any lingering issues causing this discrepancy.
As for the home page not ranking as well in SERPs compared to other pages, despite having better content and speed, this could be due to a variety of factors. It's possible that Google's algorithms are prioritizing other pages for certain keywords or that there are specific technical issues with the homepage that are affecting its visibility.
In terms of next steps, I'd recommend continuing to monitor the situation closely and perhaps reaching out to Google's support team for further assistance. They may be able to provide additional insights or suggestions for resolving these issues.
Overall, it sounds like you've done a thorough job of troubleshooting so far, but sometimes these technical SEO mysteries require a bit of persistence to unravel. Keep at it, and hopefully, you'll be able to get to the bottom of these issues soon!
-
@john1408 said in GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2:
It's baffling that GoogleBot persists with HTTP/1.1 for the homepage despite proper setup. Consider exploring Google Search Console further for indexing insights, and reach out to Google Support for assistance in resolving this unusual behavior.
-
It seems like you've taken several steps to ensure the correct protocol (HTTP/2) for your website, and it's puzzling that GoogleBot still accesses the home page via HTTP/1.1. A few additional suggestions:
- Crawl rate settings: Check Google Search Console (GSC) for crawl rate settings; Google might be intentionally crawling your site slowly.
- Server logs: Re-analyze the server logs to confirm that GoogleBot really is accessing the home page via HTTP/1.1, and that the requests come from the genuine Googlebot rather than a spoofed user agent. This could surface patterns or anomalies; a sketch of one way to do this follows this list.
- Mobile usability: Ensure your home page is mobile-friendly, since Google indexes mobile-first.
- URL Inspection: Use GSC's URL Inspection tool (the successor to the old Fetch and Render) to see how Google renders your home page; it might provide insights into how Google sees the page.
- Structured data and markup: Ensure the structured data and markup on your home page are correct and up to date.
- Manual submission: Consider manually requesting indexing for your home page through GSC.
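For the server-log check above, a rough sketch along these lines may help. It assumes a combined-format Apache/nginx access log, and the log path is hypothetical, so adjust both to your setup:

```python
# Rough sketch: count which HTTP protocol versions Googlebot uses for the
# homepage, assuming a combined-format access log. The path is a placeholder.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # substitute your real log file

# In the combined log format the request line is quoted, e.g.
# "GET / HTTP/1.1" -- capture the path and the protocol version.
request_re = re.compile(r'"(?:GET|HEAD) (\S+) (HTTP/[\d.]+)"')

protocols = Counter()
with open(LOG_PATH) as log:
    for line in log:
        if "Googlebot" not in line:  # crude filter; verify with reverse DNS too
            continue
        match = request_re.search(line)
        if match and match.group(1) == "/":  # homepage requests only
            protocols[match.group(2)] += 1

print(protocols)  # e.g. Counter({'HTTP/1.1': 42})
```

Because the user-agent string can be spoofed, a rigorous check would also confirm that the requesting IPs resolve back to Google (reverse DNS to googlebot.com or google.com, then a forward lookup to confirm).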
Regarding the new pages performing well compared to the home page, it might be worth revisiting your on-page SEO elements and analyzing the competition for relevant keywords.