Googlebot HTTP 204 Status Code Handling?
If a user runs a search that returns no results, and the server returns a 204 (No Content), will Googlebot treat that as the rough equivalent of a 404 or a noindex? If not, then it seems one would want to noindex the page to avoid low quality penalties, but that might require more back and forth with the server, which isn't ideal.
Kurus
Thanks for your input.
I believe Google handles 204 codes the same as 200: it indexes a page with basically no content. Unless someone links to a 204 page, however, Google will never see one in your scenario. Google is not out running searches on websites to see what comes up and find more content to index. If someone were to search on your site, get a 204, and then link to it, then yes, Google could crawl and index it. In that case, though, you would likely see it in your webmaster tools under crawl errors, and you could then noindex it, block it with robots.txt, or handle it some other way.
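If you control the search endpoint, you can sidestep the ambiguity of a 204 entirely by answering an empty result set with either a 404 or a 200 plus a noindex header. A minimal PHP sketch; run_search() is a hypothetical stand-in for your own application logic:

<?php
// Hypothetical stand-in for the application's search logic.
function run_search(string $query): array {
    return []; // stub: pretend the query matched nothing
}

$results = run_search($_GET['q'] ?? '');

if (count($results) === 0) {
    // Option A: answer an empty result set with a 404 instead of a 204.
    // A 404 is an unambiguous "nothing here" signal to Googlebot.
    http_response_code(404);
    echo '<h1>No results found</h1>';
    exit;
}

// Option B (alternative to the 404 above): return 200 but keep the
// page out of the index with a single header, avoiding the extra
// back-and-forth the question worries about:
//   header('X-Robots-Tag: noindex');

// ...render the results page as normal here.

The X-Robots-Tag header addresses the round-trip concern in the question: it rides along on the normal 200 response, so no additional exchange with the server is needed.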
Related Questions
How to handle sorting, filtering, and pagination in ecommerce? Is canonical enough?
Hello, after reading various articles and watching several videos I'm still not sure how to handle faceted navigation (sorting/filtering) and pagination on my ecommerce site.

Current indexation status:
- "Real" pages (from my sitemap): 2,000
- Google Search Console (Valid): 8,000
- Google Search Console (Excluded): 44,000

Additional info: the vast majority of those 50k additional pages (44k + 8k - 2k) are pages created by sorting, filtering, and pagination. Example of how the URL changes while applying filters/sorting: example.com/category --> example.com/category/1/default/1/pricefrom/100. Every additional page is canonicalized properly, yet as you can see 6k are still indexed. When I enter site:example.com/category in Google it returns at least several results (in most cases the main page is in the 1st position). In Google Analytics I can see that ~1.5% of Google traffic comes to the sorted/filtered pages. The number of pages indexed daily (from GSC stats) is 3,000.

And so I have a few questions:
1. Is it OK to have those additional pages indexed, or will the "real" pages rank higher if the additional ones are not indexed?
2. If it's better not to have them indexed, should I add "noindex" to sorting/filtering pages, or add e.g. Disallow: /default/ in robots.txt? Or perhaps add "noindex, nofollow" to the links? Google would then have 50k fewer pages to crawl, but perhaps it would somehow impact my rankings in a negative way? (See the sketch below.)
3. As sorting/filtering is not based on URL parameters, I can't add it in GSC. Is there another way of doing that for this filtering/sorting URL structure?

Thanks in advance, Andrew
Intermediate & Advanced SEO | thpchlk
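For question 2 above, a common pattern is to emit the robots directive as an HTTP header whenever the requested URL contains filter or sort segments, rather than disallowing them in robots.txt (a disallow would stop Google from ever seeing the canonical tags on those pages). A minimal PHP sketch, assuming the /default/ and /pricefrom/ path segments from the question's example URL mark a filtered view:

<?php
// Detect a filtered/sorted view from the URL path. The /default/ and
// /pricefrom/ segments are taken from the question's example URL.
$path = (string) parse_url($_SERVER['REQUEST_URI'] ?? '/', PHP_URL_PATH);

$isFacetedView = strpos($path, '/default/') !== false
    || strpos($path, '/pricefrom/') !== false;

if ($isFacetedView) {
    // "noindex, follow" keeps the page out of the index while still
    // letting crawlers follow its links to the canonical pages.
    header('X-Robots-Tag: noindex, follow');
}

"noindex, follow" is generally preferable to "noindex, nofollow" here, since the filtered views still link to products you do want crawled.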
Hreflang: mixing with/without country code for same language
Hello, I would like to display 3 different English versions of my website: 1 for UK, 1 for CA, and 1 for other English users. It would look like this for a page:

<link rel="alternate" href="https://xxx.com/en-gb" hreflang="en-GB" /> (English content with £ prices)
<link rel="alternate" href="https://xxx.com/en-ca" hreflang="en-CA" /> (English content with $CA prices)
<link rel="alternate" href="https://xxx.com/en/" hreflang="en" /> (English content without currency)

I wonder if I can mix this hreflang without country code with hreflangs with country code for the 2 other specific versions, or if the version without country code will appear whatever the country, even if I specified the others. In other terms, is hreflang="en" > hreflang="en-CA" + hreflang="en-GB" if tagged together on the same page? Thank you
Intermediate & Advanced SEO | AlexisH
Google Indexing Of Pages As HTTPS vs HTTP
We recently updated our site to be mobile optimized. As part of the update, we had also planned on adding SSL security to the site. However, we use an iframe from a third-party vendor for real estate listings on a lot of our site pages, and that iframe was not SSL friendly; the vendor does not have a solution for that yet. So those iframes weren't displaying their content. As a result, we had to shift gears and go back to plain HTTP rather than the HTTPS we were hoping for. However, Google seems to have indexed a lot of our pages as HTTPS, which gives a security error to any visitors. The new site was launched about a week ago, and there was code in the .htaccess file pushing to www and HTTPS. I have fixed the .htaccess file so it no longer forces HTTPS. My question is: will Google reindex the site once it recognizes the new .htaccess rules in the next couple of weeks?
Intermediate & Advanced SEO | vikasnwu
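If the goal is to get the indexed https:// URLs swapped back quickly, passively waiting for a recrawl is the slow path. A more direct option is a 301 from HTTPS back to HTTP, so every recrawled https:// URL hands its place to the http:// version. A minimal PHP sketch; note this only works while the SSL certificate is still valid enough for the handshake to complete at all:

<?php
// If the request arrived over HTTPS, permanently redirect to HTTP so
// Google re-canonicalizes the indexed https:// URLs on recrawl.
if (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') {
    $target = 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
    header('Location: ' . $target, true, 301);
    exit;
}

The same rule can live in .htaccess instead; either way, Google reprocesses the URLs as it recrawls them, which typically happens gradually over days to weeks rather than all at once.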
How to handle potentially thousands (50k+) of 301 redirects following a major site replacement
We are looking for the very best way of handling potentially thousands (50k+) of 301 redirects following a major site replacement, and I mean total replacement. Things you should know:

- The existing domain has 17 years of history with Google, but rankings have suffered over the past year, and yes, we know why (and the bitch is we paid a good-sized SEO company for that ineffective and destructive work).
- The URL structure of the new site is completely different, and SEO-friendly URLs rule. This means that many thousands of historical URLs (mainly dynamic ones) will attract 404 errors, as they will not exist any more. Most are product profile pages, and the god Google has indexed them all. There are also many links to them out there.
- The new site is fully SEO optimised and is passing all tests so far; however, there is a way to go yet.

So here are my thoughts on the possible ways of meeting our need:

1: Create 301 redirects for each and every page in the .htaccess file. That would be one huge .htaccess file, 50,000 lines plus, and I am worried about the effect on site speed.
2: Create 301 redirects for each and every unused folder, and wildcard the file names. This would be a single redirect for each folder, so the 404 issue is overcome but the user doesn't open the precise page they are after.
3: Write some code to create a hard-copy 301 index.php file for each and every folder that is to be replaced.
4: Write code to create a hard-copy 301 .php file for each and every page that is to be replaced.
5: Just let the pages all die and list them with Google to advise of their death.
6: Have the redirects managed by a database rather than .htaccess or single redirect files. Probably the most challenging thing will be loading the data in the first place, but I assume this could be done programmatically, especially if the new URL can be inferred from the old. (See the sketch below.)

Maybe I am missing another, simpler approach; please discuss.

Intermediate & Advanced SEO | GeezerG
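For option 6, a common implementation is to point Apache's 404 handling at a small script that looks the requested path up in a redirect table and issues the 301 itself. A minimal sketch; the database credentials and the redirects table with old_path and new_url columns are hypothetical:

<?php
// 404 handler: look the requested path up in a redirect table and
// issue a 301 if a mapping exists. Credentials and the table schema
// (redirects: old_path, new_url) are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

$path = (string) parse_url($_SERVER['REQUEST_URI'] ?? '/', PHP_URL_PATH);

$stmt = $pdo->prepare('SELECT new_url FROM redirects WHERE old_path = ?');
$stmt->execute([$path]);
$newUrl = $stmt->fetchColumn();

if ($newUrl !== false) {
    header('Location: ' . $newUrl, true, 301); // permanent redirect
    exit;
}

// No mapping found: serve a genuine 404.
http_response_code(404);
echo '<h1>Page not found</h1>';

Wired up with a single ErrorDocument 404 directive pointing at the script, this keeps .htaccess at one line, sidesteps the 50,000-line file in option 1, and with an index on old_path the lookup stays fast at that scale.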
Googlebot being redirected but not users?
Hi, we seem to have a slightly odd issue. We noticed that a number of our location category pages were slipping off page 1 and onto page 2 in our niche. On inspection, we noticed that our Arizona page had started ranking in place of a number of other location pages (Cali, Idaho, NJ, etc.). Weirdly, the pages they had replaced were no longer indexed, and remained so despite being fetched, tweeted, etc. One test was to see when the dropped-out pages had last been crawled, or at least cached. When running 'cache:domain.com/category/location' on these pages, we were getting 301 redirected to, you guessed it, the Arizona page. Very odd. However, the dropped-out pages were serving 200 OK when run through header checker tools, Screaming Frog, etc. On the face of it, it would seem Googlebot is getting redirected when it hits a number of our key location pages, but users are not. Has anyone experienced anything like this? The theming of the pages is quite different in terms of content, meta, etc. Thanks.
Intermediate & Advanced SEO | Sayers
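A quick way to test for this kind of user-agent cloaking is to request the page twice, once with a normal browser user agent and once spoofing Googlebot's, and compare the status codes. A minimal PHP sketch using the curl extension (the URL is a placeholder); note that redirects keyed on Googlebot's IP range rather than its user agent won't show up in this test:

<?php
// Fetch the HTTP status code for a URL under a given user agent.
function fetchStatus(string $url, string $userAgent): int {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_NOBODY         => true,  // headers only, no body
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_USERAGENT      => $userAgent,
    ]);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return $status;
}

$url = 'https://example.com/category/location'; // placeholder

$asBrowser   = fetchStatus($url, 'Mozilla/5.0');
$asGooglebot = fetchStatus($url,
    'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)');

// A 200 vs 301 mismatch points to user-agent cloaking, which is
// often injected by malware or a misbehaving plugin.
echo "Browser: {$asBrowser}, Googlebot: {$asGooglebot}\n";

If the statuses match here but Google's cache still shows the redirect, check for conditions on Googlebot's IPs (reverse DNS) in the server config and audit recently modified files; this pattern is a common symptom of a hacked .htaccess.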
Switching from HTTP to HTTPS: 301 redirect or keep both & rel canonical?
Hey Mozzers, I'll be moving several sites from HTTP to HTTPS in the coming weeks (same brand, multiple ccTLDs). We'll start on a low-traffic site and test it for 2-4 weeks to see the impact before rolling out across all 8 sites. Ideally, I'd like to simply 301 redirect the HTTP version of each page to the HTTPS version (to get that potential SEO rankings boost). However, I'm concerned about a potential drop in rankings, links, and traffic. So I'm considering an alternative: instead of the 301 redirect approach, keep both versions live and accessible, and add rel canonical on the HTTPS pages pointing to HTTP, so that Google keeps the current pages/links indexed as they are today (in this case, HTTPS is more for UX than for SEO). Has anyone tried the rel canonical approach, and if so, what were the results? Do you recommend it? Also, for those who have implemented HTTPS, how long did it take for Google to index the HTTPS pages over the older HTTP ones?
Intermediate & Advanced SEO | Steven_Macdonald
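At the server, the two options in the question come down to a few lines. A minimal PHP sketch of both, with a hypothetical $migrateToHttps flag so a single low-traffic site can be flipped first during the test period:

<?php
// Hypothetical per-site flag for the staged rollout described above.
$migrateToHttps = false;

$isHttps = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';
$host    = $_SERVER['HTTP_HOST'];
$uri     = $_SERVER['REQUEST_URI'];

if ($migrateToHttps && !$isHttps) {
    // Option 1: 301 redirect, consolidating all signals on HTTPS.
    header('Location: https://' . $host . $uri, true, 301);
    exit;
}

// Option 2: both versions stay live, and the canonical tag declares
// the preferred protocol (HTTP here, matching the question's
// "HTTPS is for UX" framing; flip the flag to prefer HTTPS).
$scheme    = $migrateToHttps ? 'https://' : 'http://';
$canonical = $scheme . $host . $uri;
echo '<link rel="canonical" href="' . htmlspecialchars($canonical) . '" />';

Worth noting as a design choice: the canonical-to-HTTP approach deliberately tells Google to keep ranking the HTTP URLs, so you keep current equity but forgo any HTTPS ranking benefit until you switch to the 301.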
How to handle individual page redirects on Wix?
I switched from one domain to another because I wanted a domain that had our company name in it, so it was more brand-y. However, the old domain had better DA/PA. Originally I set up a global 301 from the old domain to the new, but now I'm finding that I actually need to set up individual 301s from each URL of the old site, or at least from each page. However, I am using Wix, so it looks like I can't always do URL-to-URL 301s, although I can redirect any URL to a page on the new website. The problem is that, in some cases, the content on the new site is different (for example, I can only link a particular blog post on the old site to the new site's main blog page). How closely do URLs/pages need to resemble each other for link juice to be transferred? Also, should I try to set up all these redirects manually, or bite the bullet and go back to using the old domain? The problem is that I did a lot of beginner SEO junk for the new domain, like submitting to a few higher-quality directories and getting our website on various industry resource sites, and I'd need to redo this entirely if I went back to the old domain. What do you think?
Intermediate & Advanced SEO | BohmKalish123
Would you rate-control Googlebot? How much crawling is too much crawling?
One of our sites is very large: over 500M pages. Google has indexed 1/8th of the site, and they tend to crawl between 800k and 1M pages per day. A few times a year, Google will significantly increase their crawl rate, overnight hitting 2M pages per day or more. This creates big problems for us, because at 1M pages per day Google is consuming 70% of our API capacity, and the API overall is at 90% capacity. At 2M pages per day, 20% of our page requests are 500 errors. I've lobbied for an investment/overhaul of the API configuration to allow for more Google bandwidth without compromising user experience. My tech team counters that it's a wasted investment, as Google will crawl to our capacity whatever that capacity is.

Questions to Enterprise SEOs:
- Is there any validity to the tech team's claim? I thought Google's crawl rate was based on a combination of PageRank and the frequency of page updates. This suggests there is some upper limit, which we perhaps haven't reached, but which would stabilize once reached.
- We've asked Google to rate-limit our crawl rate in the past. Is that harmful? I've always looked at a robust crawl rate as a good problem to have. Is 1.5M Googlebot API calls a day desirable, or something any reasonable Enterprise SEO would seek to throttle back?
- What about setting a longer refresh rate in the sitemaps? Would that reduce the daily crawl demand? We could increase it to a month, but at 500M pages Google could still have a ball at the 2M pages/day rate.

Thanks
Intermediate & Advanced SEO | lzhao
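One server-side lever relevant here: Googlebot backs off when it receives 503s, so rather than letting a crawl spike surface as 500 errors for everyone, the site can answer crawler traffic with 503 plus Retry-After while the API is saturated. A minimal PHP sketch, assuming a hypothetical apiLoad() helper that reports current API utilization:

<?php
// Hypothetical helper reporting current API utilization (0.0 to 1.0).
function apiLoad(): float {
    return 0.95; // stub value for the sketch
}

$userAgent   = $_SERVER['HTTP_USER_AGENT'] ?? '';
$isGooglebot = stripos($userAgent, 'Googlebot') !== false;

if ($isGooglebot && apiLoad() > 0.9) {
    // A 503 tells Googlebot "temporarily unavailable, slow down".
    // Unlike a 500, it signals a deliberate, temporary condition.
    http_response_code(503);
    header('Retry-After: 3600'); // suggest retrying in an hour
    exit;
}

// Normal page rendering continues here at acceptable load.

This is a safety valve, not a policy: serving 503s to Googlebot for extended periods can lead to URLs being dropped, so the crawl-rate limit in Search Console remains the better long-term control.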