404s being re-indexed
-
Hi All,
We are experiencing issues with pages that have been 404'd still being indexed. Originally, these were /wp-content/ index pages that had made their way into Google's index. Once I realized this, I added a directive to our .htaccess to 404 all of these pages, as there were hundreds. I tried to let Google crawl and remove these pages naturally, but after a few months I used the URL removal tool to remove them manually.
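(For illustration, a directive of that sort might look like the following - a hypothetical mod_alias sketch, not necessarily the exact rule in place:)
# hypothetical: 404 any directory-index URL under /wp-content/
# (trailing-slash paths only, so real asset files still load)
RedirectMatch 404 ^/wp-content/(.*/)?$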
However, Google seems to be continually re-indexing these pages, even after they have been manually submitted for removal in Search Console. Do you have suggestions? They all return 404s.
Thanks
-
Just to follow up - I have now actually 410'd the pages, and the 410s are still being re-indexed.
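(The 410 equivalent of the earlier sketch would be along these lines - again hypothetical:)
# same hypothetical pattern, but signalling the pages are permanently gone
RedirectMatch 410 ^/wp-content/(.*/)?$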
-
I'll check this one out as well, thanks! I used a header-response extension called Web Developer, which reveals the presence of X-Robots headers.
-
First, it would be helpful to know how you are detecting that it isn't working. What indexation tool are you using to see whether the blocks are being detected? I personally really like this one: https://chrome.google.com/webstore/detail/seo-indexability-check/olojclckfadnlhnlmlekdihebmjpjnoa?hl=en-GB
Or obviously at scale - Screaming Frog
-
Thank you for the quick response,
The pages are truly removed. However, because so many of these types of pages leaked into the index, I added a redirect to keep users on our site - no intention of being "shady", I just didn't want hundreds of 404s getting clicked and causing a very high bounce rate.
For the X-Robots header, could you offer some insight into why my directive isn't working? I believe it's a regex issue with the wp-content match. I have tried to troubleshoot to no avail.
<FilesMatch "(wp-content)">
Header set X-Robots-Tag: "noindex, nofollow"
</FilesMatch>
I appreciate the help!
-
Well, if a page has been removed and has not been moved to a new destination, you shouldn't redirect the user anyway - that kind of 'tricks' users into thinking the content was found, which is actually bad UX.
If the content has been properly removed or was never supposed to be there, just leave it as a 410 (but maybe create a nice custom 410 page, in the same vein as a decent custom 404 page - see the sketch after this list). Use the page to admit that the content is gone (without shady redirects) but point to related posts or products. Let the user decide, but still be useful.
If the content is actually still there and hence you are doing a redirect, then you shouldn't be serving 404s or 410s in the first place. You should be serving 301s - plain HTTP redirects to the content's new (or revised) destination URL.
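(Wiring up that custom 410 page in Apache can be as simple as the following - a minimal sketch, assuming a /custom-410.html page exists:)
# hypothetical: serve a friendly "gone" page along with the 410 status
ErrorDocument 410 /custom-410.html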
Yes, the HTTP header method is the correct replacement when the HTML implementation gets stripped out. HTTP Header X-Robots is the way for you!
-
Thank you! I am in the process of doing so; however, with a 410 I cannot leave my JS redirect in place after the page loads, which creates some UX issues. Do you have any suggestions to remedy this?
Additionally, after the 410 the non-X-Robots noindex is now being stripped, so the URL only resolves to a 410 with no noindex or redirect. I am still working on a noindex header; as the 410 is served server-side, I assume this would be the only way, correct?
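(A response-header noindex is indeed the usual route once a page serves a 410, since there is no normal HTML left to carry a meta tag. One mod_headers wrinkle worth knowing: plain "Header set" is skipped on error responses, so the "always" condition is needed for the tag to ride along with a 410. A minimal sketch:)
# "always" attaches the header to error responses (404/410) as well as 200s
Header always set X-Robots-Tag "noindex, nofollow"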
-
You know that a 404 effectively reads as "not found right now, but it may come back", right? By leaving that door open, you actively encourage Google to come back and re-check later.
If you want to say that the page is permanently gone, use status code 410 (Gone).
Leaving the noindex stuff in the HTTP header via X-Robots-Tag was a good call. But it was a bad call to combine a noindex with a 404, as they contradict each other ("don't index me now, but do come back and index me later, as I'll probably be back at some point").
Use a noindex and a 410 together, as they agree with each other ("don't index me now and don't bother coming back").
-
Yes, all pages have a noindex. I have also tried to noindex them via .htaccess, to add an extra layer of security, but it seems to be incorrect. I believe it is an issue with the regex when attempting to match anything with wp-content in the path.
<FilesMatch "(wp-content)">
Header set X-Robots-Tag: "noindex, nofollow"
</FilesMatch>
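(A likely culprit, offered as a hedged diagnosis: FilesMatch tests only the final filename component of a request, not the directory path, so "(wp-content)" never matches files living inside /wp-content/. The stray colon after X-Robots-Tag is also invalid - the header name and value are separate arguments. On Apache 2.4+, one way to match on the request path instead would be:)
# hypothetical fix: match the request path, not the filename (Apache 2.4+)
<If "%{REQUEST_URI} =~ m#^/wp-content/#">
    # "always" so the header also attaches to 404/410 responses
    Header always set X-Robots-Tag "noindex, nofollow"
</If>
(Alternatively, a one-line .htaccess file containing just the Header line, placed inside /wp-content/ itself, achieves the same scoping without any pattern matching; a quick curl -I against one of the URLs will confirm whether the header is coming through.)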
-
Back to basics: have you marked those pages/posts as 'noindex'? With many WP plugins, you can noindex them in bulk and then submit for re-indexation.
Related Questions
-
Google Search Console 'Change of Address' - just 301s on the source domain?
Hi all. New here, so please be gentle. 🙂 I've developed a new site, where my client also wanted to rebrand from .co.nz to .nz. On the source (.co.nz) domain, I've set up a load of 301 redirects to the relevant new page on the new domain (the URL structure is changing as well).
E.g., on the old domain: https://www.mysite.co.nz/myonlinestore/t-shirt.html
In the .htaccess on the old/source domain, I've set up 301s (using RewriteRule).
So that when https://www.mysite.co.nz/myonlinestore/t-shirt.html is accessed, it does a 301 to:
https://mysite.nz/shop/clothes/t-shirt
All these 301s are working fine. I've checked in dev tools and a 301 is being returned. My question is: is having the 301s just on the source domain enough, in regards to starting a 'Change of Address' in Google's Search Console? Their wording indicates it's enough, but I'm concerned that maybe I also need redirects on the target domain as well. I.e., does the Search Console Change of Address process work this way?
It looks at the source domain URL (that's already in Google's index), sees the 301, then updates the index (and hopefully passes the link juice) to the new URL. Also, I've set up both source and target Search Console properties as Domain properties. Does that mean I no longer need to specify whether the source and target properties are HTTP or HTTPS? I couldn't see that option when I created the properties. Thanks!
Technical SEO | WebGuyNZ
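(The kind of per-URL rule described above might look like this in the old domain's .htaccess - a hypothetical sketch using the example URLs from the question:)
RewriteEngine On
# hypothetical mapping from the old .co.nz URL to its new .nz destination
RewriteRule ^myonlinestore/t-shirt\.html$ https://mysite.nz/shop/clothes/t-shirt [R=301,L]
-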
What's best practice for cart pages?
I don't mean e-commerce sites in general, but the actual cart page itself. What's best practice for the links that customers click to add products to the cart, and for the cart page itself? Also, I use vanity URLs for my cart links, which redirect to the actual cart page with the parameters applied. Should I use 301 or 302 redirects for the links? Do I make the cart page's canonical tag point back to the store home page so that I'm not accruing link juice to a page that customers don't actually want to land on from search? I'm kinda surprised at the dearth of information out there on this - or maybe I'm not looking in the right places?
Technical SEO | VM-Oz
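(One common take, offered as a hedged sketch with made-up paths: use a 302 for the vanity links, since the vanity URL is not a permanent replacement for the cart and shouldn't accumulate signals of its own:)
# hypothetical vanity cart link served as a temporary redirect
RedirectMatch 302 ^/buy/t-shirt$ https://www.example.com/cart?add=t-shirt
-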
'domain:example.com/' - is this line, with a '/' at the end of the domain, valid in a disavow report file?
Hi everyone, Just out of curiosity, what would happen if in my disavow report I had this line: domain:example.com/ instead of domain:example.com as recommended by Google? I was just wondering if adding a / at the end of a domain would automatically render the line invalid and ignored by Google's disavow backlinks tool. Many thanks for your thoughts.
Technical SEO | LabeliumUSA
-
Spam URLs in search results
We built a new website for a client. When I do 'site:clientswebsite.com' in Google, it shows some of the real, recently submitted pages. But it also shows many pages of spam URL results, like 'clientswebsite.com/gockumamaso/22753.htm' - all of which then go to the site's 404 page. They have page titles and meta descriptions in Chinese or Japanese too. Some of the URLs are of real pages and link to the correct page, despite having the same Chinese page titles and descriptions in the SERPs. When I went to remove all the spammy URLs in Search Console (it only allowed me to temporarily hide them), a whole load of new ones popped up in the SERPs after a day or two. The site files themselves are all fine, with no errors in the server logs. All the usual stuff - robots.txt, sitemap etc. - seems OK, and the proper pages have all been requested for indexing and are slowly appearing. The spammy ones continue, though. What is going on and how can I fix it?
Technical SEO | Digital-Murph
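(One stop-gap while the root cause is investigated - a hypothetical sketch based on the example path in the question: serve the spam paths as explicit 410s so they drop out of the index faster than plain 404s:)
# hypothetical mod_alias rule for the spam path pattern seen in the SERPs
RedirectMatch 410 ^/gockumamaso/
-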
New theme adds ?v=1d20b5ff1ee9 to all URLs as part of caching. How does this affect SEO?
The new theme I am working in adds ?v=1d20b5ff1ee9 to every URL. The theme developer says it's a server-setting issue. GoDaddy support says it's part of caching and becoming prevalent in new themes. How does this impact SEO?
Technical SEO | DML-Tampa
-
Is putting a manufacturer's product manual on my site as a PDF duplicate content?
I add the product manuals to our product pages to provide additional product information to our customers. Is this considered duplicate content? Is there a best way to do this so that I can offer the information to my customers without getting penalized for it? Should the PDFs be indexable? If not, how do I control that?
Technical SEO | merch_zzounds
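(If the PDFs shouldn't be indexable, an X-Robots-Tag header scoped by file extension is the usual control - and, unlike in the thread above, this is a case where FilesMatch is the right tool, since it matches filenames. A minimal sketch:)
# keep PDFs out of the index while still serving them to customers
<FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noindex"
</FilesMatch>
-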
Blank pages in Google's webcache
Hello all, Is anybody experiencing blank pages in Google's 'Cached' view? I'm seeing just the page background and none of the content for a couple of my pages, but when I click 'View Text Only' all of the content is there. Strange! I'd love to hear if anyone else is experiencing the same. Perhaps this is something to do with the roll-out of Google's updates last week?! Thanks,
Elias
Technical SEO | A_Q
-
eCommerce Site, URLs, Canonicals and Tracking Referral Traffic
I'm very, very new to eCommerce websites that employ many different URLs to track referral traffic. I have a client that has 18 different URLs that land on the home page in order to track traffic from different referral sources. For example: http://erasedisease.com/?ref=abot - tracks traffic from an affiliate source; http://erasedisease.com/?ref=FB01 - tracks traffic from a FB ad; http://erasedisease.com/?ref=sas&SSAID=289169 - tracks more affiliate traffic ... and the list goes on and on. My first question is: do you think this could hinder our Google rankings? The SEOMoz crawl doesn't show any duplicate content errors, so I guess that's good. I've just been reading a lot about canonical URLs and eCommerce sites, but I'm not sure if this is a situation where I'd want to use some kind of canonical plugin for this WordPress website or not. Any advice would be greatly appreciated. Thanks so much!!
Technical SEO | Linwright