What should I do with a large number of 'pages not found'?
-
One of my client sites lists millions of products, and hundreds or thousands are de-listed from inventory each month and removed from the site (no longer for sale). What is the best way to handle these pages/URLs from an SEO perspective? There is no appropriate page to 301 them to.
1. Should we implement 404s for each one and put up with the growing number of 'pages not found' shown in Webmaster Tools?
2. Should we add them to the robots.txt file?
3. Should we add a 'nofollow' meta tag to all these pages?
Or is there a better solution?
Would love some help with this!
-
I would leave the pages up but mark them as "noindex". When I worked in eCommerce, this was a great tactic. For UX purposes, you could steer people to similar products, but keep the originating page tagged "noindex" (optionally with "nofollow").
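As a rough illustration of that approach (a framework-agnostic sketch; the handler shape, helper names, and SKUs are all hypothetical, not from any particular platform), a delisted product page can stay live for users while carrying a noindex signal for crawlers via the `X-Robots-Tag` response header:

```python
# Sketch: keep delisted product pages live for users, but tell crawlers
# not to index them. DELISTED and the handler are hypothetical stand-ins
# for a real catalog lookup and web framework.

DELISTED = {"sku-123"}  # SKUs removed from sale

def handle_product(sku):
    """Return (status_code, headers, body) for a product URL."""
    if sku in DELISTED:
        # 200 for users, plus a noindex header so search engines
        # eventually drop the page while its link equity is retained.
        return (200,
                {"X-Robots-Tag": "noindex"},
                "This product is no longer available - see similar items.")
    return (200, {}, f"Product page for {sku}")

status, headers, body = handle_product("sku-123")
print(status, headers.get("X-Robots-Tag"))  # 200 noindex
```

The same signal can be sent with a `<meta name="robots" content="noindex">` tag in the page HTML instead of a header; the header form is handy when templates are hard to change per-product.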
-
Thanks Jane and Lesley for your responses. Great ideas from you both. I think I'll keep the pages but change the content/buying options, as you've both suggested.
I had considered 410s and might fall back on those for historical URLs in cases where we can no longer retrieve the content.
-
I always take notes from the giants on how to handle things like this, and Amazon is the giant in this arena. What do they do? They do not disable the product; they leave it on the site as unavailable. I would do the same thing. What platform are you using, and does it have a suggested-products module/plugin? If so, it can be made more prominent on pages that are no longer for sale. Either way, I would keep the page and keep its authority.
If you 301 it to another product, search satisfaction goes down and your bounce rate will rise. I would be careful with this, because Google wants to serve results that are relevant to what people are looking for.
The other option is to return a 410 (Gone) status code to get the pages de-indexed.
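The 404-vs-410 distinction above can be sketched as a simple routing rule (hypothetical SKU sets; in a real shop these would be catalog queries). A 410 tells Google the removal is permanent, which typically gets pages de-indexed faster than a 404, which crawlers may treat as possibly temporary and revisit:

```python
# Sketch: choose a status code per product URL.
DELISTED = {"sku-123"}                # removed on purpose -> 410 Gone
KNOWN_SKUS = {"sku-123", "sku-456"}   # every SKU we have ever sold

def status_for(sku):
    if sku in DELISTED:
        return 410  # Gone: deliberately removed, de-index it
    if sku not in KNOWN_SKUS:
        return 404  # Not Found: URL never existed here
    return 200      # still for sale

print(status_for("sku-123"), status_for("sku-999"), status_for("sku-456"))
# 410 404 200
```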
-
Hi Claire,
If you really can't 301, consider serving a page that offers alternative products, a search function, and an explanation of why the page's former content is no longer available. Many real estate websites are quite good at this: some maintain the URLs of properties that regularly go on the market (big-city apartments, for example) but grey out the information to show that the property is not currently for lease, while other URLs show properties in the former listing's post code.
Your robots.txt file is going to get out of control if you are having to add millions of pages to it on a regular basis, so I would personally not pursue that route.
-
Why aren't 301s an option?