How should I handle pages that have no information at the moment but are not 404s?
-
As those who have seen my past questions know, I run a small website that acts as a business/review directory for local businesses in a specific niche. Right now every business has a page at a URL such as:
http://site.com/businesses/business-name
which shows the top 5 reviews with a link to the full review list which is located at:
http://site.com/businesses/business-name/reviews
The problem(?) I have is that even for a business with 0 reviews, the latter URL is available and responds with a 200 status code, but ultimately just says "There aren't any." As a result, searches for "business name reviews" often lead to these dead-end pages, when I would rather have searchers land on the business page itself.
How is everyone handling URLs like this? Until the business has reviews, the URL is useless, but it is completely valid.
Some ideas I've had, in order from what I think is best to worst:
- Return 200, but with a meta 'noindex' tag if the business has no reviews at the time requested
- Return 404 if the business has no reviews at the time requested
- Return 302 back to the main business page if the business has no reviews at the time requested
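To make the three options concrete, here is a minimal, framework-agnostic sketch of how a reviews handler might branch between them. All names (`business`, `review_count`, `slug`, the render helpers) are illustrative assumptions, not from any real codebase:

```python
# Hypothetical handler for /businesses/<slug>/reviews, sketching the three
# options above for a business with zero reviews. Returns a plain
# (status_code, headers, body) tuple so no web framework is assumed.

NOINDEX_TAG = '<meta name="robots" content="noindex">'

def render_reviews(business):
    # Placeholder for the real review-list template.
    return f"Top reviews for {business['name']}"

def render_empty_state(business):
    # Placeholder for the "no reviews yet" page body.
    return f"There aren't any reviews for {business['name']} yet."

def reviews_response(business, strategy="noindex"):
    """Build the response for the reviews URL of one business."""
    if business["review_count"] > 0:
        return 200, {}, render_reviews(business)

    if strategy == "noindex":
        # Option 1: serve the page normally, but ask engines not to index it.
        return 200, {}, NOINDEX_TAG + render_empty_state(business)
    if strategy == "404":
        # Option 2: report the page as not found until reviews exist.
        return 404, {}, "Not Found"
    if strategy == "302":
        # Option 3: temporary redirect back to the main business page.
        return 302, {"Location": f"/businesses/{business['slug']}"}, ""
    raise ValueError(f"unknown strategy: {strategy}")
```

The 302 is deliberately temporary rather than a 301, since the reviews URL is expected to become useful again once the first review arrives.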
Does anyone have a better idea than the above for handling this situation? One other option is to get rid of the full review list entirely and rework the main business profile page, but that would obviously require a lot more development. I'm looking for the best option in the meantime.
Thanks in advance for your insight.
-
Hello Daniel,
I would not return a 404, as that tells the search engine the URL is broken without explaining why.
I would not use noindex either, as the page should still forward PageRank even though it has no content.
I think that when there aren't any reviews, I would simply expand the "There aren't any" message into something like: "There aren't any reviews for XY yet. You can look at XY's business details here, or check back later. If you have an opinion, feel free to add the first review yourself."
"Xy business details" can be a link to business detail page, so there is some anchor value carried. The page still has the ability to rank in google an to be found if you search xy business reviews. From engine perspective this is bettter than redirecting or 404, and still makes sense from user perspective as you offer them opporunities to get engaged with your page. Maybe I would do it that way.