Should I block non-informative pages from Google's index?
-
Our site has about 1000 pages indexed, and the vast majority of them are not useful, and/or contain little content. Some of these are:
-Galleries
-Pages of images with no text except for navigation
-Popup windows that contain further information about something but contain no navigation, and sometimes only a couple of sentences

My question is whether or not I should put a noindex in the meta tags.
I think it would help, because the ratio of high-quality to low-quality pages right now is poor.
I am apprehensive because if I'm blocking more than half my site from Google, won't Google see that as a suspicious or bad practice?
-
To the spiders, would the content in the lightbox be considered on the page?
-
I would judge these pages by the revenue or search engine traffic they generate rather than by how informative they are.
I have semi-informative pages that pull lots of traffic and make lots of money - and informative pages that make next to nothing.
-
More a technical answer than SEO-specific, but you could place the popup content in a lightbox, similar to your gallery items, with a script like http://fancyapps.com/fancybox/, colorbox, etc. These let you display on-page content in a lightbox, not just photos.
So you could technically have the price table displayed in the page for non-javascript enabled clients, and the lightbox script would show it when clicked, and you wouldn't have to worry about pop-up blockers or having the popup content be a separate page.
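As a rough sketch of that approach (the class names, IDs, and table contents here are illustrative, and the exact fancybox call varies by plugin version, so check its documentation):

```html
<!-- The price table lives in the page markup, so clients without
     JavaScript still see it; the script hides it on load and shows
     it in a lightbox when the link is clicked. -->
<a href="#price-table" class="show-prices">View price list</a>

<div id="price-table" style="display: none;">
  <table>
    <tr><th>Quantity</th><th>Price per item</th></tr>
    <tr><td>100</td><td>$2.50</td></tr>
  </table>
</div>

<script src="jquery.js"></script>
<script src="jquery.fancybox.js"></script>
<script>
  // Hypothetical wiring: fancybox-style plugins typically open
  // inline content referenced by the link's href="#id" fragment.
  $(".show-prices").fancybox();
</script>
```

Since the table is real markup on the page, search engines can crawl it and you avoid maintaining a separate popup URL.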
-
I know PR shaping is most commonly done with nofollow, but the same core principle holds: you don't want to hide pages from the spiders out of fear that they're "diluting" the site's value. Doing it with noindex is just as bad as nofollow, if not worse.
-
When it comes to popups, keep in mind that some users' popup blockers might prevent these from even loading. As is, I don't think it matters much whether you noindex these price list pages or not. You certainly could, as they're not going to appear in any search result, and they're not going to attract links.
I would play with ways to improve the user experience, but putting the large tables on the page probably isn't the way to do that. A better option, I think, would be letting the user (somewhere above the fold) select the type (plain/patched/etc.), quantity, and other variables. They would then get a price quote (as on the bottom of the page), along with a button to continue the checkout process or otherwise move to the next step. I'd also display the original price per item crossed out, the phrase "bulk discounts" somewhere close, and then the new price per item.
Telling people what they need to do next (it took me a while to find where to buy) and simplifying the pricing at the same time could help a lot. I also noticed that the price quote on the contact page seems to be loading inside the same cramped frame.
-
Hi there,
Sorry I didn't see this when I posted. PR sculpting generally refers to the practice of using internal nofollows - which I'm not a fan of either, not least because it doesn't work. I also agree that pages that users could find useful should generally remain in the index.
-
Thanks for that great information. This is a good example of what I'm talking about:
http://www.stadriemblems.com/scouting/neckerchiefs/index.htm
Under "Plain Neckerchief" click on "view pricelist" or "color chart"
So, you think a better practice would be to just include that pricelist on the same page instead?
-
Hi Marisa,
To determine which pages should be noindexed, ask yourself first whether a user would want to land on the URL in question. Second, is the URL receiving traffic as an organic landing page right now? Third, does the content serve a purpose to the user? Does it need to exist?
If the answer to all of the above questions is "no," then go ahead and noindex the page. If you answer yes to one of the above, some evaluation is in order. Can you add content, improve the navigation and appearance, or make the page more useful rather than noindexing it?
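For reference, the noindex directive itself is a one-line meta tag in the page's head; a minimal example:

```html
<!-- Placed in the <head> of any page you want excluded from the index.
     "follow" tells crawlers they may still follow the page's links,
     so link equity continues to flow through it. -->
<meta name="robots" content="noindex, follow">
```

This keeps the page out of search results without blocking crawlers from reaching it, which is different from disallowing the URL in robots.txt.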
Generally you can enhance gallery pages for search engines and users by labeling/captioning the images and making sure the alt text is in order. On category pages, add some content, label products, and provide them with a next action.
Do the popups contain useful, non-repeating, or important info? If so, can the content be placed on the page somewhere instead? The only way I would use a popup and noindex it is if the content in the popup is optional and duplicated, such as the often-seen "What's This?" that explains a field or term that is repeated across the site, and each instance makes a new URL.
I've never heard of anyone running into problems with Google for noindexing too much stuff. You're essentially just telling them that the page is not good for users to find. You will, however, tend to improve organic traffic and user experience by making each page useful and adding an appropriate amount of content.
Hope that helps,
Carson
-
I'm not a fan of this (commonly called page rank shaping). First, you're trying to tell Google what to index and what to ignore. Second, how do you know those pages have no value? What if I found an image in your gallery and linked to it off my blog? Now you're missing out on link juice. It might not be viewed as suspicious, but it won't help your site any.