Prevent indexing of dynamic content
-
Hi folks!
I discovered a bit of an issue with a client's site. The site consists primarily of static HTML pages; however, within one page (a car photo gallery), a line of PHP code:
dynamically generates 100 or so pages comprising the photo gallery, all with the same page title and meta description. The photo gallery script resides in the /gallery folder, which I attempted to block via robots.txt, to no avail. My next step will be to include a:
within the head section of the HTML page, but I am wondering if this will stop the bots dead in their tracks, or will they still be able to pick up on the pages generated by the call to the PHP script residing a bit further down on the page?
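For reference, the tag I have in mind would look something along these lines (a sketch only; the exact content values are what I'm unsure about):

```html
<!-- Placed inside the <head>. "noindex" asks engines to drop the page
     from their index; "follow" still allows them to crawl its links. -->
<meta name="robots" content="noindex, follow">
```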
Dino
-
Hello Steven,
Thank you for providing another perspective. However, all factors considered, I agree with Shane's approach on this one. The pages add very little merit to the site and exist primarily to provide the site users with eye-candy (e.g. photos of classic cars).
-
Just personally, I would still deindex or canonical them. They are just pages with a few images, so not of much value, and unless all the titles and descriptions target varying keywords and content is added, they will cannibalize each other and possibly even drag down the site due to hundreds of pages of thin content....
So from an SEO perspective it probably IS better to deindex or canonical them. Three to five years ago the advice might have been to keep them and keyword-target each one, but not in the age of content.
(Unless the images were optimized for image searches for saleable products, but I do not think that is the case.)
-
Hi Dino,
I know this won't solve the immediate problem you asked about, but wouldn't it be better for your client's site (and for SEO) to alter the PHP so that the title and meta description are replaced with variables that can also be dynamic, depending on whichever of the 100 or so pages gets created?
That way, rather than worrying about a robot seeing 100 pages as duplicate content, it could see 100 pages as 100 pages.
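For instance, something along these lines might work (a rough sketch only; $carName and $photoNum are hypothetical placeholders for whatever variables the gallery script already has for each page):

```php
<?php
// Hypothetical sketch: emit a unique <title> and meta description for
// each dynamically generated gallery page. $carName and $photoNum are
// placeholders for data the gallery script already knows.
$title = htmlspecialchars($carName . ' photo ' . $photoNum . ' | Classic Car Gallery');
$description = htmlspecialchars('Photo ' . $photoNum . ' of the ' . $carName
    . ' from our classic car photo gallery.');
?>
<title><?php echo $title; ?></title>
<meta name="description" content="<?php echo $description; ?>">
```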
-
It depends on how the pages are being created (I would assume it is off of a template page)
So within the template of this dynamically created page you would place
But if this is the global template - you cannot do this as it will noindex every page which of course is bad.
If you want to PM me the URL of the page, I can take a look at your code and see what is going on and how to rectify it, as right now I think we are talking about the same principles, but different words are being used.
It really is pretty straightforward (what I am saying): the pages that you want kept out of the index DO NOT need a nofollow, they need a meta noindex.
But there are many variables; if you have already disallowed the directory in robots.txt, then no bot will go there to pick up the updated noindex directive....
If there is no way to add a meta noindex, then you need to nofollow and submit a manual removal request.
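To illustrate that robots.txt conflict (a hypothetical file, assuming the /gallery/ block described earlier in the thread):

```
# Hypothetical robots.txt. With this Disallow in place, compliant bots
# never request anything under /gallery/, so they can never read a
# meta noindex on those pages. The Disallow must be removed first.
User-agent: *
Disallow: /gallery/
```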
-
I completely understand and agree with all the points you have conveyed. However, I am not certain of the best approach to "noindex" the URLs that are being created dynamically from within the static HTML page. Maybe I am making this more complex than it needs to be...
-
So it is the pages themselves that are dynamically created that you want out of the index, not the page that contains the links?
If this is so ---
noindex the pages that are created dynamically
Therein lies the problem. I did have the nofollow directive in place specifying the /gallery/ folder, but apparently, the bots still crawled it.
Nofollow does not remove a page from the index; it only tells the bot not to pass authority. It is still feasible that the bot will crawl the link, so without the noindex, nofollow is not the correct directive, because the page (even though nofollowed) is still being reached and indexed.
PS. Also, if you have the nofollow on the links, you may want to remove it so the bots will go straight through to the page and grab the noindex directive. But if you want to try to keep authority from "evaporating," you can continue to nofollow; you may then need to request that the dynamically generated pages (URLs) be removed using Webmaster Tools.
-
The goal is to have the page remain in the index, but not follow any dynamically generated links on the page. The nofollow directive (in place for months) has not done the job.
-
?
If a link is coming into the page and you have noindex, nofollow, this would remove it from the index and prevent the following of any links.
This is NOT instant, and can take months to occur depending on depth of page, crawl schedule, etc. (you can try to speed it up by using Webmaster Tools to remove the URL).
What is the goal you are attempting to achieve?
To get the page out of the index, but still have its links followed?
Or have it remain in the index, but just not follow the links on the page?
?
-
Therein lies the problem. I did have the nofollow directive in place specifying the /gallery/ folder, but apparently, the bots still crawled it. I agree that the noindex removes the page, but I wasn't certain if it prevented crawling of the page, as I have read mixed opinions on this.
I just thought of something else... perhaps an external URL is linking to this page, allowing it to be crawled. I am off to examine this possibility.
Thanks for your response!
-
noindex will only remove the page from the index and disallow the act of indexing the specific page (or pages created off the template) you place the tag in, upon the next crawl of that page.
Bots will still crawl the page and follow any links that are readable, as long as there is not a nofollow directive.
I am not sure I fully understand the situation, so I would not say this is my "recommendation" but an answer to the specific question.....
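In other words, the two directives are independent and can be combined; hypothetical examples:

```html
<meta name="robots" content="noindex, follow">   <!-- out of the index, links still followed -->
<meta name="robots" content="nofollow">          <!-- stays in the index, links not followed -->
<meta name="robots" content="noindex, nofollow"> <!-- out of the index, links not followed -->
```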
but I am wondering if this will stop the bots dead in their tracks or will they still be able to pick-up on the pages generated
Hope this helps!