Prevent indexing of dynamic content
-
Hi folks!
I discovered a bit of an issue with a client's site. The site consists primarily of static HTML pages; however, within one page (a car photo gallery), a line of PHP code:
dynamically generates 100 or so pages comprising the photo gallery, all with the same page title and meta description. The photo gallery script resides in the /gallery folder, which I attempted to block via robots.txt, to no avail. My next step will be to include a:
within the head section of the HTML page, but I am wondering whether this will stop the bots dead in their tracks, or whether they will still be able to pick up on the pages generated by the call to the PHP script residing a bit further down on the page?
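For reference, the kind of tag I have in mind (a standard robots meta tag; the exact content value is the open question) looks like this:

```html
<!-- Placed inside <head>; tells compliant crawlers not to index this page -->
<meta name="robots" content="noindex">
```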
Dino
-
Hello Steven,
Thank you for providing another perspective. However, all factors considered, I agree with Shane's approach on this one. The pages add very little merit to the site and exist primarily to provide the site users with eye-candy (e.g. photos of classic cars).
-
Just personally, I would still deindex or canonicalize them. They are just pages with a few images, so not of much value, and unless all the titles and descriptions target varying keywords and content is added, they will cannibalize each other and possibly even drag down the site with hundreds of pages of thin content.
So from an SEO perspective it probably IS better to deindex or canonicalize. Three to five years ago, maybe the advice would have been to keep them and keyword-target them, but not in the age of content.
(Unless the images were optimized for image searches for saleable products, but I do not think that is the case.)
-
Hi Dino,
I know this won't solve the immediate problem you asked about, but wouldn't it be better for your client's site (and for SEO) to alter the PHP so that the title and meta description are replaced with variables that can also be dynamic, depending on whichever of the 100 or so pages gets created?
That way, rather than worrying about a robot seeing 100 pages as duplicate content, it could see 100 pages as 100 pages.
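A minimal sketch of that idea, assuming the gallery template is handed an item ID via the query string (all variable and array names here are hypothetical):

```php
<?php
// Hypothetical lookup: map each gallery item ID to its own title/description.
$cars = [
    '57-chevy'   => ['title' => '1957 Chevrolet Bel Air Photos',
                     'desc'  => 'Photo gallery of a restored 1957 Bel Air.'],
    '65-mustang' => ['title' => '1965 Ford Mustang Photos',
                     'desc'  => 'Photo gallery of a classic 1965 Mustang.'],
];
$id  = $_GET['car'] ?? '57-chevy';          // fall back to a default item
$car = $cars[$id] ?? $cars['57-chevy'];
?>
<head>
  <title><?php echo htmlspecialchars($car['title']); ?></title>
  <meta name="description"
        content="<?php echo htmlspecialchars($car['desc']); ?>">
</head>
```

With something like this, each generated URL carries a unique title and description instead of 100 identical ones.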
-
It depends on how the pages are being created (I would assume it is off a template page).
So within the template of this dynamically created page you would place
But if this is the global template, you cannot do this, as it will noindex every page, which of course is bad.
If you want to PM me the URL of the page, I can take a look at your code and see what is going on and how to rectify it; right now I think we are talking about the same principles, but different words are being used.
It really is pretty straightforward (what I am saying): the pages that you want not indexed DO NOT need a nofollow, they need a meta noindex.
But there are many variables. If you have already disallowed the directory in robots.txt, then no bot will go there to pick up the updated noindex directive...
If there is no way to add a meta noindex, then you need to nofollow and put in for a manual removal.
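To illustrate the conflict: a disallow rule like the one below stops bots from fetching the gallery pages at all, so any noindex tag inside them goes unseen (paths shown are examples):

```
# robots.txt -- blocks crawling of /gallery/, so bots never fetch those pages
User-agent: *
Disallow: /gallery/

# ...which means a tag inside any /gallery/ page is never read:
#   <meta name="robots" content="noindex">
# Remove the Disallow line first, then let the pages be recrawled
# so the noindex can take effect.
```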
-
I completely understand and agree with all the points you have conveyed. However, I am not certain of the best approach to "noindex" the URLs that are being created dynamically from within the static HTML page. Maybe I am making this more complex than it needs to be...
-
So it is the pages themselves that are dynamically created that you want out of the index, not the page that contains the links?
If this is so ---
noindex the pages that are created dynamically
Therein lies the problem. I did have the nofollow directive in place specifying the /gallery/ folder, but apparently, the bots still crawled it.
Nofollow does not remove a page from the index; it only tells the bot not to pass authority. It is still feasible that the bot will crawl the link, so without the noindex, nofollow is not the correct directive, because the page (even though nofollowed) is still being reached and indexed.
PS. If you have the nofollow on the links, you may want to remove it, so the bots will go straight through to the page and grab the noindex directive. If you want to keep authority from "evaporating," you can continue to nofollow, but then you may need to request that the dynamically generated pages (URLs) be removed using Webmaster Tools.
-
The goal is to have the page remain in the index, but not follow any dynamically generated links on the page. The nofollow directive (in place for months) has not done the job.
-
?
If a link is coming into the page and you have noindex, nofollow, this would remove the page from the index and prevent the following of any links.
This is NOT instant and can take months to occur, depending on depth of page, crawl schedule, etc. (You can try to speed it up by using Webmaster Tools to remove the URL.)
What is the goal you are attempting to achieve?
To get the page out of the index, but still followed?
Or to remain in the index, but just not follow links on the page?
?
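For reference, those two outcomes map onto different values of the standard robots meta tag:

```html
<!-- Out of the index, but links on the page still followed -->
<meta name="robots" content="noindex, follow">

<!-- Stays in the index, but links on the page are not followed -->
<meta name="robots" content="index, nofollow">

<!-- Out of the index AND links not followed -->
<meta name="robots" content="noindex, nofollow">
```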
-
Therein lies the problem. I did have the nofollow directive in place specifying the /gallery/ folder, but apparently the bots still crawled it. I agree that the noindex removes the page, but I wasn't certain whether it prevented crawling of the page, as I have read mixed opinions on this.
I just thought of something else... perhaps an external url is linking to this page - allowing it to be crawled. I am off to examine this perspective.
Thanks for your response!
-
Noindex will only remove the page from the index and disallow indexing of the specific page (or pages created off the template) that you place the tag in, upon the next crawl of that page.
Bots will still crawl the page and follow any readable links, as long as there is not a nofollow directive.
I am not sure I fully understand the situation, so I would not say this is my "recommendation," but an answer to the specific question:
but I am wondering if this will stop the bots dead in their tracks or will they still be able to pick-up on the pages generated
Hope this helps!