Technical Automated Content - Indexing & Value
-
One of my clients provides some financial analysis tools, which generate automated content on a daily basis for a set of financial derivatives. Essentially, they try to estimate through technical means whether a particular share price is going up or down during the day, as well as its support and resistance levels.
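For context, the kind of calculation these tools run looks roughly like the classic floor-trader pivot points below (a simplified sketch only; the client's actual model may well differ):

```python
def pivot_levels(high: float, low: float, close: float) -> dict:
    """Classic floor-trader pivot points, one common way of deriving
    intraday support and resistance from the previous session."""
    pivot = (high + low + close) / 3
    return {
        "pivot": round(pivot, 2),
        "r1": round(2 * pivot - low, 2),   # first resistance
        "s1": round(2 * pivot - high, 2),  # first support
        "r2": round(pivot + (high - low), 2),
        "s2": round(pivot - (high - low), 2),
    }

# Made-up previous-day figures for a hypothetical instrument
print(pivot_levels(high=105.0, low=98.0, close=102.0))
```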
These tools are fairly popular with visitors; however, I'm not sure about the 'quality' of the content from a Google perspective. They keep an archive of these tools which tallies up to nearly 100 thousand pages, and what bothers me particularly is that the content varies only slightly from one page to the next.
Textually there are maybe 10-20 different phrases which describe the move for the day; the page structure is otherwise similar, except for the values which are expected to be reached each day. They believe it could be useful for users to access back-dated information to see what happened in the past. The main issue, however, is that there are currently no backlinks at all to any of these pages, and I assume Google could deem them 'shallow' pages that provide little content and become irrelevant as time passes. I'm also not sure whether this could cause a duplicate content issue; they do already add a date in the title tags, and in the content, to differentiate the pages.
I am not sure how I should handle these pages. Is it possible to have Google prioritize the 'daily' published one? Say I published one today: if I searched "Derivative Analysis", I would want to see the one dated today rather than the 'list-view' or any older analysis.
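One idea I've been considering (not something we've implemented yet, and the /analysis/YYYY-MM-DD path scheme is hypothetical) is to list the daily URL in the XML sitemap with a current lastmod, as a freshness hint to crawlers:

```python
from datetime import date

def daily_sitemap_entry(base_url: str, day: date) -> str:
    """One <url> entry for today's analysis page; a fresh <lastmod>
    is a hint (not a guarantee) that this is the newest version."""
    loc = f"{base_url}/analysis/{day.isoformat()}"  # hypothetical URL scheme
    return (
        f"<url><loc>{loc}</loc>"
        f"<lastmod>{day.isoformat()}</lastmod>"
        "<changefreq>daily</changefreq>"
        "<priority>1.0</priority></url>"
    )

print(daily_sitemap_entry("https://example.com", date.today()))
```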
-
I would appreciate some more feedback. I'm looking to group some of these pages; from the 100k, we're bringing it down to around 33k.
As regards comments, I'm not sure that's very feasible; from the research we did, not many people go into the back-dated entries, so it's highly doubtful we'd receive many, if any, comments.
-
Right, I guess that's true, as we still rank for other terms. However, there are concerns that this could affect the domain's authority (I don't think that's the case). We've decided to drop at least a third of these 'automated pages' by loading them via AJAX; this way there should be a bit less in the Google index.
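One caveat we're aware of: Google can execute JavaScript in many cases, so AJAX alone isn't a guaranteed way to keep pages out of the index. A more direct option would be a noindex header on archive pages past a cutoff; a minimal sketch in Flask (the route and the two-week cutoff are assumptions, and render_analysis is a stand-in):

```python
from datetime import date, timedelta
from flask import Flask, make_response

app = Flask(__name__)
CUTOFF = timedelta(days=14)  # assumed shelf life, not a measured figure

def render_analysis(day: date) -> str:
    return f"<h1>Analysis for {day}</h1>"  # stand-in for the real template

@app.route("/analysis/<iso_date>")
def analysis(iso_date: str):
    page_date = date.fromisoformat(iso_date)
    resp = make_response(render_analysis(page_date))
    if date.today() - page_date > CUTOFF:
        # Ask crawlers not to index stale archive pages; users can
        # still browse the back-dated data as before.
        resp.headers["X-Robots-Tag"] = "noindex"
    return resp
```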
-
If certain areas of the website have duplicate content, Google will only ignore the pages which contain the duplication; the effect will never be on the complete website!
-
I don't exactly want all the content to be deemed unique; what I'm more interested in is making sure that this content does not penalize the rest of the website. It's fine if it's ignored by Google once it's more than a week or two old. What we don't want is old results coming up when today's value is far more interesting.
I'd be happy if Google would prioritize the 'daily' posts more in relation to 'freshness'.
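Google also supports an unavailable_after robots directive for exactly this 'fresh now, stale later' case; a sketch of stamping each daily page with an expiry (the two-week shelf life is an assumption on my part):

```python
from datetime import datetime, timedelta

def expiry_meta_tag(published: datetime, shelf_life_days: int = 14) -> str:
    """Robots meta tag with Google's unavailable_after directive, asking
    for the page to be dropped from results after a cutoff date."""
    cutoff = published + timedelta(days=shelf_life_days)
    # Google accepts widely used date formats (e.g. RFC 850, ISO 8601)
    return ('<meta name="robots" '
            f'content="unavailable_after: {cutoff.date().isoformat()}">')

print(expiry_meta_tag(datetime(2012, 6, 1)))
```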
-
In my personal opinion, slightly varied content can still count as duplicate content, mainly because the major percentage of content across the different pages is the same...
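To put a rough number on how 'same' two pages are, you could compare word shingles, the same idea near-duplicate detection is built on; a minimal sketch with made-up sample phrases:

```python
def shingles(text: str, n: int = 3) -> set:
    """Word n-grams ('shingles'), a standard unit for near-duplicate checks."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    """Share of shingles two pages have in common; close to 1.0
    means the pages are close to duplicates."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

page_a = "XYZ looks set to rise today with resistance at 105.20 and support at 98.40"
page_b = "XYZ looks set to fall today with resistance at 104.80 and support at 97.90"
print(f"shingle overlap: {jaccard(page_a, page_b):.0%}")
```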
Given how you explain the content is generated, I don't think there is a way you can change the pages so that each becomes unique from the others, and adding unique content to each page is not a very good idea when there are around 100 thousand pages, as you said earlier!
If I were in your place, I would add a comment section below the content so that users who are interested in it can share their experience: how this data helped them, what exactly happened in the market... This user-generated content will help make the upcoming pages unique.
This idea will help, to an extent, to give new life to old pages, but making all the pages unique is next to impossible in my eyes!
Obviously, these are just my suggestions, but I would love to hear from others what they would do if they went through a similar situation!
Related Questions
-
New Subdomain & Best Way To Index
We have an ecommerce site, we'll say at https://example.com. We have created a series of brand new landing pages, mainly for PPC and social, at https://sub.example.com, but we would also like for these to get indexed. These are built on Unbounce, so there is an easy option to simply uncheck the box that says "block page from search engines"; however, I am trying to speed up this process but also do this the best/correct way. I've read a lot about how we should build landing pages in a sub-directory, but one of the main issues we are dealing with is long page load time on https://example.com, so I wanted a kind of fresh start. I was thinking a potential solution to index these quickly/correctly was to make a redirect such as https://example.com/forward-1 -> https://sub.example.com/forward-1 and then submit https://example.com/forward-1 to Search Console, but I am not sure if that will even work. Another possible solution was to put some of the subdomain links on the root domain, say right on the pages or in the navigation. Also, will I definitely be hurt by 'starting over' with a new website, even though the MozBar on my subdomain https://sub.example.com shows the same domain authority (DA) as the root domain https://example.com? Recommendations and steps to be taken are welcome!
-
No content in the view source, why?
Hi, I have a website where you don't see the article body in the view source, but if you use the inspect element tool you can see the content. Do you know why? Thanks, Roy
-
Any SEO value in gTLD redirect?
So, my client is thinking of purchasing several gTLDs with second-level keywords important to us. Stuff like this... we don't want .popsicles, just the domain with the second-level keyword. Those cost anywhere from $20-30 right now: grape.popsicles, cherry.popsicles, rocket.popsicles, companyname.popsicles. The thinking is that it's best to be defensive: don't let a competitor get the gTLD with our name in it (agreed), and don't let them capitalize on a keyword-rich gTLD (hmm). The theory was that we or a competitor could buy this gTLD and redirect it to our relevant page for, say, cherry popsicles. They wonder if that would help that gTLD page rank well and sort of work in lieu of AdWords for pages that are not ranking well. I don't think this will work: a redirected page shouldn't rank better than the page it points to, unless Google gave it points for an exact match in the URL. Do you think it will? Does Google grade any part of a URL that redirects? Viewing this video from Matt Cutts, I surmise that a gTLD would be ranked like any other page: if its content, inbound links, etc. support a high DA, well, OK then, you get graded like every domain. In the case of a redirect, the page would not be indexed as a standalone, so that is a moot point, right? So any competitor buying a gTLD with the hope of ranking well against us would have to build up PageRank in that new domain, and for our purposes I see that being hugely difficult for anyone, even us. Still, a defensive purchase of some of these might not be a bad idea, since it's a fairly low-cost investment. Other thoughts?
-
To index search results or to not index search results?
What are your feelings about indexing search results? I know big brands can get away with it (Yelp, eBay, etc.). Apart from UGC, it seems like one of the best ways to capture long-tail traffic at scale. If the search results offer valuable/engaging content, would you give it a go?
-
Duplicate Content Question
Brief question: SEOmoz is telling me that I have duplicate content on the following two pages: http://www.passportsandvisas.com/visas/ and http://www.passportsandvisas.com/visas/index.asp. The default page for the /visas/ directory is index.asp, so it is effectively the same page, but apparently SEOmoz, and more importantly Google, treat these as two different pages. I read about 301 redirects, etc., but in this case there aren't two physical HTML pages, so how do I fix this?
-
How Long Does it Take for Rel Canonical to De-Index / Re-Index a Page?
Hi Mozzers, We have 2 e-commerce websites, Website A and Website B, sharing thousands of pages with duplicate product descriptions. Currently only the product pages on Website B are indexing, and we want Website A indexed instead. We added the rel canonical tag on each of Website B's product pages, pointing at the matching product on Website A. How long until Website B gets de-indexed and Website A gets indexed instead? Did we add the rel canonical tag correctly? Thanks!
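For reference, a quick spot check we could run (placeholder URLs; assumes the requests library is installed) to confirm each Website B page's canonical points at its Website A counterpart:

```python
import re
import requests  # assumes the requests library is available

def get_canonical(url: str):
    """Fetch a page and pull the href out of its rel=canonical tag
    (a naive regex parse; fine for a spot check, not for production)."""
    html = requests.get(url, timeout=10).text
    match = re.search(
        r'<link[^>]*rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
        html, re.IGNORECASE)
    return match.group(1) if match else None

# Placeholder URLs for a Website B product page and its Website A twin
found = get_canonical("https://website-b.example/product/123")
expected = "https://website-a.example/product/123"
print("OK" if found == expected else f"mismatch: {found}")
```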
-
Technical SEO issue
Hi Everyone, I have encountered a major issue on one of my clients' websites (a kitchen appliance website). This client has 2 main websites (A & B) linked with each other, representing 2 different categories of appliances. We are trying to create some pages for the brands this store carries. One brand page has been created, and when searching for it on the SERP, the result should appear under URL A, but it is under URL B. I don't know what is going on. Can someone explain what happened? Thank you,
-
Image and Content Management
My boss has decided that on the new website we are building, he wants all content and images managed by not allowing copying of content and/or saving of images. Some of the information and images are proprietary, yet most are available for public viewing; nevertheless, he wants copying and saving prohibited. We would still want to keep the content indexable and use appropriate alt tags, etc. I wanted to find out if there is any SEO reason, backed by facts, why this would not be a good idea. Would implementing code to prohibit (or at least make difficult) saving images and copying content penalize us?