Does Automated High Quality Content Look Like Low Quality to Search Engines?
-
I have 1,000+ pages that all have very similar writing, but different results.
Example:
Number of days on market
Average sales price
Median sales price
etc., etc.
All the results are very different for each neighborhood. However, as noted above, the wording is similar. The content is very valuable to users, but I am concerned search engines may see it as low-quality content, since the wording is identical across all these pages (except for the results). Any views on this? Any examples to back up such views?
-
"Automated" means that my web developers have an algorithm in place that recalculates all those statistical fields on an ongoing basis, so users always have fresh, up-to-date data. From the URL I included, you can change the neighborhood (and other filters) in the top bar and the statistics will change. Great insight for the user, but since wording like "median price per year," "$ volume of active listings," etc. is the same across all pages, I wonder how I should expect search engines to treat it.
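To make "automated" concrete, here is roughly the kind of recalculation I mean -- a minimal sketch with a hypothetical data model and made-up numbers, not our actual code:

```python
# Minimal sketch of the ongoing recalculation -- hypothetical data
# model and made-up numbers, for illustration only.
from statistics import mean, median

# Each listing: (neighborhood, sale_price, days_on_market)
listings = [
    ("Waikiki", 650_000, 42),
    ("Waikiki", 720_000, 35),
    ("Kakaako", 890_000, 28),
]

def neighborhood_stats(listings, neighborhood):
    """Recompute the headline statistics shown on one neighborhood page."""
    prices = [price for hood, price, _ in listings if hood == neighborhood]
    days = [d for hood, _, d in listings if hood == neighborhood]
    return {
        "average_sales_price": mean(prices),
        "median_sales_price": median(prices),
        "average_days_on_market": mean(days),
    }

# Rerun on a schedule so every neighborhood page shows current numbers.
print(neighborhood_stats(listings, "Waikiki"))
```

The numbers change per neighborhood, but the surrounding labels and sentences stay the same -- that is exactly the part I am worried about.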
Any articles or experience to back up your ideas would be highly appreciated.
-
Ah, OK. So, when you say "automated" content, what does that mean, exactly? And why are there thousands of pages? Are they all unique somehow? How do you decide when it is worthwhile to create a new page?
I'd need more insight into your website hierarchy, content strategy, and more to give a fuller answer.
-
http://www.honoluluhi5.com/oahu/honolulu-condos/
High-quality stats on the page. Many pages like that. Good for users.
-
My concern is whether your content is duplicated in ways that offer no additional value to search engines and website visitors. For example, do you have two pages with pretty much the exact same text, except that one uses the phrase "average sales price" and the other has "median sales price" instead?
While I know that "average" and "median" mean two different things, if the only difference in the text of two pages is that one uses "average" and the other uses "median," then I would be very concerned about a Panda hit from Google. Panda hits websites that have duplicated, low-quality, and/or unoriginal content on a large scale.
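To show how mechanical that kind of near-duplication looks to an algorithm, here is a rough sketch of a shingle-overlap check in plain Python, with made-up page text. This is not Google's actual algorithm -- just an illustration of how little a one-word "average"/"median" swap changes a page in a machine's eyes:

```python
# Rough sketch of a shingle-overlap check -- made-up page text; real
# duplicate-content systems are far more sophisticated than this.

def shingles(text, n=3):
    """Return the set of overlapping n-word runs (shingles) in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

page_a = "The average sales price in Waikiki rose 4 percent this quarter."
page_b = "The median sales price in Waikiki rose 4 percent this quarter."

a, b = shingles(page_a), shingles(page_b)
jaccard = len(a & b) / len(a | b)  # shared shingles / all shingles
print(f"Shingle overlap: {jaccard:.2f}")
# ~0.64 even for one short sentence; across a full page where only
# "average"/"median" changes, the overlap approaches 1.0.
```

When thousands of pages score that close to each other, an algorithm does not need to understand real estate to flag them as near-duplicates.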
My question: unless a website has thousands of products or thousands of blog posts, do you really need thousands of pages? Most websites with thousands of pages have spun content to target one specific keyword per page -- repeated many, many times over. One of my first "SEO" jobs years ago was to rewrite website pages in different words for exactly this purpose. (I know now that it was a black-hat job.) Today, Google is smart enough to know that a single page can be relevant for multiple keyword variations and themes -- so such actions are not necessary. And rightly so!
My other concern is your use of the word "automated." 99% of the time, anything automated will appear to Google -- and, more importantly, to users -- as spam. Original, authoritative, quality, human-created content is always better. Five pages of this is better than 500 pages of automated text. I would look into consolidating a lot of your pages into a smaller set of completely original pages that each target a set of related keyword themes.
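If you do consolidate, 301-redirect the old URLs to the new pages so you keep whatever link equity they have earned. A minimal sketch of what that mapping might look like, with hypothetical URL patterns (Flask is used here purely for illustration; a server-level redirect map does the same job):

```python
# Minimal consolidation sketch -- hypothetical URL patterns; any
# framework or server config can express the same redirect map.
from flask import Flask, redirect

app = Flask(__name__)

# Old thin pages that differed only by which statistic they highlighted,
# all folded into one original market-report page per neighborhood.
CONSOLIDATED = {
    "waikiki-average-price": "/waikiki-market-report",
    "waikiki-median-price": "/waikiki-market-report",
    "waikiki-days-on-market": "/waikiki-market-report",
}

@app.route("/<path:old_path>")
def legacy_redirect(old_path):
    target = CONSOLIDATED.get(old_path)
    if target:
        # 301 (permanent) tells search engines to pass equity to the new page.
        return redirect(target, code=301)
    return "Not found", 404
```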
Again, I don't know your specific case, so I could be wrong. But your post set off a bunch of warnings. If you need any clarifications, please feel free to reply!