Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies.
Does Google index dynamically generated content/headers, etc.?
-
To avoid duplicate content, we are moving away from a model with 30,000 pages, each with a separate URL that looks like /prices/&lt;product-name&gt;/&lt;city&gt;-&lt;state&gt;. These pages often have duplicate content because the same products overlap from city to city, and it's hard to keep 30,000 pages unique when sometimes the only distinctions are the price and the city/state.
We are moving to a model with around 300 unique pages, where some of the information that used to be in the URL will move into the page itself (headers, etc.) to cut down on duplicate content across those 300 pages.
My question is this: if we have 300 unique-content pages with unique URLs, and we then put some dynamic information (year, city, state) into the page itself, will Google index this dynamic content?
The question behind this one is: how do we continue to rank for searches for that product in the city/state being searched without having that information in the URL?
Any best practices we should know about?
-
Hi there,
Not sure I have enough information to weigh in on the first part of your question - Google will index whatever it sees on the page. If you deliver the content to Googlebot, it gets indexed. The problem comes when you deliver different content to different users. Try a tool like SEO Browser to see how Googlebot views your site.
To answer your second question, it's often hard to rank near-duplicate pages for specific cities/states without running into massive duplicate content problems. Matt Cutts actually addressed this a while back. He basically stated that if you have multiple pages targeting different locations, it's best to include a few lines of unique content on each page (I recommend near the top) to make each one unique.
"In addition to address and contact information, 2 or 3 sentences about what is unique to that location and they should be fine." (Source)
But this technique would be very hard with only 300 product pages. The alternative - stuffing these pages with city/state information for every possible combination - is not advised.
http://www.seomoz.org/q/on-page-optimization-to-rank-for-multiply-cities
So in the end, it's actually not hard to rank for city-state keywords without having them in the URL - the information just needs to be in the content or in other places like the title tag or internal link structure. But doing this for thousands of locations with only 300 pages, without keyword stuffing, is near impossible.
The best thing to do is figure out how to create unique content for every page you want to rank for, and take that route.
For example, I might create a "Seattle" page, write unique content for the top of the page, then list 50 or so products with unique Seattle prices. (This is a rough strategy - you'd have to refine it greatly to work for your situation.)
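The "Seattle page" idea above can be sketched roughly as follows - a hand-written intro (the unique content) combined with localized product prices. This is a minimal illustration, not an implementation: the names `CITY_INTROS`, `PRICES`, and `build_city_page`, and all the data, are hypothetical.

```python
# Hypothetical sketch: each location page = unique hand-written intro
# + localized product/price list, so no two city pages read the same.

CITY_INTROS = {
    "seattle-wa": "Serving Seattle since 2005, with same-day delivery "
                  "throughout King County and free pickup downtown.",
}

PRICES = {  # (city_slug, product) -> localized price
    ("seattle-wa", "widget"): 19.99,
    ("seattle-wa", "gadget"): 34.50,
}

def build_city_page(city_slug: str, city_name: str) -> str:
    """Assemble the crawlable text for one city page."""
    intro = CITY_INTROS[city_slug]  # unique copy, not templated boilerplate
    rows = [
        f"{product.title()}: ${price:.2f}"
        for (slug, product), price in sorted(PRICES.items())
        if slug == city_slug
    ]
    return "\n".join([f"{city_name} Prices", intro, *rows])

page = build_city_page("seattle-wa", "Seattle")
```

The point of the sketch is the split: the intro is what makes the page unique in Google's eyes, while the price rows carry the local data the searcher actually wants.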
Hope this helps! Best of luck with your SEO.
-
I see. To get the city-state pages indexed, they must have their own URLs. If the content can only be accessed by posting a form (assuming that's how the search feature works), then a search engine can't see it.
To get around this, you could put links underneath the search box to popular searches. This will get them indexed.
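A hedged sketch of those "popular search" links: render plain anchor tags pointing at the city-state URLs so crawlers can reach pages that would otherwise only be reachable by submitting the search form. The URL pattern borrows the /prices/&lt;product&gt;/&lt;city&gt;-&lt;state&gt; shape from the question; the `popular_searches` list and function name are made up for illustration.

```python
# Hypothetical sketch: crawlable links under the search box, so pages that
# are normally reached via a form POST get a plain <a href> path for bots.

popular_searches = [
    ("Seattle", "WA"),
    ("Portland", "OR"),
    ("Boise", "ID"),
]

def popular_search_links(product_slug: str) -> str:
    """Return an HTML fragment of crawlable links for one product page."""
    links = [
        f'<a href="/prices/{product_slug}/{city.lower()}-{state.lower()}">'
        f"{city}, {state}</a>"
        for city, state in popular_searches
    ]
    items = "\n".join(f"  <li>{link}</li>" for link in links)
    return f"<ul>\n{items}\n</ul>"

html = popular_search_links("widget")
```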
Does that answer the question?
Thanks
Iain - Reload
-
Thanks for the reply. The city-state content wouldn't be driven by the URL; it would be driven by the city-state the user searched for. I.e. if a person searched for &lt;product&gt; &lt;city&gt; &lt;state&gt;, I would want our /product/ page to show up and display content for their local city/state.
-
Hi Editable Text,
In short: if you show Google a crawlable link to the content with the dynamic header/content, and the content is driven by the unique URL, then yes, it will index it.
As with any SEO (or life) question, there are a few terms and conditions:
- The pages need to be unique enough not to be classed as duplicate content
- The pages are intelligently linked internally
- You have external links pointing deep into the site
- You have a decent site architecture
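The "content driven by the unique URL" point can be sketched like this: the city/state lives in the URL path, and the server derives the dynamic header from it, so every location gets a distinct, crawlable address rather than per-visitor content. The route pattern and function are hypothetical, modelled on the URL shape from the question.

```python
import re

# Hypothetical route: the URL itself decides the header, so Googlebot sees
# a stable page per location instead of content that changes per visitor.
ROUTE = re.compile(
    r"^/prices/(?P<product>[\w-]+)/(?P<city>[\w-]+)-(?P<state>[a-z]{2})$"
)

def render(path: str) -> str:
    """Return the page header for a URL, or a 404 marker if it doesn't match."""
    m = ROUTE.match(path)
    if not m:
        return "404"
    product = m.group("product").replace("-", " ").title()
    city = m.group("city").replace("-", " ").title()
    state = m.group("state").upper()
    # Same template everywhere, but the unique URL drives the values.
    return f"<h1>{product} Prices in {city}, {state}</h1>"
```

Because the header is a pure function of the URL, a crawlable link to each URL is all Google needs to index each location's version of the page.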
To answer your second question, you'll need unique pages for each location - unless the content would be so thin that you'd need to group locations together. The URL doesn't have to include the keyword, but it's damn helpful if it does.
Hope that helps
Iain - Reload Media