Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Does Google index dynamically generated content/headers, etc.?
-
To avoid dupe content, we are moving away from a model where we have 30,000 pages, each with a separate URL that looks like /prices/<product-name>/<city><state>. Many of these pages have duplicate content because the product overlaps from city to city, and it's hard to keep 30,000 pages unique when sometimes the only distinction is the price and the city/state.
We are moving to a model with around 300 unique pages, where some of the info that used to be in the URL will move onto the page itself (headers, etc.) to cut down on dupe content across those 300 pages.
My question is this: if we have 300 unique-content pages with unique URLs, and we then put some dynamic info (year, city, state) into the page itself, will Google index this dynamic content?
The question behind this one is, how do we continue to rank for searches for that product in the city-state being searched without having that info in the URL?
Any best practices we should know about?
-
Hi there,
Not sure I have enough information to weigh in on the first part of your question - Google will index whatever it sees on the page. If you deliver the content to Google, then they index it. The problem comes when you deliver different content to different users. Try a tool like SEO Browser to see how Googlebot views your site.
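To illustrate the "Google will index whatever it sees on the page" point: what matters is the text actually present in the served HTML. Here's a minimal Python sketch (the parser class and sample markup are just for illustration) that pulls out the visible text a crawler has to work with - a server-rendered dynamic header shows up just like any other text:

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collects the text a crawler would see in served HTML
    (script and style contents excluded). Illustrative helper only."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

# A page that injects the city/state server-side: the header arrives
# as plain HTML in the response, so it is indexable text.
served = "<h1>Widget prices in Seattle, WA</h1><script>var x=1;</script><p>From $9.99</p>"
parser = VisibleTextExtractor()
parser.feed(served)
print(parser.chunks)  # → ['Widget prices in Seattle, WA', 'From $9.99']
```

If the header were instead injected client-side by JavaScript after a form post, it wouldn't be in the served HTML at all - which is exactly the case where indexing gets unreliable.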
To answer your second question, it's often hard to rank near-duplicate pages for specific cities/states without running into massive duplicate content problems. Matt Cutts actually addressed this a while back. He basically stated that if you have multiple pages all targeting different locations, it's best to include a few lines of unique content on each page (I recommend the top) to make each page unique.
“In addition to address and contact information, 2 or 3 sentences about what is unique to that location and they should be fine,” Source
But this technique would be very hard with only 300 product pages. The alternative - stuffing these pages with city/state information for every possible combination - is not advised.
http://www.seomoz.org/q/on-page-optimization-to-rank-for-multiply-cities
So in the end, it's actually not hard to rank for city-state keywords without having them in the URL - the information can live in the content, the title tag, or the internal link structure. But doing this for thousands of locations with only 300 pages, without keyword stuffing, is near impossible.
The best thing to do is figure out how to create unique content for every page you want to rank for, and take that route.
For example, I might create a "Seattle" page, create unique content for the top of the page, then list 50 or so products with the unique Seattle prices. (This is a rough strategy - you'd have to refine it greatly to work for your situation.)
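To make that rough strategy concrete, here's a Python sketch (product names, prices, and intro copy are all made up) of assembling a "unique intro + local prices" city page, plus a crude word-overlap check you could use to flag city pages that are still too similar to each other:

```python
def build_city_page(city, intro, products):
    # Unique hand-written intro at the top, then localized prices below.
    lines = [f"{city} Prices", intro]
    lines += [f"{name}: ${price:.2f}" for name, price in products]
    return "\n".join(lines)

def overlap_ratio(page_a, page_b):
    # Crude Jaccard similarity over words: a quick duplicate-content smell test,
    # not a stand-in for how Google actually measures duplication.
    a, b = set(page_a.lower().split()), set(page_b.lower().split())
    return len(a & b) / len(a | b)

seattle = build_city_page(
    "Seattle",
    "Hand-written copy about Seattle neighborhoods, shipping times, and local demand.",
    [("Widget", 19.99), ("Gadget", 24.50)],
)
portland = build_city_page(
    "Portland",
    "Different hand-written copy about Portland pickup options and regional pricing.",
    [("Widget", 18.99), ("Gadget", 24.50)],
)
print(round(overlap_ratio(seattle, portland), 2))
```

If two city pages score close to 1.0 on a check like this, only the prices differ - which is exactly the near-duplicate situation the question is trying to escape.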
Hope this helps! Best of luck with your SEO.
-
I see. To get the city-state pages indexed, they must have their own URLs. If a page can only be reached by submitting a form (assuming that's how your search feature works), then a search engine can't see it.
To get round this, you could put links underneath the search box to popular searches. This will get those pages indexed.
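A minimal sketch of that idea, assuming a /prices/<product>/<city>-<state> URL scheme (the slugs, URL pattern, and "popular searches" list here are assumptions, not your actual site structure). Plain anchor links are crawlable, unlike results reachable only by submitting the search form:

```python
# Hypothetical list of popular product/location searches.
popular = [
    ("widgets", "seattle", "wa"),
    ("widgets", "portland", "or"),
    ("gadgets", "austin", "tx"),
]

def popular_search_links(searches):
    # Emit plain <a href> links under the search box: crawlers follow
    # these, whereas they generally won't submit (POST) a search form.
    return "\n".join(
        f'<a href="/prices/{product}/{city}-{state}">'
        f"{product.title()} in {city.title()}, {state.upper()}</a>"
        for product, city, state in searches
    )

print(popular_search_links(popular))
```

The output is a block of static links you'd render server-side under the search box, giving each popular city-state search a crawlable entry point.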
Does that answer the questions?
Thanks
Iain - Reload
-
Thanks for the reply. The city-state content wouldn't be driven by the URL, it would be driven by the city-state that the user searched for. I.e. if the person searched for <product> <city> <state>, I would want our /product/ page to show up and show them content for their local city/state.
-
Hi Editable Text,
In short: if you show Google a crawlable link to the page with the dynamic header/content, and that content is driven by a unique URL, then yes, it will index it.
As with any SEO/life question, there are a few t&c's with this.
- The pages need to be unique enough not to be classed as duplicate content
- Make sure it's intelligently linked internally
- You have external links pointing deep into the site
- You have a decent site architecture
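As a sketch of "content driven by the unique URL": the dynamic header is computed entirely from the path, so every location gets its own crawlable address rather than depending on what the user typed into a search box (the route pattern below is an assumption for illustration):

```python
import re

# Hypothetical route: /prices/<product>/<city>-<state>. The header is a
# pure function of the URL, so each location page is independently crawlable.
ROUTE = re.compile(r"^/prices/(?P<product>[\w-]+)/(?P<city>[\w-]+)-(?P<state>[a-z]{2})$")

def render_header(path):
    match = ROUTE.match(path)
    if not match:
        return None  # unknown URL: nothing to render
    product = match["product"].replace("-", " ").title()
    city = match["city"].replace("-", " ").title()
    return f"{product} Prices in {city}, {match['state'].upper()}"

print(render_header("/prices/garage-doors/boise-id"))
# → Garage Doors Prices in Boise, ID
```

The key property is that the same URL always yields the same content - which is what lets Google crawl, index, and rank each location page separately.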
To answer your second question, you'll need a unique page for each location; if the content for a single location would be too thin, group nearby locations together. The URL doesn't have to include the keyword, but it's damn helpful if it does.
Hope that helps
Iain - Reload Media