What is the value of Google crawling dynamic URLs with no SEO?
-
Hi all,
I am working on a travel site for a client where thousands of product listing pages are created dynamically. These pages are not SEO optimised and are just lists of products with no content other than the product details. There are no title or description meta tags on the listing pages. You then click "Find out more" to go to the full product details page. There is no way to SEO these dynamic pages.
The main product details page also has no content other than the details, and no meta tags.
To help increase my Google rankings for the rest of the site, which is search optimised, would it be better to block Google from indexing these pages?
Are these pages hurting my ability to improve rankings, given that the SEO of my content pages has been done to a good level with unique titles, descriptions and useful content?
Thanks in advance,
John
-
Thanks Tim. The search part of the site is integrated from a third party, so it is hard to get them to change anything. But as you say, if it is not hurting then all is OK.
-
To answer your last question first, this should not hurt your main content pages that you have optimized.
Without knowing exactly how the site is set up, there is still benefit in having these dynamically created pages, especially since they contain product details. Even without meta data, the content (the "details" you mention) can still be read by the crawlers and the theme of each page can still be determined. Assuming there are navigation links back to your main optimized content pages and that the pages are thematically related, some benefit can be passed between them. So in this situation the benefit outweighs the risk (assuming I have understood the situation correctly), and I would not block the dynamically created pages.
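For reference, if you did ever decide to keep these listing pages out of the results, a meta robots tag in the head of each page is usually a cleaner mechanism than a robots.txt Disallow, because robots.txt only stops crawling and can leave already-indexed URLs showing. A minimal sketch of what that tag would look like:

<!-- Hypothetical example: keeps the listing page out of the index
     but still lets crawlers follow its links to your optimized pages -->
<meta name="robots" content="noindex, follow">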
I would still work with the developer to find a way to push the product name and details into the title and meta description tags of the dynamically created pages.
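As a rough sketch of what that could produce (the field names below are hypothetical placeholders, standing in for whatever the product feed actually exposes), each dynamically created page might output something like:

<!-- Illustrative only: {product_name}, {destination} and {price} are hypothetical placeholders -->
<title>{product_name} in {destination} | Example Travel Site</title>
<meta name="description" content="Book {product_name} in {destination} from {price}. View dates, itinerary and full product details.">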
Related Questions
-
Blocking Dynamic Search Result Pages From Google
Hi Mozzers, I have a quick question that probably won't have just one solution. Most of the pages that Moz flagged for duplicate content were dynamic search result pages on my site. Could this be a simple fix of just blocking these pages from Google altogether? Or would Moz then just report these pages as critical crawl errors instead of content errors? Ultimately, I contemplated whether or not I wanted to rank for these pages, but I don't think it's worth it considering I have multiple product pages that rank well. In my case, the best option is probably to leave out these search pages, since they have more of a negative impact on my site, resulting in more content errors than I would like. So would blocking these pages from the search engines and Moz be a good idea? Maybe a second opinion would help: what do you think I should do? Is there another way to go about this, and would blocking these pages do anything to reduce the number of content errors on my site? I appreciate any feedback! Thanks! Andrew
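For illustration, if blocking turns out to be the route taken, a robots.txt rule scoped to the search results path would cover both Google and Moz, since Moz's crawler (rogerbot) also respects robots.txt. This is only a sketch; the /search/ path is hypothetical and would need to match the site's actual URL pattern:

# Hypothetical sketch: stop crawlers fetching internal search result pages
User-agent: *
Disallow: /search/

Pages blocked this way simply stop being crawled, so they should no longer surface as duplicate content errors, though URLs Google has already indexed may linger in the index even once crawling stops.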
Intermediate & Advanced SEO | drewstorys0
-
How can I make sure Google is crawling a link from an iframe (video)?
Does Google crawl backlinks from an iframe, for example from a YouTube video embedded in a blog post? TIA!
Intermediate & Advanced SEO | zpm20140
-
301 Redirecting from Static to Dynamic URLs. I think we messed up
I'm looking for some guidance on an issue I believe we created for ourselves, and on whether we should undo what we did. We recently added attributed search to our sites, which of course created a bunch of dynamically generated URLs. For various reasons, it was decided to take some of our existing static URLs and 301 redirect them to their dynamic counterparts. Ex: .../Empire-Paintball-Masks-0Y.aspx now redirects to .../Paintball-Masks-And-Goggles-0Y.aspx?Manufacturer=Empire Many of these static URLs had top-3 rankings for their associated keywords. Now we don't rank for anything. I realize that 301 redirecting is the way to go... if you NEED to. My guess is our drop in keyword rankings is directly tied to what we did. I'm looking for a solid argument to make to my boss as to why we should not have done this and that it, more than likely, has resulted in dropped keyword rankings and organic traffic. I welcome any input. Also, if we decided to revert (remove all 301 redirects and de-index all dynamic URLs), what is the likelihood we can recapture some of this lost organic traffic? Can I disallow indexing in a robots.txt file to remove, say, anything with a '?' in the URL? Would the above URL example (which was ranking in the top 3 in the SERPs) have a good chance of finding its way back? Thanks
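On the robots.txt question specifically: Google does honour wildcards, so a rule like the sketch below would match any URL containing a query string. Bear in mind it only stops crawling; URLs that are already indexed will not necessarily be removed, so a noindex tag (while the pages are still crawlable) or a removal request in Webmaster Tools is usually needed first:

# Sketch: block crawling of any URL containing a query string
User-agent: *
Disallow: /*?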
Intermediate & Advanced SEO | Istoresinc1
-
Does putting a Google custom search box on my site make Google think my users are bouncing?
I added a Google custom search box to my pages, that's doing an advanced Google search. A lot of people are using it. So users are coming to my site from a Google search, and then often performing another Google search on my site. Should I be worried that Google may interpret the resultant user behavior as a bounce or pogo-stick? Or will the fact that the second search occurred on my site, using custom search, and with advanced parameters signal to Google that this is not a dissatisfied user returning to Google? Thanks
Intermediate & Advanced SEO | GilReich0
-
Google showing high volume of URLs blocked by robots.txt in the index - should we be concerned?
If we search site:domain.com vs site:www.domain.com, we see 130,000 vs 15,000 results. When reviewing the site:domain.com results, we're finding that the majority of the URLs showing are blocked by robots.txt. They are on subdomains that we use as production environments (and contain similar content to the rest of our site). We also see the message "In order to show you the most relevant results, we have omitted some entries very similar to the 541 already displayed." SEER Interactive mentions that this is one way to gauge a Panda penalty: http://www.seerinteractive.com/blog/100-panda-recovery-what-we-learned-to-identify-issues-get-your-traffic-back We were hit by Panda some time back -- is this an issue we should address? Should we unblock the subdomains and add noindex, follow?
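If you do go the unblock-and-noindex route, one way to cover those whole subdomains without editing every template is an X-Robots-Tag response header. A minimal sketch for Apache (assuming mod_headers is available; adjust for whatever server the environments actually run on):

# In the subdomain's vhost or .htaccess: mark every response as noindex
# (crawlers still follow links by default)
Header set X-Robots-Tag "noindex"

Note the subdomains have to be crawlable (not Disallowed in robots.txt) for Google to see the header and drop the URLs.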
Intermediate & Advanced SEO | nicole.healthline0
-
How to remove wrong crawled domain from Google index
Hello, I'm running a WordPress multisite. When I create a new site for a client, we do the preparation using the multisite domain address (ex: cameleor.cobea.be). To keep the site protected we use the "multisite privacy" plugin, which allows us to restrict the site to admins only. When the site is ready we use a domain mapping plugin to map the client domain to the multisite (ex: cameleor.com). Unfortunately, we recently switched our domain mapping plugin for another one, and 2 sites got crawled by Google on their multisite address as well. So now when you type "cameleor" into Google you get the 2 domains in the SERPs (see here: http://screencast.com/t/0wzdrYSR). It's been 2 weeks or so since we fixed the plugin issue, and now cameleor.cobea.be is redirected to the correct address, cameleor.com. My question: how can I get rid of those wrong URLs? I can't remove them in Google Webmaster Tools, as they belong to another domain (cf. cameleor.cobea.be, for which I can't get authenticated), and I wonder if they will ever get removed from the index, as they still redirect to something (no error in the eyes of Google). Does anybody have an idea or a solution for me, please? Thank you very much for your help. Regards, Jean-Louis
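One thing worth confirming is that the old multisite address issues a permanent 301 rather than a temporary 302, since a 301 is what lets Google drop the cameleor.cobea.be URLs from its index over time. If the Apache configuration is accessible, a host-level rule along these lines (a sketch only, assuming mod_rewrite) makes the redirect explicit:

# Sketch: permanently redirect the multisite address to the mapped domain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^cameleor\.cobea\.be$ [NC]
RewriteRule ^(.*)$ http://cameleor.com/$1 [R=301,L]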
Intermediate & Advanced SEO | JeanlouisSEO0
-
Multinational SEO
Hi all, The situation: we have a .com website that is the core of our business, and over the last 3 years we have built it into a very successful brand. Customers are able to purchase products from our website and have them delivered anywhere in the world. As part of the development of our business we obviously want to rank high within the SERPs regardless of what country our potential customer is from. We understand that we will need to translate much of our website to achieve this, and that is something we have in the pipeline. My question is more aimed at the English-speaking countries and how we should optimise our website for these. For example: websitename.com.au and websitename.co.uk were initially set up as 301 redirects to websitename.com; however, we have now set them up as their own domains, which display the exact same content as the .com website. So to clarify, the content on websitename.com/product1.html is also on websitename.com.au/product1.html and websitename.co.uk/product1.html. What would be the best way to ensure that our .com.au and .co.uk domains gain traction within the appropriate country? Is duplicate content still an issue? All our prices are displayed in USD; will this count against us? We use US English (with a sprinkle of Chinglish) as our website's copy language; should we change spelling for AU and UK? Does anyone have any case studies and/or other reports I can read that may help me find the right solution for us? Thanks, Danny
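One concrete piece of this puzzle is rel="alternate" hreflang markup, which tells Google which of the otherwise identical pages to show searchers in each country. A sketch using the example URLs above (the same block would go in the head of all three versions of the page):

<!-- Sketch: declare the country-specific alternates of the same product page -->
<link rel="alternate" hreflang="en-us" href="http://websitename.com/product1.html" />
<link rel="alternate" hreflang="en-au" href="http://websitename.com.au/product1.html" />
<link rel="alternate" hreflang="en-gb" href="http://websitename.co.uk/product1.html" />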
Intermediate & Advanced SEO | DannyCarter0
-
Googlebot vs Google mobile bot
Hi everyone 🙂 I seriously hope you can come up with an idea for a solution to the problem below, because I am kinda stuck 😕 Situation: a client of mine has a webshop located on a hosted server. The shop is built in a closed CMS, meaning that I have very limited options for changing the code: limited access to the page head, and within the CMS I can only use JavaScript and HTML. The only place I have access to a server-side language is in the root, where a Default.asp file redirects the visitor to a specific folder where the webshop is located. The webshop has 2 "languages"/store views: one for normal browsers and Googlebot, and one for mobile browsers and the Google mobile bot. In the Default.asp (classic ASP) I do a test for the user agent and redirect the user to either the main domain or the mobile subdomain. All good, right? Unfortunately not. Now we arrive at the core of the problem. Since the mobile shop was added at a later date, Google already had most of the pages from the shop in its index, and apparently uses them as entrance pages to crawl the site with the mobile bot. Hence it never sees the Default.asp (or outright ignores it), and this causes, as you might have guessed, a huge pile of duplicate content. Normally you would just place some user-agent detection in the page head and either throw Google a 301 or a rel=canonical. But since I only have access to JavaScript and HTML in the page head, this cannot be done. I'm kinda running out of options quickly, so if anyone has an idea as to how the BEEP I get Google to index the right domains for the right devices, please feel free to comment. 🙂 Any and all ideas are more than welcome.
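Since plain HTML in the page head is available, one avenue worth testing is Google's markup for separate mobile URLs, which declares the desktop and mobile versions as alternates of each other rather than leaving them to be treated as duplicates. The URLs below are hypothetical stand-ins for the shop's real desktop domain and mobile subdomain:

<!-- In the head of the desktop page -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example-shop.com/product-123">

<!-- In the head of the mobile page -->
<link rel="canonical" href="http://www.example-shop.com/product-123">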
Intermediate & Advanced SEO | ReneReinholdt0