URL Parameter Handling In GWT to Treat Overindexation - how aggressive?
-
Hi,
My client recently launched a new site and their indexed page count went from about 20K up to about 80K - severe overindexation.
I believe this was caused by parameter handling, as some category pages now show 700 results for "site:domain.com/category1" - and apart from the top result, they are all parameterized URLs being indexed.
My question is how active/aggressive should I be in blocking these parameters in Google Webmaster Tools? Currently, everything is set to 'let googlebot decide'.
-
Hi! Did these answers take care of your question, or do you still have some questions?
-
Hey There
I would use a robots meta noindex on them (except for the top page, of course) and use rel="prev"/rel="next" to show they are paginated.
I would prefer that over using WMT. Also, WMT crawl settings will stop the crawling, but not remove the pages from the index. Plus, WMT only covers Google, not other engines like Bing. Not that Bing matters much, but it's always better to have a universal solution.
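For reference, the markup described above would look something like this on a parameter variant of a paginated category page (the URLs and page numbers are placeholders, not taken from the client's site):

```html
<head>
  <!-- Keep this variant out of the index, but still let bots follow its links -->
  <meta name="robots" content="noindex, follow">
  <!-- Signal where this page sits in the pagination sequence -->
  <link rel="prev" href="http://www.domain.com/category1?page=1">
  <link rel="next" href="http://www.domain.com/category1?page=3">
</head>
```

These tags go in the `<head>` of each paginated page, which is why they work for every search engine rather than just Google.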
-Dan
-
Hello Search Guys,
Here is some food for thought taken from: http://www.quora.com/Does-Google-limit-the-number-of-pages-it-indexes-for-a-particular-site
Summary:
"Google says they crawl the web in "roughly decreasing PageRank order" and thus, pages that have not achieved widespread link popularity, particularly on large, deep sites, may not be crawled or indexed."
"Indexation
There is no limit to the number of pages Google may index (meaning available to be served in search results) for a site. But just because your site is crawled doesn't mean it will be indexed.
Crawl
The ability, speed and depth with which Google crawls your site and retrieves pages can depend on a number of factors: PageRank, XML sitemaps, robots.txt, site architecture, status codes and speed."
"For a zero-backlink domain with 80,000+ pages, in conjunction with rel=canonical and an XML sitemap (you do submit a sitemap, don't you?), after submitting the domain to Google for a crawl, a little less than 10K pages remained in the index. A few crawls later this was reduced to a mere 250 (very good job on Google's side).
This leads me to believe the indexation cap for a newer site with low to zero PageRank/authority is around 10K."
Another interesting article: http://searchenginewatch.com/article/2062851/Google-Upping-101K-Page-Index-Limit
Hope this helps. The easy answer is to limit crawling to the most needed pages as aggressively as possible, removing the unneeded URLs and leaving only the ones you want indexed.
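To make the parameter problem concrete, here is a minimal sketch of URL canonicalization: collapsing parameter variants of a category page onto a single URL. The parameter names and domain are hypothetical placeholders, not taken from the client's site:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters assumed to only re-sort or re-display the same content
# (hypothetical names -- substitute the site's actual parameters).
IGNORED_PARAMS = {"sort", "view", "pg", "sessionid"}

def canonicalize(url: str) -> str:
    """Strip parameters that don't change page content, so that
    variants of the same category page collapse to one URL."""
    scheme, netloc, path, query, _ = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query) if k not in IGNORED_PARAMS]
    return urlunsplit((scheme, netloc, path, urlencode(kept), ""))

print(canonicalize("http://domain.com/category1?sort=price&id=7"))
# -> http://domain.com/category1?id=7
```

This is the same logic a rel=canonical tag expresses to search engines: every sort/view/session variant points back at one canonical URL, which is why the index count collapses.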