Limit on Google Removal Tool?
-
I'm dealing with thousands of duplicate URLs caused by the CMS, so I'm using some automation to get through them.
What is the daily limit on removal requests? Weekly? Monthly?
Any ideas?
Thanks,
Ben
-
Hi. I was curious to know whether you were able to find all the URLs that needed to be removed, whether they were effectively removed, and whether your traffic came back. I'm assuming you were dealing with Panda issues.
-
Thanks @Baldea. Yes, you are correct: as soon as it processes the pending removal requests, it allows more to be submitted.
In one sitting, I managed to get 1,000 URLs removed; within a few hours I could process more.
-
It's been a while since I've used GWT, but I think you have a limit of 500 requests per day or something like that.
As soon as those requests are processed, the counter resets.
-
Thanks, Baldea.
Yes, I have done all of the above, but some pages that slipped through are still stuck in Google's index; measures are now in place to stop that happening in the future.
In the meantime, I'm using GWT's batch removal process to remove the URLs that already got into the index.
So how many URLs can I remove with GWT?
-
Hi BJ
I'd use the URL removal tool only after making sure I'd used rel=canonical to point out which URL Google should consider "legit", and a noindex, follow meta tag on the pages you don't want Google to index (such as tag pages, category pages, or subpages).
You can also remove pages by URL prefix, which may be far more effective than removing them one by one. For example, submitting http://www.example.com/category/ will make Google remove all matching URLs.
But I strongly advise you to minimize duplicate content generation as much as you can and to use the two methods highlighted above.
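For reference, a minimal sketch of those two tags, with placeholder URLs:
<!-- On each duplicate page, pointing to the "legit" version -->
<link rel="canonical" href="http://www.example.com/the-legit-url" />
<!-- On tag pages, category pages, and subpages you don't want indexed -->
<meta name="robots" content="noindex, follow" />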
Hope it helps.
Related Questions
-
Removing the Trailing Slash in Magento
Hi guys, We have noticed trailing slash vs. non-trailing slash duplication on one of our sites. Example:
Duplicate: https://www.example.com.au/living/
Preferred: https://www.example.com.au/living
So, SEO-wise, we suggested placing a canonical tag on all trailing-slash pages pointing to the non-trailing-slash versions. However, the devs have advised against removing the trailing slash from some URLs with a blanket rule, as this may break functionality in Magento that depends on the trailing slash. The full site would need to be tested after implementing a blanket rewrite rule. Is there any other way to address this trailing-slash duplication issue without breaking anything in Magento? Keen to hear from you guys. Cheers
Intermediate & Advanced SEO | brandonegroup
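If only the canonical route is taken (so nothing in Magento's routing changes), the tag on the trailing-slash version would look roughly like this, using the example URL from the question:
<link rel="canonical" href="https://www.example.com.au/living" />
Both URLs keep resolving, while Google is told which one to index.
-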
Using disavow tool for 404s
Hey Community, Got a question about the disavow tool for you. My site is getting thousands of 404 errors from old blog/coupon/you-name-it sites linking to our old URL structure (which used underscores and ended in .jsp). It seems the webmasters of these sites aren't answering back or haven't updated their sites in ages, so the links return 404 errors. If I disavow these domains and/or links, will it clear out these 404 errors in Google? I read the GWT help page on it, but it didn't seem to answer this question. Feel free to ask any questions that may help you understand the issue more. Thanks for your help,
-Reed
Intermediate & Advanced SEO | IceIcebaby
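For reference, the disavow file GWT accepts is a plain text file with one URL or domain: entry per line, and # for comments. A minimal sketch, with placeholder domains:
# Webmasters never responded to link-removal requests
domain:old-coupon-site.example.com
http://old-blog.example.com/some_old_page.jsp
-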
How to NOT appear in Google results in other countries?
I have ecommerce sites that only serve the US and Canada. Is there a way to prevent a site from appearing in the Google results in foreign countries? The reason I ask is that we also have a lot of informational pages that folks in other countries are visiting, then leaving right after reading. This is making our overall bounce rate very high (64%). When we segment the GA data to look at just our US visitors, the bounce rate drops a lot (to 48%). Thanks!
Intermediate & Advanced SEO | GregB123
-
Google and Product Description Tabs
How does Google process a product page with description tabs? For example, let's say the product page has a tab for Overview, Specifications, What's in the Box, and so on. Wouldn't that content be better served in one main product description tab with the tab names used as headings (h-tags) or highlighted paragraph separators? Or does all that content get crawled as a single page regardless of the tabs?
Intermediate & Advanced SEO | AWCthreads
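For what it's worth, if all the tab content is present in the page's HTML and merely shown or hidden with CSS or JavaScript, it gets crawled as a single page. A simplified sketch of that pattern, with the tab names as headings (the markup here is hypothetical):
<div class="tab" id="overview">
  <h2>Overview</h2>
  <p>Product overview copy...</p>
</div>
<div class="tab" id="specifications">
  <h2>Specifications</h2>
  <p>Spec details...</p>
</div>
-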
How does google recognize original content?
Well, we wrote our own product descriptions for 99% of the products we have. They are all descriptive and have at least four bullet points to show the best features of the product without reading the whole description. So instead of using manufacturer descriptions, we spent $$$$ and worked with a copywriter, and we still do the same thing whenever we add a new product to the website. However, since we use a product data feed and send it to Amazon and Google, they use our product descriptions too. I always wait a couple of days until Google crawls our product pages before I send recently added products to Amazon or Google. I believe that if Google crawls our product page first, we will be the owner of the content. Am I right? If not, I believe Amazon is taking advantage of my original content. I'm asking because we are a relatively new ecommerce store (online since Feb 1st). While we didn't have a lot of organic traffic in the past, I see that our organic traffic dropped about 50% in April; it seems it was affected by the latest Google update. We never bought a link or did black-hat link building; actually, we didn't do any link-building activity until last month. So did Google decide that we have shallow or duplicated content and drop our rankings? I see that our organic traffic has been improving very, very slowly since then, but it is basically between 5% and 10% of our current daily traffic. What do you guys think? Do you think all our original content effort is going to trash?
Intermediate & Advanced SEO | serkie
-
Our login pages are being indexed by Google - How do you remove them?
Each of our login pages shows up under a different subdomain of our website. Currently these are accessible to Google, which is a huge competitive advantage for our competitors looking for our client list. We've done a few things to try to rectify the problem:
- Added noindex/noarchive meta tags to each login page
- Added robots.txt rules to all subdomains to block search engines
- Gone into Webmaster Tools, added the subdomain of one of our bigger clients, then requested to remove it from Google (this would be great to do for every subdomain, but we have a LOT of clients and it would require tons of backend work)
Other than the last option, is there something we can do that will remove the subdomains from search engines? We know the robots.txt rules are working, since the message on search results says: "A description for this result is not available because of this site's robots.txt – learn more." But we'd like the whole link to disappear. Any suggestions?
Intermediate & Advanced SEO | desmond.liang
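One note for anyone reading: a robots.txt Disallow stops Googlebot from recrawling those pages, so it never sees the noindex tag, which is why the bare URLs linger in results. A sketch of the alternative, serving the noindex directive as an HTTP header at the server level so crawling stays allowed (Apache syntax; where to scope it depends on how the subdomains are configured):
<IfModule mod_headers.c>
  # Applied on the login subdomains instead of blocking them in robots.txt
  Header set X-Robots-Tag "noindex, noarchive"
</IfModule>
-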
Best practice for removing indexed internal search pages from Google?
Hi Mozzers, I know that it's best practice to block Google from indexing internal search pages, but what's best practice when "the damage is done"? I have a project where a substantial part of our visitors and income (about 3%) lands on internal search pages, because Google has indexed them. I would like to block Google from indexing the search pages via the meta noindex, follow tag because of:
- Google's guidelines: "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines." http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35769
- Bad user experience
- The search pages (probably) stealing rankings from our real landing pages
- The Webmaster Tools notification "Googlebot found an extremely high number of URLs on your site", with links to our internal search results
I want to use the meta tag to keep the link juice flowing. Do you recommend using robots.txt instead? If yes, why? Should we just go dark on the internal search pages, or how should we proceed with blocking them? I'm looking forward to your answer! Edit: Google has currently indexed several million of our internal search pages.
Intermediate & Advanced SEO | HrThomsen
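For reference, the two options being weighed look like this (the /search/ path is a placeholder for however the internal search URLs are structured):
Option 1, the meta tag in the <head> of each search results page (crawling must stay allowed so Googlebot can actually see it):
<meta name="robots" content="noindex, follow" />
Option 2, a robots.txt block, which stops crawling but won't remove URLs that are already indexed:
User-agent: *
Disallow: /search/
-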
Should I remove Meta Keywords tags?
Hi, Do you recommend removing the meta keywords tag, or is there "nothing to lose" in keeping it? Thanks
Intermediate & Advanced SEO | BeytzNet
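For reference, the tag in question is the one below. Google has publicly said it ignores the keywords meta tag for web ranking, so removing it should have no ranking effect either way:
<meta name="keywords" content="example, keywords, here" />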