Massive URL blockage by robots.txt
-
Hello people,
In May there was a dramatic increase in the number of URLs blocked by robots.txt, even though we don't have that many URLs or crawl errors. You can view the attachment to see how the figure climbed. The thing is, the company hasn't touched the robots.txt file since 2012. What might be causing this? Can it result in any penalties? Can our indexation drop because of it?
-
Even though there are fewer pages indexed than pages blocked, you still have a significant increase in indexed pages as well. That is a good thing! You technically have more pages indexed than before. It looks like you may have relaunched the site or something similar. More blocked pages could be an indexing problem, or it might be a good thing - it all depends on which pages are being blocked.
If you relaunched the site and used a great new whiz-bang CMS that created an online catalog giving your users 54 ways to sort your products, then the number of "pages" could increase with each sort option. Just imagine: sort your widgets by color, by size, by price, by price and size, by size and color, or by color and price - you get the idea. Very quickly you have a bunch of duplicate versions of a single page. If your SEO was on his or her toes, they would account for this with a canonical approach, a meta noindex, a change to robots.txt, etc. That would be good, as you are not going to confuse Google with all the different versions of the same page.
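As a rough sketch (the URL and tag placement below are hypothetical, not taken from the site in question), each sorted or filtered version of a page can either point back to the main category URL with a canonical tag or keep itself out of the index with a meta robots tag:
<!-- In the <head> of a hypothetical sorted URL such as /widgets/?sort=price -->
<link rel="canonical" href="https://www.example.com/widgets/">
<!-- Or, to drop the sorted version from the index while still letting its links be followed -->
<meta name="robots" content="noindex, follow">
Either approach keeps Google from treating the 54 sort variations as 54 competing pages.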
Ultimately, Shailendra has the approach you need to take. Look at robots.txt, look at the code on your pages, and ask what happened around 5/26/2013. All of those things need to be examined to answer your question.
-
Le Fras,
You don't have to change the robots.txt file for Google to report that more URLs are being blocked by it. The robots.txt file tells search engines not to crawl the matching URLs, but they may still keep those URLs in the index and display them in the search results.
So the search engines do know about the URLs that are being blocked, and they can report that more are being blocked as you add pages to your site that match the existing robots.txt rules.
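As a hypothetical illustration (the directory name below is made up, not taken from the site), a single long-standing rule keeps matching new URLs as they are published, so the blocked count can climb even though robots.txt itself never changes:
User-agent: *
Disallow: /filter/
Every new page that appears under /filter/ is blocked from crawling the moment it goes live, with no edit to the file required.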
-
Check your robots.txt file. Are there entries that block crawling? If you can share the URL, that would be helpful.
Regards
-
Related Questions
-
Robots.txt blocking internal resources on a WordPress site
Hi all, We've recently migrated a WordPress website from staging to live, but the robots.txt was deleted. I've created the following new one:
User-agent: *
Allow: /
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/cache/
Disallow: /wp-content/themes/
Allow: /wp-admin/admin-ajax.php
However, in the site audit on SEMrush, I now get the mention that a lot of pages have issues with blocked internal resources in the robots.txt file. These blocked internal resources are all cached and minified CSS elements: links, images and scripts. Does this mean that Google won't crawl some parts of these pages with blocked resources correctly and thus won't be able to follow these links and index the images? In other words, is this any cause for concern regarding SEO? Of course I can change the robots.txt again, but will URLs like https://example.com/wp-content/cache/minify/df983.js end up in the index? Thanks for your thoughts!
Intermediate & Advanced SEO | Mat_C
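For reference, a sketch of one common adjustment, assuming the blocked assets sit under /wp-content/cache/minify/ as in the example URL above: keep the cache directory disallowed but open up the minified CSS/JS so Googlebot can render the pages.
User-agent: *
Disallow: /wp-content/cache/
Allow: /wp-content/cache/minify/
Allow: /wp-content/themes/*.css
Allow: /wp-content/themes/*.js
Google follows the most specific matching rule, so the longer Allow lines win over the shorter Disallow lines for those assets.
-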
Is the robots meta tag more reliable than robots.txt at preventing indexing by Google?
What's your experience of using the robots meta tag versus robots.txt as a stand-alone solution to prevent Google indexing? I am pretty sure the robots meta tag is more reliable - going on my own experience, I have never had any problems with robots meta tags, but plenty with robots.txt as a stand-alone solution. Thanks in advance, Luke
Intermediate & Advanced SEO | McTaggart
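For context, a sketch of the page-level tag being compared with robots.txt here:
<meta name="robots" content="noindex">
The same directive can be sent as an HTTP response header (X-Robots-Tag: noindex) for PDFs and other non-HTML files. In both cases Google can only obey the directive if robots.txt does not block it from crawling the URL in the first place.
-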
SEO benefit of tracked URLs
I've found a lot of mixed info on this topic so I thought I'd ask the experts (Moz community). If I'm adding tracking parameters to URLs to monitor organic traffic, will this affect the rank/value of the original clean URL? If so, would best practice be to 301 redirect the tracked URL to the original, i.e. redirect www.example.com/category/?DZID=Organic_G_NP/SQ&utm_source=Organic&utm_medium=Google to www.example.com/category? Thanks for your help!
-Reed
Intermediate & Advanced SEO | IceIcebaby
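As an aside, a commonly used alternative to a redirect (which would strip the parameters and with them the tracking data) is a canonical tag on the parameterised URL pointing at the clean one; a sketch using the question's example:
<!-- In the <head> of the tagged URL www.example.com/category/?DZID=...&utm_source=Organic&utm_medium=Google -->
<link rel="canonical" href="https://www.example.com/category/">
This consolidates ranking signals on the clean URL while still letting visitors land on the tagged version.
-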
Received "Googlebot found an extremely high number of URLs on your site:" but most of the example URLs are noindexed.
An example URL can be found here: http://symptom.healthline.com/symptomsearch?addterm=Neck%20pain&addterm=Face&addterm=Fatigue&addterm=Shortness%20Of%20Breath A couple of questions: Why is Google reporting an issue with these URLs if they are marked as noindex? What is the best way to fix the issue? Thanks in advance.
Intermediate & Advanced SEO | nicole.healthline
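For context (this is not from the question itself): the warning concerns crawl volume rather than indexing, because Googlebot still has to fetch each noindexed URL to see the tag. One option sometimes used for multi-parameter search pages like these is to block the crawl path in robots.txt, for example:
User-agent: *
Disallow: /symptomsearch
The trade-off is that a blocked URL's noindex tag can no longer be read, so some URLs may linger in the index as URL-only listings.
-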
Can URLs blocked with robots.txt hurt your site?
We have about 20 testing environments blocked by robots.txt, and these environments contain duplicates of our indexed content. These environments are all blocked by robots.txt and appear in Google's index as blocked by robots.txt - can they still count against us or hurt us? I know the best practice to permanently remove these would be to use the noindex tag, but I'm wondering, if we leave them the way they are, can they still hurt us?
Intermediate & Advanced SEO | nicole.healthline
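For reference, a minimal sketch of the noindex approach mentioned above, assuming the test environments run on Apache (the directive goes in each environment's config or .htaccess, and robots.txt on those hosts would need to stop disallowing the pages so the header can actually be fetched):
# Hypothetical: send a noindex header for everything served by the test environment
Header set X-Robots-Tag "noindex, nofollow"
-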
301 Redirection and apostrophes in URLs
Hi, I am having trouble getting redirects with apostrophes in the URLs to 301 in order to eliminate 404 errors. I have tried replacing the instance of the apostrophe in the source URL field with %27, and variations of this, but to no avail. The site is a WordPress site (the old URLs are legacies from the old Business Catalyst site) and I am using the Redirection plug-in. I have gone into some detail with a helpful soul here http://wordpress.org/support/topic/how-to-deal-with-apostrophes-in-source-url but unfortunately with no result. If anyone has any idea how to solve this puzzle I would be grateful for the help. Example: http://www.tesselaars.com/blog/Inside_Flowers/post/Online_Marketing_for_Florists_Part_1%E2%80%93_A_Website_You_Won%27t_Regret/
Intermediate & Advanced SEO | Seamoose
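One workaround sometimes used when a plugin struggles with encoded characters is to match the old URL by its unambiguous prefix and avoid the apostrophe and dash entirely; a sketch for Apache's mod_alias, assuming the site runs on Apache and with a made-up destination URL:
# Destination URL below is hypothetical - point it at the live replacement post
RedirectMatch 301 ^/blog/Inside_Flowers/post/Online_Marketing_for_Florists_Part_1 https://www.tesselaars.com/blog/online-marketing-for-florists/
-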
Overly-Dynamic URLs & Changing URL Structure with a Web Redesign
I have a client that has multiple apartment complexes in different states and metro areas. They get good traffic and pretty good conversions, but the site needs a lot of updating, including the architecture, to implement SEO standards. Right now they rank for "<brand_name> apartments" everywhere, but not for "<city_name> apartments". Their current architecture displays URLs like:
http://www.<client_apartments>.com/index.php?mainLevelCurrent=communities&communityID=28&secLevelCurrent=overview
http://www.<client_apartments>.com/index.php?mainLevelCurrent=communities&communityID=28&secLevelCurrent=floorplans&floorPlanID=121
I know it is said to never change the URL structure, but what about this site? I see this URL structure being bad for SEO, bad for users, and it basically forces us to keep the current architecture. They don't have many links built to their community pages, so will creating a new URL structure and doing 301 redirects to the new URLs drastically drop rankings? Is this something we should bite the bullet on now for future rankings, traffic, and a better architecture?
Intermediate & Advanced SEO | JaredDetroit
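For illustration only (the clean URL pattern below is invented, and every community would need its own mapping or a rule like this one): on an Apache server the old parameterised URLs can be 301-redirected to a new structure with mod_rewrite, for example in .htaccess:
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)mainLevelCurrent=communities(&|$)
RewriteCond %{QUERY_STRING} (^|&)communityID=(\d+)(&|$)
RewriteRule ^index\.php$ /communities/%2/? [R=301,L]
The trailing ? drops the old query string, and %2 carries the community ID into the new path (e.g. /communities/28/).
-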
Can I use a "noindex, follow" command in a robots.txt file for a certain parameter on a domain?
I have a site that produces thousands of pages via file uploads. These pages are then linked to by users for others to download what they have uploaded. Naturally, the client has blocked the parameter which precedes these pages in an attempt to keep them from being indexed. What they did not consider was that these pages are attracting hundreds of thousands of links that are not passing any authority to the main domain, because they're being blocked in robots.txt. Can I allow Google to follow, but NOT index, these pages via a robots.txt file - or would this have to be done on a page-by-page basis?
Intermediate & Advanced SEO | PapaRelevance
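For reference, robots.txt has no directive that means "noindex, follow" - it only controls crawling. The page-level alternative is a meta robots tag in the template of the generated pages (which also means removing the robots.txt block so Googlebot can actually fetch the pages and see the tag); a minimal sketch:
<!-- In the <head> of each generated download page -->
<meta name="robots" content="noindex, follow">
The X-Robots-Tag HTTP response header does the same job if editing the template isn't practical.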