Robots.txt file issue.
-
Hi,
It's my third thread here, and I have created many like it on other webmaster communities. I know many pros are here, so I badly need help.
Robots.txt blocked 2k important URLs of my blogging site,
http://Muslim-academy.com/, especially in my blog area, which brings a good number of visitors daily. My organic traffic declined from 1k daily to 350.
I have removed the robots.txt file, resubmitted the existing sitemap, used all the fetch-to-index options, and used the 50-URL submission option in Bing Webmaster Tools.
What can I do now to get these blocked URLs back in the Google index?
1. Create a NEW sitemap and submit it again in Google Webmaster Tools and Bing Webmaster Tools?
2. Bookmark, link build, or share the URLs? I did a lot of bookmarking for the blocked URLs.
I fetched the list of blocked URLs using Bing Webmaster Tools.
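For anyone reading along, here is a quick way to see how one stray rule can block a whole section. This is a rough sketch using Python's standard-library `urllib.robotparser`, with made-up rules and example.com URLs, not the site's actual robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical leftover rule that blocks an entire blog section.
blocked = RobotFileParser()
blocked.parse(["User-agent: *", "Disallow: /blog/"])
print(blocked.can_fetch("Googlebot", "http://example.com/blog/my-post"))  # False

# With the Disallow rule emptied out, the same URL is crawlable again.
fixed = RobotFileParser()
fixed.parse(["User-agent: *", "Disallow:"])
print(fixed.can_fetch("Googlebot", "http://example.com/blog/my-post"))  # True
```

Running your actual blocked URLs through a check like this against the old file would confirm the robots.txt rule, and not something else, was the cause.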
-
Robert, some good signs of life. The new sitemap shows 5,080 pages submitted and 4,817 indexed.
The remaining pages are surely the blocked ones, right, Robert? There is also some improvement in impressions and clicks. Thanks a lot for staying with me so long to solve this issue.
-
Christopher,
Have you looked at indexing in GWMT to see if they have indexed, how many pages, etc.?
-
Got your point, but I resubmitted and its status is still pending.
I tested it and it was working, but since I submitted it two days ago its status has been pending.
-
No, when you resubmit or submit a "new" sitemap, it just tells Google this is the sitemap now. There is no content issue with a sitemap.
Best,
Robert
-
Just one last question, Robert. Doesn't a duplicate sitemap create duplicate pages in searches?
Sorry, my question may look crazy to you, but while applying every possible fix I do not want to mess up and make things even worse.
-
Given that the only issue was the robots.txt error, I would resubmit. I do think it would not hurt to generate a new sitemap and submit that, though, in case there is something you are missing.
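If you do generate a fresh one, keep in mind a sitemap is just a list of `<url>` entries under the sitemaps.org schema. A minimal example (the URL and date here are placeholders, not your actual pages):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/blog/my-post</loc>
    <lastmod>2014-05-01</lastmod>
  </url>
</urlset>
```

Any sitemap generator that produces this format will work; what matters is that every previously blocked URL is in it.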
Best
-
Robert, the question is whether I need to create a new sitemap or resubmit the existing one?
-
Hello Christopher
It appears you have done a good deal to remediate the situation already. I would resubmit a sitemap to Google as well. Have you looked in WMT to see what is now indexed? I would look at the graph of indexed pages versus URLs blocked by robots.txt and see if you are moving the needle upward again.
This raises a second question: "How did it happen?" You stated, "Robots.txt blocked 2k important URL's of my blogging site," and that sounds like it just occurred out of the ether. I would want to know that I had found the reason and make sure I have a way to keep it from happening going forward (just a suggestion). Lastly, the Index Status report in WMT should be a great way to learn how effective your fixes have been. I like knowing that type of data and storing it somewhere retrievable for the future.
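To keep it from happening again, even a tiny scheduled check helps. Here is a minimal sketch (the function and the rule it looks for are my own invention, not a Moz or Google tool) that flags a blanket `Disallow: /`:

```python
def robots_looks_safe(robots_txt: str) -> bool:
    """Return False if any rule is a blanket 'Disallow: /', which blocks the whole site."""
    for line in robots_txt.splitlines():
        # Strip comments and whitespace, compare case-insensitively.
        rule = line.split("#", 1)[0].strip().lower()
        if rule == "disallow: /":
            return False
    return True

# Hypothetical file contents; a real cron job would fetch your live
# /robots.txt and send an alert whenever this returns False.
print(robots_looks_safe("User-agent: *\nDisallow: /wp-admin/"))  # True
print(robots_looks_safe("User-agent: *\nDisallow: /"))           # False
```

A check like this run daily would have caught the block within a day instead of after the traffic drop showed up in analytics.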
Best to you,
Robert