100K Webmaster Central Not Found Links?
-
http://screencast.com/t/KLPVGTzM I just logged into our Webmaster Central account and found that it is showing 100k "not found" links. After searching through them, they all appear to come from our search bar and lead to pages with no results. Are we doing something wrong here?
-
Yeah, I read through that article yesterday and see that they recommend the same setting that the Yoast plugin should already be applying. I never did get a response from them, though, to see if there is something missing.
For now, I plan on adding this to the robots.txt file and seeing what results I get.
Do you know the time frame it takes for the updates to show in GWT? Will this update within a few weeks, or could it take longer than that?
Thanks for all the help!
BJ
-
Hello BJ.
The robots.txt file must be on your server, in the document root.
Here is information about how to configure robots.txt
Note that it does have a warning at the end about how you could possibly lose some link juice, but that is probably a much smaller problem than the one you are trying to fix.
Nothing is perfect, and with the rate at which Google changes its mind, who knows what the right thing to do is this month.
Once you have edited robots.txt, you don't need to do anything.
- except I just had a thought: how to get Google to remove those items from your Webmaster Tools. You should be able to purge those entries from GWT: set the view to 500 per page, then just cycle through and mark them as fixed.
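To make the robots.txt advice concrete, here is a minimal sketch for blocking internal search pages. This assumes WordPress-style `?s=` search URLs like the ones on your site; adjust the paths to match whatever your search actually generates:

```
# Hypothetical sketch - block internal search result URLs
User-agent: *
Disallow: /?s=
Disallow: /search/
```

The file must live at the document root (e.g. http://example.com/robots.txt); crawlers only look for it there.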
-
Sorry to open this back up after a month. In adding this to the robots.txt file, is there something that needs to be done within the code of the site, or can I simply update the robots.txt file within Google Webmaster Tools?
I was hoping to get a response from Yoast on his blog post; there were a number of questions similar to mine, but he never addressed them.
Thanks,
BJ
-
We all know nothing lasts forever.
A code change can do all kinds of things.
Things that were important are sometimes less important, or not important at all.
Sometimes yesterday's advice is no longer true.
If you make a change, or even if you make no change, but the crawler or the indexer changes, then we can be surprised at the results.
While working on this other thread:
http://www.seomoz.org/q/is-no-follow-ing-a-folder-influences-also-its-subfolders#post-74287
I did a test and checked my logs. A nofollow meta tag and a nofollow link do not stop the crawlers from following. What they do (we think) is stop PageRank from being passed. That is all.
That is why the robots.txt file is the only way to tell the crawlers to stop following down a tree. (until there is another way)
-
Ok, I've posted a question on the Yoast.com blog to see what other options we might have. Thanks for the help!
-
It is because Roger ignores those META tags.
Google often ignores them too.
The robots.txt file is a much better option for those crawlers.
There are some crawlers that ignore the robots.txt file as well, but you have no control over them unless you can block their IPs at the firewall or add code to reject all of their requests.
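As an illustration of that last resort, here is a hypothetical sketch for an Apache server (in .htaccess or the vhost config) that rejects requests by user-agent string. "BadBot" is a made-up name; a firewall rule on the bot's IPs works similarly:

```
# Hypothetical Apache sketch: return 403 Forbidden to a crawler
# that ignores robots.txt, matched on its user-agent string.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "BadBot" [NC]
RewriteRule .* - [F]
```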
-
Ok, I just did a little more research to see how Yoast handles this within the plugin and came across this article: http://yoast.com/example-robots-txt-wordpress/
In the article he states that this is already included within the plugin on search pages:
I just confirmed this, by doing this search on my site & looking at the code: http://www.discountqueens.com/?s=candy
So this has always been in place. Why would I still have the 100K not-found links showing up?
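For context, the noindex behavior the plugin applies to search pages is typically a meta tag in the page head, something like the illustrative line below. The key point is that a meta robots tag only tells Google not to index a page after it has been crawled; it does not stop the crawler from requesting the URL in the first place, which is why crawl errors can keep appearing:

```
<meta name="robots" content="noindex,follow">
```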
-
We didn't have these errors showing up previously, which is why I was suspicious. Also, we have Joost de Valk's SEO plugin installed on our site, and I thought there was an option to stop the searches from being indexed?
-
Just to support Alan Gray's response: it's very important to block crawlers from your site search. Not only does it throw errors (bots try to guess what to put in a search box), but any search results that get into the index will cause content conflicts, dilute ranking values, and, in the worst case, create the false impression that you have a lot of very thin or near-duplicate content pages.
-
The search bar results are good for searchers but not for search engines. You can stop all search engines, and Roger (the SEOmoz crawler), from going into those pages by adding an entry to your robots.txt file. Roger only responds to his own section of the robots.txt file, so anything you make global will not apply to him.

User-agent: rogerbot
Disallow: /search/*
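One caveat on that pattern: Google supports the trailing `*`, but the original robots.txt spec does not, so a plain prefix rule like `Disallow: /search/` is more portable. A quick way to sanity-check your rules is Python's standard-library robots.txt parser (which does prefix matching only); the sketch below uses made-up rules and URLs:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules blocking internal search pages.
rules = """
User-agent: rogerbot
Disallow: /search/
Disallow: /?s=
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Search URLs should be blocked for rogerbot...
print(parser.can_fetch("rogerbot", "http://example.com/search/candy"))  # False
print(parser.can_fetch("rogerbot", "http://example.com/?s=candy"))      # False
# ...while normal pages remain crawlable.
print(parser.can_fetch("rogerbot", "http://example.com/about/"))        # True
```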