How to find out if I have been penalized?
-
I launched a new website at the beginning of January this year and saw traffic from Google to the website slowly grow until the 20th of March, when suddenly there were no more visitors from the Google search engine. The only traffic left is from Google Images, social networks, or other search engines. Without visitors from Google search, our overall traffic is reduced by ~66%.
I can no longer easily find our website in Google's search results using terms for which we usually ranked quite well. Nevertheless, the website is still indexed, as I can find it using the "site:" search query. In Google Webmaster Tools there are no messages, and we have only been doing a bit of link building on website and blog directories (nothing excessive and nothing paid either).
Is there any way to find out if Google penalized my website? I guess it has... And what would be the best thing to do right now?
The website is hellasholiday (dot) com
Thanks in advance for your ideas and suggestions.
-
I am not a fan of CMSs; I realize there are pros and cons, but when you try to do too much and be all things to all people, you tend to end up with a lot of compromises.
There is one other reason I don't like to use robots.txt: I remember Matt Cutts saying that it is a spam signal, because they cannot see what you are hiding. Not that it will get you flagged by itself, but combined with other signals it can. If I remember correctly, he was talking about hiding malware in scripts blocked by robots.txt.
If you are interested, the best CMS for SEO I have found is Orchard CMS, but even that has some silly errors (it puts more than one H1 tag on pages); it is still the best solution I have looked at, and it is more customizable via code.
-
After reading your post and all the linked articles you recommended, I understand the issue and have adapted the robots.txt accordingly, basically leaving only a single Disallow for the WordPress plugins. I hope this will help, but I suppose I will see in the next few days...
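As a sketch, a robots.txt trimmed down to that single rule might look like this (assuming the standard WordPress plugins path):

```
User-agent: *
Disallow: /wp-content/plugins/
```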
Now, regarding WordPress, I would suggest they adapt their documentation, as it is really misleading. Also, I think they should implement all these noindex meta tags where necessary natively in WordPress, rather than requiring a plugin for that, but that is another story.
-
WordPress does many things that are not recommended, and blocking by robots.txt is not recommended either; what they are suggesting is an extreme measure to solve the software's problems. There are better ways to solve duplicate content without giving away your link juice.
Read the section "WordPress Robots.txt blocking Search results and Feeds" on this page: http://yoast.com/example-robots-txt-wordpress/
These plugins like Yoast, and WordPress itself, do not produce very good results. I have crawled many WordPress sites and they all have the same old problems, many caused by the Yoast plugin.
What Google is referring to in that link is keeping pages of little value out of their index; this is for their advantage, not yours.
It's quite simple: if you block a page, the links pointing to that page waste their link juice; if you don't, or at least allow following with a meta tag, you get the link juice back.
See this article where Dr. Pete calls it an extreme measure; search the page for "robots.txt" and you will see many comments referring to my point: http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
See Dr. Pete's comments here: http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
-
I thought there would be no use in Google indexing and caching small icons, logos, and cached resized images which have no meaningful names. So I have now at least removed the Disallow for these, but for the WordPress blog I want to keep the Disallow rules recommended by WordPress itself for SEO purposes (assuming they know what they are talking about), as documented here: http://codex.wordpress.org/Search_Engine_Optimization_for_WordPress#Robots.txt_Optimization
Anyhow, I don't have the feeling this is really the reason why my website no longer shows up in Google's search results...
-
The question should be: why block them?
It's like cutting off your hand because you have a splinter.
If duplicate content is a problem, then you can (in order of preference) fix it, use a canonical tag, or use a noindex,follow meta tag, but not robots.txt.
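In HTML terms, the two tag-based options above would look roughly like this (the URLs are placeholders):

```html
<!-- Option 1: point duplicate pages at the preferred URL -->
<link rel="canonical" href="http://www.example.com/preferred-page/" />

<!-- Option 2: keep the page out of the index, but let its links pass value -->
<meta name="robots" content="noindex,follow" />
```

Both go in the page's head, which means crawlers must be allowed to fetch the page to see them; that is exactly why a robots.txt block defeats them.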
-
Many thanks, Alan, for your answer!
Regarding the robots.txt: basically I just want to block/disallow some cached images and small icons/pictures on the website, as well as some things for the associated WordPress blog, which is also hosted on the same website. For the blog I am disallowing the admin pages, feeds, comments, trackbacks, content theme files, etc. Here is the complete list, just in case:
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /trackback
Disallow: /feed
Disallow: /comments
Disallow: /category/*/*
Disallow: /*/trackback
Disallow: /*/feed
Disallow: /*/comments
Disallow: /?
Disallow: /*?
So maybe I should change my question to "what URLs should I disallow for a WordPress blog?"
Also, where can I see all the pages that are blocked by my robots.txt file?
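One quick way to check which URLs a given robots.txt would block is to feed the rules to Python's standard-library parser and test sample paths against it. This is just an illustrative sketch with hypothetical paths; note that urllib.robotparser only matches plain path prefixes and does not understand the "*" wildcard extensions Google supports, so the wildcard rules are left out here.

```python
from urllib.robotparser import RobotFileParser

# A subset of the WordPress-style rules discussed above (prefix rules only).
rules = """\
User-agent: *
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /feed
Disallow: /trackback
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Hypothetical sample URLs from the site; can_fetch() returns False
# for any path a generic crawler would be blocked from fetching.
for path in ["/wp-admin/options.php", "/feed", "/blog/my-post/"]:
    verdict = "allowed" if parser.can_fetch("*", path) else "blocked"
    print(path, verdict)
```

Google Webmaster Tools also reports how many URLs it skipped because of robots.txt under its crawl/blocked-URLs section, which gives you the same picture from Google's side.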
-
You can ask Google for reconsideration through Webmaster Tools. But since you have no warnings and you are still in the index, I doubt that you have been flagged manually; you may have been flagged algorithmically, though.
I noticed that you have blocked hundreds of pages with robots.txt; this has led to thousands of links pointing to pages that are not indexed, which means these links are pouring their link juice away into nowhere.
You should not use robots.txt to block pages that are linked to; it's a waste of valuable link juice.
If you must no-index the pages, use a meta noindex,follow tag; this way you will get most of the link juice back through the pages' outgoing links.