How is my competitor causing bad crawl errors and links on my site?
-
We have a competitor with whom we are in a legal dispute at the moment, and they are using underhanded tactics to cause us to have bad links and crawl errors. I do not know how they are doing it or how to stop it.
The crawl errors we are getting show the site with two URLs joined together, for example www.testsite.com/www.testsite.com. Other errors are pages that we do not even have, pages that are spelled wrong, or pages that have a dot after the page name.
We have been told by a number of people in our field that this has also happened to them, and I would like to know how they are doing it so we can have it stopped.
Since they have been doing this, our traffic has gone down by half.
-
Hi, no, I am the only one who deals with the site. I have put a copyright notice on the site, but I will read up on implementing authorship across the site.
-
Hi Diane,
I like the way Ryan thinks!
...but I am hoping we won't have to resort to a content bomb.
The lesson in a situation like this is that being a good white hat SEO unfortunately means understanding some of the tactics used by black hats, or maybe just getting a little help from some friends.
Since there is obviously an issue with your content being copied, the first thing I would do is implement Authorship markup across your entire site. By doing this you ensure that any content that is "borrowed" is immediately "outed" to the search engines, because it lacks the link from your Google profile page that verifies you are in fact the author of the content. Matt Cutts and Othar Hansson give a really easy rundown on implementation in this Google Webmaster Help video.
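In case it helps while you read up: authorship markup essentially boils down to a two-way link between your pages and your Google profile. A minimal sketch (the profile URL and name below are placeholders, not your actual details):

```html
<!-- On each page, identify the author by linking to your Google profile -->
<a href="https://plus.google.com/112233445566778899000?rel=author">Written by Diane</a>

<!-- Alternatively, a site-wide author link in the <head> -->
<link rel="author" href="https://plus.google.com/112233445566778899000"/>
```

For the verification loop to close, your Google profile also needs to link back to the site in its "Contributor to" section.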
For the moment though, it would be better if you can avoid making any changes to the site until we can identify all of the issues in play.
BTW ... I think I have an inkling of what might be going on here ... is there a programmer or designer who is or has been involved in the development of the site besides yourself?
In the meantime, I'm continuing with a diagnostic based on the information we have and will let you know as soon as I have confirmed my suspicions or otherwise.
Hang in there,
Sha
-
May I suggest planting a "bomb" in your content?
Most thieves are lazy. Rather than create content themselves, they steal from others. Their laziness is dependable.
Take a look at their copies and determine what content is and is not being stolen. If they are copying everything, including the HTML code and meta tags, you can add canonical tags and other helpful code to your site.
If they are not copying the meta tags, you can add decoy content to your site and protect it with a noindex, nofollow tag: content which would otherwise cause a site to be removed from Google's index. When they steal the content, their copy won't have the noindex tag, and their site will get nailed.
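To make both tactics concrete, here is roughly what the tags look like (the URL is a placeholder). The canonical tag travels with any wholesale copy of your HTML and points the engines back at your original; the robots meta tag keeps a decoy page out of the index on your own site, while a stolen copy, missing the tag, gets indexed:

```html
<!-- In the <head> of each real page: survives a full copy-paste of your HTML -->
<link rel="canonical" href="http://www.example.com/original-page"/>

<!-- In the <head> of a decoy page only: keeps it out of the index on YOUR site -->
<meta name="robots" content="noindex, nofollow"/>
```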
There are many other possibilities but I am confident you can outsmart them if you try. In addition, be sure to report the site(s) to Google: http://www.google.com/support/bin/static.py?page=ts.cs&ts=1114905
Another idea is to formally register the copyright in your work, thereby protecting it and giving you legal proof the content is yours.
-
Thanks for this, will send a private message. We have had to redo the site so many times with new content, because the content keeps being stolen by a franchise group who then pass it on to their franchisees.
-
Hi Diane,
Ryan's response is spot on and his suggestions are excellent.
If you can provide the URL(s), then we can take a look and see exactly what is going on with the referring page(s).
If you don't want to share the information publicly in the Q&A, you can private message each of us through your SEOmoz profile page.
If you ever suspect that someone has access to edit pages on your site without your permission, the first thing to do is check with your hosting provider whether there are any active FTP accounts you are unaware of. I have seen situations before where people have managed to get a "back door" set up; after that it is as simple as logging in and changing pages without your knowledge.
Given that this involves a legal dispute, if we can do a proper diagnosis and trace the source of the errors (or if there happens to be a back door in place), then you would be able to:
- issue a cease and desist
- better secure the server against unauthorized access
- ban any IP addresses identified as malicious
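For the last point, a server-level ban is usually the simplest option. A minimal Apache 2.2-style .htaccess sketch (the address below is a documentation placeholder, not a known offender):

```apache
# Block a specific malicious IP address while allowing everyone else
Order allow,deny
Allow from all
Deny from 203.0.113.45
```

On Apache 2.4 the equivalent uses `Require all granted` together with `Require not ip` inside a `<RequireAll>` block.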
Hope that helps,
Sha
-
You mentioned these are crawl errors. Are you using the SEOmoz crawl report? If so, please look at the "referrer" field. It will show the page on your site that is providing the bad URL.
If you are willing to share the referring page, we can take a look and possibly provide more detail.
Anyone can create bad links to your site, and those can appear in Google or Bing Webmaster Tools. But only someone with the ability to add content to your site can create crawl errors. Either your site is open to user-generated content and someone created a bad link there, or someone with access to your web server created the content.
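As an aside, the doubled-URL pattern from the original question (www.testsite.com/www.testsite.com) is classically produced by a link whose href is missing its scheme: the crawler resolves it as a path relative to the current page. A quick illustration (the URLs are hypothetical):

```python
from urllib.parse import urljoin

base = "http://www.testsite.com/some-page"

# Correct absolute link:
good = urljoin(base, "http://www.testsite.com/contact")

# Link where someone forgot the "http://" - the crawler treats it
# as a path relative to the current page:
bad = urljoin(base, "www.testsite.com/contact")

print(good)  # http://www.testsite.com/contact
print(bad)   # http://www.testsite.com/www.testsite.com/contact
```

That is why the referrer field matters: it points straight at the page carrying the malformed href.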
-
Will do, thanks.
-
If it is happening to many others, then maybe that's a reason not to suspect them. But like I say, go to Bing WMT and have a look at where the links are coming from.
-
The reason I know they are behind it is that other companies in the same field have had the same problems. While doing research for our legal team over this matter, we spent time speaking to over 40 people in the field and found it had happened to them as well.
These people are cowboys, but hopefully it will all be sorted out soon.
-
I would look in Bing WMT to see where the links are coming from, for a start.
I must say also: if you don't know how they are doing it, then maybe you don't know that they are doing it. It may be something quite innocent.