I tried the directory list from SEOmoz, but almost all of them charge for inclusion. Is this a black hat situation?
-
I need backlinks for my site, and several sources say that directories are a good place to get them. But the directories charge for inclusion. Should I pay? Is this a black hat situation where I'm buying links?
-
Hi Naghirniac,
Many directories charge, but that doesn't make them black hat. The key concept is editorial inclusion. A directory that accepts anyone is not a directory you want to be associated with; that includes directories filled with porn, gambling, and payday loan sites.
On the other hand, the harder it is to get into a directory, the more value it usually passes. This is true even when the directory charges money for "review" services.
Be careful - directory listings are meant to enhance your backlink profile, not act as a foundation.
Here's a helpful article:
http://www.seomoz.org/blog/seo-link-directory-best-practices
Best of luck!
-
Good... I'm afraid of ending up on the dark side... the gray side I can accept... thanks
-
Hi Naghirniac,
When you pay for a directory listing, you're paying for the review, not the actual link. That being said, unless the directory is high quality, don't go for it; the money can be better spent elsewhere.
Good luck!
-
In an ideal world you wouldn't have to seek out directories; however, we aren't quite there yet, so they will give you a small boost in the meantime (while you are building up other links).
I noticed that they were paid listings when I looked at them. I think, from SEOmoz's point of view, the message is 'if you are going to use directories, these are the best'. Obviously the people who run them know that, and that's why they can charge.
It all depends on what you can afford. If $150 for a small but likely boost seems worthwhile to you, then go for it. It also depends on the business you are working for and the situation (are links the thing that is holding you back?).
It's more grey than black; there are much worse tactics you could employ for link building.
Related Questions
-
Hreflang targeted website using the root directory's description & title
Hi there, I recently applied the hreflang tags like so: Unfortunately, the Australian site uses the same description and title as the US site (which was the root directory initially). Am I doing something wrong? Would appreciate any response, thanks!
Intermediate & Advanced SEO | oliverkuchies0
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, we have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: the page where the user can apply various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details about a given vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and ranking. These pages have additional content besides the vehicle listings themselves, those results are randomized or sliced/diced in different and unique ways, and they're updated twice per day. We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results (example Google query). We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt Advantages: super easy to implement; conserves crawl budget for large sites; ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt Disadvantages: doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow those internal links, thereby minimizing indexation, but this would mean 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex Advantages: does prevent vehicle details pages from being indexed; allows ALL pages to be crawled (advantage?).
Noindex Disadvantages: difficult to implement (vehicle details pages are served using Ajax, so they have no tag; the solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex directive based on querystring variables, similar to a Stack Overflow solution). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it). It also forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "force" because of the crawl budget required; the crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed. It also cannot be used in conjunction with robots.txt; after all, the crawler never reads the noindex tag if it's blocked by robots.txt.
Hash (#) URL Advantages: by using hash URLs for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links. Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links used to index robots.txt-disallowed pages are gone. It accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?), and it does not require complex Apache configuration.
Hash (#) URL Disadvantages: is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be as if these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow, in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed; it could easily get stuck/lost, it seems like a waste of resources, and in some shadowy way bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like these. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
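For reference, the X-Robots-Tag route discussed above can be expressed as a small piece of Apache configuration. The sketch below is hypothetical: it assumes the vehicle details requests can be recognized by a querystring parameter (named vehicle_id here purely for illustration; the real parameter name isn't given in the question), and it shows the general technique rather than the poster's actual setup:

```apache
# Hypothetical .htaccess sketch: emit "X-Robots-Tag: noindex" for vehicle
# details requests, identified here by an assumed "vehicle_id" querystring
# parameter. Parameter name and paths are illustrative assumptions.
<IfModule mod_rewrite.c>
  RewriteEngine On
  # Flag any request whose query string carries the details-page parameter
  RewriteCond %{QUERY_STRING} (^|&)vehicle_id= [NC]
  RewriteRule ^ - [E=VEHICLE_DETAIL:1]
</IfModule>
<IfModule mod_headers.c>
  # Send the noindex header only on flagged requests
  # (on some hosts the variable may surface as REDIRECT_VEHICLE_DETAIL)
  Header set X-Robots-Tag "noindex" env=VEHICLE_DETAIL
</IfModule>
```

As the poster notes, this keeps the pages crawlable but unindexed, at the cost of moving logic out of the plugin and into server configuration.
-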
SEO black hat tricks
I have a competitor in the local area. He registered a new domain name, www.orangecountypatentlawfirm.com. It was created back in 11/10 and updated a few months ago, in 11/13. My domain is ocpatentlawyer.com. I put my domain and his domain into Open Site Explorer, and the peculiar thing is that my competitor's website mirrors my domain identically (see attached image, compare.png). My competitor's website rose through the SERPs very fast; I never saw it coming. Anyway, I wanted to know if he was using some type of black hat SEO trick to hijack my domain authority and get his own website to rank higher? And, if so, does it hurt my ranking?
Intermediate & Advanced SEO | jamesjd710
-
Keywords under product listing pages
Hi guys, one of my main concerns as we start redesigning the site Trespass.co.uk is that current pages like this one, http://www.trespass.co.uk/snow-sports/clothing/ski-jackets/womens-ski-jackets, are bordering on over-optimisation. Is that the case, given that each product listed at the URL above has "womens ski jacket" under it? If we have 50 products on each product listing page, each with the product name plus the type of product (i.e. flora womens ski jacket, xyz mens waterproof jacket), are we over-optimising the page for the main keywords by having them under each product? Would that page be over-optimised for "womens ski jackets"? Thanks guys
Intermediate & Advanced SEO | Trespass0
-
Advice Needed: Why Is My Site Not Ranking Despite All The White Hat I Have Done?
Hi all, I have tried all the white hat ways to make my local business website rank well in Google. We have done:
1.) Good quality content for our site on a regular basis
2.) Submitted a sitemap to Google
3.) Built links in an ethical way
4.) Posted on social media sites
No Google Panda content farming, and no Google Penguin unnatural linking. In fact, we have more quality articles to share compared to other laundry and dry cleaning websites in Singapore. Can anyone advise me on why my site is not ranking well? Site: http://www.drycleaning.com.sg
Intermediate & Advanced SEO | chanel27
-
Mission Possible? You have 3 hours to do Local SEO. Which top 5 sites would you use for Social Bookmarking, Local Search Engine Submission, and Directory Listings?
Mission Possible? Here is a test: suppose you had 3 hours (okay, 7) to go and submit links, etc., to social bookmarking sites, local search engines, and directories. Which top 5 or more of each would you do? (Assuming your on-page is already sweetened.) I just got 2 more clients and I need to get started on a few things for each. Thankful for all your advice.
Intermediate & Advanced SEO | greenhornet770
-
Content on New Domain or Sub Directory of Existing Domain?
I have a client with a well-aged, high-DA site. They rank well for their wedding photography business in several cities. They are launching a new service related to photography (photo booths and flipbooks), for which they built and developed content on a new domain. That new domain has 0 links and a DA of 1; the site is brand new. Is there any drawback to moving the existing content on the new domain to a subdirectory of the high-authority domain? For example: http://domain.com/newcompany. The look, feel, and design of the new site/service is much different from the high-DA site. My thoughts are that this will give them an automatic step up, especially since they will be marketing this in several major cities. Also, since the design will be different, if it is good to move to the subdirectory, should we put the new company name in the subdirectory folder, or something keyword-friendly like domain.com/photobooth as opposed to domain.com/newcompanyname? Any thoughts would be greatly appreciated.
Intermediate & Advanced SEO | itrogers0
-
How was cdn.seomoz.org configured?
The SEOmoz CDN appears to have a "pull zone" that is set to the root of the domain, such that any static file can be addressed from either subdomain:
http://www.seomoz.org/q/moz_nav_assets/images/logo.png
http://cdn.seomoz.org/q/moz_nav_assets/images/logo.png
The risk of this configuration is that web pages (not just images/CSS/JS) also get cached and served by the CDN. I won't put the URL here for fear of Google indexing it, but if you replace the 'www' in the URL below with 'cdn', you'll see a cached copy of the original:
http://www.seomoz.org/ugc/the-greatest-attribution-ever-graphed
The worst-case scenario is that the homepage gets indexed. But this doesn't happen here:
http://cdn.seomoz.org/
That URL issues a 301 redirect back to the canonical www subdomain, as it should. Here's my question: how was that done? Because maxcdn.com can't do it. If you set a "pull zone" to your entire domain, they'll cache your homepage and everything else. Googlebot has a field day with that; it will reindex your entire site off the CDN.
Maybe the SEOmoz CDN provider (CloudFront) allows specific URLs to be blocked? Or do you detect the CloudFront IPs and serve them a 301 (which they'd proxy out to anyone requesting cdn.seomoz.org)?
One solution is to create a pull zone that points to a folder, like example.com/images, but this doesn't help a complex site that has cacheable content in multiple places (do you WordPress users really store ALL your static content under /wp-content/?). Or, as suggested above, dynamically detect requests from the CDN's proxy servers, and give them a 301 for any HTML-page request. This gets complex quickly, and is both prone to breakage and very difficult to regression-test. Properly retrofitting a complex site to use a CDN, without creating a half-dozen new CDN subdomains, does not appear to be easy.
Intermediate & Advanced SEO | mcglynn0
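One way to get the behavior described above, if you control the origin server, is to answer CDN-hostname requests for HTML pages with a 301 that the CDN then passes through to visitors. Here is a minimal, hypothetical Apache sketch of that idea; the hostnames and the list of static-file extensions are placeholders, not SEOmoz's actual configuration:

```apache
# Hypothetical origin-server sketch: when a request arrives via the CDN
# hostname and is not for a static asset, 301 it back to the canonical
# www host. The CDN proxies that 301 to anyone hitting cdn.example.com.
# Hostnames and extensions below are illustrative placeholders.
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{HTTP_HOST} ^cdn\.example\.com$ [NC]
  # Let genuinely static files be cached and served as-is
  RewriteCond %{REQUEST_URI} !\.(css|js|png|jpe?g|gif|ico|svg|woff2?)$ [NC]
  RewriteRule ^ http://www.example.com%{REQUEST_URI} [R=301,L]
</IfModule>
```

Under this assumption, the pull zone can stay pointed at the domain root while only assets, never HTML pages, are served from the cdn subdomain.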