How to go about removing bad/irrelevant links?
-
We have been made aware of a series of irrelevant links on some rather dodgy sites.
- http://www.designerdogstop.com/level-static/http://www.bestfirepits.net/some-of-the-best-vacations-for-families/2010/05/05/
- http://whatcigarsdoismoke.com/cigar-lighters/cigarette-cigar-2/
- http://dollfuss.org/build-bear-hawaii/
Absolute rubbish, I'm sure you will agree. These links must surely be causing our link profile some damage.
They are currently WordPress sites with no means of contacting the authors. What ways and means are there of removing these negative pages and links?
Cheers all, any help appreciated.
-
Thanks Daniel,
Lots of additional features & improvements still in the works - updates @rmoov
Sha
-
Nice! Checking it out right now. Good luck with the project! ~^DH
-
Hi Tim,
We just opened a free beta this past weekend for a link removal management service that we developed.
The free trial will allow you to run a campaign including the domains you mentioned in your post.
The tool allows you to drop in a list of URLs, pull contact information from ICANN (the WHOIS data mentioned above) for each of the domains, customize emails, send follow-ups, receive notifications of cleanups from webmasters, etc.
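For the "pull contact information" step, here is a minimal sketch of extracting contact email addresses from a raw WHOIS response with a regex. The sample response below is invented for illustration; real WHOIS output varies by registrar and is often privacy-redacted:

```python
import re

def extract_whois_emails(whois_text):
    """Pull unique email addresses out of a raw WHOIS response,
    preserving the order of first occurrence."""
    pattern = r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"
    seen = []
    for match in re.findall(pattern, whois_text):
        email = match.lower()
        if email not in seen:  # de-duplicate, case-insensitively
            seen.append(email)
    return seen

# Hypothetical WHOIS response, for illustration only
sample = """Domain Name: EXAMPLE.COM
Registrant Email: owner@example.com
Admin Email: admin@example.com
Tech Email: OWNER@EXAMPLE.COM"""

print(extract_whois_emails(sample))
# -> ['owner@example.com', 'admin@example.com']
```

In practice you would feed this the output of a `whois` lookup for each domain in your list, then drop the addresses into your outreach emails.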
Hope it helps,
Sha
-
There's no quick route in SEO. Work slowly, and do your work properly. I think that ipositions' suggestion is by far the best one: look up contact details, contact them, and request that they remove the link. Patience helps.
-
I do agree that not spending too much time on this is important, as it can become hugely time-consuming and inefficient. But getting rid of a few is also a help.
The time involved was the main reason for my initial post: to see if there were any quicker options available to webmasters, e.g. sending Google a list of links that you wish it to disregard because they were not generated by you... unlikely, I know... :o(
Thanks for the response.
-
Have you looked for the email address of the webmaster using whois.net? That's the first thing we do when contact details are not provided on the site.
I've seen a lot of people here going on about removing bad links, and where possible I agree that it should be done. However, a line must be drawn where getting a link removed becomes too time-consuming to be worth the effort. That time would be better spent creating content, reaching out to popular relevant blogs, etc., and building enough good links to offset the bad ones.
Hope this helps
Related Questions
-
How do I predict the quality of inbound links before using the Disavow Links tool?
I am working on an e-commerce website and am having issues with bad inbound links. I am quite keen to use the Disavow Links tool to clean up bad inbound links and sustain my performance on Google. We have 170,000+ inbound links from 1,000+ unique root domains, but I have found that most of the root domains have low-quality content and structure. Honestly, I don't want inbound links from websites that are not active in publishing. How do I predict the quality of inbound links or root domains before using the tool? Are there any specific criteria to grade the quality of inbound links?
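There is no official formula for this, but a rough triage heuristic can be sketched. Every metric name and threshold below is an assumption chosen for illustration, not a Google grading scheme; you would populate the metrics from whatever backlink tool you use:

```python
def grade_root_domain(metrics):
    """Rough quality score for a linking root domain.
    The keys and thresholds are illustrative assumptions only."""
    score = 0
    if metrics.get("domain_authority", 0) >= 30:
        score += 2
    if metrics.get("is_active", False):            # site still publishing
        score += 1
    if metrics.get("topically_relevant", False):   # same niche as yours
        score += 2
    if metrics.get("sitewide_link", False):        # sitewide links often look paid
        score -= 2
    if metrics.get("anchor_is_exact_match", False):  # over-optimized anchor text
        score -= 1
    return score

def should_disavow(metrics, threshold=1):
    """Flag a domain as a disavow candidate when it scores below the threshold."""
    return grade_root_domain(metrics) < threshold

print(should_disavow({"domain_authority": 5, "sitewide_link": True}))
# -> True
```

A script like this only shortlists candidates; each flagged domain still deserves a manual look before it goes into the disavow file.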
Industry News | CommercePundit
-
Are you affected by the Gov't shutdown or is it just your .gov links?
With the shutdown came the takedown of sites such as http://www.usda.gov/ and even http://nsa.gov/ (even though http://www.downforeveryoneorjustme.com/nsa.gov says it's up UPDATE: now down). Those .gov links might not be worth as much (pun somewhat intended). But here comes an actual question, as I was thinking about this and am really curious... Did your SEO efforts suffer in any way due to the government shutdown, or is it too early to tell yet? PS: Isn't it also interesting that Google's homepage is choosing to celebrate Yosemite National Park's 123rd anniversary when all national parks in our nation are closed?
Industry News | vmialik
-
Six Reconsiderations and Zero options left - Where do I go from here?
I have a client that I'm trying to get reindexed after they did extensive link farming in the past through other companies. We've poured countless hours in and I've submitted 6 Disavows/Reconsiderations and watched every video from Google on the topic. The client has also invested a significant amount of money into our work.
We've used the following to try and find links:
- Google Webmaster Tools (shows a limit of 1,000 domains)
- Bing Webmaster Tools
- Link Detox (paid)
- Google Analytics (lifetime referral report)
- Open Site Explorer

All we've gotten is automated responses with two sample links. In the latest one, they only supplied one sample link, which was odd because it said 'sample URLs'. Either way, we feel that we're out of options. Is it time to throw in the towel and inform the client, or are there other resources we can exhaust? Where do we go from here? Thanks!
Industry News | BlueTent2
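When combining exports from several tools like the list above, the first chore is merging them and de-duplicating by root domain. A minimal sketch (note the naive root-domain extraction: it strips a leading `www.` but does not handle multi-part TLDs such as `.co.uk`):

```python
from urllib.parse import urlparse

def unique_root_domains(*link_lists):
    """Merge URL lists from several backlink tools and
    return the unique root domains, sorted."""
    domains = set()
    for links in link_lists:
        for url in links:
            host = urlparse(url).netloc.lower()
            if host.startswith("www."):
                host = host[4:]
            if host:
                domains.add(host)
    return sorted(domains)

# Hypothetical exports from two tools, for illustration
gwt  = ["http://spam-site.com/page1", "http://www.spam-site.com/page2"]
bing = ["http://another-farm.net/links.html"]
print(unique_root_domains(gwt, bing))
# -> ['another-farm.net', 'spam-site.com']
```

The de-duplicated domain list is what you would then run WHOIS lookups and outreach against, rather than chasing every individual URL.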
Ideal SEO / Social Media Employee Skillset
I've been wondering recently what makes a good SEO / Social Media employee. It seems to me that SEO and Social Media are in the process of merging into a single role. What are your thoughts on the skills that this new world SEO / Social Media employee would need? Or do you think these roles should ideally remain separate and that a "traditional" SEO is more what is needed? My own role has been moving much more towards social media recently and I was wondering if this was a common trend!
Industry News | RG_SEO
-
Strange video site adding unwanted links.
Today I was checking our backlink profile and noticed a lot of strange backlinks coming from grosezinga.com. Apparently this site is some type of search engine for videos; it somehow pulls videos from YouTube and Dailymotion and adds them to the grosezinga search engine. I have never placed a link on this site, I do not want a link there, and I never even knew it existed until today. We do have video tutorials on the web, and somehow this site has pulled them from our video page without our permission and added them to this site. Has anyone ever had this problem? Should I ask for them to be removed? When I do a Google video search, my URL is showing up on about a hundred different videos that are not ours. I don't want to be accused of building unnatural links.
Industry News | TinaGammon
-
Suggested Link Builder and Brief
Hello, I am looking to do some link building to get a client's site up the rankings. I would like to utilise the cheaper fees of an SEO in another country, India or somewhere. Does anyone have any good references? I can see some good ones on oDesk at $6 per hour. Also, does anyone have an example brief/spec that would allow me to communicate the key aspects? Thanks, James
Industry News | smashseo
-
What is the best method for getting pure JavaScript/AJAX pages indexed by Google for SEO?
I am in the process of researching this further and wanted to share some of what I have found below. Anyone who can confirm or deny these assumptions or add some insight would be appreciated.

Option 1: If you're starting from scratch, a good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses. You can use Hijax to help AJAX and HTML links coexist, and Meta NoFollow tags etc. to prevent crawlers from accessing the JavaScript versions of a page. Currently, webmasters create a "parallel universe" of content: users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript-enabled browsers, as well as crawlers, see content that is static and created offline. In current practice, "progressive enhancement" in the form of Hijax links is often used.
Industry News | webbroi

Option 2: In order to make your AJAX application crawlable, your site needs to abide by a new agreement. This agreement rests on the following: the site adopts the AJAX crawling scheme. For each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example www.example.com/index.html#key=value, where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed. The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results. In order to make this work, the application must use a specific syntax in the AJAX URLs (let's call them "pretty URLs"; you'll see why in the following sections). The search engine crawler will temporarily modify these "pretty URLs" into "ugly URLs" and request those from your server. This request of an "ugly URL" indicates to the server that it should not return the regular web page it would give to a browser, but instead an HTML snapshot. When the crawler has obtained the content for the modified ugly URL, it indexes its content, then displays the original pretty URL in the search results. In other words, end users will always see the pretty URL containing a hash fragment. A diagram in Google's documentation summarizes the agreement.
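The pretty-to-ugly URL mapping described above can be sketched as follows. Note that Google's crawling scheme actually keys on the `#!` (hashbang) token rather than a plain `#`, and rewrites the fragment into an `_escaped_fragment_` query parameter:

```python
# Sketch of the "pretty URL" -> "ugly URL" mapping from Google's
# AJAX crawling scheme: the crawler rewrites the #! hash fragment
# into an _escaped_fragment_ query parameter before requesting
# the HTML snapshot from the server.
from urllib.parse import quote

def pretty_to_ugly(url):
    if "#!" not in url:
        return url  # not an AJAX-crawling-scheme URL; leave untouched
    base, fragment = url.split("#!", 1)
    sep = "&" if "?" in base else "?"  # append to any existing query string
    return base + sep + "_escaped_fragment_=" + quote(fragment, safe="=")

print(pretty_to_ugly("http://www.example.com/index.html#!key=value"))
# -> http://www.example.com/index.html?_escaped_fragment_=key=value
```

Your server would detect the `_escaped_fragment_` parameter in incoming requests and respond with the HTML snapshot instead of the regular JavaScript page.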
See more in the Getting Started Guide. Make sure you avoid this:
http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Here are a few example pages that are mostly JavaScript/AJAX:
- http://catchfree.com/listen-to-music#&tab=top-free-apps-tab
- https://www.pivotaltracker.com/public_projects

This is what the spiders see: view-source:http://catchfree.com/listen-to-music#&tab=top-free-apps-tab

This is the best resource I have found regarding Google and JavaScript: http://code.google.com/web/ajaxcrawling/ - step-by-step instructions.
http://www.google.com/support/webmasters/bin/answer.py?answer=81766
http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
Some additional Resources: http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
http://www.google.com/support/webmasters/bin/answer.py?answer=357690