I want to disavow some more links - but I'm only allowed one .txt file?
-
Hey guys,
Wondering if you good people could help me out on this one?
A few months back (June 19th) I disavowed some links for a client by uploading a .txt file listing the offending domains.
However, I've recently noticed some more dodgy-looking domains linking to my client's site, so I went about creating a new disavow list.
When I went to upload this new list I was informed that I would be replacing the existing file.
So, my question is, what do I do here?
Make a new list with both old and new domains that I plan on disavowing and replace the existing one?
Or just replace the existing .txt file with the new domains only, since Google has already recognised that I've disavowed those older links?
-
Cheers, Tom.
Exactly the answer I needed!
-
Hi Matthew
You want to add to your current list. So you'll want to upload a file that contains everything you previously disavowed plus the new sites you want to disavow.
It's probably worth putting in a description line (disavow files treat lines starting with "#" as comments), like:
# Uploaded on 19/09/2013 following a further link audit
domain:badsite.com
badsite2.com/badpage
And so on. Showing progressive evidence of action taken is always a good sign, I feel.
If you uploaded the new file without the old links, for all intents and purposes it would "de-disavow" those links, so you'll want to keep them in there.
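Since the disavow file is just plain text, the merge step above can be sketched as a small script. This is a hypothetical helper, not a Google tool - it assumes both your old and new lists are in standard disavow format, and it keeps dated "#" comment lines so the audit history survives:

```python
# Merge an existing disavow file with a new one, de-duplicating
# domain:/URL entries while keeping "#" comment lines intact.
def merge_disavow(old_lines, new_lines):
    merged = []
    seen = set()
    for line in list(old_lines) + list(new_lines):
        stripped = line.strip()
        if not stripped:
            continue  # skip blank lines
        if stripped.startswith("#"):
            merged.append(stripped)  # keep dated audit comments
        elif stripped.lower() not in seen:
            seen.add(stripped.lower())  # entries are case-insensitive hosts
            merged.append(stripped)
    return merged

old = ["# Uploaded 19/06/2013 after first audit", "domain:badsite.com"]
new = ["# Uploaded 19/09/2013 after further audit",
       "domain:badsite.com", "badsite2.com/badpage"]
print("\n".join(merge_disavow(old, new)))
```

Running it prints one combined list with "domain:badsite.com" appearing only once; that merged output is what you'd upload to replace the existing file.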
Hope that helps.