Help with Roger finding phantom links
-
It's Monday, Roger has done another crawl, and now I have a couple of issues:
- I have two pages showing 404->302 or 500 because these links do not exist. I have to fix the 500, but the 404 is trapped correctly.
http://www.oznappies.com/nappies.faq & http://www.oznappies.com/store/value-packs/
The issue is that when I do a site scan, there is no anchor text that contains these links. So what I would like to find out is where Roger is finding them. I cannot see anywhere in the Crawl Report that tells me the origin of these links.
- I also created a blog on Tumblr, and now every tag and RSS feed entry is producing a duplicate content error in the crawl stats. I cannot see anywhere in Tumblr to fix this issue.
Any ideas?
-
Thanks again, Ryan, you have been very helpful answering a lot of my questions.
-
Someone else asked the same question regarding tag pages yesterday. I would suggest asking a separate Q&A question on that topic.
Tag pages and forum category pages are both often used as containers; they don't have any content except links to articles. I would ask for feedback on the best practice. I suspect noindex, follow on those pages would be best, but I don't have the experience to feel comfortable offering that advice.
-
I have been looking at the data that Roger is reporting for the duplicate content, and in ALL cases there is either a 301 or a NoIndex. So now I do not know why Roger is reporting them as duplicates; robots should not see the second entry.
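In case it is useful, here is a rough Python sketch (using the requests library) for double-checking what each flagged URL returns right now. The URLs below are placeholders for the ones listed in the crawl report, and the regex is a crude check rather than a proper HTML parser.

```python
# Rough check: does each "duplicate" URL actually return a 301, or carry a
# robots noindex directive? Replace the placeholder URLs with the ones Roger flags.
import re
import requests

urls = [
    "http://www.oznappies.com/placeholder-duplicate-1",  # placeholder
    "http://www.oznappies.com/placeholder-duplicate-2",  # placeholder
]

# Crude regex; assumes name comes before content in the meta tag.
meta_robots = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']', re.I
)

for url in urls:
    # allow_redirects=False so a 301 is reported as a 301, not as its target page.
    resp = requests.get(url, allow_redirects=False, timeout=10)
    directives = meta_robots.findall(resp.text) if resp.status_code == 200 else []
    header_directive = resp.headers.get("X-Robots-Tag", "")
    print(url, resp.status_code,
          directives or "(no robots meta)",
          header_directive or "(no X-Robots-Tag header)")
```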
-
I did not think of looking at the CSV report; I see it now, thanks Ryan. There should be a soft 404 handler in place to process the bad URLs, so I will have to see why it is not working.
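For what it's worth, a quick Python sketch (again using the requests library) that prints each hop of the redirect chain for those two URLs, so a 302 -> 500 sequence instead of a clean 404 from the handler is easy to spot:

```python
# Print each hop the server returns for the phantom URLs, so a 302 -> 500
# chain (instead of a clean 404 from the soft 404 handler) is easy to spot.
import requests

bad_urls = [
    "http://www.oznappies.com/nappies.faq",
    "http://www.oznappies.com/store/value-packs/",
]

for url in bad_urls:
    resp = requests.get(url, timeout=10)  # follows redirects by default
    chain = list(resp.history) + [resp]   # history holds the intermediate redirects
    print(" -> ".join(f"{r.status_code} {r.url}" for r in chain))
```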
With Tumblr, I was looking for an easy way to add a blog to the site.
The RSS feed is coming from Tumblr, as is all the content.
When we specify tags in Tumblr it creates URLs, e.g. mypage.com/article/tag1, mypage.com/article/tag2, mypage.com/article/tag3, which all contain the content of mypage.com/article without a canonical to the original. It is a really strange, non-SEO-friendly approach, so I wondered if anyone has had similar problems.
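If it helps to confirm that, here is a small Python sketch (requests plus a deliberately crude regex) that reports the rel=canonical target, if any, on each tag URL. The URLs just mirror the placeholder pattern above.

```python
# Report the rel="canonical" target (if any) on each tag-page URL.
# The URLs mirror the placeholder pattern above; the regex is deliberately crude.
import re
import requests

tag_pages = [
    "http://mypage.com/article/tag1",
    "http://mypage.com/article/tag2",
    "http://mypage.com/article/tag3",
]

canonical = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']', re.I
)

for url in tag_pages:
    html = requests.get(url, timeout=10).text
    match = canonical.search(html)
    print(url, "->", match.group(1) if match else "no canonical tag found")
```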
-
The crawl report offers a "referrer" field. That field shows where Roger found the offending link. In my experience that field has always been accurate.
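If the web interface is awkward for this, a throwaway Python sketch can filter the CSV export down to just those URLs. The filename and the "URL"/"Referrer" column names here are assumptions, so adjust them to match the headers in the actual export.

```python
# Filter the crawl CSV export down to the phantom URLs and show their referrers.
# "crawl_export.csv" and the "URL"/"Referrer" column names are assumptions --
# rename them to match the headers in the actual export.
import csv

targets = {
    "http://www.oznappies.com/nappies.faq",
    "http://www.oznappies.com/store/value-packs/",
}

with open("crawl_export.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        if row.get("URL") in targets:
            print(row["URL"], "found on:", row.get("Referrer", "(no referrer column)"))
```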
When I try to access www.oznappies.com/faq I receive a 302 redirect and a 500 error. I would recommend pointing non-existent pages to a soft 404 page. Still provide a 404 response to browsers, but offer users a friendly way to find information (i.e. links / search) and stay on your site.
A great example of a soft 404 page is http://www.orangecoat.com/a-404-page.html
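Purely as an illustration of the "friendly page, real 404 status" idea — oznappies.com almost certainly runs on a different stack, and the links in the template are placeholders — here is a minimal Flask sketch:

```python
# Minimal sketch of a friendly 404 handler (Flask chosen only for brevity;
# the same idea applies on any platform): show helpful links, but send a
# genuine 404 status code rather than a 302 redirect.
from flask import Flask, render_template_string

app = Flask(__name__)

FRIENDLY_404 = """
<h1>Sorry, we couldn't find that page.</h1>
<p>Try the <a href="/">home page</a>, browse the <a href="/store/">store</a>,
or use the site search to find what you were after.</p>
"""

@app.errorhandler(404)
def page_not_found(error):
    # The second element of the tuple is the status code sent to the browser.
    return render_template_string(FRIENDLY_404), 404

if __name__ == "__main__":
    app.run()
```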
For the Tumblr issue, I am not clear on the problem. Are you writing content and publishing it on both the oznappies.com site and your Tumblr site? Then this content is being published again on your site via an RSS import?
-
I removed the links and just left the text, so these will cut and paste now. It still confuses me where Roger found the links.
Thanks for running the Xenu scan. I have tried other site scanners and come up blank.
-
That second link is anchored to the wrong place.
Regardless, I also cannot find the .faq page. I just ran Xenu over the site to see what it could find, but no broken links showed up.
I'm afraid I don't use Tumblr either, so this is a pretty useless post. Sorry.