Find 404 errors
-
SEOmoz is currently reporting 404 errors without saying where these errors were found.
What good is this report when it doesn't say a word about where I can find the page linking to the 404?
Screen: http://filer.crenia.no/NJPZ
I really think it's time for SEOmoz to prioritize upgrading the Pro tools. It feels like they haven't been updated in ages.
-
This would be a great addition to our web tools! Thanks so much for the suggestion. I can absolutely see how valuable this would be.
My suggestion would be to post this as a feature request. Here's the feature request forum we use to collect ideas:
http://seomoz.zendesk.com/forums/293194-seomoz-pro-feature-requests
The team at SEOmoz pays close attention to these requests, and we often make additions and updates to our tools based on submissions just like this one. I hope this helps.
-
If your site is under 3,000 pages, you can use this tool:
it shows you the broken link, the page it appears on, and where in the source code the broken link sits on that page.
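For anyone curious what such a tool does under the hood, the core logic is small. Below is a minimal sketch in Python of a same-domain broken-link crawler. Everything here (`find_broken_links`, the injected `fetch` callback, the example URLs) is a hypothetical illustration, not the actual tool referred to above.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def find_broken_links(start_url, fetch, max_pages=3000):
    """Breadth-first crawl of one domain; returns (source_page, broken_url) pairs.

    `fetch(url)` must return a (status_code, html_text) tuple, so the
    HTTP layer can be swapped out or stubbed for testing.
    """
    domain = urlparse(start_url).netloc
    queue, seen, broken = [start_url], {start_url}, []
    status_cache = {}  # avoid re-checking the same link target twice

    while queue and len(seen) <= max_pages:
        page = queue.pop(0)
        status, html = fetch(page)
        if status >= 400:
            continue  # page itself is dead; nothing to parse
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            url = urljoin(page, href)
            if url not in status_cache:
                status_cache[url] = fetch(url)[0]
            if status_cache[url] >= 400:
                broken.append((page, url))  # record page -> dead link
            elif urlparse(url).netloc == domain and url not in seen:
                seen.add(url)  # only crawl further within the same domain
                queue.append(url)
    return broken
```

With a real fetcher (for example one built on `urllib.request` or the `requests` library, returning `(status_code, response_body)`), the returned pairs give you both the dead URL and, crucially, the page that links to it.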
-
Keri, given that this kind of question comes up so often, why not at least put a note on the report pages indicating that significantly more complete data is available in the CSV exports than can be shown in the tool itself?
If the data's there but in a non-obvious place, the page needs to do a better job of explaining that to the user.
Paul
-
We understand your frustration, and know that having to download the CSV to find the 404 errors isn't the most obvious solution. It is on our list of things to work on. We do have some upgrades coming in the future, never fear!
-
Hi Ignitas,
To get some insight into which URLs are linking to a 404 page, the best approach is to export your Crawl Diagnostics to CSV. These files contain the URLs linking to your pages.
Hope this helps a bit! But I would still suggest that the Moz team include this feature in the PRO tools.
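To make the CSV workflow concrete: once you have the Crawl Diagnostics export, grouping the 404 rows by their referring page takes only a few lines. This is a sketch in Python; the column names (`url`, `http_status_code`, `referrer`) are assumptions for illustration, so check them against the header row of your actual export before using it.

```python
import csv
from collections import defaultdict


def pages_linking_to_404s(csv_path, status_col="http_status_code",
                          url_col="url", referrer_col="referrer"):
    """Group 404 URLs in a crawl export by the page that links to them.

    The column names are guesses at what a crawl-diagnostics CSV might
    contain; pass your export's real header names as arguments.
    """
    links_to_dead = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get(status_col, "").strip() == "404":
                # key: the referring page; value: the dead URLs it points to
                links_to_dead[row.get(referrer_col, "(unknown)")].append(row[url_col])
    return dict(links_to_dead)
```

Printing the result gives a per-page list of dead links, which is exactly the "which page links to the 404?" answer the original question asks for.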