Find all external 404 errors/links?
-
Hi All,
We recently discovered that another site was linking to ours, but it was linking to an incorrect URL, resulting in a 404 error. We only found this by pure chance, and wondered if there is a tool out there that will tell us when a site is linking to an incorrect URL on our site?
Thanks
-
If you don't have access to the logs, that could be an issue - there aren't really any automated tools out there for this, as a tool would need to crawl every website on the web to find the 404 errors.
I haven't tried this, so it's just an idea: go into GSC and download all the links pointing to your site (and pull the same lists from places like Moz, Ahrefs, and Majestic), then put that list of URLs into Screaming Frog or URL Profiler, look at the external links on those pages, and see if any are returning a 404. Not sure if this would work, but it may be worth a try - a rough sketch of the checking step is below.
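If you'd rather script that check than use a crawler, here's a minimal sketch of the idea in Python. It assumes the requests library is installed, and the backlinks.txt file name and example.com domain are placeholders for illustration, not anything a GSC export actually gives you:

```python
# Rough sketch: fetch each referring page from a backlink export, find
# the links on it that point at our domain, and check whether they still
# resolve. File name and domain below are placeholders.
import requests
from urllib.parse import urljoin, urlparse
from html.parser import HTMLParser

OUR_DOMAIN = "example.com"  # placeholder - replace with your own domain


class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def check_referrer(page_url):
    """Print any link on page_url that points at our domain and returns a 404."""
    try:
        html = requests.get(page_url, timeout=10).text
    except requests.RequestException as exc:
        print(f"Could not fetch {page_url}: {exc}")
        return
    parser = LinkCollector()
    parser.feed(html)
    for href in parser.links:
        absolute = urljoin(page_url, href)
        if urlparse(absolute).netloc.endswith(OUR_DOMAIN):
            try:
                # Some servers answer HEAD incorrectly; swap in
                # requests.get if results look odd.
                status = requests.head(
                    absolute, allow_redirects=True, timeout=10
                ).status_code
            except requests.RequestException:
                continue
            if status == 404:
                print(f"{page_url} links to {absolute} (404)")


if __name__ == "__main__":
    # backlinks.txt: one referring-page URL per line, e.g. from a GSC export
    with open("backlinks.txt") as f:
        for line in f:
            if line.strip():
                check_referrer(line.strip())
```

Run it against your exported list, and every line it prints is an external page pointing at a URL of yours that no longer resolves.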
Thanks
Andy
-
Great, I will take a look. Maybe I'll run a trial to see if it does exactly what I need.
Thanks for the info!
-
Good idea!
Although some of the clients we do SEO for don't host their websites on our server, so we don't have access to their server logs etc.
I was hoping for an automated dashboard like Moz, Screaming Frog, or Ahrefs, as mentioned above. Given the number of clients we have, opening up and running through all their log files could be time-consuming.
Cheers for the info though - it may come in useful in the future, or to someone else reading this.
-
Hi
The best way I have found is to look in your server logs - it's the only true place to find out what Google is doing on your site.
Download the logs and look at all the 404 errors. It's quite simple and, depending on the size of your logs, can take you around five minutes' worth of work - the longer the time period you can analyse in your logs, the better.
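If you'd rather not eyeball the raw file, here's a minimal sketch in Python of pulling the 404s (and the referring pages that sent visitors to them) out of a log. It assumes the combined log format, and access.log is a placeholder file name:

```python
# Minimal sketch: pull 404s and their referrers out of an access log.
# Assumes the combined log format; "access.log" is a placeholder name.
import re
from collections import Counter

# Matches the quoted request line, the status code, the response size,
# and then the quoted referrer field of a combined-format log entry.
LINE_RE = re.compile(
    r'"[A-Z]+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)"'
)

hits = Counter()
with open("access.log") as log:
    for line in log:
        match = LINE_RE.search(line)
        if match and match.group("status") == "404":
            # Count (requested URL, page that linked to it) pairs
            hits[(match.group("path"), match.group("referrer"))] += 1

# Most frequently hit missing URLs first
for (path, referrer), count in hits.most_common(20):
    print(f"{count:>6}  {path}  <-  {referrer or '(no referrer)'}")
```

The referrer column is the interesting part for this question: an external URL showing up there against a 404 is a site linking to a broken address on yours.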
Thanks
Andy
-
Hi David.
Ahrefs.com offers that service: broken backlinks.
Another way to do that search could be this: download the historic backlink list and, with a mass checker, check where the links point nowadays. I've used GScraper and its option to crawl outbound links.
Best of luck.
GR.
Related Questions
-
Broken canonical link errors
Hello, several tools I'm using are returning errors due to "broken canonical links", but I'm not too sure why that is. E.g.:
Page URL: domain.com/page.html?xxxx
Canonical link URL: domain.com/page.html
Returns an error. Any idea why? Am I doing it wrong? Thanks,
G
Technical SEO | GhillC
-
How can I stop a tracking link from being indexed while still passing link equity?
I have a marketing campaign landing page and it uses a tracking URL to track clicks. The tracking links look something like this: http://this-is-the-origin-url.com/clkn/http/destination-url.com/ The problem is that Google is indexing these links as pages in the SERPs. Of course when they get indexed and then clicked, they show a 400 error because the /clkn/ link doesn't represent an actual page with content on it. The tracking link is set up to instantly 301 redirect to http://destination-url.com. Right now my dev team has blocked these links from crawlers by adding Disallow: /clkn/ in the robots.txt file, however, this blocks the flow of link equity to the destination page. How can I stop these links from being indexed without blocking the flow of link equity to the destination URL?
Technical SEO | UnbounceVan
-
Is SEO affected by putting an external link in the primary navigation of a website?
I have a customer, www.xxx.com. This site has good traffic, a low bounce rate (28%), a 2:00 min avg time on site, and a 45% return visitor rate. No spam rankings, etc. Good load time. Another site, www.yyy.com, has sent out a request for them to add a new link in www.xxx.com's primary navigation - using a title such as "abc" (not the name of the company or site of yyy.com). This second site, www.yyy.com, has a bounce rate of 98%, an avg time on site of 0:30, and a 10.2% return visitor rate. No spam flags noted in Open Site Explorer. Plus, they are asking other sites similar to www.xxx.com to do the same thing. Questions/concerns, and feedback appreciated: Will yyy.com's analytics and quality pass back to xxx.com and cause Google or algorithms to flag or penalize xxx.com? (It ranks #1 for quite a few things.) The relevancy between the sites is good - same industry, same business objectives. From a usability standpoint, isn't it more appropriate to place a link to another website in a different way, e.g. a promotional graphic with a link, or anchor text links? Isn't it more appropriate to ask another business for links - not using the primary nav of a site? (It seems yyy.com is essentially asking other sites for 'free advertising/promotion.') Thanks!
Technical SEO | mundsack
-
Can you use Screaming Frog to find all instances of relative or absolute linking?
My client wants to pull every instance of an absolute URL on their site so that they can update them for an upcoming migration to HTTPS (the majority of the site uses relative linking). Is there a way to use the extraction tool in Screaming Frog to crawl one page at a time and extract every occurrence of href="http://"? I have gone back and forth between using an XPath extractor as well as a regex and have had no luck with either. Ex. XPath: //*[starts-with(@href, "http://")][1] Ex. Regex: href=\"//
Technical SEO | Merkle-Impaqt
-
Are 404 Errors a bad thing?
Good morning... I am trying to clean up my e-commerce site, and I created a lot of new categories for my parts. I've made the old category pages (which have had their content removed) "hidden" to anyone who visits the site and starts browsing. The only way you could get to those "hidden" pages is either by knowing the URLs that I used to use, or if for some reason one of them is still indexed in Google. Since I'm trying to clean up the site and get rid of any duplicate content issues, would I be better served by adding those "hidden" pages that don't have much or any content to the robots.txt file, or should I just deactivate them, so that even if you type the old URL you will get a 404 page? In this case, are 404 pages bad? You're typically not going to find those pages in the SERPs, so the only way you'd land on these 404 pages is to know the old URL I was using that has been disabled. Please let me know if you guys think I should be 404'ing them or adding them to robots.txt. Thanks
Technical SEO | Prime85
-
404 error - but I can't find any broken links on the referrer pages
Hi, My crawl has diagnosed a client's site with eight 404 errors. In my CSV download of the crawl, I have checked the source code of the 'referrer' pages, but can't find where the link to the 404 error page is. Could there be another reason for getting 404 errors? Thanks for your help. Katharine.
Technical SEO | PooleyK
-
404 errors on non-existent URLs
Hey guys and gals, First Moz Q&A for me and really looking forward to being part of the community. I hope this isn't a stupid first question, but I was struggling to find any resource that dealt with the issue and am just looking for some general advice. Basically, a client has raised a problem with 404 error pages - or the lack thereof - on non-existent URLs on their site; let's say for example: 'greatbeachtowels.com/beach-towels/asdfas'. Obviously content never existed on this page, so it's not like you're saying 'hey, sorry this isn't here anymore'; it's more like 'there was never anything here in the first place'. Currently in this fictitious example, typing in 'greatbeachtowels.com/beach-towels/asdfas' returns the same content as the 'greatbeachtowels.com/beach-towels' page, which I appreciate isn't ideal. What I was wondering is how far you take this issue - I've seen examples here on the seomoz site where you can edit the URI in a similar manner and it returns the same content as the parent page but with the alternate address. Should 404s be added across all folders on a site in a similar way? How often would this scenario be an issue, particularly for internal pages two or three clicks down? I suppose unless someone linked to a page with a misspelled URL... Also, would it be worth placing 301 redirects on a small number of common misspellings or typos, e.g. 'greatbeachtowels.com/beach-towles', to the correct URLs, as opposed to just 404s? Many thanks in advance.
Technical SEO | AJ234
-
Does the Referral Traffic from a Link Influence the SEO Value of that Link?
If a link exists, and nobody clicks on it, could it still be valuable for SEO? Say I have 1000 links on 500 sites with Domain Authority ranging from 35 to 80. Let's pretend that 900 of those links generate referral traffic. Let's assume that the remaining 100 links are spread between 10 domains of the 500, but nobody ever clicks on them. Are they still valuable? Should an SEO seek to earn more links like those, even though they don't earn referral traffic? Does Google take referral data into account in evaluating links?
Technical SEO | glennfriesen