Why is 410 (Gone) being classed as a high priority issue in crawl diagnostics?
-
Our high priority issues have suddenly soared by over 100 because Moz is classing 410s as high priority.
Google doesn't treat these as being so serious, so we were wondering if anyone knows why Moz does? -
Hi,
If you are seeing these warnings in your crawl reports, then 410 pages are a high priority issue, since it means you are still linking to these pages from within your own site. In that respect there is no difference between 404s and 410s: they both declare that a page is no longer available, and if you are still linking to these pages on your site you should find the pages that link to them and update the links. If you download your crawl report, open it in Excel, and filter for 410 errors, one of the columns on the far right should tell you which page was linking to the 410 page(s) in question.
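If you'd rather script the filtering step than do it in Excel, here is a minimal sketch in Python. It assumes the crawl export is a CSV with columns named `URL`, `Status Code`, and `Referrer`; those header names are assumptions, so adjust them to match the actual headers in your report:

```python
import csv

def pages_linking_to_410s(crawl_csv_path):
    """Return (gone URL, referring page) pairs from a crawl export CSV.

    Assumes 'URL', 'Status Code', and 'Referrer' column headers;
    rename these to match the headers in your own crawl report.
    """
    results = []
    with open(crawl_csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Keep only rows whose status code is 410 (Gone)
            if row.get("Status Code", "").strip() == "410":
                results.append((row["URL"], row.get("Referrer", "")))
    return results
```

Each pair tells you which internal page still links to the gone URL, so you know exactly where to update or remove the link.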
Related Questions
-
Moz bot has trouble crawling Angular JS - I believe it's seeing the SPA (Single Page Application) before Universal. Anyone else have this issue? What is the fix?
The Moz bot user agent detection settings are able to read Universal, but the Single Page Application (SPA) version partially loads on the website before Universal. Because of this, Moz (and possibly search engines) think we have massive duplicate content issues. For example, the crawl report said a particular product page (which has about 1,000 words) has 33,000 words and has duplicate content with over 300 other pages. This makes me believe it's only picking up the SPA version. Has anyone come across this, and what would be the fix?
Moz Bar | | laurengdicenso1 -
Site crawl only shows homepage
Hi everyone, A client of ours has a fairly new website with a lot of URLs. (Google Search Console indicates around 5300.) However, when I execute a site crawl with Screaming Frog, or a crawl test in Moz, it only shows me one URL, the homepage. Does somebody have an idea why the other pages of the website are not showing up? Thanks,
Moz Bar | | WeAreDigital_BE
Jens0 -
Crawl Notifications
Hi, I'm well aware that the titles for all of my blog posts are longer than the recommended length. How can I tell Moz to ignore that? I hate seeing 80-plus crawl notifications all regarding this.
Moz Bar | | prestigeluxuryrentals.com0 -
How Do I Troubleshoot 804 HTTPS Crawl Error?
In my Moz crawl report I get: Crawl Error
Moz encountered an error on one or more pages on your site
Error Code 804: HTTPS (SSL) Error Encountered
The Moz Help section only says: "804 HTTPS (SSL) error: 804 errors result from a site with misconfigured SSL software. If Moz's crawlers cannot correctly interpret an SSL response for a home page, the crawl ends immediately." My site is publicly accessible on https (https://www.respoke.io/) and I'm not seeing any issues with my certificate. Can anyone help me out? What steps can I take to troubleshoot this error? If SSL is misconfigured, how do I configure it properly?
Moz Bar | | digium0 -
Why does the Moz crawl test list pages twice?
Hi, I'm running into an issue where some crawlers list my pages twice, once with a trailing slash, once without. I first saw it on a few pages with Screaming Frog, then saw it happen on all my pages with the Moz crawler. The site is www.kidsandart.org and it's on Squarespace. I grepped the sitemap.xml I submitted to Google Webmaster Tools and got 167 distinct pages, all of them without a trailing slash. Any insights on why this is happening, and how to regard the Moz crawler results, would be appreciated. Thanks, Tom
Moz Bar | | tpushpathadam0 -
I'm getting a Crawl error 605 Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag
The website is www.bigbluem.com and is a wordpress site. I'm getting the following error: 605 Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag But what is weird is the domain it lists below that is http://None/BigBlueM.com Any advice?
Moz Bar | | TumbleweedPDX1 -
I am not able to perform a crawl test in Moz tools
It throws an error saying there is some problem with the domain when I try running a crawl test for my domains.
Moz Bar | | IBEE-Hosting0 -
My 301 Error and Duplicate Title Content Issue is Growing!
When I redirect some of my pages, it shows an error: the pages are not redirecting. I set this up 3-4 months ago, with no effect. All the errors under each category are making me sick.
Moz Bar | | Esaky0