Is there any way to view crawl errors historically?
-
One of the websites we monitor has been getting a high number of duplicate page titles. As we work through the pages, we see changes, and the number of duplicate page titles decreases.
Lately, however, the count has gone up again and the duplicate page titles have increased.
I wanted to ask if there's any way to view the new errors and the old errors separately, or sorted in a way that would help me identify why we are getting new page crawl errors.
Any advice would be great.
Thanks!
-
Do you have any server logs? Those can give you the information you need.
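If you do have access logs, a quick script can pull out exactly which URLs are erroring for a given crawler. Below is a minimal sketch in Python, assuming the common Apache/Nginx combined log format; the regex and the sample lines are illustrative, so adjust them to whatever your server actually writes.

```python
import re
from collections import Counter

# Assumed Apache/Nginx combined-log format; adjust the regex to match
# your server's actual log configuration.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"'
)

def crawl_errors(log_lines, bot="Googlebot"):
    """Count 4xx/5xx responses served to a given crawler."""
    errors = Counter()
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if not m:
            continue
        if bot in m.group("agent") and m.group("status").startswith(("4", "5")):
            errors[(m.group("status"), m.group("url"))] += 1
    return errors

# Hypothetical sample lines for illustration.
sample = [
    '66.249.66.1 - - [10/Oct/2013:13:55:36 -0700] "GET /old-page HTTP/1.1" 404 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '66.249.66.1 - - [10/Oct/2013:13:55:40 -0700] "GET / HTTP/1.1" 200 1024 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
]
for (status, url), n in crawl_errors(sample).items():
    print(status, url, n)
```

Run it against each day's log and you can watch exactly when a crawler started hitting a bad URL, which Webmaster Tools won't show you with that granularity.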
Google Webmaster Tools keeps errors from a couple of years back, at least the ones not yet marked as fixed. It might also be that you are working on the internal link structure, improving crawling, and you might have recently submitted a more up-to-date sitemap, which surfaced old errors.
How do you currently track the errors?
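If the tool itself won't separate new errors from old, one practical approach is to export the error report on each crawl and diff this week's export against last week's. A minimal sketch, assuming a CSV export with a URL column; the column name and sample data here are hypothetical:

```python
import csv
import io

def split_errors(old_csv, new_csv, key="URL"):
    """Split the latest errors into 'new' and 'recurring' lists,
    keyed on the URL column of two exported error reports."""
    old_urls = {row[key] for row in csv.DictReader(io.StringIO(old_csv))}
    new_rows = list(csv.DictReader(io.StringIO(new_csv)))
    new = [r for r in new_rows if r[key] not in old_urls]
    recurring = [r for r in new_rows if r[key] in old_urls]
    return new, recurring

# Hypothetical weekly exports for illustration.
last_week = "URL,Issue\n/a,Duplicate Title\n/b,Duplicate Title\n"
this_week = "URL,Issue\n/a,Duplicate Title\n/c,Duplicate Title\n"
fresh, old = split_errors(last_week, this_week)
print([r["URL"] for r in fresh])  # URLs that only appear in this week's export
```

The "new" list is the one to investigate first: those URLs point at whatever change introduced the fresh duplicate titles.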
Related Questions
-
Duplicate title on Crawl
OK, this should hopefully be a simple one. I'm not sure if this is a Moz crawl issue or a redirect issue. Moz is reporting a duplicate title for www.site.co.uk, site.co.uk, and www.site.co.uk/home.aspx. Is this a canonical change or a Moz setting I need to adjust to get this number lower?
-
Rogerbot crawls my site and causes errors as it uses URLs that don't exist
Whenever rogerbot comes back to my site for a crawl, it seems to want to crawl URLs that don't exist and thus causes errors to be reported. Example: the correct URL is /vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/ but it seems to want to crawl /vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/?id=10330. This format doesn't exist anywhere and never has, so I have no idea where it's getting this URL format from. The user agent details I get are as follows: IP ADDRESS: 107.22.107.114, USER AGENT: rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, rogerbot-crawler+pr1-crawler-17@moz.com)
-
How to crawl specific subfolders
I tried to create a campaign to crawl the subfolders of my site, but it stops at just one folder. Basically, what I want to do is crawl everything after folder1: www.domain.com/web/folder1/*. I tried to create two campaigns: Subfolder Campaign 1: www.domain.com/web/folder1/* and Subfolder Campaign 2: www.domain.com/web/folder1/. In both cases, it did not crawl any folders after the last /. Can you help me?
-
404 Error
When I get a 404 error report like the one below, I can't find a reference to the specific link on the page that is in error. Can you help me? 404 : Error http://www.boxtheorygold.com/blog/www.boxtheorygold.com/blog/bid/23385/Measuring-Your-Business-Processes-Pays-Big-Dividends Thanks, Ron Carroll
-
Still Can't Crawl My Site
I've removed all blocks but two from our .htaccess; those two are for amazonaws.com, to block Amazon from crawling us. I did a fetch as Google in our Webmaster Tools on our robots.txt with success. The SEOMoz crawler hits our site and gets a 403. I've looked in our blocked request logs and Amazon is the only one in there. What is going on here?
-
Advice for 4000+ duplicate errors on 1st check
Hi, first-time use of the SEOmoz scan has thrown up a lot of duplicate errors. It looks like my site has a .com.au/ and a .com.au/default for the same pages. We had the domain on a hosted CMS solution and have now migrated to Magento. We duplicated the pages but had to redirect all of the old URLs to the new Magento structure. This was done via a developer adding a 301 wildcard rule to the .htaccess. Would that many errors be normal for a first scan? Where should I look for someone to fix them? Thanks
-
How to remove URLs from crawl diagnostics blocked by robots.txt
I suddenly have a huge jump in the number of errors in crawl diagnostics, and it all seems to be down to a load of URLs that should be blocked by robots.txt. These have never appeared before; how do I remove them, or stop them appearing again?
-
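For a jump in robots.txt-related errors like the one above, it can help to verify what your robots.txt actually allows a crawler to fetch before blaming the tool. A minimal sketch using Python's standard urllib.robotparser; the rules and URLs below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body and test which URLs a given crawler
# (rogerbot here) is allowed to fetch. Rules are hypothetical.
rules = RobotFileParser()
rules.parse("""
User-agent: rogerbot
Disallow: /private/
""".splitlines())

for url in ["http://example.com/private/page", "http://example.com/public/page"]:
    print(url, rules.can_fetch("rogerbot", url))
```

If a URL you expected to be blocked comes back as fetchable, the disallow rule is the problem, not the crawler.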
Can I change the crawl day?
Hi all, I hope there is a simple solution to this: we have a number of campaigns set up which are all crawled, and therefore updated, on different days of the week. We review these weekly, and it would be much easier if they were all crawled on the same day. Is it possible to change the crawl day for some campaigns? Thanks, Roy