Is there a way to take notes on a crawled URL?
-
I'm trying to figure out the best way to keep track of the different things I've done to work on a page (for example, adding a longer description, changing H2 wording, or adding a canonical URL). Is there a way to take notes for crawled URLs? If not, what do you use to accomplish this?
-
Hey! Dave here from the Help Team.
There are a couple of different things you can do to mark items you have completed. One of the new features we have implemented in Site Crawl is the ability to mark items as "Fixed". This can be handy if you know you have fixed issues on your site but are still waiting for your next update. Another trick is to download your "all crawled pages" CSV and then create a "notes" column. It won't live in the Moz dashboard, but at least you would have a good record! Hopefully those options help you out!
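The CSV-plus-notes workflow above is easy to script so your annotations survive across exports. A minimal sketch with pandas, using made-up rows; the column names are illustrative, not necessarily Moz's exact export headers:

```python
import pandas as pd

# Simulate a few rows of the "all crawled pages" export
# (column names here are illustrative, not Moz's exact headers).
df = pd.DataFrame({
    "URL": [
        "https://example.com/",
        "https://example.com/blog/post",
    ],
    "Status Code": [200, 200],
})

# Add a "Notes" column and record the work done on each page.
df["Notes"] = ""
df.loc[df["URL"] == "https://example.com/blog/post", "Notes"] = (
    "Rewrote H2, expanded meta description, added canonical"
)

# Save a copy you can keep alongside future exports.
df.to_csv("crawled_pages_with_notes.csv", index=False)
```

Merging the notes column into each new export on the `URL` key would carry the notes forward automatically.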
Related Questions
-
Server blocking crawl bot due to DOS protection and MOZ Help team not responding
First of all, has anyone else not received a response from the Help Team? I've sent four emails, the oldest a month old, and one of our most-used features, the On-Demand Crawl we use to find broken links, doesn't work. It's really frustrating not to get a response when we're paying so much a month for a feature that doesn't work. OK, rant over; now on to the actual issue. Our crawls are just returning 429 errors because our server has DoS protection and is blocking Moz's robot. I'm sure it will be as easy as whitelisting the robot's IP, but I can't get a response from Moz with the IP. Cheers, Fergus
Feature Requests | | JamesDavison0 -
Access all crawl tests
How can I see all crawl tests ran in the history of the account? Also, can I get them sent to an email that isn't the primary one on the account? Please advise as I need this historical data ASAP.
Feature Requests | | Brafton-Marketing0 -
Moz crawler is not able to crawl my website
Hello all, I'm facing an issue with the Moz crawler. Every time it crawls my website, there is an error message saying: "Moz was unable to crawl your site on Sep 13, 2017. Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster." We changed the robots.txt file and checked it, but the issue is still not resolved. URL: https://www.khadination.shop/robots.txt Do let me know what went wrong and what needs to be done. Any suggestion is appreciated. Thank you.
Feature Requests | | Harini.M0 -
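One way to sanity-check an error like the one above is to fetch and parse the robots.txt yourself with the standard library. A minimal sketch; it assumes the crawler identifies as "rogerbot" (Moz's crawler user-agent), and the fetch is left as a commented example so nothing hits the network until you run it:

```python
import urllib.robotparser


def robots_txt_url(base_url: str) -> str:
    """Build the robots.txt URL for a site."""
    return base_url.rstrip("/") + "/robots.txt"


def check_robots(base_url: str) -> bool:
    """Fetch robots.txt and report whether 'rogerbot' may crawl the homepage.

    A network error (server down, connection refused) will raise here,
    which is exactly the condition the Moz error message describes.
    """
    parser = urllib.robotparser.RobotFileParser(robots_txt_url(base_url))
    parser.read()
    return parser.can_fetch("rogerbot", base_url.rstrip("/") + "/")


# Example (uncomment to run against the site from the question):
# print(check_robots("https://www.khadination.shop"))
```

If `parser.read()` raises intermittently, the problem is server availability rather than the file's contents, which matches the "temporary outage" wording in the error.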
MOZ Site Crawl - Ignore functionality Request
I now understand that the Ignore option in the Moz Site Crawl tool will permanently remove an item from ever showing up in our Issues again. We want to use the issues list as a to-do checklist, with the ultimate goal of having no issues found, and would like to "temporarily remove" an issue to see whether it shows back up in future crawls. If we properly fix the issue, it shouldn't show back up; however, with the current Ignore function, an ignored issue will never show back up even if it is still a problem. At the same time, an issue could be a known one that the end user never wants to see again, and in that case it might be nice to keep the current "Permanently Ignore" option. Use the following imgur to see a mockup of my idea for your review. pzdfW
Feature Requests | | StickyLife0 -
MOZ Site Crawl - Ignore functionality question
Quick question about the Ignore feature in the Moz Site Crawl. We've made some changes to pages containing errors found by the crawl. These changes should have resolved the issues, but we're not sure what the Ignore feature does and don't want to use it without first understanding what will happen. Will it clear the item from the current list until the next site crawl, so that if Roger finds the issue again it relists the error? Or will it clear the item from the list permanently, regardless of whether it has been properly corrected?
Feature Requests | | StickyLife1 -
Is there any way to filter by relevancy first and then volume second? Right now I just export the results of Keyword Explorer and do it offline; it would be great if I could do it online.
I'm trying to sort the results of a keyword search in Keyword Explorer by relevancy first and volume second, but the minute I select volume, the relevancy ordering is completely lost. I know I can export the results and manipulate them in Excel, but is there a feature that allows me to do this in Moz?
Feature Requests | | Anerudh0 -
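For the offline step, a two-key sort is a one-liner in pandas rather than a manual Excel exercise. A minimal sketch with made-up rows; the column names are illustrative stand-ins for whatever the Keyword Explorer export actually uses:

```python
import pandas as pd

# Simulated rows of a Keyword Explorer export (columns are illustrative).
df = pd.DataFrame({
    "Keyword": ["seo tools", "seo audit", "link building", "seo"],
    "Relevancy": [90, 90, 70, 95],
    "Volume": [1200, 3400, 800, 50000],
})

# Primary key: Relevancy (descending); tie-breaker: Volume (descending).
ranked = df.sort_values(by=["Relevancy", "Volume"], ascending=[False, False])

print(ranked["Keyword"].tolist())
# → ['seo', 'seo audit', 'seo tools', 'link building']
```

Note that "seo audit" outranks "seo tools" only because volume breaks the relevancy tie, which is exactly the two-level ordering the question asks for.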
Crawl test limitation - ways to handle large sites?
Hello, I have a large site (120,000+ pages) and the crawl test is limited to 3,000 pages. I want to know if there is a way to make the most of the crawl test on a site this size. Can I use a regular expression, for example? Thanks!
Feature Requests | | CamiRojasE0