Understanding the actions needed from a Crawl Report
-
I just joined SEOmoz last week and haven't even received my first full crawl yet, but as you know, I do get the re-crawl report. It shows I have 50 301s and 20 rel=canonicals. I'm still very confused as to what I'm supposed to fix. And since all the rel=canonicals are my site's main pages, I'm equally confused about what the canonical is doing and how to properly set up my site. I'm a technical person and can grasp most things fairly quickly, but on this one the light bulb is taking a little longer to fire up.
If my question wasn't total gibberish and you can help shed some light, I would be forever grateful.
Thank you.
-
Thanks Charles
I'm really happy with him
-
Thanks Woj - it helps... a little :). SEO is definitely a journey...
On another note, I just read the post on your company website regarding your process of developing the Kwasi robot logo - very interesting read, I enjoyed it.
-
The 301s are warnings and could be in place for a reason. You can also download a spreadsheet with all the crawl findings; it's really useful.
Generally: fix all the errors (in red), if any; fix warnings as required; and examine the notices.
For example, I have a site that has 100+ canonicals, all fine, and a couple of warnings (titles too long, but only over by one or two characters).
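To demystify the canonical notices a bit: a rel=canonical tag simply tells search engines which URL is the preferred version of a page, and a page pointing at itself is normal. A minimal sketch (example.com is a placeholder):

```html
<!-- In the <head> of each variant of a page, point at the one
     preferred URL; a self-referencing canonical is fine. -->
<link rel="canonical" href="https://www.example.com/page/" />
```

If the canonicals on your main pages point back at those same pages, they're just notices that the tag is present, not something to fix.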
Hope that helps a little
Related Questions
-
Duplicate Content Showing up on Moz Crawl | www. vs. no-www.
Hello Moz Community! I am new to SEO and Moz, and this is my first question. My question: I have a client that is getting flagged for duplicate content. He is getting flagged for having two domains that have the same content, i.e. www.mysite.com & mysite.com. I read into this and set up a 301 redirect through my hosting site. I evaluated which version had the stronger Page Authority and had the weaker one redirect to the stronger one. However, I am still getting hit for duplicate pages caused by www.mysite.com & mysite.com being duplicates. How should I go about resolving this? Is this an example of a canonical tag being needed in the head of the HTML? Any direction is appreciated. Thank You. B/R Will H.
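In case it's useful, a host-level 301 for this usually lives in the site's .htaccess. This sketch assumes Apache with mod_rewrite enabled, and reuses the mysite.com placeholder from the question:

```apache
# Force the bare domain onto the www host with a single 301
# (swap the hosts if you chose the non-www version as canonical).
RewriteEngine On
RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
RewriteRule ^(.*)$ https://www.mysite.com/$1 [R=301,L]
```

If the redirect is already doing this site-wide, a matching self-referencing canonical tag on each page is a harmless extra safeguard while the crawler re-discovers the pages.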
Technical SEO | MarketingChimp10
Pages with 301 redirects showing as 200 when crawled using RogerBot
Hi guys, I recently did an audit for a client and ran a crawl on the site using RogerBot. We quickly noticed that all but one page was showing as status code 200, but we knew that there were a lot of 301 redirects in place. When our developers checked it, they saw the pages as 301s, as did the Moz toolbar. If page A redirected to page B, our developers and the Moz toolbar saw page A as 301 and page B as 200. However the crawl showed both page A and page B as 200. Does anyone have any idea why the crawl may have been showing the status codes as 200? We've checked and the redirect is definitely in place for the user, but our worry is that there could be an issue with duplicate content if a crawler isn't picking up on the 301 redirect. Thanks!
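One way to see the two behaviours side by side: a client that silently follows redirects (as many crawlers do) reports the final 200, while one that refuses to follow them reports the 301. A self-contained Python sketch with a throwaway local server (all names and paths are hypothetical):

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/page-a":          # page A 301s to page B
            self.send_response(301)
            self.send_header("Location", "/page-b")
            self.end_headers()
        else:                               # page B is a plain 200
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"page B")

    def log_message(self, *args):           # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The default client follows the redirect, so page A *looks like* a 200.
followed = urllib.request.urlopen(f"http://127.0.0.1:{port}/page-a")
print(followed.status)                      # 200

# Refuse to follow redirects to see page A's real status code.
class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None

opener = urllib.request.build_opener(NoRedirect)
try:
    opener.open(f"http://127.0.0.1:{port}/page-a")
    status_a = None
except urllib.error.HTTPError as e:
    status_a = e.code
print(status_a)                             # 301

server.shutdown()
```

If the crawl reported 200 for both pages, it most likely followed the redirect and logged the status of the destination rather than the hop itself.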
Technical SEO | Welford-Media
Value of having a good crawl budget?
Hi, I've seen several questions where people give advice on how to increase the crawl budget. What I haven't seen anyone comment on is what the value of this really is if you have many pages that don't get updated very often. Take, for example, a typical agency site: 50 pages, most of which rarely get updated. On a monthly basis, normally 10% of the website gets updated. Is there really any value, then, in having 100% of the website crawled on a daily basis?
Technical SEO | Inevo
Site Crawling with Firewall Plugin
Just wondering if anyone has any experience with the WordPress Simple Firewall plugin. I have a client who is concerned about security, as they've had issues in that realm in the past, and they've since installed this plugin: https://wordpress.org/support/view/plugin-reviews/wp-simple-firewall?filter=4 Problem is, even with a proper robots file and appropriate settings within the firewall, I still cannot crawl the site with site crawler tools. Google seems to be accessing the site fine, but I still wonder if it is in any way potentially hindering search spiders.
Technical SEO | BrandishJay
Can Anybody Understand This ?
Hey guys,
Technical SEO | atakala
These days I'm reading the paper by Sergey Brin and Larry Page, which is the original Google paper.
And I don't get the ranking part, which is: "Google maintains much more information about web documents than typical search engines. Every hitlist includes position, font, and capitalization information. Additionally, we factor in hits from anchor text and the PageRank of the document. Combining all of this information into a rank is difficult. We designed our ranking function so that no particular factor can have too much influence. First, consider the simplest case -- a single word query. In order to rank a document with a single word query, Google looks at that document's hit list for that word. Google considers each hit to be one of several different types (title, anchor, URL, plain text large font, plain text small font, ...), each of which has its own type-weight. The type-weights make up a vector indexed by type. Google counts the number of hits of each type in the hit list. Then every count is converted into a count-weight. Count-weights increase linearly with counts at first but quickly taper off so that more than a certain count will not help. We take the dot product of the vector of count-weights with the vector of type-weights to compute an IR score for the document. Finally, the IR score is combined with PageRank to give a final rank to the document. For a multi-word search, the situation is more complicated. Now multiple hit lists must be scanned through at once so that hits occurring close together in a document are weighted higher than hits occurring far apart. The hits from the multiple hit lists are matched up so that nearby hits are matched together. For every matched set of hits, a proximity is computed. The proximity is based on how far apart the hits are in the document (or anchor) but is classified into 10 different value "bins" ranging from a phrase match to "not even close". Counts are computed not only for every type of hit but for every type and proximity. Every type and proximity pair has a type-prox-weight.
The counts are converted into count-weights and we take the dot product of the count-weights and the type-prox-weights to compute an IR score. All of these numbers and matrices can all be displayed with the search results using a special debug mode. These displays have been very helpful in developing the ranking system."
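The single-word case in that passage boils down to: count the hits of each type, squash each count so it tapers off, then take the dot product with per-type weights. A rough Python sketch (the weight values and the tapering curve are made up; the paper doesn't publish them):

```python
import math

# Hypothetical type-weights -- the paper says each hit type (title,
# anchor, URL, large font, small font) has its own weight but gives
# no numbers, so these are illustrative only.
TYPE_WEIGHTS = {"title": 10.0, "anchor": 8.0, "url": 6.0,
                "plain_large": 3.0, "plain_small": 1.0}

def count_weight(count, cap=8):
    """Grows with the hit count at first, then tapers off so that
    piling on more hits stops helping (log curve chosen arbitrarily)."""
    return math.log1p(min(count, cap))

def ir_score(hit_counts):
    """Dot product of the count-weight vector with the type-weight
    vector, as described for the single-word case."""
    return sum(count_weight(c) * TYPE_WEIGHTS[t]
               for t, c in hit_counts.items())

# A title hit outweighs the same number of small plain-text hits:
print(ir_score({"title": 1}) > ir_score({"plain_small": 1}))  # True
# And hit-stuffing saturates: 8 and 800 title hits score the same.
print(ir_score({"title": 8}) == ir_score({"title": 800}))     # True
```

The "no particular factor can have too much influence" idea is exactly that saturation: past a certain count, more hits of one type add nothing.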
Crawl completed but still says meta description missing
Before the last crawl, I went through and made sure no meta descriptions were missing. However, as of the last crawl on the 26th of July, it still lists the same pages, and shows they were crawled as well. Any ideas on why they may still be showing as missing?
Technical SEO | HamiltonIsland
Google Webmaster Tool - Crawl Stats Query ?
Dear All, I have been looking at the GWT Crawl Stats and wondering how I should be interpreting the crawl stats chart. All I see is three charts telling me a high, low and average for each of the below, but I am wondering, is there anything I really need to be looking for?
Pages crawled per day
Kilobytes downloaded per day
Time spent downloading a page (in milliseconds)
Thanks, Sarah
Technical SEO | SarahCollins
Tracking a Crawl error
Hi All, if you find a crawl error in your report, how do you track it down? The error only gives the URL that is wrong, not where the bad link lives. Can I drill down and find out more information? Thank you!
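One way to drill down yourself is to scan your own pages' HTML for anchors pointing at the broken URL; whichever page contains the anchor is the location of the bad link. A quick Python sketch (the URLs and names are illustrative):

```python
from html.parser import HTMLParser

# The crawl tells you *which* URL is broken; this finds *where* it
# is linked from by looking for <a href="..."> tags that match it.
class LinkFinder(HTMLParser):
    def __init__(self, target):
        super().__init__()
        self.target = target
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "a" and dict(attrs).get("href") == self.target:
            self.found = True

def page_links_to(html, broken_url):
    """True if this page's HTML contains a link to broken_url."""
    finder = LinkFinder(broken_url)
    finder.feed(html)
    return finder.found

# Run this over each page you crawled to locate the offending link.
sample = '<p>See <a href="/old-page">this post</a> for more.</p>'
print(page_links_to(sample, "/old-page"))   # True
print(page_links_to(sample, "/missing"))    # False
```

A downloadable crawl spreadsheet, where available, often includes a referrer column that answers the same question without any scripting.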
Technical SEO | wedmonds