Crawl Depth improvements
-
Hi
I'm checking the crawl depth report in Semrush and looking at pages that are 4+ clicks away.
I have a lot of product pages which fall into this category. Does anyone know the impact of this? Will they never be found by Google?
If there is anything in there I want to rank, I'm guessing the course of action is to move the page so it takes fewer clicks to get there?
How important are crawl budget and depth for SEO? I'm just starting to look into this subject.
Thank you
-
Hey Becky,
Those pages will be found by Google as long as you have links pointing to them somewhere on your site. In terms of crawl budget, the deeper your pages sit, the more time Google needs to spend crawling your site.
However, with proper internal linking you should be able to significantly reduce the number of clicks. So the next step would be adding some links with relevant anchor text. After you do this, watch your analytics and let me know if it has any impact.
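If you want to sanity-check depth before and after adding links, click depth is just a shortest-path calculation over your internal-link graph. A minimal sketch in Python (the URLs and link structure below are made up for illustration, not taken from any real site):

```python
from collections import deque

def click_depths(links, start="/"):
    """BFS over an internal-link graph: returns the minimum number of
    clicks needed to reach each page from the start page."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:  # first visit = shortest path
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

# Hypothetical site where a product page sits 4 clicks from the homepage
site = {
    "/": ["/category"],
    "/category": ["/subcategory"],
    "/subcategory": ["/listing"],
    "/listing": ["/product"],
}
print(click_depths(site)["/product"])  # 4 clicks deep

# The same site after adding one direct link from the homepage
site["/"].append("/product")
print(click_depths(site)["/product"])  # now 1 click deep
```

One well-placed internal link from a shallow page is enough to collapse the depth, which is exactly the effect you should see in the crawl depth report after re-crawling.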
Hope it helps. Cheers, Martin
Related Questions
-
Why is Amazon crawling my website? Is this hurting us?
Hi mozzers, I discovered that Amazon is crawling our site and exploring thousands of profile pages. In a single day it crawled 75k profile pages. Is this related to AWS? Is this something we should worry about or not? If so what could be a solution to counter this? Could this affect our Google Analytics organic traffic?
Intermediate & Advanced SEO | | Ty19860 -
Gradual Increase in Domain Authority After Domain Migration But No Improvement in Organic Traffic Yet
We migrated our domain in early April and simultaneously added an SSL certificate. Everything was done by the book: all redirects implemented perfectly, very few errors, and Google was notified via Search Console. Despite all steps being done correctly, our Domain Authority dropped from 24 to 8, and organic traffic dropped from about 80 visits per day to about 10. Each month Domain Authority increases by 2 or 3, and we are now back up to a DA of 16, but there is no improvement in organic traffic yet. At what point should organic traffic start to return? Hopefully the consistent improvement in DA is a good sign. I have been told that adding SSL and moving the domain at the same time was a very bad idea. We are starting link building next week; hopefully that will help further. Any ideas as to when this situation will improve? Needless to say, it has been awful for our business.
Intermediate & Advanced SEO | | Kingalan10 -
Can using "nofollow" internally help with crawl budget?
Hello everyone. I was reading this article on semrush.com, published last year, and I'd like to know your thoughts about it: https://www.semrush.com/blog/does-google-crawl-relnofollow-at-all/ Is that really the case? I thought that Google crawls and "follows" links tagged nofollow, even though it doesn't pass any PageRank to the destination link. If instead Google really doesn't crawl internal links tagged as "nofollow", can that really help with crawl budget?
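For reference, the kind of link being discussed looks like this (the URL is just an example); the open question is whether Googlebot skips crawling it entirely or merely withholds PageRank:

```html
<!-- A hypothetical internal filter link marked nofollow -->
<a href="/shoes?color=red" rel="nofollow">Red shoes</a>
```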
Intermediate & Advanced SEO | | fablau0 -
Website Indexing Issues - Search Bots will only crawl Homepage of Website, Help!
Hello Moz World, I am stuck on a problem and wanted to get some insight. When I attempt to use Screaming Frog's SEO Spider or SEO PowerSuite, the software only crawls the homepage of my website. I have 17 pages associated with the main domain, i.e. example.com/home, example.com/services, etc. I've done a bit of investigating, and I have found that my client's website does not have a robots.txt file or a sitemap. However, under Google Search Console, all of my client's website pages have been indexed. My questions: Why is my software not crawling all of the pages associated with the website? If I add a robots.txt file and sitemap, will that resolve the issue? Thanks ahead of time for all of the great responses. B/R Will H.
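Worth noting: neither file is strictly required for a site to be crawlable, so their absence alone is unlikely to be the cause, but adding them is cheap. A minimal robots.txt (example.com is a placeholder) that allows all crawlers and advertises the sitemap would be:

```text
# robots.txt — served at https://example.com/robots.txt
User-agent: *
Disallow:

Sitemap: https://example.com/sitemap.xml
```

If the crawlers still stop at the homepage after this, the more likely culprit is something like JavaScript-only navigation that the desktop crawlers can't follow.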
Intermediate & Advanced SEO | | MarketingChimp100 -
iPad Sales & Traffic Improvement for my Ecommerce site
Do you guys know any tool or software which provides the following for my ecommerce site, as real-time or next-day data for iPad visitors?

- iPad traffic
- iPad URLs visited
- Page rendering load time, for each URL separately
- Network load time, for each URL separately
- DOM processing time, for each URL separately
- Request queuing time, for each URL separately
- Web application load time, for each URL separately
- Total load time for each URL
- Timestamps, i.e. the time each URL was accessed by the visitor
- Visitor city
- Visitor country code
- Visitor duration on the page
- Visitor user agent name, e.g. Chrome, IE, Safari, Firefox, etc.
- Visitor user agent OS, e.g. iPad only
- User agent version, e.g. iPad 8.0, iPad 6.0, iPad Air, iPad Retina, iPad mini, etc.
- For each URL, a session trace in waterfall layout: backend time, DOM processing, page load, waiting on AJAX, visitor interactions, etc.
- For each URL, the total number of requests per page
- For each URL, JavaScript errors on the page with the JavaScript URL plus the stack trace of the error
- For each URL, AJAX errors on the page with the AJAX URL plus the stack trace of the error
- For each URL, the time taken by every request in a waterfall layout
- Funnel visualization tracking
- Transaction tracking

Please note that all of the above data also needs day-wise and country-wise views, previous days and months, model-wise sorting, pagination, etc. Waiting for your reply. Regards, Mit
Intermediate & Advanced SEO | | mit0 -
MOZ crawl report says category pages blocked by meta robots but they're not?
I've just run an SEOMOZ crawl report and it tells me that the category pages on my site, such as http://www.top-10-dating-reviews.com/category/online-dating/, are blocked by meta robots and have the meta robots tag noindex,follow. This was the case a couple of days ago: I run WordPress and am using the SEO Category Updater plugin, which by default appears to make categories noindex,follow. I therefore edited the plugin so that the default is index,follow, as I want Google to index the category pages so that I can build links to them. When I open a category page in a browser and view the source, the tag shows as index,follow, which adds up. Why then is the SEOMOZ report telling me they are still noindex,follow? Presumably the crawl is in real time and should pick up the new tag, or is it perhaps because it's using data from an old crawl? As yet these pages aren't indexed by Google. Any help is much appreciated! Thanks Sam.
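The two states being compared differ by a single meta tag in the page head; a crawl report working from an older cached fetch would still show the first one:

```html
<!-- Old plugin default (what the crawl report shows): -->
<meta name="robots" content="noindex,follow">

<!-- After the plugin edit (what "view source" now shows): -->
<meta name="robots" content="index,follow">
```

If viewing the live source shows index,follow, the most likely explanation is simply report lag: the crawler hasn't re-fetched the pages since the change.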
Intermediate & Advanced SEO | | SamCUK0 -
Will ranking be improved or hurt by changing 1/5 of part numbers to keywords?
Note: I have bolded the major content so you can skim quickly. Does this help you decide if it's a fit for your response? My site has been devastated by Panda or unknown reasons, so I need to think outside the box. I distribute industrial products with average brand recognition. I only have about 5 competitors selling this same brand. My other-brand competitors are billion-dollar companies that pay a lot for PPC and have sites with 10 times the product offering. Since my brand recognition is not as important as the function, I'm thinking about changing the part numbers to reflect function. This will affect about 1/5 of the parts (about 500 out of 3,000). My concern is: will ranking be hurt or helped by putting these strong keywords in front of the part number for such a high percentage of the site? The strong keywords cost $10 for a chance at a $200 sale with repeat business. Example: the current part is "10-10 Black Plastic", which is a Big Red Truck, where 10-10 is my brand's part number and it comes in different colors of plastic. The keyword is "Big Red Truck". I would like to put my manufacturer's brand in the description. My same-brand competitors sell 10,000 parts, and my logic is that if I have the brand in 1/5 of my parts, ranking would improve because of the percentage of brand mentions on my site versus my same-brand competitors. So I would change the part number to: **Brand 10-10 Black Plastic Big Red Truck**. For reference, the lengths are: Brand: 18 characters, Part #: 8, Material: 12, Keyword: 27. If the keyword should be first, I could change the order to K, B, P, M. Which is recommended?
Intermediate & Advanced SEO | | Wales0 -
How to prevent Google from crawling our product filter?
Hi All, We have a crawler problem on one of our sites, www.sneakerskoopjeonline.nl. On this site, visitors can specify criteria to filter the available products. These filters are passed as HTTP GET arguments, so the number of possible filter URLs is virtually limitless. In order to prevent duplicate content, or an insane number of pages in the search indices, our software automatically adds noindex, nofollow and noarchive directives to these filter result pages. However, we're unable to get crawlers (Google in particular) to ignore these URLs. We've already changed the on-page filter HTML to JavaScript, hoping this would cause the crawler to ignore it, but it seems that Googlebot executes the JavaScript and crawls the generated URLs anyway. What can we do to prevent Google from crawling all the filter options? Thanks in advance for the help. Kind regards, Gerwin
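One common approach is to block the filter parameters in robots.txt rather than on the pages themselves. A sketch, with hypothetical parameter names that would need to be replaced by the site's real ones:

```text
User-agent: *
# Block crawling of any URL containing these filter parameters
Disallow: /*?*color=
Disallow: /*?*size=
Disallow: /*?*sort=
```

One caveat: once a URL is disallowed in robots.txt, Googlebot can no longer fetch it, so it will never see the noindex directive on the page itself; filter URLs that are already indexed may linger in the index for a while before dropping out.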
Intermediate & Advanced SEO | | footsteps0