Crawl Depth improvements
-
Hi
I'm checking the crawl depth report in SEMrush and looking at pages which are 4+ clicks away.
I have a lot of product pages which fall into this category. Does anyone know the impact of this? Will they never be found by Google?
If there is anything in there I want to rank, I'm guessing the course of action is to move the page so it takes fewer clicks to get there?
How important are crawl budget and depth for SEO? I'm just starting to look into this subject.
Thank you
-
Hey Becky,
Those pages will still be found by Google as long as you have links pointing to them somewhere on your site. In terms of crawl budget, the greater the page depth, the more time Google needs to spend crawling your site.
However, with proper internal linking you should be able to significantly reduce the number of clicks. So the next step would be adding internal links with relevant anchor text. After you do this, watch your analytics and let me know if it had any impact.
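If you want to measure click depth yourself rather than rely only on the report, here is a rough sketch of a breadth-first crawl from the homepage. It assumes the requests and beautifulsoup4 packages are installed, and the start URL and depth limit are placeholders, not values from your site.

```python
# Rough sketch: measure how many clicks each internal page is from the homepage.
# Assumes the `requests` and `beautifulsoup4` packages are installed; the start URL
# and the depth limit below are placeholders.
from collections import deque
from urllib.parse import urljoin, urldefrag, urlparse

import requests
from bs4 import BeautifulSoup

def click_depths(start_url, max_depth=6):
    """Breadth-first crawl; returns {url: number of clicks from the homepage}."""
    site = urlparse(start_url).netloc
    depths = {start_url: 0}
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        if depths[url] >= max_depth:
            continue
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urldefrag(urljoin(url, a["href"])).url
            if urlparse(link).netloc == site and link not in depths:
                depths[link] = depths[url] + 1
                queue.append(link)
    return depths

# Product pages 4+ clicks deep are the candidates for extra internal links:
# deep = [u for u, d in click_depths("https://www.example.com/").items() if d >= 4]
```

Pages that turn up 4+ clicks deep and matter for rankings can then be linked from category pages or the homepage to pull them up.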
Hope it helps. Cheers, Martin
Related Questions
-
Crawl Issue for Deleted Pages
Hi, sometimes I just delete a page and don't necessarily want to redirect it to another page. Google Webmaster Tools now shows me 108 'not found' pages under 'Crawl Errors'. Is that a problem for my site?
Can I ignore this with a good conscience, or should I redirect the 404s to my homepage? I am confused and would like to hear your opinion on this. Best, Robin
Intermediate & Advanced SEO | soralsokal
-
Crawl efficiency - Page indexed after one minute!
Hey guys, a site has 5+ million pages indexed and publishes 300 new pages a day. I hear a lot that for sites at this level it's all about efficient crawlability. The pages of this site get indexed one minute after they go online. 1) Does this mean that the site already crawls efficiently and there is not much else to do about it? 2) By increasing crawl efficiency, should I expect Google to crawl my site less (less bandwidth taken from my site for the same amount of crawling) or to crawl my site more often? Thanks
Intermediate & Advanced SEO | Mr.bfz
-
Should you give all the posts in a forum a unique description? Or leave it empty so Google can make one from the crawled keywords?
Making the descriptions for all forum posts unique is a hell of a job. One option is to crawl the first 165 characters and turn these automatically into the meta description of the page.
If Google thinks the meta description is not suitable for the search query, Google will create its own description. In that case all the meta descriptions are unique, as the Google guidelines want you to do. How will Google treat it if we delete the meta description tag altogether so that Google generates all the descriptions itself?
Intermediate & Advanced SEO | Zanox
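For what it's worth, a minimal sketch of the 165-character idea described above; the tag stripping and the word-boundary cut are assumptions about one way to implement it, not anything Google prescribes.

```python
# Minimal sketch of the auto-description idea above: take roughly the first
# 165 characters of the post body and cut at a word boundary. `post_html` is
# a placeholder for the forum post's HTML.
import re
from html import unescape

def auto_meta_description(post_html: str, limit: int = 165) -> str:
    text = unescape(re.sub(r"<[^>]+>", " ", post_html))  # strip HTML tags
    text = re.sub(r"\s+", " ", text).strip()             # collapse whitespace
    if len(text) <= limit:
        return text
    return text[:limit].rsplit(" ", 1)[0] + "…"          # cut at the last whole word

print(auto_meta_description("<p>Does anyone know how crawl depth affects product pages?</p>"))
```
-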
How can we improve rankings for category pages?
Hi Everyone, I have a dog breeder site I'm working on and I was wondering if I could get some tips and ideas on things to do to help the "category" pages rank better in search engines. Let's say I have "xyz" breed category page which has listings of all dog breeders who offer that particular breed, in this case "xyz". I have certain breeder profile listings which rank higher for those terms that the category page should be ranking for. So I'm guessing Google thinks those breeder profile pages are more relevant for those terms. Especially if well optimized. I know thin content may be my problem here, but one of our competitors dominates the rankings for relevant keywords with no content on their category pages. What do you all suggest?
Intermediate & Advanced SEO | rsanchez
-
Is it safe to not have a sitemap if Google is already crawling my site every 5-10 min?
I work on a large news site that is constantly being crawled by Google. Googlebot is hitting the homepage every 5-10 minutes. We are in the process of moving to a new CMS which has left our sitemap nonfunctional. Since we are getting crawled so often, I've met resistance from an overwhelmed development team that does not see creating sitemaps as a priority. My question is, are they right? What are some reasons that I can give to support my claim that creating an xml sitemap will improve crawl efficiency and indexing if we are already having new stories appear in Google SERPs within 10-15 minutes of publication? Is there a way to quantify what the difference would be if we added a sitemap?
Intermediate & Advanced SEO | BostonWright
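If it helps win over the development team, a sitemap generator can be very small. Below is a hedged sketch of the sitemaps.org format with placeholder URLs; a real version would pull the URLs and lastmod dates from the CMS.

```python
# Hedged sketch of a minimal XML sitemap (https://www.sitemaps.org/protocol.html).
# The URLs are placeholders; a real generator would pull them from the CMS.
from datetime import datetime, timezone

def build_sitemap(urls):
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    entries = "\n".join(
        f"  <url><loc>{u}</loc><lastmod>{now}</lastmod></url>" for u in urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

print(build_sitemap([
    "https://news.example.com/story-one",
    "https://news.example.com/story-two",
]))
```
-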
Crawl errors in GWT!
I have been seeing a large number of access denied and not found crawl errors. I have since fixed the issues causing these errors; however, I am still seeing them in Webmaster Tools. At first I thought the data was outdated, but the data is tracked on a daily basis! Does anyone have experience with this? Does GWT really re-crawl all those pages/links every day to see if the errors still exist? Thanks in advance for any help/advice.
Intermediate & Advanced SEO | inhouseseo
-
How to stop Google crawling after 301 redirect?
I have removed all pages from my old website and set up 301 redirects to the new website. I have also verified the old website with Google Webmaster Tools' HTML verification file, which lets me track data and the presence of its pages in Google search. I assumed that Google would stop crawling and de-index all the old pages after the 301 redirects, because I set them up three months ago. But now I can still see Googlebot activity on my old website in Google Webmaster Tools. You can see the attached images (First & Second) to learn more about it. How is this possible, and how can Google still crawl removed pages?
Intermediate & Advanced SEO | CommercePundit
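Google does keep re-crawling redirected URLs for a while, so one useful check is that every old URL really answers with a 301 and the expected Location header. A rough sketch follows; the URL pairs are placeholders and the requests package is assumed to be installed.

```python
# Rough sketch: confirm the old URLs really answer 301 with the expected target.
# The URL pairs are placeholders; assumes the `requests` package is installed.
import requests

old_to_new = {
    "http://old-site.example/product-1": "https://new-site.example/product-1",
    "http://old-site.example/about":     "https://new-site.example/about",
}

for old_url, expected in old_to_new.items():
    r = requests.get(old_url, allow_redirects=False, timeout=10)
    location = r.headers.get("Location")
    status = "OK" if r.status_code == 301 and location == expected else "CHECK"
    print(f"{old_url} -> {r.status_code} {location} [{status}]")
```
-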
How to prevent Google from crawling our product filter?
Hi all, we have a crawler problem on one of our sites, www.sneakerskoopjeonline.nl. On this site, visitors can specify criteria to filter the available products. These filters are passed as HTTP GET arguments, and the number of possible filter URLs is virtually limitless. In order to prevent duplicate content, or an insane number of pages in the search indices, our software automatically adds noindex, nofollow and noarchive directives to these filter result pages. However, we're unable to get crawlers (Google in particular) to ignore these URLs. We've already changed the on-page filter HTML to JavaScript, hoping this would cause the crawler to ignore it. However, it seems that Googlebot executes the JavaScript and crawls the generated URLs anyway. What can we do to prevent Google from crawling all the filter options? Thanks in advance for the help. Kind regards, Gerwin
Intermediate & Advanced SEO | footsteps
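Since Googlebot has to crawl a page before it can see a noindex directive, keeping it away from the filter URLs usually comes down to robots.txt Disallow patterns (or the URL parameter settings in Webmaster Tools). Below is a rough sketch that mimics wildcard Disallow matching so you can sanity-check which URLs a pattern would block; the patterns and parameter names are invented for illustration, not taken from the actual site.

```python
# Rough sketch: mimic Googlebot-style wildcard Disallow rules to sanity-check which
# filter URLs a robots.txt would block. The patterns and parameter names below are
# illustrative assumptions, not taken from the actual site.
import re

disallow_patterns = [
    "/*?kleur=",   # hypothetical colour filter parameter
    "/*?maat=",    # hypothetical size filter parameter
]

def blocked(path: str) -> bool:
    """True if the URL path matches any Disallow pattern (* matches any characters)."""
    for pattern in disallow_patterns:
        regex = "^" + re.escape(pattern).replace(r"\*", ".*")
        if re.match(regex, path):
            return True
    return False

print(blocked("/sneakers?kleur=rood"))   # True  - this filter URL would be blocked
print(blocked("/sneakers"))              # False - the plain category page stays crawlable
```

Keep in mind that a URL blocked in robots.txt can no longer have its noindex directive read, so pick one mechanism per URL pattern.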