Crawl Depth improvements
-
Hi
I'm checking the crawl depth report in Semrush and looking at pages that are 4+ clicks away.
I have a lot of product pages which fall into this category. Does anyone know the impact of this? Will they never be found by Google?
If there is anything in there I want to rank, I'm guessing the course of action is to move the page so it takes fewer clicks to get there?
How important is the crawl budget and depth for SEO? I'm just starting to look into this subject
Thank you
-
Hey Becky,
Those pages will be found by Google as long as you have links pointing to them somewhere on your site. In terms of crawl budget, the deeper a page sits, the more time Google needs to spend crawling your site to reach it.
However, with proper internal linking you should be able to significantly reduce the number of clicks. So the next step would be adding some links with relevant anchor text. After you do this, watch your analytics and let me know if it had any impact.
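To make "clicks away" concrete: crawl depth is just the shortest-path distance from the homepage over internal links, so you can sanity-check the report's numbers yourself with a breadth-first search. A minimal sketch in Python, using a made-up link graph (all URLs here are hypothetical):

```python
from collections import deque

def click_depth(links, start="/"):
    """Compute each page's click depth (shortest number of clicks
    from the homepage) via breadth-first search over internal links."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depth:  # first visit = shortest path in BFS
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

# Hypothetical site: a product page buried 4 clicks deep...
links = {
    "/": ["/category"],
    "/category": ["/subcategory"],
    "/subcategory": ["/listing"],
    "/listing": ["/product-42"],
}
print(click_depth(links)["/product-42"])  # 4

# ...and the same site after adding one internal link from the homepage.
links["/"].append("/product-42")
print(click_depth(links)["/product-42"])  # 1
```

This is also why a single contextual link from a shallow, frequently crawled page can collapse the depth of a whole cluster of product pages at once.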
Hope it helps. Cheers, Martin
Related Questions
-
Crawl Stats Decline After Site Launch (Pages Crawled Per Day, KB Downloaded Per Day)
Hi all, I have been looking into this for about a month and haven't been able to figure out what is going on with this situation. We recently did a website re-design and moved from a separate mobile site to responsive. After the launch, I immediately noticed a decline in pages crawled per day and KB downloaded per day in the crawl stats. I expected the opposite to happen, as I figured Google would be crawling more pages for a while to figure out the new site. There was also an increase in time spent downloading a page. This has gone back down, but the pages crawled have never gone back up.

Some notes about the re-design:

- URLs did not change
- Mobile URLs were redirected
- Images were moved from a subdomain (images.sitename.com) to Amazon S3
- Immediate decline in both organic and paid traffic (roughly 20-30% for each channel)

I have not been able to find any glaring issues in Search Console: indexation looks good, and there is no spike in 404s or mobile usability issues. Just wondering if anyone has an idea or insight into what caused the drop in pages crawled? Here is the robots.txt, and I'm attaching a photo of the crawl stats.

```
User-agent: ShopWiki
Disallow: /

User-agent: deepcrawl
Disallow: /

User-agent: Speedy
Disallow: /

User-agent: SLI_Systems_Indexer
Disallow: /

User-agent: Yandex
Disallow: /

User-agent: MJ12bot
Disallow: /

User-agent: BrightEdge Crawler/1.0 (crawler@brightedge.com)
Disallow: /

User-agent: *
Crawl-delay: 5
Disallow: /cart/
Disallow: /compare/
```

[fSAOL0](https://ibb.co/fSAOL0)
Intermediate & Advanced SEO | BandG
-
Will a disclaimer affect crawling?
Hello everyone! My German users will have to get a disclaimer according to German law. Now my questions are the following: Will a disclaimer affect crawling? What's the best practice regarding this? Is there anything I should take special care with? What's the best disclaimer technique? A plain HTML page? Something overlapping the site? Thank you all!
Intermediate & Advanced SEO | NelsonF
-
Duplicate site (disaster recovery) being crawled and creating two indexed search results
I have a primary domain, toptable.co.uk, and a disaster recovery site for this primary domain named uk-www.gtm.opentable.com. In the event of a disaster, toptable.co.uk would get CNAMEd (DNS-aliased) to the .gtm site. Naturally, the .gtm disaster recovery domain is an exact match to the toptable.co.uk domain. Unfortunately, Google has crawled the uk-www.gtm.opentable site, and it's showing up in search results. In most cases the gtm URLs don't get redirected to toptable; they actually appear as an entirely separate domain to the user. The strong feeling is that this duplicate content is hurting toptable.co.uk, especially as .gtm.ot is part of the .opentable.com domain, which has significant authority. So we need a way of stopping Google from crawling gtm. There seem to be two potential fixes. Which is best for this case? 1) Use robots.txt to block Google from crawling the .gtm site, or 2) canonicalize the gtm URLs to toptable.co.uk. In general Google seems to recommend a canonical change, but in this special case it seems a robots.txt change could be best. Thanks in advance to the SEOmoz community!
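For reference, the two options described above would look roughly like this (the specific path in the canonical example is illustrative, not taken from the actual sites):

```
# Option 1: robots.txt served at uk-www.gtm.opentable.com/robots.txt
User-agent: *
Disallow: /
```

```
<!-- Option 2: on each page of the .gtm mirror, a canonical tag
     pointing at the matching toptable.co.uk URL (path illustrative) -->
<link rel="canonical" href="http://www.toptable.co.uk/london-restaurants" />
```

One caveat worth keeping in mind: these two options are mutually exclusive. A canonical tag only works if Google can crawl the page to see it, so blocking the .gtm site in robots.txt would also hide the canonical tags; and robots.txt by itself blocks crawling but does not remove URLs that are already indexed.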
Intermediate & Advanced SEO | OpenTable
-
Depth of Links on Ecommerce Site
Hi, In my sitemap, I have the preferred entrance pages and URLs of categories and subcategories. But I would like to know more about how Googlebot and other spiders see a site - e.g. what is classed as a deep link? I am using the Screaming Frog SEO Spider, and it has a metric called level - this represents how deep, or how many clicks away, the content is. But I don't know if that is how Googlebot would see it. From what the Screaming Frog SEO Spider software says, each move horizontally across from the navigation is another level, which visually doesn't make sense to me. Also, in my sitemap, I list the URLs of all the products; there are no levels within the sitemap. Should I be concerned about this? Thanks, B
Intermediate & Advanced SEO | bjs2010
-
Can the experts out here review our site for improved performance and suggestions
Hi - we started off with an auto site based in India 15 months back. The site is www.mycarhelpline.com, generating close to 2,500 visits daily. We aim to scale to new heights and touch at least 10,000 daily visits in the coming 12 months. Can we request your review and recommendations on: link building, site review, loading time, and improvements/suggestions to take it up (leaving aside dynamic URLs)? Though this may seem like an SEO audit and review, any recommendations or suggestions will be highly appreciated.
Intermediate & Advanced SEO | Modi
-
How to Disallow Specific Folders and Sub Folders for Crawling?
Today, I checked the indexing for my website in Google and found a very interesting result. You can check that result in the following Google search result. Google Search Result I am aware of the robots.txt file and could disallow the images folder to solve this issue. But that may block my images from appearing in Google Image Search. So, how can I fix this issue?
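If the goal is to keep a folder (and its sub-folders) out of regular crawling without losing Google Image Search visibility, one commonly suggested robots.txt pattern looks like the following sketch; the folder name is illustrative, not taken from the actual site:

```
# Block all crawlers from the folder and everything under it...
User-agent: *
Disallow: /images/

# ...but give Googlebot-Image its own group. A crawler follows only
# the most specific group that matches it, so Googlebot-Image ignores
# the wildcard Disallow above and can still fetch the images.
User-agent: Googlebot-Image
Allow: /
```

The key detail is that robots.txt groups are not cumulative: once a crawler matches a group addressed to it by name, it ignores the `User-agent: *` group entirely.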
Intermediate & Advanced SEO | CommercePundit
-
Negative impact on crawling after uploading robots.txt file on HTTPS pages
I experienced a negative impact on crawling after uploading a robots.txt file for our HTTPS pages. You can find both URLs as follows. Robots.txt file for HTTP: http://www.vistastores.com/robots.txt Robots.txt file for HTTPS: https://www.vistastores.com/robots.txt I have disallowed all crawlers for HTTPS pages with the following syntax:

```
User-agent: *
Disallow: /
```

Does it matter for that? If I have done anything wrong, give me more ideas to fix this issue.
Intermediate & Advanced SEO | CommercePundit