What is a good crawl budget?
-
Hi Community!
I am in the process of updating sitemaps and am trying to find a benchmark for what is considered a "strong" crawl budget. All of the documentation I've found covers how to improve crawl budget or what to watch out for; however, I'm looking for a target figure to aim for (e.g., 60% of the sitemap has been crawled, 100%, etc.).
-
@blueprintmarketing I have a large website with WordPress image folders going back to 2009.
I am currently redesigning my website and am trying to determine whether there is any benefit to shrinking or deleting the images and image folders I am no longer using.
I really do not have time to go through all of those image folders to see which images are still in use and which are not, so I am hoping this does not matter.
Does anyone here know if this matters when it comes to Google's Crawl Budget?
All of the images are fully optimized and compressed. My question is whether it would be worth the time investment to go through every single folder and thousands of images to delete the ones that are no longer referenced on any of my pages (a rough sketch of how that check might be scripted is below).
Does anyone have a definitive answer regarding Crawl Budget?
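In case it helps frame the problem: the check could in principle be scripted rather than done by hand. This is only a rough sketch under stated assumptions; the page list and uploads path are hypothetical, and filename matching would need adapting for WordPress's resized variants.

```python
import re
import requests
from pathlib import Path

# Hypothetical inputs: your page URLs and the WordPress uploads directory.
PAGE_URLS = ["https://example.com/", "https://example.com/about/"]
UPLOADS_DIR = Path("/var/www/html/wp-content/uploads")

# Collect every image filename referenced in the pages' HTML.
referenced = set()
for url in PAGE_URLS:
    html = requests.get(url, timeout=10).text
    for src in re.findall(r'<img[^>]+src=["\']([^"\']+)', html):
        referenced.add(src.rsplit("/", 1)[-1])  # compare by filename only

# Report files on disk that no crawled page references.
# Caveat: WordPress also writes resized variants (photo-300x200.jpg), so a
# real pass should normalize those names before comparing.
for path in UPLOADS_DIR.rglob("*"):
    if path.suffix.lower() in {".jpg", ".jpeg", ".png", ".gif", ".webp"}:
        if path.name not in referenced:
            print("unreferenced:", path)
```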
-
Can you give some input on the site https://indiapincodes.net/? I have tried all the recommendations, but only 30% of the URLs have been indexed. I would appreciate your time.
-
@yaelslater
Unless you have a huge site, and by that I mean half a million to a million pages, I would not worry about true Google crawl budget anymore. However, if only 60% of the URLs in your XML sitemap are being indexed, make sure they are actually indexable URLs. If they're not, you should be able to see why in the Coverage section of Search Console; it will give you a reason why a URL that was submitted in an XML sitemap was not indexed, such as being marked noindex.
A recent study showed that about 20% of URLs across all the websites in the study were not indexed for one reason or another. But make sure your XML sitemap contains only URLs that return a 200 status code: no 301 or 302 redirects, no 404s, and no noindex or nofollow URLs, because Google will not put those into the index. If Search Console does not tell you the issue and you would like to share your domain with me, I'm sure I could figure it out.
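If you want to audit the sitemap yourself, here is a minimal sketch in Python (assuming the third-party requests library; SITEMAP_URL is a placeholder, and the meta-robots check is deliberately crude) that flags anything that isn't a clean, indexable 200:

```python
import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder; use your own sitemap
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(sitemap_url):
    """Fetch a sitemap and return the URLs it lists (does not recurse into sitemap indexes)."""
    resp = requests.get(sitemap_url, timeout=10)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

def check(url):
    """Return a list of reasons this URL does not belong in a sitemap."""
    resp = requests.get(url, timeout=10, allow_redirects=False)
    problems = []
    if resp.status_code != 200:  # catches 301/302 redirects, 404s, etc.
        problems.append(f"status {resp.status_code}")
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        problems.append("noindex in X-Robots-Tag header")
    # Crude meta-robots check; a real audit should parse the HTML properly.
    if 'name="robots"' in resp.text and "noindex" in resp.text.lower():
        problems.append("possible noindex meta tag")
    return problems

for url in sitemap_urls(SITEMAP_URL):
    issues = check(url)
    if issues:
        print(url, "->", "; ".join(issues))
```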
I don't know if you're using a CDN; if you could share a little more with me, especially the domain, I could be a lot more helpful.
You could also use a tool like Screaming Frog to generate a new sitemap and make sure that is not the issue. If you're using Yoast, you can toggle its sitemap feature off and on to regenerate the sitemap.
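If you'd rather script a quick one, a sitemap is just XML. A minimal sketch (the URL list is hypothetical; in practice you'd feed in your crawl output):

```python
from xml.sax.saxutils import escape

# Hypothetical URL list; in practice, feed in your crawl results.
urls = [
    "https://example.com/",
    "https://example.com/about/",
    "https://example.com/blog/crawl-budget/",
]

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
    for url in urls:
        f.write(f"  <url><loc>{escape(url)}</loc></url>\n")
    f.write("</urlset>\n")
```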
You can generate a sitemap of up to 500 pages for free using the Screaming Frog SEO Spider; beyond that it is paid: https://www.screamingfrog.co.uk/xml-sitemap-generator/
Or, if you want to generate over 1,000 URLs for free online, I would recommend https://www.sureoak.com/seo-tools/google-xml-sitemap-generator
However, please keep in mind that the sureoak site also offers things like a "keyword density checker," which makes me question the advice it gives out, because keyword density is not something Google considers unless you repeat the same word for nearly every word in the document. Keyword density is one of those things that simply isn't real.
But the XML sitemap generator works just fine. I hope this was of help,
Tom
Related Questions
-
Any crawl issues with TLS 1.3?
Not a techie here... maybe this is to be expected, but ever since one of my client sites switched to TLS 1.3, I've had a couple of crawl issues and other hiccups. First, I noticed that I can't use HTTPSTATUS.io anymore; it renders an error message for URLs on the site in question. I wrote to their support desk and they said they haven't updated to 1.3 yet. Bummer, because I loved httpstatus.io's functionality, especially getting bulk reports. Also, my Moz campaign crawls were failing. We are setting up a robots.txt directive to allow rogerbot (and the other bot) and will see if that works. These failures are consistent with the date we switched to 1.3, and some testing confirmed it. Is anyone else seeing these types of issues, and can you suggest any workarounds, solutions, or hacks to make my life easier? (Including an alternative to httpstatus.io... I have and use Screaming Frog; it's not as slick, I'm afraid!) Do you think there was a configuration error with the client's TLS 1.3 upgrade, or maybe they're using a problematic/older implementation of 1.3? Thanks
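For reference, a directive along those lines might look like the following in robots.txt. This is only a sketch: rogerbot is Moz's campaign crawler and dotbot is its link-index crawler, but confirm the user-agent names against Moz's current documentation.

```
User-agent: rogerbot
Allow: /

User-agent: dotbot
Allow: /
```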
Technical SEO | TimDickey
-
Will putting a one-page site up for all other countries stop Googlebot from crawling my UK website?
I have a client that only wants UK users to be able to purchase from the UK site. Currently, customers from the US and other countries are purchasing from the UK site. They want a single webpage displayed to any user outside the UK who tries to access the UK site. This is fine, but what impact would it have on Googlebot trying to crawl the UK website? I have scoured the web for an answer but can't find one. Any help would be greatly appreciated. Thanks 🙂
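One general pattern worth noting here (not specific to this setup): Googlebot crawls mostly from US IP addresses, so a blanket geo-block on non-UK visitors can lock Googlebot out of the UK site. A common safeguard is to verify Googlebot by reverse DNS before applying the geo gate. A minimal sketch; the hostname check follows Google's published verification method, while the geo lookup and gating function are hypothetical stand-ins for whatever your stack uses:

```python
import socket

def is_verified_googlebot(ip):
    """Google's published method: reverse DNS, check the domain, then forward-confirm."""
    try:
        host = socket.gethostbyaddr(ip)[0]
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # The hostname must resolve back to the same IP to rule out spoofing.
        return ip in {info[4][0] for info in socket.getaddrinfo(host, None)}
    except OSError:
        return False

def should_geo_block(ip, country):
    """Hypothetical gate: 'country' would come from your GeoIP lookup."""
    if is_verified_googlebot(ip):
        return False  # never geo-block verified Googlebot
    return country != "GB"
```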
Technical SEO | lbagley
-
Googlebot takes 5 times longer to crawl each page
Hello All, From about mid-September, my GWMT has shown that the average time to crawl a page on my site has shot up from around 130ms to an average of 700ms, with peaks at 4,000ms. I have checked my server error logs and found nothing there, and I have checked with the hosting company and there are no issues with the server or other sites on the same server. Two weeks after this, my rankings fell by about 950 places for most of my keywords. I am really just trying to eliminate this as a possible cause of the ranking drops. Or was it the Panda/EMD algorithm that did it? Many thanks, Si
Technical SEO | spes123
-
HTTP vs. HTTPS and Google crawling and indexing?
Is it true that HTTPS pages are not crawled and indexed by Google and other search engines as effectively as HTTP pages?
Technical SEO | sherohass
-
Is there a reason to set a crawl-delay in the robots.txt?
I've recently encountered a site that has a crawl-delay command set in its robots.txt file. I've never seen a need for this, since you can set the crawl rate for Googlebot in Google Webmaster Tools. They have this command set for all crawlers, which seems odd to me. What are some reasons someone would want to set it like that? I can't find any good information on it when researching.
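For reference, the directive in question looks like this (note that Googlebot ignores crawl-delay entirely, which is why Webmaster Tools is the right lever for Google; some other crawlers, such as Bing's, have honored it):

```
User-agent: *
Crawl-delay: 10
```

The value is the number of seconds a crawler should wait between requests, so even a small number can throttle crawling of a large site dramatically.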
Technical SEO | MichaelWeisbaum
-
Getting a massive number of duplicate crawl errors
I'm getting over 400 duplicate-content crawl errors that look like this: http://www.mydomain.com/index.php?task=login&prevpage=http%3A%2F%2Fwww.mydomain.com%2Ftag%2Fmahjon http://www.mydomain.com/index.php?task=login&prevpage=http%3A%2F%2Fwww.mydomain.com%2Findex.php%3F etc. So there seems to be something with my login script that is not working. Does anyone know how to fix this? Thanks
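One common fix, assuming those login URLs have no search value, is to keep crawlers out of them via robots.txt (the pattern below matches by prefix, so it also covers all the prevpage variants):

```
User-agent: *
Disallow: /index.php?task=login
```

A noindex meta tag or a rel=canonical on the login page itself are alternatives if you need those URLs to stay crawlable.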
Technical SEO | stanken
-
424 Crawl Notices Found - Most of these notices are 301 redirects for our blog. Are notices something that would keep me from ranking well for my keywords?
212 are rel=canonical and 176 are 301 permanent redirects. An example of the redirects is a change I made to the /trackback 302 status on my blog, like: http://www.bluesunproperties.com/2012-spring-biker-rally-thunder-beach/trackback/ Are these crawl notices something I should spend resources on, or should I focus more on my errors and warnings?
Technical SEO | classa
-
Crawling image folders / crawl allowance
We recently removed /img and /imgp from our robots.txt file, thus allowing Googlebot to crawl our image folders. Not sure why we had these blocked in the first place, but we opened them up in response to an email from Google Product Search about not being able to crawl images, which can hurt (and has hurt) our traffic from Google Shopping. My question is: will allowing Google to crawl our image files eat up our 'crawl allowance'? We wouldn't want Google to skip crawling or indexing certain pages, and ding our organic traffic, because more of our allotted crawl bandwidth is getting chewed up crawling image files. Outside of the non-detailed crawl stat graphs in Webmaster Tools, what's the best way to check how frequently and deeply our site is getting crawled? Thanks all!
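On the measurement question: beyond the Webmaster Tools graphs, one approach is to parse your server access logs for Googlebot hits. A minimal sketch, assuming a common-log-format file at a hypothetical path (and note that anyone can spoof the user agent; verify via reverse DNS if precision matters):

```python
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path; adjust for your server

hits_per_day = Counter()
hits_per_path = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
    for line in f:
        if "Googlebot" not in line:
            continue
        # Common log format: IP - - [day/month/year:time ...] "METHOD /path HTTP/1.1" ...
        day = re.search(r"\[([^:]+):", line)
        path = re.search(r'"[A-Z]+ (\S+)', line)
        if day:
            hits_per_day[day.group(1)] += 1
        if path:
            hits_per_path[path.group(1)] += 1

print("Googlebot hits per day:")
for day, count in sorted(hits_per_day.items()):
    print(f"  {day}: {count}")

print("Top crawled paths:")
for p, count in hits_per_path.most_common(10):
    print(f"  {count:6d}  {p}")
```

Comparing how many hits land on image folders versus product and content pages gives a direct read on where crawl attention is actually going.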
Technical SEO | evoNick