Exclude status codes in Screaming Frog
-
I have a very large ecommerce site I'm trying to spider using Screaming Frog. The problem is the crawl keeps hanging, even though I have turned off the high memory safeguard under Configuration.
The site has approximately 190,000 pages according to the results of a Google site: command.
- The site architecture is almost completely flat. Limiting the crawl by depth is a possibility, but it would take quite a bit of manual labor, as there are literally hundreds of directories one level below the root.
- There are many, many duplicate pages. I've been able to exclude some of them from being crawled using the exclude configuration parameters (see the example patterns after this question).
- There are thousands of redirects. I haven't been able to exclude those from the spider because they don't have a distinguishing character string in their URLs.
Does anyone know how to exclude files using status codes? I know that would help.
If it helps, the site is kodylighting.com.
Thanks in advance for any guidance you can provide.
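For reference, Screaming Frog's Exclude feature (Configuration > Exclude) takes one regular expression per line and skips any URL that matches before it is requested. A few illustrative patterns; the directory and parameter names here are hypothetical, not taken from the site above:

    http://www.example.com/duplicate-directory/.*
    .*\?sort=.*
    .*&print=true.*

Because excludes are evaluated against URLs the spider has not yet fetched, they cannot key off anything, such as a status code, that is only known after the URL has been crawled.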
-
Thanks for your help. It literally was just the fact that the exclude has to be set up before the crawl begins and cannot be changed during the crawl. Hopefully this changes, because sometimes during a crawl you find things you want to exclude that you didn't know existed beforehand.
-
Are you sure it's just on Mac? Have you tried on PC? Do you have any other rules in Include, or perhaps a conflicting rule in Exclude? Try running a single exclude rule, and also test it on another, smaller site.
Also, from support, if it's failing on all fronts:
- On the Mac version, please make sure you have the most up-to-date version of the OS, which will update Java.
- Please uninstall, then reinstall the spider, ensuring you are using the latest version, and try again.
To be sure - http://www.youtube.com/watch?v=eOQ1DC0CBNs
-
Does the exclude function work on Mac? I have tried every possible way to exclude folders and have not been successful while running an analysis.
-
That's exactly the problem: the redirects are dispersed randomly throughout the site. Although the job's still running, it now appears as though there's almost a 1-to-1 correlation between pages and redirects on the site.
I also heard from Dan Sharp via Twitter. He said, "You can't, as we'd have to crawl a URL to see the status code. You can right click and remove after though!"
Thanks again Michael. Your thoroughness and follow through is appreciated.
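If removing thousands of redirects by hand via right click isn't practical, another option is to export the crawl and filter the redirect rows out of the CSV afterwards. A minimal sketch in Python; the file names and the "Status Code" column header are assumptions and may differ by Screaming Frog version and export type:

    import csv

    # Read a Screaming Frog export and write a copy without 3xx (redirect) rows.
    # "internal_all.csv" and the "Status Code" header are assumptions here and
    # should be adjusted to match the actual export.
    with open("internal_all.csv", newline="", encoding="utf-8") as src, \
         open("internal_no_redirects.csv", "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            code = (row.get("Status Code") or "").strip()
            # Keep rows whose status code is missing, non-numeric, or outside 300-399.
            if not (code.isdigit() and 300 <= int(code) <= 399):
                writer.writerow(row)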
-
Took another look, and also checked the documentation and online resources, and I don't see any way to exclude URLs from the crawl based on response codes. As I see it you would only want to exclude on name or directory anyway, since response codes are likely to be scattered randomly throughout a site and excluding on them would impede a thorough crawl.
-
Thank you Michael.
You're right. I was on a 64-bit machine running a 32-bit version of Java. I updated it and the scan has been running for more than 24 hours now without hanging. So thank you.
If anyone else knows of a way to exclude files using status codes I'd still like to learn about it. So far the scan is showing me 20,000 redirected files which I'd just as soon not inventory.
-
I don't think you can filter out on response codes.
However, first I would ensure you are running the right version of Java if you are on a 64-bit machine. The 32-bit version functions, but you cannot increase the memory allocation, which is why you could be running into problems. Take a look at http://www.screamingfrog.co.uk/seo-spider/user-guide/general/ under Memory.
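As a rough sketch of what that memory change usually involves: on a Windows install the spider's JVM options live in a small config file next to the executable (commonly named ScreamingFrogSEOSpider.l4j.ini, though the exact name and location are an assumption here; the Memory section of the user guide above is the authoritative reference). Raising the -Xmx value in that file increases the heap available to the crawler, and going much beyond 2 GB requires 64-bit Java:

    -Xmx4g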
Related Questions
-
Unsolved: Using NoIndex Tag instead of 410 Gone Code on Discontinued products?
Hello everyone, I am very new to SEO and I wanted to get some input and second opinions on a workaround I am planning to implement on our Shopify store. Any suggestions, thoughts, or insight you have are welcome and appreciated!
For those who aren't aware, Shopify as a platform doesn't allow us to send a 410 Gone code/error under any circumstance. When you delete or archive a product/page, it becomes unavailable on the storefront. Unfortunately, the only thing Shopify natively allows me to do is set up a 301 redirect. So when we are forced to discontinue a product, customers currently get a 404 error when trying to go to that old URL.
My planned workaround is to automatically detect when a product has been discontinued and add the NoIndex meta tag to the product page. The product page will stay up but be unavailable for purchase. I am also adjusting the LD+JSON to list the product's availability as Discontinued instead of InStock/OutOfStock. Then I let the page sit for a few months so that crawlers have a chance to recrawl and remove the page from their indexes. I think that is how that works?
Once 3 or 6 months have passed, I plan on archiving the product, followed by setting up a 301 redirect pointing to our internal search results page. The redirect will send the user to search with a query aimed towards similar products. That should prevent people with open tabs, bookmarks and direct links to that page from receiving a 404 error. I do have Google Search Console set up and integrated with our site, but manually telling Google to remove a page obviously only impacts their index.
Will this work the way I think it will? Will search engines remove the page from their indexes if I add the NoIndex meta tag after they have already been indexed? Is there a better way I should implement this?
P.S. For those wondering why I am not disallowing the page URL in robots.txt: Shopify won't allow me to call collection or product data from within the template that assembles the robots.txt, so I can't automatically add product URLs to the list.
Technical SEO | BakeryTech
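For illustration, the workaround described in this question amounts to two changes on the discontinued product's page. A minimal sketch of what that markup might look like; the product name, price, and currency are placeholders, and the exact way it is injected into a Shopify theme will differ:

    <!-- In the <head> of a discontinued product page: keep the page live but ask engines to drop it -->
    <meta name="robots" content="noindex">

    <!-- Structured data marking the offer as discontinued rather than InStock/OutOfStock -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example discontinued product",
      "offers": {
        "@type": "Offer",
        "availability": "https://schema.org/Discontinued",
        "price": "0.00",
        "priceCurrency": "USD"
      }
    }
    </script>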
-
422 vs 404 Status Codes
We work with an automotive industry platform provider, and whenever a vehicle is removed from inventory, a 404 error is returned. Because inventory moves so quickly, we have a host of 404 errors in Search Console. The fix the platform provider proposed was to return a 422 status code instead of a 404. I'm not familiar with how a 422 may impact our optimization efforts. Is this a good approach, given that there is no scalable way to 301 redirect all of those dead inventory pages?
Technical SEO | AfroSEO
-
Some URLs were not accessible to Googlebot due to an HTTP status error.
Hello, I'm an SEO newbie and some help from the community here would be greatly appreciated. I have submitted the sitemap of my website in Google Webmaster Tools and now I get this warning: "When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted." How do I fix this? What should I do? Many thanks in advance.
Technical SEO | GoldenRanking14
-
Do I have a problem with missing pages in Screaming Frog?
We have category pages, and some of those pages have pagination because we have additional items. Screaming Frog could not find the items that were after page 1. Is this a problem for Google? These item pages are still in the sitemap. I am sure Google can find them to index them, but does it hurt rankings at all?
Technical SEO | EcommerceSite
-
Screaming Frog Content Showing charset=UTF-8
I am running a site through Screaming Frog and many of the pages under "Content" read text/html; charset=UTF-8. Does this harm one's SEO, and what does this really mean? I'm running this site along with its competitors, and the competitors' pages seem very clean, with content reading just text/html. What does one do to change this if it is a negative thing? Thank you
Technical SEO | seoessentials
-
Screaming Frog occurrences and canonicals: what does it all mean?
Bonjourno from Wetherby UK... I've used a package called Screaming Frog to diagnose canonical errors, but can anyone tell me what this means? http://i216.photobucket.com/albums/cc53/zymurgy_bucket/understand-occurances-canonical.jpg Thanks in advance. David
Technical SEO | Nightwing
-
Diagnosing canonical errors: is Screaming Frog reliable?
Morning from sunny & warm Wetherby UK 🙂 On this page http://www.goldsboroughestates.co.uk/how-we-care-for-you/right-to-manage/ Screaming Frog is citing a canonical error, but I'm confused, as this piece of code is in place: <link rel="canonical" href="http://www.goldsboroughestates.co.uk/About/right-to-manage" />
So my question is please: does this page http://www.goldsboroughestates.co.uk/how-we-care-for-you/right-to-manage/ have a canonical error, or is Screaming Frog useless? Other examples where Screaming Frog is picking up canonical errors include:
http://www.goldsboroughestates.co.uk/what-our-customers-say/right-to-manage/
http://www.goldsboroughestates.co.uk/buying-a-home/right-to-manage/
Oh, forgot to say, the preferred version is http://www.goldsboroughestates.co.uk/About/right-to-manage/. Any insights welcome 🙂
Technical SEO | Nightwing
-
How to add a 503 status in IIS 6.0
Hi, our IS department is bringing down our network for maintenance this weekend for 24 hours, and I am worried about the search engine implications. All traffic is being diverted, and the diverted traffic is being sent to another server running IIS 6.0. From all the research I have done, it appears creating a custom 503 error message in IIS 6 is not possible. Source: http://technet.microsoft.com/en-us/library/bb877968.aspx So my question is: does anyone have any suggestions on how to do a proper 503 Service Temporarily Unavailable in IIS 6.0 with a custom error message? Thanks
Technical SEO | Jinx14678