Recommended log file analysis software for OS X?
-
Due to some questions about direct traffic and Googlebot behavior, I want to do some log file analysis. The catch is that this is a Mac shop, so all our systems run OS X. I have Windows 8 running in an emulator, but for the sake of simplicity I'd rather run all my software on OS X.
This post by Tim Resnik recommended Web Log Explorer, but it's for Windows only. I did discover Sawmill, which claims to run on any platform.
Any other suggestions? Bear in mind that our site is load-balanced across three servers, so please take that into account.
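Lacking a native OS X tool, one platform-independent fallback is a short script. Below is a minimal sketch (Python stdlib only; the sample log lines are made up, in Apache combined format, standing in for files pulled from each of the three load-balanced servers) that merges the per-server logs into one timeline and filters to Googlebot hits:

```python
import re
from datetime import datetime

# Hypothetical sample lines, one list per load-balanced server.
SERVER_LOGS = {
    "web1": ['66.249.66.1 - - [10/Feb/2014:10:15:02 +0000] "GET /about HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"'],
    "web2": ['203.0.113.7 - - [10/Feb/2014:10:14:59 +0000] "GET / HTTP/1.1" 200 1024 "-" "Mozilla/5.0"'],
    "web3": ['66.249.66.1 - - [10/Feb/2014:10:16:30 +0000] "GET /products HTTP/1.1" 404 220 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"'],
}

# Apache "combined" log format: ip ident user [time] "request" status bytes "referer" "agent"
LINE_RE = re.compile(
    r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"'
)

def parse(line):
    m = LINE_RE.match(line)
    if not m:
        return None
    ip, ts, method, path, status, agent = m.groups()
    return {
        "ip": ip,
        "time": datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z"),
        "method": method,
        "path": path,
        "status": int(status),
        "agent": agent,
    }

# Merge all servers' entries into one timeline, then filter to Googlebot.
entries = sorted(
    (e for lines in SERVER_LOGS.values() for e in map(parse, lines) if e),
    key=lambda e: e["time"],
)
googlebot = [e for e in entries if "Googlebot" in e["agent"]]
for e in googlebot:
    print(e["time"].isoformat(), e["status"], e["path"])
```

In practice you would read each server's rotated log files instead of the inline samples; the merge-then-filter shape stays the same.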
-
Hello! I just came across your question. Here is a recent Moz post of mine on how to use log files for technical SEO; the example screenshots are from the SEO dashboard in Logz.io. Feel free to sign up at our site and test our public beta to see if it works for you.
Disclosure: Yes, I work for the company.
-
Hiya ufmedia,
Funny you bring this up; I just spent a few hours pulling my hair out trying to get a log parser to work on my Mac. Ultimately, I went back to my PC and ran Web Log Explorer. I have heard good things about Splunk, but it seems too robust for SEO purposes alone. Their free version allows for 500 MB per day; if you are under that, it might be worth giving it a go. Sawmill looks like it could do the trick, but may require a decent amount of setup. Thanks for the tip! I will check it out.
Thanks!
Tim
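One more angle on the original Googlebot concern: Google's documented way to confirm that a hit really came from Googlebot is a reverse-DNS lookup on the IP followed by a forward-DNS confirmation. A small sketch of that check (the hostname samples are made up; the networked path is shown but not exercised here):

```python
import socket

def looks_like_googlebot(hostname):
    """True if a reverse-DNS hostname falls under Google's crawler domains."""
    return hostname.endswith(".googlebot.com") or hostname.endswith(".google.com")

def verify_googlebot(ip):
    """Reverse DNS, then forward-confirm, per Google's documented procedure.
    Requires network access, so it is not exercised in the samples below."""
    host, _, _ = socket.gethostbyaddr(ip)
    if not looks_like_googlebot(host):
        return False
    return ip in socket.gethostbyname_ex(host)[2]

# Offline samples: hostnames as a reverse-DNS lookup might return them.
print(looks_like_googlebot("crawl-66-249-66-1.googlebot.com"))  # genuine pattern
print(looks_like_googlebot("fake-googlebot.example.com"))       # spoofed user agent
```

This matters for log analysis because anyone can send a "Googlebot" user-agent string; only the DNS round trip separates the real crawler from scrapers.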
Related Questions
-
Three versions of English pages: EN-US, EN-GB, and EN as x-default
We want to address the search markets for the USA and the UK, so all English pages have small regional variations with similar content. For some time now (since a relaunch), Google has had problems identifying the right page (/en-gb/) for the right search market (UK), although we have used hreflang and sitemaps from the beginning. We monitor this in Moz for our UK campaign (/en-gb/ pages) via jumps in the ranking of individual keywords (>-50 and >+50). A -50 does not mean that our website's ranking is lost; in those cases Google substitutes the /en/ variant for the ranking of the /en-gb/ page. One example: https://www.openmind-tech.com/en-gb/industries/cam-software-for-motor-sports/ lost its ranking, and the other language variant, https://www.openmind-tech.com/en/industries/cam-software-for-motorsport/, is ranking at position 2. At the moment I have no idea what we can change in our HTML code.
Technical SEO | | PeterGolze
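One thing worth auditing for the question above: an hreflang cluster must be fully reciprocal and self-referencing, or Google may ignore the hints and pick whichever variant it prefers. A hedged sketch of that check (the annotation sets below are hypothetical stand-ins for what you would scrape from each page's <head>; only the two URLs come from the question):

```python
EN_GB = "https://www.openmind-tech.com/en-gb/industries/cam-software-for-motor-sports/"
EN = "https://www.openmind-tech.com/en/industries/cam-software-for-motorsport/"

# Hypothetical per-page annotation maps: hreflang value -> target URL.
annotations = {
    EN_GB: {"en-gb": EN_GB, "en": EN, "x-default": EN},  # complete
    EN: {"en": EN, "x-default": EN},                     # missing en-gb -> broken cluster
}

def reciprocity_errors(annotations):
    """Every page in the cluster must reference itself and every other page."""
    errors = []
    for url, tags in annotations.items():
        if url not in tags.values():
            errors.append(f"{url}: no self-referencing hreflang")
        for other in annotations:
            if other != url and other not in tags.values():
                errors.append(f"{url}: does not reference {other}")
    return errors

for problem in reciprocity_errors(annotations):
    print(problem)
```

Given that the two slugs differ (motor-sports vs. motorsport), a one-sided or stale annotation like the simulated one above is a plausible cause of Google swapping variants.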
Several items in the Organization schema structured data
Hi Moz community! Could you please help me with an issue? I have implemented Organization schema on the website, but given the structure I could not mark up the data just once, so now I have two items of Organization schema on one page. The questions are: 1. Does Google consider both of them? 2. Is it OK to have several items of one schema type on a page? Thank you
Technical SEO | | juicefromtheraw
Any recommendations for specialist Magento dedicated hosting in the UK/Ireland?
Anyone with experience of a specialist Magento dedicated host in the UK/Ireland? Just getting a few quotes and looking for recommendations (and any I should avoid). Thanks
Technical SEO | | PaddyDisplays
Website content has been scraped - recommended action
So whilst searching for link opportunities, I found a website that has scraped content from one of our websites. The website looks pretty low quality and doesn't link back. What would be the recommended course of action?
1. Email them and ask for a link back. I've got a feeling this might not be the best idea; the website does not have much authority (yet) and a link might look a bit dodgy considering the duplicate content.
2. Ask them to remove the content. It is duplicate content and could hurt our website.
3. Do nothing. I don't think our website will get penalised for it, since it was here first and is on the better-quality website.
4. Possibly report them to Google for scraping?
What do you guys think?
Technical SEO | | maxweb
Where Is This Being Appended to Our Page File Names?
I have worked over the last several months to eliminate duplicate page titles on our site. Below is one situation that I need your advice on. Google Webmaster Tools is reporting several of our pages with duplicate titles, such as this one:
This is a valid page at our Web store: http://www.audiobooksonline.com/159179126X.html
This is an invalid page that Google says is a duplicate of the one above: http://www.audiobooksonline.com/159179126X.html?gdftrk=gdfV2138_a_7c177_a_7c432_a_7c9781591791263
Where might the code ?gdftrk=.... be coming from? How can we get rid of it?
Technical SEO | | lbohen
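For the ?gdftrk= question above: that parameter is likely appended by a product-feed or tracking service rather than by the store itself (an assumption worth verifying against any outbound feeds). Until the source is found, one defensive option is a rel=canonical pointing at the parameter-free URL. A sketch of deriving that canonical target with Python's stdlib (the set of parameters to strip is an assumption to adjust):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"gdftrk"}  # extend with utm_* etc. as needed

def canonical_url(url):
    """Drop known tracking parameters, keep everything else intact."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

dirty = "http://www.audiobooksonline.com/159179126X.html?gdftrk=gdfV2138_a_7c177_a_7c432_a_7c9781591791263"
print(canonical_url(dirty))
# http://www.audiobooksonline.com/159179126X.html
```

Emitting that value in a <link rel="canonical"> tag on every product page tells Google which version to index regardless of how visitors (or feeds) arrive.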
Oh no, Googlebot cannot access my robots.txt file
I just received an error message from Google Webmaster Tools. I wonder if it has something to do with the Yoast plugin. Could somebody help me with troubleshooting this? Here's the original message:
Over the last 24 hours, Googlebot encountered 189 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%.
Recommended action
If the site error rate is 100%:
Using a web browser, attempt to access http://www.soobumimphotography.com//robots.txt. If you are able to access it from your browser, then your site may be configured to deny access to googlebot. Check the configuration of your firewall and site to ensure that you are not denying access to googlebot.
If your robots.txt is a static page, verify that your web service has proper permissions to access the file.
If your robots.txt is dynamically generated, verify that the scripts that generate the robots.txt are properly configured and have permission to run. Check the logs for your website to see if your scripts are failing, and if so attempt to diagnose the cause of the failure.
If the site error rate is less than 100%:
Using Webmaster Tools, find a day with a high error rate and examine the logs for your web server for that day. Look for errors accessing robots.txt in the logs for that day and fix the causes of those errors.
The most likely explanation is that your site is overloaded. Contact your hosting provider and discuss reconfiguring your web server or adding more resources to your website.
After you think you've fixed the problem, use Fetch as Google to fetch http://www.soobumimphotography.com//robots.txt to verify that Googlebot can properly access your site.
Technical SEO | | BistosAmerica
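For the robots.txt question above, a minimal offline sketch using Python's stdlib parser: feed rules straight into RobotFileParser to confirm what they allow Googlebot to fetch. The rules below are hypothetical placeholders, not the site's actual file; checking the live file would use RobotFileParser.set_url() plus .read(), which needs network access.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules: block everyone from /wp-admin/, but give Googlebot
# its own group with no restrictions (an empty Disallow allows everything).
robots_txt = """\
User-agent: *
Disallow: /wp-admin/

User-agent: Googlebot
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The Googlebot-specific group overrides the wildcard group for Googlebot.
print(rp.can_fetch("Googlebot", "http://www.soobumimphotography.com/wp-admin/"))
print(rp.can_fetch("SomeOtherBot", "http://www.soobumimphotography.com/wp-admin/"))
```

Note that a 100% error rate like the one reported usually means the file is unreachable (firewall, permissions, or a plugin serving it dynamically) rather than the rules themselves, so the browser and Fetch-as-Google checks in the message come first.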
I am using SEOmoz pro software and my blog tags are bringing up 404 errors.
After checking, they do bring back a 404 page, so I am wondering what to do. Do I remove all the blog tags? We use a Drupal CMS.
Technical SEO | | AITLtd
Recommended WordPress Plugins
Hey SEOmoz'ers, what plugins would you recommend for WordPress when starting a new blog?
Technical SEO | | asimahme