Lumberjack – lumberjack.nginx (version 0.1.0)

As I posted last time, lumberjack is the start of my log line analyzer/visualizer project in Clojure. This write-up will cover the lumberjack.nginx namespace as of version 0.1.0.

As this is version 0.1.0, and to get it out the door, I am parsing Nginx log lines in the following format, since all the log lines I have needed to parse match it: - - [18/Mar/2013:15:20:10 -0500] "PUT /logon" 404 1178 "" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"

The function nginx-logs takes a sequence of Nginx log filenames and converts each log line into a hash, calling process-logfile on each file.

(defn nginx-logs [filenames]
  (mapcat process-logfile filenames))
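To see why mapcat (rather than map) fits here, a minimal sketch with a stand-in for process-logfile: each file yields a sequence of hashes, and mapcat flattens the per-file sequences into one.

```clojure
;; fake-process-logfile is a stand-in for the real function,
;; returning two hashes per "file":
(defn fake-process-logfile [filename]
  [{:file filename :line 1} {:file filename :line 2}])

(mapcat fake-process-logfile ["a.log" "b.log"])
;; => ({:file "a.log", :line 1} {:file "a.log", :line 2}
;;     {:file "b.log", :line 1} {:file "b.log", :line 2})
```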

The function process-logfile takes a single filename, reads the lines from the file using slurp, and then maps the function parse-line over each of the lines.

(defn- logfile-lines [filename]
  (string/split-lines (slurp filename)))

(defn process-logfile [filename]
  (map parse-line (logfile-lines filename)))

At this point this is sufficient for what I need, but I have created an issue on the Github project to address large log files by reading lines lazily, so the whole log file does not have to reside in memory.
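A sketch of what that lazy version might look like, using line-seq over a reader; parse-fn stands in for parse-line, and doall realizes the results before with-open closes the reader:

```clojure
(require '[clojure.java.io :as io])

;; Hypothetical sketch: line-seq reads the file incrementally, so the
;; raw file contents never need to sit in memory as one big string.
;; doall is needed because the lazy seq must be consumed before the
;; reader is closed.
(defn process-logfile-lazily [parse-fn filename]
  (with-open [rdr (io/reader filename)]
    (doall (map parse-fn (line-seq rdr)))))
```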

The function parse-line holds a regex and matches each line against that pattern. It takes each part of the match and associates it into a hash, using a vector of keywords that name the parts of the log entry. This is done by reducing over the indexed keywords against an empty hash, using each index to look up the corresponding part of match, the result of re-find.

;; the keyword names after :original are reconstructed here to match
;; the regex groups below
(def parts [:original :ip :timestamp :verb :request :status :bytes :referrer])

(defn parse-line [line]
  (let [parsed-line {}
        pattern #"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})? - - \[(.*)\] \"(\w+) ([^\"]*)\" (\d{3}) (\d+) \"([^\"]*)\".*"
        match (re-find pattern line)]
    (reduce (fn [memo [idx part]]
                (assoc memo part (nth match idx)))
            parsed-line (map-indexed vector parts))))

Looking at this again a few days later, I created an issue to pull the definition of pattern out of the let, and even out of the parse-line function. I also want to go back and clean up parsed-line from the let, as it does not need to be declared there; I can just pass the empty hash to the reduce. It was set up that way before I refactored to a reduce, when I was associating keys one at a time against the index of match as I added parts of the log entry.
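A sketch of that refactoring: pattern and parts become top-level defs, and the empty hash goes straight into the reduce. The keyword names after :original are my reconstruction of the regex groups, not necessarily the project's actual names.

```clojure
(def parts [:original :ip :timestamp :verb :request :status :bytes :referrer])

(def line-pattern
  #"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})? - - \[(.*)\] \"(\w+) ([^\"]*)\" (\d{3}) (\d+) \"([^\"]*)\".*")

(defn parse-line [line]
  (let [match (re-find line-pattern line)]
    ;; reduce over the indexed keywords, pulling each group out of
    ;; the match vector; {} is passed directly instead of being
    ;; bound in the let
    (reduce (fn [memo [idx part]]
              (assoc memo part (nth match idx)))
            {}
            (map-indexed vector parts))))
```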

Any comments on this are welcome, and I will be posting details on the other files soon as well.




Lumberjack – Log file parsing and analysis for Clojure

I have just pushed a 0.1.0 version of a new project called Lumberjack. The goal is to be a library of functions to help parse and analyze log files in Clojure.

At work I occasionally have to pull down log files from our Nginx webservers and do some visualization of them. I decided this could be a useful project to play with to help me on my journey with Clojure and Open Source Software.

This library will read in a set of Nginx log files from a sequence and parse them into a structure suitable for analysis. It currently also provides functionality to visualize the data as a set of time series graphs using Incanter, as that is the only Clojure graphing library I have come across so far.

A short list of things I would like to support, which come to mind very quickly and are not at all comprehensive:

  • Update to support use of a BufferedReader for very long log files, so the whole file does not have to reside in memory before parsing, and to take advantage of laziness.
  • The ability to only construct records with a subset of the parsed data, such as request type and timestamp.
  • The ability to parse log lines of different types, e.g. Apache, IIS, or other formats.
  • Additional graphs other than time series, e.g. bar graphs to show number of hits based on IP address.
  • The possibility of using futures, or another concurrency mechanism, to do some of the parsing and transformation of log lines into data structures when working on large log files.

The above are just some of my thoughts on things that might fit well as updates as I start to use this more and flesh out more use cases.

I would love comments on my code, and any other feedback that you may have. This is still early but I wanted to put something out there that might be of some use to others as well.

You can find Lumberjack on my Github account at

Thanks for your comments and support.



Error installing iconv

Today I was trying to install the iconv gem, and was getting this error:

$ gem install iconv
Building native extensions.  This could take a while...
ERROR:  Error installing iconv:
	ERROR: Failed to build gem native extension.

        <rvm_dir>/rubies/ruby-2.0.0-preview2/bin/ruby extconf.rb
checking for rb_enc_get() in ruby/encoding.h... yes
checking for iconv() in iconv.h... no
checking for iconv() in -liconv... no
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers.  Check the mkmf.log file for more details.  You may
need configuration options.

Provided configuration options:

Gem files will remain installed in <rvm_dir>/gems/ruby-2.0.0-preview2/gems/iconv-1.0.2 for inspection.
Results logged to <rvm_dir>/gems/ruby-2.0.0-preview2/gems/iconv-1.0.2/ext/iconv/gem_make.out

After a bit of searching, I found a number of answers suggesting that I would need to reinstall Ruby via RVM, but a StackOverflow question held the solution:
gem install iconv -- --with-iconv-dir=/usr/local/Cellar/libiconv/1.13.1

After that, all was well.




Starting a DFW Erlang User Group

At work we have some Erlang applications for some of our services, and there is potential to move another application to Erlang in the future. With this, a co-worker and I are proud to start an Erlang User Group for the Dallas/Fort Worth Metroplex.

We are coordinating through and the user group can be found at

We have our Kickoff Meeting scheduled for 6:30 pm on February 7th hosted at Cooper’s BBQ (map) in the Fort Worth Stockyards District, with the food to be sponsored by

Our agenda is to do introductions, learn how people are using Erlang, and come up with an initial plan for how we want the User Group to function, so that everyone gets the most value from sharing their time.

If you have been using Erlang, or are just interested in learning more about it, please join us at our Kickoff Meeting, and if you know anyone else who would be interested, please pass this along to them.




Log File parsing with Futures in Clojure

As the follow-up to my post Running Clojure shell scripts in *nix environments, here is how I implemented an example using futures to parse lines read from standard in, as if the input were piped from a tail, writing the result of parsing each line to standard out.

First, since I want to run this as a script from the command line, I add this as the first line of the script:

#!/usr/bin/env lein exec

I will also want to use the join function from the clojure.string namespace.

(use '[clojure.string :only (join)])

When dealing with futures I knew I would need an agent to adapt standard out.

(def out (agent *out*))

I also wanted to separate each line with a newline, so I created a function writeln. The function takes a Java Writer, writes the line followed by a newline, and flushes the writer:

(defn writeln [^java.io.Writer w line]
  (doto w
    (.write (str line "\n"))
    (.flush)))

Next I have my function to analyze the line, and a function that sends the result to the agent via send-off.

(defn analyze-line [line]
  (str line "   " (join "  " (map #(join ":" %) (sort-by val > (frequencies line))))))

(defn process-line [line]
  (send-off out writeln (analyze-line line)))

The analyze-line function is just some sample code that returns a string of the line and the frequencies of each character in it. The process-line function takes a line and uses send-off to dispatch writeln to the out agent with the result of calling analyze-line.
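Pulling analyze-line into a REPL makes the transformation easier to see. This is the same function as above, repeated self-contained with its require:

```clojure
(require '[clojure.string :refer [join]])

(defn analyze-line [line]
  (str line "   " (join "  " (map #(join ":" %) (sort-by val > (frequencies line))))))

;; Each character is paired with its count, highest counts first:
(analyze-line "aab")
;; => "aab   a:2  b:1"
```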

With all of these functions defined, I now just need to loop continuously, reading lines until read-line returns nil, and call process-line for each line inside a future.

(loop []
  (let [line (read-line)]
    (when line
      (future (process-line line))
      (recur))))
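One detail worth noting for a standalone script: the thread pools behind futures and send-off keep the JVM alive after input ends, so the script should finish with shutdown-agents. A sketch, with the loop factored into a hypothetical pump-lines function (the name is mine, not from the project):

```clojure
(def out (agent *out*))

(defn writeln [^java.io.Writer w line]
  (doto w
    (.write (str line "\n"))
    (.flush)))

;; Hypothetical refactor of the loop into a function that takes the
;; agent and a per-line transform, reading from *in* until EOF:
(defn pump-lines [out-agent process-fn]
  (loop []
    (when-let [line (read-line)]
      (future (send-off out-agent writeln (process-fn line)))
      (recur))))

;; In a script you would end with:
;; (pump-lines out analyze-line)
;; (shutdown-agents) ; otherwise the thread pools keep the JVM alive
```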



Running Clojure shell scripts in *nix environments

I was recently trying to create a basic piece of Clojure code to play with “real-time” log file parsing using futures. The longer term goal of the experiment is to be able to tail -f a log file and pipe that into my Clojure log parser as input.

As I wasn’t sure exactly what I would need to be doing, I wanted an easy way to run some code quickly without having to rebuild the jars through Leiningen every time I wanted to try something, in a manner similar to the way I am thinking I will be using it if the experiment succeeds.

I created a file test_input with the following lines:

1 hello
2 test
3 abacus
4 qwerty
5 what
6 dvorak

With this in place, my goal was to be able to run something like cat test_input | parser_concept. After a bit of searching I found the lein-exec plugin for Leiningen, and after very minor setup I was able to start iterating with input piped in from elsewhere.

The first step was to open my profiles.clj file in my ~/.lein directory and make sure lein-exec was specified in my user plugins, like so:

{:user {:plugins [[lein-exec "0.2.1"]
                  ;; other plugins for lein
                  ]}}

With this in place I just put the following line at the top of my script.clj file:

#!/usr/bin/env lein exec

After changing the permissions of the script.clj file to make it executable, I was able to run the following and have my code run against the input:

cat test_input | ./script.clj

I will be posting a follow up entry outlining my next step of experimenting with “processing” each line read in as a future.



Remove first and last lines from file in OS X

Just a quick post to help burn this into longer term memory.

Today I had to check some info in a generated CSV file that had a header and a footer row. I only wanted the records in between, so after cutting the columns I needed, I had to remove the first and last lines of the CSV.

cut <args> my.csv | tail -n +2 | sed '$d'

The tail -n +2 command outputs the input stream/file starting at the second line. The sed '$d' command deletes the last line.
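A quick sanity check of the pipeline, using printf to stand in for the cut output:

```shell
# header and footer are dropped; only the middle rows survive
printf 'header\nrow1\nrow2\nfooter\n' | tail -n +2 | sed '$d'
# row1
# row2
```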
