Logstash configuration tips for Windows – log4net configuration
August 22, 2014

After setting up Logstash on Windows in a basic manner and adding Eventlog handling (see here), you might want to make use of Logstash for your .NET applications.
There are multiple solutions on the internet, including custom appenders. I decided to take a different route, following e.g. this post.
I am using the UdpAppender to efficiently push log entries to Logstash. As UDP is not reliable, I furthermore decided to run one Logstash instance on each application server, which then forwards the log entries to a central Logstash server. In fact, this works much like MTA daemons on Unix systems. The overhead on each machine is small, and the big advantage is that I can use vanilla log4net and Logstash without modifying my .NET applications. A small disadvantage is that I have to take care of Logstash on all of my servers, i.e. make sure that it is running and up-to-date.
The log4net configuration looks like:
<appender name="UdpAppender" type="log4net.Appender.UdpAppender"> <RemoteAddress value="127.0.0.1" /> <RemotePort value="5960" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%date [%thread] %-5level - %property{log4net:HostName} - MyApplication - %message%newline" /> </layout> </appender>
I use a dedicated input which marks all entries as coming from log4net:
udp {
  port => 5960
  codec => plain { charset => "UTF-8" }
  type => "log4net"
}
I use a custom conversion pattern which is then dissected by a grok filter:
filter {
  if [type] == "log4net" {
    grok {
      remove_field => [ "message" ]
      match => { "message" => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} +- %{IPORHOST:tempHost} - %{GREEDYDATA:tempMessage}" }
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "message", "%{tempMessage}" ]
        replace => [ "host", "%{tempHost}" ]
      }
    }
    mutate {
      remove_field => [ "tempMessage" ]
      remove_field => [ "tempHost" ]
    }
  }
}
This rule parses the log entries and fills the fields sourceTimestamp, threadid, loglevel, host, and message.
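Applied to the sample line shown above, the resulting event contains roughly these fields (values are illustrative):

{
  "type"            => "log4net",
  "sourceTimestamp" => "2014-08-22 10:15:23,456",
  "threadid"        => "12",
  "loglevel"        => "ERROR",
  "host"            => "APPSERVER01",
  "message"         => "MyApplication - Something went wrong"
}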
We now have the messages stored on my application servers. For moving the messages to the central Logstash server, I decided to use Redis. It is an efficient database/queueing system. Unfortunately, only an old version can be downloaded from the Redis site.
As data is only stored temporarily in Redis and bleeding-edge features are not required, this version is sufficient for my needs.
The configuration for exporting the data from the application servers is:
output {
  redis {
    key => "logstash-centralized"
    codec => json { charset => "UTF-8" }
    data_type => "list"
    host => "my-central-server"
  }
}
I then have full flexibility for further processing. I can either read directly from Redis or pull the log entries from Redis with my central Logstash instance.
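A minimal input sketch for the central Logstash instance might look like the following; it assumes the same Redis list key as above and that the central instance reads from a Redis running on the same machine (adjust host and key to your setup):

input {
  redis {
    host => "127.0.0.1"
    key => "logstash-centralized"
    data_type => "list"
    codec => json { charset => "UTF-8" }
  }
}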
Updated on February 19, 2015
Rob Thijssen wrote on February 19, 2015 at 12:26 pm:
Thanks! I found your post useful. If you add "(?m)" to the beginning of your grok match message, you will also get the multiline exception stack traces from log4net. E.g.:
match => { message => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} +- %{IPORHOST:tempHost} - %{GREEDYDATA:tempMessage}" }
stephan wrote on February 19, 2015 at 9:04 pm:
Thanks, I just added this!