Benchmark

Method

All logging frameworks are configured to write log entries with the level info or higher to a log file. The output of log entries with the levels trace and debug is disabled. After each benchmark run, the created log file is automatically checked for completeness and correctness.

The output format is: Date [Thread] Class.Method(): Message
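As an illustration, this pattern can be reproduced with a few lines of plain Java. The formatter below is a hypothetical stand-in for demonstration purposes, not the actual implementation of any of the tested frameworks:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class LogFormat {

    // Renders a log entry as: Date [Thread] Class.Method(): Message
    static String format(LocalDateTime date, String thread,
                         String className, String method, String message) {
        return date.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"))
                + " [" + thread + "] " + className + "." + method + "(): " + message;
    }

    public static void main(String[] args) {
        // Example: 2018-01-31 12:00:00 [main] com.example.App.run(): Hello World
        System.out.println(format(LocalDateTime.of(2018, 1, 31, 12, 0, 0),
                "main", "com.example.App", "run", "Hello World"));
    }
}
```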

For all logging frameworks, the default synchronous output and, if available, the asynchronous output have been tested. All logging frameworks except JUL offer a mechanism for asynchronous output of log entries: tinylog uses a writing thread, Log4j 1 and Logback use an asynchronous appender, and Log4j 2 its heavily promoted asynchronous loggers. If asynchronous logging is activated, log entries are written to the log file in a buffered manner.
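The writing-thread approach can be sketched in plain Java: the logging call only enqueues the entry, while a background thread drains the queue and appends it to the output. This is a simplified model for illustration, not tinylog's actual implementation; the StringBuilder stands in for the buffered log file:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WritingThread {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final StringBuilder file = new StringBuilder(); // stand-in for the log file
    private volatile boolean running = true;
    private final Thread writer;

    public WritingThread() {
        writer = new Thread(() -> {
            // Drain the queue until shutdown is requested and all entries are written
            while (running || !queue.isEmpty()) {
                String entry = queue.poll();
                if (entry != null) {
                    file.append(entry).append('\n');
                }
            }
        });
        writer.start();
    }

    // The "logging call": cheap, because it only enqueues the entry
    public void log(String entry) {
        queue.add(entry);
    }

    // Flushes remaining entries and returns the written output
    public String close() {
        running = false;
        try {
            writer.join();
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        return file.toString();
    }
}
```

The caller thread returns immediately after enqueueing, which is why asynchronous output is faster for the application, at the cost of entries still sitting in the queue if the JVM crashes.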

Every benchmark has been executed 120 times for each logging framework. A fresh JVM has been set up for each run. The 10 best and the 10 worst runs have been discarded to avoid outliers that could distort the overall result. The final result is the average of the remaining 100 runs.
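The evaluation step described above amounts to a trimmed mean. The following sketch illustrates it; it is not the actual benchmark harness:

```java
import java.util.Arrays;

public class TrimmedMean {

    // Discards the `trim` best and `trim` worst runs and averages the rest
    static double trimmedAverage(long[] runsMillis, int trim) {
        long[] sorted = runsMillis.clone();
        Arrays.sort(sorted);
        long sum = 0;
        for (int i = trim; i < sorted.length - trim; i++) {
            sum += sorted[i];
        }
        return (double) sum / (sorted.length - 2 * trim);
    }
}
```

With 120 runs and `trim = 10`, the result is the average of the remaining 100 runs, as described above.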

Environment

All benchmarks have been executed on an Intel Core i7-4790 (3.60 GHz quad-core CPU) with 16 GB memory on Windows 7 (SP1) with JRE 8u151.

The following logging frameworks have been tested:

- tinylog
- java.util.logging (JUL)
- Log4j 1
- Log4j 2
- Logback

Maximum logging performance

This benchmark measures how fast each logging framework can output log entries. It simply creates log entries with the levels error, warning, info, debug, and trace in a loop, one million times.
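Because the level threshold is info, only three of the five levels actually reach the log file. The following sketch is hypothetical code, not the benchmark itself; it counts the entries that pass such a level filter:

```java
public class LevelFilter {

    enum Level { TRACE, DEBUG, INFO, WARNING, ERROR }

    // Counts how many log entries are actually written when only entries
    // with the given threshold level or higher pass the filter
    static long countWritten(int iterations, Level threshold) {
        long written = 0;
        for (int i = 0; i < iterations; i++) {
            for (Level level : Level.values()) {
                if (level.ordinal() >= threshold.ordinal()) {
                    written++;
                }
            }
        }
        return written;
    }
}
```

For one million loop iterations with the threshold info, three million of the five million created entries end up in the log file.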

Framework                    Synchronous     Asynchronous
tinylog (writing thread)      2 s 483 ms      1 s 256 ms
JUL                           6 s 982 ms          -
Log4j 1 (async appender)      9 s 989 ms      6 s 79 ms
Logback (async appender)     11 s 531 ms      7 s 496 ms
Log4j 2 (async logger)       18 s 354 ms      8 s 840 ms

Influence on compute-intensive application

In practice, the maximum logging performance is probably less important than the influence of logging on the performance of the actual application. Therefore, a second benchmark calculates all prime numbers from 2 to 10,000,000. The calculation runs in 16 threads to utilize all cores at full capacity. All found prime numbers are logged as info log entries, while all other numbers are discarded as trace log entries.
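The workload can be sketched as follows. This is a simplified, hypothetical model of the benchmark, using a much smaller limit; trial division stands in for the actual prime check, and a counter stands in for the logger:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.LongStream;

public class PrimeBenchmark {

    static boolean isPrime(long n) {
        if (n < 2) return false;
        for (long d = 2; d * d <= n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    // Checks all candidates in parallel; primes would be logged at the
    // info level (and written), non-primes at trace (and discarded)
    static long countLoggedPrimes(long limit) {
        AtomicLong infoEntries = new AtomicLong();
        LongStream.rangeClosed(2, limit).parallel().forEach(n -> {
            if (isPrime(n)) {
                infoEntries.incrementAndGet(); // info entry: written
            }
            // else: trace entry, dropped by the level filter
        });
        return infoEntries.get();
    }
}
```

Because the prime check itself saturates all cores, any time a logging framework spends on the calling thread directly delays the actual computation, which is exactly what this benchmark measures.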

Framework                    Synchronous     Asynchronous
tinylog (writing thread)      3 s 43 ms       1 s 275 ms
Logback (async appender)      4 s 519 ms      3 s 652 ms
Log4j 2 (async logger) *      6 s 18 ms       4 s 90 ms
JUL                          12 s 59 ms          -
Log4j 1 (async appender)     16 s 218 ms     9 s 309 ms

* Log4j 2.10 lost one or two log entries in 5 of 120 runs. This is reproducible, but did not happen in previously tested versions of Log4j 2.

Conclusion

Compared to the other logging frameworks, tinylog is the fastest in both benchmarks. The logging performance can be improved further by using asynchronous and buffered output of log entries. However, this has the disadvantage that the last, and thus most important, log entries may be lost if the JVM crashes.

The benchmark program is available as open source on GitHub, so the results can be reproduced or the benchmarks adapted to your own requirements. The benchmarks are parameterized, making it possible to configure, for example, the output format of log entries or the stack trace depth.