Second milestone of tinylog 2.5 is out


The second milestone of tinylog 2.5 focuses on support for line-delimited JSON and on buffering log entries during temporary losses of the database connection.

A JSON writer was already integrated into tinylog with version 2.4. By default, this JSON writer stores all log entries in a JSON array to comply with the JSON standard. However, most professional JSON consumers, including Elasticsearch, accept line-delimited JSON, in which every log entry is a JSON object on a separate line without any surrounding JSON array brackets. This simplifies the parsing of JSON log files, reduces resource consumption (memory and CPU usage), and even allows streaming of log entries. Line-delimited JSON can be enabled by setting the property writer.format = LDJSON in the configuration.

Example JSON writer configuration:

writer               = json
writer.format        = LDJSON
writer.file          = log.json
writer.field.level   = level
writer.field.message = message

Example JSON output:

{"level": "INFO", "message": "Hello World"}
{"level": "INFO", "message": "Good Bye"}

The JDBC writer can insert log entries into SQL databases, which of course requires a working connection to the database. tinylog now buffers up to 100 log entries if the database connection is temporarily lost and inserts them into the database as soon as the connection can be reestablished. This prevents log entries from being lost during short database downtimes without consuming much memory.
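
For illustration, a JDBC writer configuration could look roughly like this; the connection URL, table name, and column mappings are placeholders and have to be adapted to the actual database schema:

writer               = jdbc
writer.url           = jdbc:postgresql://localhost:5432/example
writer.table         = LOG_ENTRIES
writer.field.LEVEL   = level
writer.field.MESSAGE = message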

There is also a small change to the DynamicPolicy: existing log files are now automatically continued. Furthermore, there is a bugfix for using the new JndiValueResolver in projects that use Java modules. The org.tinylog.api module now correctly declares java.naming as an optional dependency.
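
For modular applications that rely on the JndiValueResolver, the optional dependency means that java.naming is only needed at runtime when JNDI lookups are actually used. A minimal module descriptor for a hypothetical application could look like this (the module name com.example.app is just a placeholder):

module com.example.app {
    // tinylog API; it declares java.naming only as an optional dependency (requires static)
    requires org.tinylog.api;

    // required at runtime so that JndiValueResolver can perform JNDI lookups
    requires java.naming;
}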