The logging handler classes have a flush() method, and looking at the code, logging.FileHandler does not pass a specific buffering mode when calling open(). Therefore, when you write to a log file, it will be buffered using a default block size. Is that correct?
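To check my reading, the stream the handler opens can be inspected directly. A minimal sketch under Python 3 (the temporary path is just for illustration):

```python
import logging
import os
import tempfile

# FileHandler calls open() without a buffering argument, so the
# underlying text stream is block-buffered, not line-buffered.
path = os.path.join(tempfile.mkdtemp(), "demo.log")
handler = logging.FileHandler(path)
print(handler.stream.line_buffering)  # False
handler.close()
```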
This surprises me, because when I administer my own systems I am used to watching log files as a live (or near-live) view of the system. For that use case, line buffering is what you want. Traditional syslog() to a logging daemon does not buffer messages either.
I am interested in Python versions 2.7 and 3.7.
Not really. It will flush each individual message, which is what you want.
FileHandler inherits from StreamHandler. StreamHandler calls self.flush() after every write() to the stream.
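This is easy to verify: a record appears on disk immediately after logging it, with no close() or explicit flush(). A small sketch (the logger name and path are made up for the example):

```python
import logging
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.log")
logger = logging.getLogger("flush-demo")  # hypothetical name
logger.propagate = False                  # keep the demo self-contained
logger.addHandler(logging.FileHandler(path))

logger.warning("first message")

# The handler is still open, but the record is already on disk,
# because StreamHandler.emit() flushes after each write.
with open(path) as f:
    print(repr(f.read()))  # 'first message\n'
```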
The flush() method starts to make more sense if you look at logging.MemoryHandler. For programs which want to add buffering, MemoryHandler can wrap another handler and buffer a set number of messages; it also flushes immediately on messages above a set severity level. logging does not include a handler which automatically flushes every second or so, but you could always write one yourself.
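A sketch of that wrapping (the capacity, logger name, and path are arbitrary choices for the example):

```python
import logging
import logging.handlers
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "buffered.log")
target = logging.FileHandler(path)
# Buffer up to 100 records; flush early on ERROR or worse.
memory = logging.handlers.MemoryHandler(capacity=100,
                                        flushLevel=logging.ERROR,
                                        target=target)
logger = logging.getLogger("memory-demo")  # hypothetical name
logger.propagate = False
logger.addHandler(memory)

logger.warning("buffered, not yet on disk")
print(os.path.getsize(path))          # 0: still in the buffer

logger.error("this reaches flushLevel")
print(os.path.getsize(path) > 0)      # True: both records written
memory.close()
```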
The flush calls in StreamHandler also mean that it does what you want if your program is run as a systemd service and you log to stderr. Explicit flushes are required there, because Python 3 currently uses block buffering for stderr when it is not a TTY; see the discussion on Python issue 13597.
I think I was confused by the StreamHandler code. If the user never needed to call the flush() method, why would StreamHandler define a non-empty, publicly documented implementation?
I think I was assuming too much, and I did not allow for how inheritance (argh) is used here. E.g. the base Handler class has an empty flush() method whose docstring says "This version does nothing and is intended to be implemented by subclasses", so StreamHandler overrides it with a real implementation rather than inheriting the no-op.
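That reading can be confirmed by inspecting the classes directly:

```python
import logging

# The base Handler.flush() is a deliberate no-op; its docstring is
# the one quoted above.
print(logging.Handler.flush.__doc__)

# StreamHandler overrides it rather than inheriting the no-op:
print(logging.StreamHandler.flush is logging.Handler.flush)  # False
```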