python, python-3.x, logging, syslog, downtime

How do I prevent the Python logging module from raising errors and stopping execution during downtime, and instead retry indefinitely and buffer events?


I have a Python process whose output is logged, among other handlers, to a syslog server over TCP. The syslog server may incur some downtime from time to time. I need the script to run regardless of the logging mechanism. Are there any options or commonly used (or built-in) libraries for buffering the logs and retrying ad infinitum that I may be missing, or do I need to write a custom wrapper class for my logging that handles the buffering and retrying?

The issue arises when I stop the syslog server: any "logger" call then raises an error and halts script execution.

import logging
import socket
from logging.handlers import SysLogHandler
...
logger = logging.getLogger()
handler = SysLogHandler(address=syslog_address, socktype=socket.SOCK_STREAM)
logger.addHandler(handler)
...
logger.info("Some statements all over my code I want logged and buffered if possible but I do not want to raise exceptions stopping execution and I don't want to repeat myself wrapping them all in try/except blocks")

Solution

  • The built-in mechanism Python offers for this is QueueHandler. Move the SysLogHandler to a separate thread (or process) behind a QueueListener and replace it in the application with a QueueHandler. This insulates your app from failures caused by the syslog handler, and queued messages are automatically buffered. Implementing infinite retry is straightforward with a queue: just put failed records back, as in the sketch after this list.
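
A minimal sketch of that wiring, using only the standard library. The syslog_address value, the 5-second back-off, and the RaisingSysLogHandler / RetryingQueueListener names are illustrative assumptions, not part of the original answer; the retry relies on making the handler re-raise delivery errors (SysLogHandler normally swallows them in handleError) so the listener can re-enqueue the record.

import logging
import queue
import socket
import time
from logging.handlers import QueueHandler, QueueListener, SysLogHandler


class RaisingSysLogHandler(SysLogHandler):
    """Re-raise emit errors instead of swallowing them, so the listener can see delivery failures."""
    def handleError(self, record):
        raise  # re-raises the exception currently being handled in emit()


class RetryingQueueListener(QueueListener):
    """Put records that could not be delivered back on the queue."""
    def handle(self, record):
        try:
            super().handle(record)
        except Exception:
            time.sleep(5)                  # back off; the queue keeps buffering new records meanwhile
            self.queue.put_nowait(record)  # note: retried records go to the back, so ordering is lost


log_queue = queue.Queue(-1)                # unbounded in-memory buffer

# The application only ever talks to the QueueHandler, so logging calls
# simply enqueue the record and never block or raise on network problems.
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(QueueHandler(log_queue))

# The syslog handler lives behind the listener, which drains the queue on
# its own thread and forwards records to the server.
syslog_address = ("localhost", 514)        # hypothetical endpoint
syslog_handler = RaisingSysLogHandler(address=syslog_address,
                                       socktype=socket.SOCK_STREAM)
listener = RetryingQueueListener(log_queue, syslog_handler)
listener.start()

root.info("Logged via the queue; the app is insulated from syslog outages")

listener.stop()                            # drains what is still queued on shutdown

Two caveats: on Python versions before 3.11, constructing a TCP SysLogHandler raises immediately if the server is unreachable at startup (3.11 added SysLogHandler.createSocket(), which tolerates a missing server and retries the connection on emit), so you may want to create the handler and listener inside a retry loop as well; and whether the handler transparently reconnects after a dropped connection is version dependent, so a production retry path may also need to close and rebuild the handler rather than only re-enqueue the record.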