I am experiencing an issue persisting a log file write stream through pyinotify and its threads. I am using pyinotify to monitor a directory for CLOSE_WRITE file events. Before initializing pyinotify, I create a log stream using the built-in logging module, like so:
import os, logging
from logging import handlers
from logging.config import dictConfig

log_dir = './var/log'
name = 'com.sadmicrowave.tesseract'

LOG_SETTINGS = {
    'version': 1,
    'handlers': {
        'core': {
            # use a rotating file handler so the file automatically gets archived
            # and a new one gets created, preventing files from growing too large
            # to be maintainable
            'class': 'logging.handlers.RotatingFileHandler',
            # DEBUG is the lowest level, so all other levels are included by default
            'level': 'DEBUG',
            # references the 'core' formatter in the 'formatters' dict below
            'formatter': 'core',
            # the path and file name of the output log file
            'filename': os.path.join(log_dir, '%s.log' % name),
            'mode': 'a',
            # the max size the log file may reach before it gets archived
            # and a new file gets created
            'maxBytes': 100000,
            # the max number of files to keep in archive
            'backupCount': 5,
        },
    },
    # the formatter referenced by the 'core' handler above
    'formatters': {
        'core': {
            'format': '%(levelname)s %(asctime)s %(module)s|%(funcName)s %(lineno)d: %(message)s',
        },
    },
    'loggers': {
        'root': {
            'level': 'DEBUG',  # the most granular level of logging available
            'handlers': ['core'],
        },
    },
}

# use the built-in dict configuration tool to convert the dict to a logger config
dictConfig(LOG_SETTINGS)

# get the logger named 'root' in the 'loggers' section of the config
__log = logging.getLogger('root')
So, after my __log variable gets initialized it works immediately, allowing log writes. Next, I want to start the pyinotify instance and pass __log to it, using the following class definitions:
import asyncore, pyinotify

class Notify(object):
    def __init__(self, log=None, verbose=True):
        wm = pyinotify.WatchManager()
        wm.add_watch('/path/to/folder/to/monitor/', pyinotify.IN_CLOSE_WRITE,
                     proc_fun=processEvent(log, verbose))
        notifier = pyinotify.AsyncNotifier(wm, None)
        asyncore.loop()

class processEvent(pyinotify.ProcessEvent):
    def __init__(self, log=None, verbose=True):
        log.info('logging some cool stuff')
        self.__log = log
        self.__verbose = verbose

    def process_IN_CLOSE_WRITE(self, event):
        print(event)
In the above implementation, my process_IN_CLOSE_WRITE method gets triggered exactly as expected by the pyinotify.AsyncNotifier; however, the 'logging some cool stuff' line is never written to the log file.
I suspect it has something to do with persisting the file stream through pyinotify's threading, but I'm not sure how to resolve it.
Any ideas?
I might have found a resolution that seems to work. I'm not sure it is the best approach, so I will leave the question open for now to see if any other ideas are posted.
I think I was setting up my pyinotify.AsyncNotifier wrong. I changed the implementation to:
class Notify(object):
    def __init__(self, log=None, verbose=True):
        notifiers = []
        descriptors = []
        wm = pyinotify.WatchManager()
        notifiers.append(pyinotify.AsyncNotifier(wm, processEvent(log, verbose)))
        descriptors.append(wm.add_watch('/path/to/folder/to/monitor/',
                                        pyinotify.IN_CLOSE_WRITE,
                                        proc_fun=processEvent(log, verbose),
                                        auto_add=True))
        asyncore.loop()
Now my wrapper class processEvent is instantiated when the listener starts, and when a CLOSE_WRITE event fires, the log object is maintained, passed through correctly, and can receive write events.