Tags: python, tkinter, queue, tcl, blocking

Querying root.mainloop() / Tkinter / TCL event queue depth


I have an application written with a tkinter UI that performs real world activities via a GPIO stack.

As an example, imagine my application has a temperature probe that checks the room temp and then has an external hardware interface to the air conditioning to adjust the temperature output of the HVAC system. This is done in a simple way via a GPIO stack.

The application has some "safety critical" type actions that it can perform. There are lots of hardware interlocks, but I still don't want the application crashing and causing the AC to run full blast and turn my theoretical house into a freezer.

The problem is always: how do you convert a GUI event-driven application into one that also has another loop running, without

  1. Breaking the tkinter / tcl design by shoving everything into a "while True: do things" loop (sketched below) and calling root.update() from it.
  2. Adding threading to the application, which is difficult and potentially unsafe.
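
For reference, the anti-pattern from point 1 looks something like this (the work functions here are invented for illustration):

# DON'T do this: a busy loop that starves tkinter's own event dispatch
while True:
    checkRoomTemperature()      # hypothetical work function
    adjustAirConditioning()     # hypothetical work function
    root.update()               # manually pumping the event queue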

So, I've settled on the following architecture:

import tkinter as tk


def main(args):
    
    # Build the UI for the user
    root = tk.Tk()
    buttonShutDown = tk.Button(root, text="PUSH ME!")  # e.g. command=lambda: sys.exit(0)
    buttonShutDown.pack()
    
    # Go into the method that looks at inputs and outputs
    monitorInputsAndActionOutputs(root)
    
    # The main tkinter loop. NOTE: there are about 9000 articles on why you should NOT just shove everything in a loop and call root.update(). This is the correct way of doing it! Use root.after() calls!
    root.mainloop()

def monitorInputsAndActionOutputs(root):

    print("checked the room temperature")
    print("adjusted the airconditioning to make comfy")
    root.after(100, monitorInputsAndActionOutputs, root)  # re-add one monitorInputsAndActionOutputs event to the tkinter / tcl queue

    return None

if __name__ == '__main__':
    import sys
    sys.exit(main(sys.argv))

This actually works just fine, and theoretically it will continue to work forever and everyone will have the comfy(TM).

However, I see a potential problem: the application has a decent number of methods doing different closed-loop, PID-style adjustments (read, think, change, iterate), and I'm concerned that the tkinter event queue will start to clog with pending after() callbacks and the overall system will slow to a crawl, break completely, freeze the UI, or a combination of all three.
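
For concreteness, the kind of thing I mean is several independent loops like this (the control functions are made up for the example), each re-arming itself on its own after() interval:

import tkinter as tk

def controlTemperature(root):
    # read -> think -> change, then iterate every 100ms
    root.after(100, controlTemperature, root)

def controlHumidity(root):
    root.after(250, controlHumidity, root)

def controlAirflow(root):
    root.after(500, controlAirflow, root)

root = tk.Tk()
controlTemperature(root)
controlHumidity(root)
controlAirflow(root)
root.mainloop()

As I understand it, each loop only ever has one of its own events pending at a time (a callback re-adds itself only when it actually runs), but with enough loops and enough work per callback I worry the queue still backs up.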

So what I want to do is add in something like (rough code example, this does not run of course):

def checkQueueDepth(root):

    myQueue = root.getQueueLength()  # hypothetical - no such method exists

    if myQueue > 100:  # more than 100 items queued
        doSomethingToAlertSomeone()
        deprioritiseNonCriticalEventsTillThingsCalmTFDown()

In order to implement something like this, I need to programmatically acquire the root / tkinter queue depth.

After an extensive beardscratching wade through the

  1. tkinter docs
  2. TCL / TK docs
  3. The source code of tkinter
  4. The rest of stackoverflow
  5. The Internet
  6. The fridge (for a snacc)

I have ascertained that there is no way for an external module to query the queue length of tk, which is built on the tcl library and probably implements or extends its notifier.

Based on the tkinter source, it also doesn't look like there is any private method or object/variable that I can crash through the front door onto and interrogate.

Thirdly, the only "solution" looks to be implementing a new notifier system for TCL and then extending its functionality to provide this feature. There is zero chance of me doing that: I don't have the knowledge or the time, and I'd sooner rewrite my entire application in another language.

Since it can't be done the "right" way, I thought about the "we're not going to call it wrong, but might as well be" way.

What I came up with is running something like this:

import datetime

def checkCodeLoops(root):

    # The maximum time we will accept the queue processing to blow out to is 3 seconds.
    maxDelayInQueueResponse = datetime.timedelta(seconds=3)

    # Take a note of the time now, and recall when this method last ran
    # (state is stored on the function itself between invocations)
    checkTime = datetime.datetime.now()
    lastCheckTime = getattr(checkCodeLoops, "lastCheckTime", checkTime)
    checkCodeLoops.lastCheckTime = checkTime

    # If the time between this invocation of the checkCodeLoops method and the
    # last one is greater than the max allowable time, panic!
    if checkTime - lastCheckTime > maxDelayInQueueResponse:
        doSomethingToReduceLoad()
        lockUISoUserCantAddMoreWorkToTheQueue()

    # Re-queue this method so that we can check again. Note that after_idle()
    # takes no delay argument - the callback runs at the next idle moment, and
    # if the queue is flooded that moment might never come.
    root.after_idle(checkCodeLoops, root)

    return None

Overall it's a s*** way of doing it. You're basically adding work to the queue in order to see if the queue is busy (stupid), and there's another issue: if the queue is flooded, this code won't actually run at all, because after_idle() callbacks are only processed once the queue is empty of after() events and other higher-priority items.

So I could change it to root.after instead of root.after_idle, and theoretically that would help, but we still have the problem of using a queued event to check whether the queue is functioning, which is kinda stupid.
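
i.e. the re-queue line in checkCodeLoops would become something like:

    # a plain timer event still gets serviced even when the queue never
    # drains to idle, unlike an after_idle() callback
    root.after(100, checkCodeLoops, root)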

If I had a way of checking the queue depth, I could start to implement load control strategies BEFORE things got to the panic stage.

Really keen to hear if anyone has a better way of doing this.


Solution

  • There isn't really any way to know, unfortunately. At the point when you could ask the question, many event sources only know whether there is at least one event pending (that corresponds to a single OS event), and the system calls are all capable of accepting multiple events from the corresponding source at that point (and can do so safely because of non-blocking I/O). It's really complicated down there!

    What you can do is monitor whether regular timer events have an idle event occur between them (a sketch of this follows below). Idle events fire when the other event sources have nothing to contribute and the alternative is to go to sleep in a select() syscall. (Or poll() or whatever the notifier has been configured to use, depending on platform and build options.) If idle events are being serviced between those regular timer events, the queue is healthy; if not, you are in an overrun situation.

    Queueing theory tells us that as long as the consumers keep up with the producers, queue size will tend to zero. When it is the other way round, queue size tends to infinity. (Exact balancing is very unlikely.)
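
    A minimal sketch of that idea, assuming a 250 ms check interval (the helper name and the warning hook are mine, not a tkinter API): a timer tick re-arms itself with after(), and an idle probe scheduled with after_idle() sets a flag; if a tick arrives and the flag was never set, the queue never drained since the previous tick.

    import tkinter as tk

    def startQueueHealthMonitor(root, interval_ms=250):
        # Hypothetical helper: warn when no idle event was serviced
        # between two consecutive timer ticks.
        state = {"idle_ran": True}  # treat the first interval as healthy

        def idle_probe():
            # Only runs once the event queue has drained to idle.
            state["idle_ran"] = True

        def timer_tick():
            if state["idle_ran"]:
                # The queue went idle since the last tick: healthy.
                state["idle_ran"] = False
                root.after_idle(idle_probe)  # re-arm the probe, one at a time
            else:
                # No idle time between two ticks: overrun - shed load here.
                print("WARNING: event queue overrun")
            root.after(interval_ms, timer_tick)

        root.after(interval_ms, timer_tick)

    root = tk.Tk()
    startQueueHealthMonitor(root)
    root.mainloop()

    The watchdog itself survives the overload it is meant to detect because timer events are serviced ahead of idle callbacks, so timer_tick keeps firing even when idle_probe never gets a chance to run.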