python-3.x · linux · file-descriptor · resource-leak

How many file descriptors are open?


I have a server using libzmq that leaks file descriptors under certain conditions. The problem seems to be a race condition inside libzmq; it is hard to debug and is not going to get solved anytime soon, so please don't recommend fixing it. That's the long-term solution, and I need a quick fix. The plan, then, is to restart the server whenever it accumulates ~500 FDs.

The question now is: How many file descriptors are open? What's a good way of counting them with python3 under Linux?

What I have tried is stat'ing "/proc/self/fd" and checking the link count, but that is always 2. Now I open and readdir "/proc/self/fd" and count the entries, but that seems rather inefficient.
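Roughly what that looks like, as a self-contained sketch (the function name `count_open_fds` is mine; note that opening the directory briefly adds one descriptor of its own):

```python
import os

def count_open_fds():
    # /proc/self/fd has one entry per descriptor open in this process,
    # including the descriptor used to read the directory itself.
    return len(os.listdir('/proc/self/fd'))

print(count_open_fds())
```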

Are there any other / better solutions?

Update: In case it wasn't clear, the server is written in python3, and the plan is for the server itself to check the descriptor count. If it has leaked too many FDs, it shall restart itself (at a suitable time, when doing so doesn't cause problems) before it runs out of FDs and starts failing.

I don't want some external monitor or something that kills the server when it leaks. The server is well aware of the problem and shall deal with it directly.


Solution

  • You can use os.listdir:

    # Create 1000 temporary files, and count entries in /proc/self/fd
    import os
    import tempfile
    from timeit import default_timer as timer

    array = [tempfile.TemporaryFile('w') for _ in range(1000)]
    start = timer()
    print(len(os.listdir('/proc/self/fd')))
    print(timer() - start)
    # output: about 0.001 seconds
    

    Not sure that can be considered "rather inefficient".
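    For the self-restart part of the plan, a minimal sketch (the 500-FD threshold and the re-exec via os.execv are assumptions based on the question; descriptors Python creates are non-inheritable by default since 3.4 per PEP 446, so they are dropped across the exec, but descriptors leaked by a C library such as libzmq survive unless they carry the close-on-exec flag):

```python
import os
import sys

FD_THRESHOLD = 500  # assumed restart threshold from the question

def open_fd_count():
    # One directory entry per open descriptor in this process.
    return len(os.listdir('/proc/self/fd'))

def restart_if_leaking():
    # Re-exec the same interpreter with the same arguments, replacing
    # this process image; call this at a point where a restart is safe.
    if open_fd_count() > FD_THRESHOLD:
        os.execv(sys.executable, [sys.executable] + sys.argv)

print(open_fd_count())
```

    Calling restart_if_leaking() from the server's idle loop keeps the check cheap, since it is just one readdir of /proc/self/fd.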