unix, fork, parent-child, file-descriptors, setrlimit

Closing open file descriptors in the child process


Is there a way to iterate through the file descriptors that are already open (opened by the parent process) and close them one by one in the child process?

OS: Unix.

Reason for closing them: the RLIMIT_NOFILE limit of setrlimit() constrains the number of file descriptors that a process may allocate. If we want to restrict our child process by setting this limit, the value we can use depends on the file descriptors that are already allocated.

Trying to set this limit in the child process is restricted because the parent process already has some file descriptors open, and hence we cannot set the limit lower than that number.

Example: if the parent process has 10 file descriptors allocated and we wish to limit the child process to fewer than 10 file descriptors (say 3), we would need to close 7 file descriptors inside the child process.

A solution to this can benefit anyone who wants to restrict a child process from creating new files or opening new network connections.
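
For concreteness, here is a minimal sketch of the intended pattern (the limit of 3 and the /bin/true exec target are placeholders; the descriptor-closing step marked below is the missing piece):

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            /* Child: the inherited descriptors would have to be
               closed here first -- this is the missing piece. */
            struct rlimit rl = { 3, 3 };   /* soft and hard limit */

            if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
                perror("setrlimit");
            /* From here on, allocating any descriptor beyond 0..2
               fails with EMFILE. */
            execl("/bin/true", "true", (char *)NULL);
            _exit(127);
        }
        return 0;
    }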


Solution

  • The following idiom is not uncommon (this is taken from the C part of MIMEDefang):

    /* Number of file descriptors to close when forking */
    #define CLOSEFDS 256
    ...
    
    static void
    closefiles(void)
    {
        int i;
        for (i=0; i<CLOSEFDS; i++) {
            (void) close(i);
        }
    }
    

    (That's from mimedefang-2.78, the implementation has been changed slightly in later releases.)

    It is something of a hack (as the MIMEDefang code freely admitted). In many cases it's more useful to start at FD 3 (or STDERR_FILENO+1) instead of 0. close() fails with EBADF for an FD that isn't open, but that doesn't usually present problems (at least not in C; in other languages an exception may be thrown).

    Since you can determine the file-descriptor upper limit with getrlimit(RLIMIT_NOFILE,...) which is defined as:

    RLIMIT_NOFILE

    This is a number one greater than the maximum value that the system may assign to a newly-created descriptor. If this limit is exceeded, functions that allocate a file descriptor shall fail with errno set to [EMFILE]. This limit constrains the number of file descriptors that a process may allocate.

    you can use this (subtracting 1) as the upper limit of the loop, as in the sketch below. The above and ulimit -n, getconf OPEN_MAX and sysconf(_SC_OPEN_MAX) should all agree.
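
    Putting those pieces together, here is a sketch of a getrlimit()-driven close loop (the name closefiles echoes the quoted idiom; the fallback of 255 is an assumption for the case where getrlimit() fails):

    #include <sys/resource.h>
    #include <unistd.h>

    /* Close every descriptor above stderr, up to the RLIMIT_NOFILE
       soft limit (one greater than the highest assignable FD, so
       the loop's upper bound is rlim_cur - 1). */
    static void
    closefiles(void)
    {
        struct rlimit rl;
        int i, maxfd = 255;            /* fallback if getrlimit() fails */

        if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
            maxfd = (int)rl.rlim_cur - 1;

        for (i = STDERR_FILENO + 1; i <= maxfd; i++)
            (void) close(i);
    }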

    Since open() always assigns the lowest free FD, the maximum number of open files and the highest FD+1 are the same number.
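
    A quick way to observe that property (a self-contained snippet; /dev/null is just a convenient file to open):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        close(0);                            /* free descriptor 0 (stdin) */
        int fd = open("/dev/null", O_RDONLY);
        printf("open() returned %d\n", fd);  /* prints 0, the lowest free FD */
        return 0;
    }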

    To determine which FDs are open, instead of close() use a no-op lseek(fd, 0, SEEK_CUR), which fails with EBADF if the fd is not open (there's no obvious benefit to calling lseek() before a conditional close(), though). socat's filan loops over 0 .. FD_SETSIZE calling fstat()/fstat64(); a sketch of that approach follows.
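
    A sketch of such a detection loop, in the spirit of filan (listfds and maxfd are illustrative names, not socat's API):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Report which descriptors in 0..maxfd are open; fstat()
       fails with EBADF on a descriptor that is not open. */
    static void
    listfds(int maxfd)
    {
        struct stat sb;
        int fd;

        for (fd = 0; fd <= maxfd; fd++)
            if (fstat(fd, &sb) == 0)
                printf("fd %d is open\n", fd);
    }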

    The libslack daemon utility which daemonizes arbitrary processes also uses this brute-force approach (while making sure to keep the first three descriptors open when used under inetd).

    In the case where your program can track its file handles, it is preferable to do so, or to use FD_CLOEXEC where available (sketched below). However, should you wish to code defensively, you might prefer to distrust your parent process, say for an external handler/viewer process started by a browser, as in this long-lived and ancient Mozilla bug on Unix platforms.
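
    For the tracked-handle case, marking a descriptor close-on-exec looks roughly like this (set_cloexec is an illustrative helper, not a library call; newer systems can also pass O_CLOEXEC to open() directly):

    #include <fcntl.h>

    /* Mark fd close-on-exec: the kernel then closes it automatically
       across execve(), so an exec'd child never sees it. */
    static int
    set_cloexec(int fd)
    {
        int flags = fcntl(fd, F_GETFD);

        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }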

    For the paranoid (do you want your PDF viewer to inherit every open Firefox FD including your cache, and open TCP connections?):

    #!/bin/bash
    # you might want to use the value of "ulimit -n" instead of picking 255
    for ((fd=3; fd<=255; fd++)); do
      exec {fd}<&-   # close FD $fd (bash's {varname} redirection syntax)
    done
    exec /usr/local/bin/xpdf "$@"
    

    Update: after 15 years, this issue was resolved in Firefox 58 (2018), when process creation was changed from the Netscape Portable Runtime (NSPR) API to LaunchApp.