
What is the preferred cross-platform IPC Perl module?


I want to create a simple IO object that represents a pipe opened to another program so that I can periodically write to that program's STDIN as my app runs. I want it to be bullet-proof (in that it catches all errors) and cross-platform. The best options I can find are:

open

sub io_write {
    local $SIG{__WARN__} = sub { }; # Silence warning.
    # '|-' opens a pipe attached to the command's STDIN for writing.
    open my $pipe, '|-', @_ or die "Cannot exec $_[0]: $!\n";
    return $pipe;
}
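
A minimal usage sketch (sort -u is just a stand-in for the real command):

# Hypothetical usage: stream lines to the child's STDIN as the app runs.
my $fh = io_write(qw(sort -u));
print {$fh} "$_\n" for qw(pear apple pear);
close $fh or die "Error closing pipe: $!\n";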

Advantages:

Disadvantages:

IO::Pipe

sub io_write {
    my $pipe = IO::Pipe->new;
    # writer() forks, execs @_, and leaves $pipe as the write end of the pipe.
    $pipe->writer(@_);
    return $pipe;
}

Advantages:

Disadvantages:

• No $SIG{PIPE} error handling
• No support for Windows

IPC::Run

There is no interface for writing to a file handle in IPC::Run, only appending to a scalar. This seems…weird.
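A minimal sketch of that scalar-based interface, with cat standing in for the child process:

use IPC::Run qw(run);

# All I/O is staged through plain scalars rather than file handles.
my $in  = "hello\n";
my $out = '';
run ['cat'], \$in, \$out or die "cat failed: $?";
print $out; # "hello\n"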

IPC::Run3

No file handle interface here, either. I could use a code reference, which would be called repeatedly to spool to the child, but looking at the source code, it appears that it actually writes the input to a temporary file and then spools that file's contents to the piped command's STDIN. Wha?
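A minimal sketch of that code-reference interface, again with cat as the child; the sub is called until it returns undef:

use IPC::Run3;

my @lines = ("one\n", "two\n");
my $out;
# run3 calls the sub repeatedly for STDIN data; undef signals EOF.
run3 ['cat'], sub { shift @lines }, \$out;
print $out; # "one\ntwo\n"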

IPC::Cmd

Still no file handle interface.
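Its run() function buffers output into array references instead; a minimal sketch:

use IPC::Cmd qw(run);

# Output comes back as array refs of buffered chunks, not as a handle.
my ($ok, $err, $full_buf, $stdout_buf, $stderr_buf) =
    run(command => ['ls', '-l'], timeout => 10);
die "Error: $err\n" unless $ok;
print @$stdout_buf;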


What am I missing here? It seems as if this should be a solved problem, and I'm kind of stunned that it's not. IO::Pipe comes closest to what I want, but the lack of $SIG{PIPE} error handling and the lack of support for Windows are distressing. Where is the piping module that will JDWIM?


Solution

  • Thanks to guidance from @ikegami, I have found that the best choice for interactively reading from and writing to another process in Perl is IPC::Run. However, it requires that the program you are reading from and writing to emit a known string, such as a prompt, when it has finished writing to its STDOUT. Here's an example that executes bash, has it run ls -l, and then prints that output:

    use v5.14;
    use IPC::Run qw(start timeout new_appender new_chunker);
    
    my @command = qw(bash);
    
    # Connect to the other program.
    my ($in, @out);
    my $ipc = start \@command,
        '<' => new_appender("echo __END__\n"), \$in,
        '>' => new_chunker, sub { push @out, @_ },
        timeout(10) or die "Error: $?\n";
    
    # Send it a command and wait until it has received it.
    $in .= "ls -l\n";
    $ipc->pump while length $in;
    
    # Wait until our end-of-output string appears.
    $ipc->pump until @out && $out[-1] =~ /__END__\n/m;
    
    pop @out;
    say @out;
    

    Because it is not attached to a terminal (I assume), bash does not emit a prompt when it is done writing to its STDOUT. So I use the new_appender() function to have it emit something I can match to find the end of the output (by calling echo __END__). I've also used an anonymous subroutine after a call to new_chunker to collect the output into an array rather than a scalar (just pass a reference to a scalar to '>' if you want that, as sketched below).
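    For example, the scalar-output variant would look something like this sketch (everything else stays the same):

    my ($in, $out) = ('', '');
    my $ipc = start \@command,
        '<' => new_appender("echo __END__\n"), \$in,
        '>' => \$out,
        timeout(10) or die "Error: $?\n";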

    So this works, but in my opinion it sucks for a whole host of reasons.

    I now realize that, although the interface for IPC::Run could potentially be a bit nicer, the weaknesses of the IPC model itself make it tricky to deal with at all. There is no generally useful IPC interface, because one has to know too much about the specifics of the particular program being run to get it to work. This is okay, maybe, if you know exactly how it will react to inputs and can reliably recognize when it is done emitting output, and if you don't need to worry much about cross-platform compatibility. But that was far from sufficient for my need: a generally useful way to interact with various database command-line clients in a CPAN module that could be distributed to a whole host of operating systems.

    In the end, thanks to packaging suggestions in comments on a blog post, I decided to abandon the use of IPC for controlling those clients and to use the DBI instead. It provides an excellent API that is robust, stable, and mature, and it suffers none of the drawbacks of IPC.
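    A minimal sketch of that approach, assuming DBD::SQLite (the connection details are placeholders):

    use v5.14;
    use DBI;

    # Talk to the database through its driver rather than piping to a CLI client.
    my $dbh = DBI->connect('dbi:SQLite:dbname=test.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });
    $dbh->do('CREATE TABLE IF NOT EXISTS t (name TEXT)');
    $dbh->do('INSERT INTO t (name) VALUES (?)', undef, 'hello');
    say $_->[0] for @{ $dbh->selectall_arrayref('SELECT name FROM t') };
    $dbh->disconnect;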

    My recommendation for those who come after me is this: if the program you want to control offers a native API, as the database clients do via the DBI, use it; resort to IPC only when you have no other choice.