I would like a function to change the value of a global variable, but it fails when the function is used in a pipe:
#!/bin/bash
declare GVAR="abc"

func1() {
    read GVAR
    echo >&2 "func1: GVAR is $GVAR"
}

func2() {
    echo "xyz"
}

func2 | func1
echo >&2 "main: GVAR is $GVAR"
Output:
func1: GVAR is xyz
main: GVAR is abc
This is obviously because "Each command in a multi-command pipeline, where pipes are created, is executed in its own subshell" as per Bash manual on pipelines.
The best I could achieve is to use shopt -s lastpipe, but that only works when the function is the last command in the pipeline (and only with job control off), which makes it basically unusable for 'teeing out' something that is passing through a pipe before letting it continue to another command.
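For reference, a minimal sketch of the lastpipe behavior described above. This assumes a non-interactive bash shell (where job control is off by default, which lastpipe requires):

```shell
#!/usr/bin/env bash
# lastpipe makes the *last* element of a pipeline run in the current
# shell instead of a subshell; it has no effect on earlier elements.
shopt -s lastpipe

GVAR=abc
func1() { read -r GVAR; }

echo "xyz" | func1       # func1 is last, so it runs in this shell
echo "GVAR is $GVAR"     # prints: GVAR is xyz
```

Move func1 anywhere but the last position and the assignment is lost again, which is exactly the limitation the question describes.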
If impossible, what is the alternative approach to achieve the same? Can I avoid having to create temporary files?
If you reassign your file descriptors so stdout goes to the real stdout while an alternate FD goes to a FIFO that the parent process reads from, then you can insert code anywhere in your pipeline that writes to that alternate FD.
#!/usr/bin/env bash
gvar_parent=abc

# back up the "real" stdout so we can reach it even within a redirection
exec 3>&1

func1() {
    read -r gvar_child
    echo "func1: GVAR is $gvar_child" >&2
    printf '%s\0' "$gvar_child" >&4   # return new GVAR result to the parent
}

func2() { echo "xyz"; }

# aside: don't use all-caps variable names
IFS= read -r -d '' gvar_parent < <(exec 4>&1 >&3; func2 | func1)
echo "GVAR is $gvar_parent"
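Because fd 4 is a side channel, the writing function does not need to be last in the pipeline. A sketch of the 'tee out' case from the question, using the same fd-3/fd-4 setup (the function name tee_gvar is illustrative, not anything standard):

```shell
#!/usr/bin/env bash
# back up the real stdout, as before
exec 3>&1

# A middle-of-pipeline "tee": copies its input line to the parent via
# fd 4 while passing it through on stdout to the next pipeline stage.
tee_gvar() {
    local line
    IFS= read -r line
    printf '%s\0' "$line" >&4   # side-channel copy to the parent
    printf '%s\n' "$line"       # pass through to the next command
}

# tee_gvar sits in the middle; tr keeps consuming the stream after it.
IFS= read -r -d '' gvar < <(exec 4>&1 >&3; echo "xyz" | tee_gvar | tr a-z A-Z)

echo "gvar is $gvar"
```

The tr stage still receives and transforms the data on real stdout, while the parent captures the value through the process substitution.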
Some notes:

The IFS= read -r -d '' dance lets you pass variables that contain newlines back to your parent process by using a NUL delimiter; that's why we use %s\0 to format the output string from the child, terminating the value with a NUL literal. If the variable is not returned, read will have a nonzero exit status, letting you detect that the variable wasn't successfully written (and distinguish between child-didn't-run and child-returned-an-empty-string states).
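Both properties are easy to check directly. A small sketch (child_ok and child_fail are made-up names for demonstration):

```shell
#!/usr/bin/env bash
# A value containing a newline survives the NUL-delimited round trip,
# and read's exit status reveals whether a value was actually written.
child_ok()   { printf '%s\0' $'line1\nline2'; }
child_fail() { :; }   # exits without writing anything

if IFS= read -r -d '' val < <(child_ok); then
    echo "got a value"        # val is the full two-line string
fi

if ! IFS= read -r -d '' val2 < <(child_fail); then
    echo "no value returned"  # read hit EOF before seeing a NUL
fi
```

A plain newline-delimited read would have stopped at "line1" in the first case; the NUL delimiter is what keeps the embedded newline intact.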