Hi,

I'm not sure whether this is a bug or a feature, but the behaviour looks undesirable to me. I found it originally in bash 3.0.16 and have just reproduced it in 4.1.
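Here is a minimal, self-contained sketch of what I describe below; command1 and the backgrounded sleep (standing in for a hypothetical command2) are illustrative only:

    #!/bin/bash
    # repro.sh - the two tee processes linger after this script exits

    command1() {
        echo "to stdout"
        echo "to stderr" >&2
        # "command2": even with its stdout and stderr redirected away,
        # the inherited process-substitution fds (62 and 63 here) keep
        # the pipes to both tees open
        sleep 300 >/dev/null 2>&1 &
    }

    command1 > >(tee out) 2> >(tee err >&2)

    # The script returns to the prompt almost at once, but both tee
    # processes hang about for the full five minutes.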
The script uses process substitution to log the output yet keep stdout and stderr as they stand:

    command1 > >(tee out) 2> >(tee err >&2)

I understand that process substitution picks two free high-numbered file descriptors (to avoid clashes with the low-numbered descriptors the script might arbitrarily use), and that seems fine. The redirections on the command line then mean I will have fds 1 and 63 going to "tee out" and fds 2 and 62 going to "tee err". In fact, because of left-to-right processing, "tee err" will also have fd 63 going to "tee out".

So far, so good: when command1 exits, everything closes up and the sub-processes exit. However, if command1 runs a command2 in the background and then exits itself, command2, having inherited fds 62 and 63, will keep the pipes to the two tee processes open, and everything hangs about until command2 (or its descendants) exits.

Before I realised fds 62 and 63 were in play, I thought I could stop the tee processes lingering by having command2 redirect its own stdout and stderr, but no such luck. I can have command2 invoke

    exec 63>&- 62>&-

but the choice of numbers is somewhat arbitrary, depending on what has happened in the history of the processes. The manual suggests I could move and close a file descriptor with [n]>&digit-, but I would need the equivalent of

    command1 >&>(...)-

and while "digit" may very well mean just a digit, here the process substitution is of course replaced by a pathname such as /dev/fd/63, certainly not a digit.

Basically, by mixing IO redirection and process substitution I am left with trailing file descriptors that can make scripts hang around no matter how carefully stdout and stderr are redirected afterwards. There is no mechanism to discover these new file descriptors, and they are not closed on exec.

Is there any hope for me?

Cheers,
Ian
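P.S. The nearest I've come to a workaround, when command2 is a script I control, is Linux-specific: have command2 discover its own open descriptors through /proc and close everything above stderr. A sketch, assuming /proc/self/fd exists and that nothing above fd 2 is wanted (fd 255, which bash keeps for its own use, is left alone):

    # Run at the top of command2: close stray inherited descriptors.
    for fd in $(ls /proc/self/fd); do
        if [ "$fd" -gt 2 ] && [ "$fd" -ne 255 ]; then
            eval "exec $fd>&-"   # closing an fd that isn't open is harmless
        fi
    done

That does release the tees here, but it relies on /proc and on magic knowledge of fd 255, which is why I'd much rather the shell closed (or let me close) its process-substitution descriptors itself.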