process substitution error handling

2020-08-06 Thread Jason A. Donenfeld
Hi,

It may be a surprise to some that this code here winds up printing
"done", always:

$ cat a.bash
set -e -o pipefail
while read -r line; do
   echo "$line"
done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
sleep 1
echo done

$ bash a.bash
1
2
done

The reason for this is that process substitution right now does not
propagate errors. It's sort of possible to almost make this better
with `|| kill $$` or some variant, and trap handlers, but that's very
clunky and fraught with its own problems.
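
For concreteness, the clunky workaround I have in mind looks roughly
like this (just a sketch, assuming the parent traps TERM and the
substitution kills it on failure):

set -e -o pipefail
trap 'echo "process substitution failed" >&2; exit 1' TERM
while read -r line; do
   echo "$line"
done < <(echo 1; sleep 1; echo 2; sleep 1; false || kill $$; exit 1)
sleep 1
echo done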

Therefore, I propose a `set -o substfail` option for the upcoming bash
5.1, which would cause process substitution to propagate its errors
upwards, even if done asynchronously.

Chet - thoughts?

It'd certainly make a lot of my scripts more reliable.

Jason



Re: process substitution error handling

2020-08-06 Thread Oğuz
On Thursday, August 6, 2020, Jason A. Donenfeld wrote:

> Hi,
>
> It may be a surprise to some that this code here winds up printing
> "done", always:
>
> $ cat a.bash
> set -e -o pipefail
> while read -r line; do
>   echo "$line"
> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
> sleep 1
> echo done
>
> $ bash a.bash
> 1
> 2
> done
>
> The reason for this is that process substitution right now does not
> propagate errors. It's sort of possible to almost make this better
> with `|| kill $$` or some variant, and trap handlers, but that's very
> clunky and fraught with its own problems.
>
> Therefore, I propose a `set -o substfail` option for the upcoming bash
> 5.1, which would cause process substitution to propagate its errors
> upwards, even if done asynchronously.
>
>
set -e -o substfail
: <(sleep 10; exit 1)
foo

Say that `foo' is a command that takes longer than ten seconds to complete,
how would you expect the shell to behave here? Should it interrupt `foo' or
wait for its termination and exit then? Or do something else?


> Chet - thoughts?
>
> It'd certainly make a lot of my scripts more reliable.
>
> Jason
>
>

-- 
Oğuz


Re: Expand first before asking the question "Display all xxx possibilities?"

2020-08-06 Thread Ilkka Virta

On 5.8. 22:21, Chris Elvidge wrote:

On 05/08/2020 02:55 pm, Chet Ramey wrote:

On 8/2/20 6:55 PM, 積丹尼 Dan Jacobson wrote:

how about doing the expansion first, so entering
$ zz /jidanni_backups/da would then change into

$ zz /jidanni_backups/dan_home_bkp with below it the question
Display all 113 possibilities? (y or n)

What happens if you have:
dan_home-bkp, dan_home_nobkp, dan-home-bkp, dan-nohome-bkp, 
dan_nohome-bkp (etc.) in /jidanni_backups/?

Which do you choose for the first expansion?


I think they meant the case where all the files matching the given 
beginning have a longer prefix in common. The shell expands that prefix 
to the command line after asking to show all possibilities.


 $ rm *
 $ touch dan_home_bkp{1..199}
 $ ls -l da[TAB]
 Display all 199 possibilities? (y or n) [n]
 $ ls -l dan_home_bkp[cursor here]

So the shell has to fill in the common part anyway, and it might as well 
do it first, without asking.


(Which just so happens to be what Zsh does...)


--
Ilkka Virta / itvi...@iki.fi



Re: process substitution error handling

2020-08-06 Thread Jason A. Donenfeld
On Thu, Aug 6, 2020 at 1:15 PM Oğuz  wrote:
>
>
>
> On Thursday, August 6, 2020, Jason A. Donenfeld wrote:
>>
>> Hi,
>>
>> It may be a surprise to some that this code here winds up printing
>> "done", always:
>>
>> $ cat a.bash
>> set -e -o pipefail
>> while read -r line; do
>>   echo "$line"
>> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
>> sleep 1
>> echo done
>>
>> $ bash a.bash
>> 1
>> 2
>> done
>>
>> The reason for this is that process substitution right now does not
>> propagate errors. It's sort of possible to almost make this better
>> with `|| kill $$` or some variant, and trap handlers, but that's very
>> clunky and fraught with its own problems.
>>
>> Therefore, I propose a `set -o substfail` option for the upcoming bash
>> 5.1, which would cause process substitution to propagate its errors
>> upwards, even if done asynchronously.
>>
>
> set -e -o substfail
> : <(sleep 10; exit 1)
> foo
>
> Say that `foo' is a command that takes longer than ten seconds to complete, 
> how would you expect the shell to behave here? Should it interrupt `foo' or 
> wait for its termination and exit then? Or do something else?

It's likely simpler to check after foo, since bash can just ask "are
any of the process substitution processes that I was wait(2)ing on in
exited state with non zero return?", which just involves looking in a
little list titled exited_with_error_process_subst for being non-null.

A more sophisticated implementation could do that asynchronously with
signals and SIGCHLD. In that model, if bash gets sigchld from a
process that exits with failure, it then exits inside the signal
handler there. This actually wouldn't be too hard to do either.

Jason



Re: process substitution error handling

2020-08-06 Thread Greg Wooledge
On Thu, Aug 06, 2020 at 02:14:07PM +0200, Jason A. Donenfeld wrote:
> On Thu, Aug 6, 2020 at 1:15 PM Oğuz  wrote:
> > set -e -o substfail
> > : <(sleep 10; exit 1)
> > foo
> >
> > Say that `foo' is a command that takes longer than ten seconds to complete, 
> > how would you expect the shell to behave here? Should it interrupt `foo' or 
> > wait for its termination and exit then? Or do something else?
> 
> It's likely simpler to check after foo, since bash can just ask "are
> any of the process substitution processes that I was wait(2)ing on in
> exited state with non zero return?", which just involves looking in a
> little list titled exited_with_error_process_subst for being non-null.

So, in a script like this:

set -e -o failevenharder
: <(sleep 1; false)
cmd1
cmd2
cmd3
cmd4

They're asking that the script abort at some unpredictable point during
the sequence of commands cmd1, cmd2, cmd3, cmd4 whenever the process
substitution happens to terminate?

I'm almost tempted to get behind that just to help the set -e users
reach the point of terminal absurdity even faster.  The wreckage should
be hilarious.



Re: Expand first before asking the question "Display all xxx possibilities?"

2020-08-06 Thread Davide Brini
On Thu, 6 Aug 2020 15:13:30 +0300, Ilkka Virta  wrote:

> I think they meant the case where all the files matching the given
> beginning have a longer prefix in common. The shell expands that prefix
> to the command line after asking to show all possibilities.
>
>   $ rm *
>   $ touch dan_home_bkp{1..199}
>   $ ls -l da[TAB]
>   Display all 199 possibilities? (y or n) [n]
>   $ ls -l dan_home_bkp[cursor here]
>
> So the shell has to fill in the common part anyway, and it might as well
> do it first, without asking.

Don't know about the OP's environment, but it works out of the box for me,
and it always has, as far as I can remember:

$ touch dan_home_bkp{1..199}
$ ls da[TAB]
$ ls dan_home_bkp[TAB][TAB]
Display all 199 possibilities? (y or n)

$ bash --version
GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)

--
D.



Re: process substitution error handling

2020-08-06 Thread Jason A. Donenfeld
On Thu, Aug 6, 2020 at 2:14 PM Jason A. Donenfeld  wrote:
>
> On Thu, Aug 6, 2020 at 1:15 PM Oğuz  wrote:
> >
> >
> >
> > On Thursday, August 6, 2020, Jason A. Donenfeld wrote:
> >>
> >> Hi,
> >>
> >> It may be a surprise to some that this code here winds up printing
> >> "done", always:
> >>
> >> $ cat a.bash
> >> set -e -o pipefail
> >> while read -r line; do
> >>   echo "$line"
> >> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
> >> sleep 1
> >> echo done
> >>
> >> $ bash a.bash
> >> 1
> >> 2
> >> done
> >>
> >> The reason for this is that process substitution right now does not
> >> propagate errors. It's sort of possible to almost make this better
> >> with `|| kill $$` or some variant, and trap handlers, but that's very
> >> clunky and fraught with its own problems.
> >>
> >> Therefore, I propose a `set -o substfail` option for the upcoming bash
> >> 5.1, which would cause process substitution to propagate its errors
> >> upwards, even if done asynchronously.
> >>
> >
> > set -e -o substfail
> > : <(sleep 10; exit 1)
> > foo
> >
> > Say that `foo' is a command that takes longer than ten seconds to complete, 
> > how would you expect the shell to behave here? Should it interrupt `foo' or 
> > wait for its termination and exit then? Or do something else?
>
> It's likely simpler to check after foo, since bash can just ask "are
> any of the process substitution processes that I was wait(2)ing on in
> exited state with non zero return?", which just involves looking in a
> little list titled exited_with_error_process_subst for being non-null.
>
> A more sophisticated implementation could do that asynchronously with
> signals and SIGCHLD. In that model, if bash gets sigchld from a
> process that exits with failure, it then exits inside the signal
> handler there. This actually wouldn't be too hard to do either.

Actually, it looks like all the infrastructure for this latter
approach is already there.



Re: process substitution error handling

2020-08-06 Thread Eli Schwartz
On 8/6/20 6:05 AM, Jason A. Donenfeld wrote:
> Hi,
> 
> It may be a surprise to some that this code here winds up printing
> "done", always:
> 
> $ cat a.bash
> set -e -o pipefail
> while read -r line; do
>   echo "$line"
> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
> sleep 1
> echo done
> 
> $ bash a.bash
> 1
> 2
> done
> 
> The reason for this is that process substitution right now does not
> propagate errors.

Well, yes, it is an async command. But errexit has lots of other amusing
traps, like

$ echo $(false)

> It's sort of possible to almost make this better
> with `|| kill $$` or some variant, and trap handlers, but that's very
> clunky and fraught with its own problems.
> 
> Therefore, I propose a `set -o substfail` option for the upcoming bash
> 5.1, which would cause process substitution to propagate its errors
> upwards, even if done asynchronously.

Propagate the return value of async processes like this:

wait $! || die "async command failed with return status $?"
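
For instance, applied to the loop from your script (a sketch; `die' is
shorthand for whatever error handler you prefer):

die() { echo "$*" >&2; exit 1; }
while read -r line; do
   echo "$line"
done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
wait $! || die "async command failed with return status $?"
echo done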

> It'd certainly make a lot of my scripts more reliable.

The use of errexit is the focus of a long-running holy war. Detractors
would point out a very lengthy list of reasons why it's conceptually
broken by design. Some of those reasons are documented here (including
process substitution): http://mywiki.wooledge.org/BashFAQ/105

I recommend you do NOT claim this feature is a magic panacea that will
make your scripts reliable; instead, just say you would find it useful.

-- 
Eli Schwartz
Arch Linux Bug Wrangler and Trusted User





Re: Expand first before asking the question "Display all xxx possibilities?"

2020-08-06 Thread Chet Ramey
On 8/6/20 8:13 AM, Ilkka Virta wrote:
> On 5.8. 22:21, Chris Elvidge wrote:
>> On 05/08/2020 02:55 pm, Chet Ramey wrote:
>>> On 8/2/20 6:55 PM, 積丹尼 Dan Jacobson wrote:
>>>> how about doing the expansion first, so entering
>>>> $ zz /jidanni_backups/da would then change into
>>>> $ zz /jidanni_backups/dan_home_bkp with below it the question
>>>> Display all 113 possibilities? (y or n)
>>
>> What happens if you have:
>> dan_home-bkp, dan_home_nobkp, dan-home-bkp, dan-nohome-bkp,
>> dan_nohome-bkp (etc.) in /jidanni_backups/?
>> Which do you choose for the first expansion?
> 
> I think they meant the case where all the files matching the given
> beginning have a longer prefix in common. The shell expands that prefix to
> the command line after asking to show all possibilities.

Only if you set the "show-all-if-ambiguous" readline variable explicitly
asking for this behavior. Readline's default behavior is to complete up to
the longest common prefix, then, on the next completion attempt, to note
that there weren't any additional changes to the buffer and ask if the user
wants to see the alternatives. Dan wants a change in the behavior that
variable enables.
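
For reference, that variable is enabled with the following line in
~/.inputrc (or with the `bind' builtin at runtime):

set show-all-if-ambiguous on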

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: process substitution error handling

2020-08-06 Thread Oğuz
On Thursday, August 6, 2020, Greg Wooledge wrote:

> On Thu, Aug 06, 2020 at 02:14:07PM +0200, Jason A. Donenfeld wrote:
> > On Thu, Aug 6, 2020 at 1:15 PM Oğuz  wrote:
> > > set -e -o substfail
> > > : <(sleep 10; exit 1)
> > > foo
> > >
> > > Say that `foo' is a command that takes longer than ten seconds to
> complete, how would you expect the shell to behave here? Should it
> interrupt `foo' or wait for its termination and exit then? Or do something
> else?
> >
> > It's likely simpler to check after foo, since bash can just ask "are
> > any of the process substitution processes that I was wait(2)ing on in
> > exited state with non zero return?", which just involves looking in a
> > little list titled exited_with_error_process_subst for being non-null.
>
> So, in a script like this:
>
> set -e -o failevenharder
> : <(sleep 1; false)
> cmd1
> cmd2
> cmd3
> cmd4
>
> They're asking that the script abort at some unpredictable point during
> the sequence of commands cmd1, cmd2, cmd3, cmd4 whenever the process
> substitution happens to terminate?
>
>
My thoughts exactly. That would be disastrous.


> I'm almost tempted to get behind that just to help the set -e users
> reach the point of terminal absurdity even faster.  The wreckage should
> be hilarious.
>
>

-- 
Oğuz


Re: process substitution error handling

2020-08-06 Thread Kerin Millar

On 06/08/2020 13:33, Eli Schwartz wrote:

On 8/6/20 6:05 AM, Jason A. Donenfeld wrote:

Hi,

It may be a surprise to some that this code here winds up printing
"done", always:

$ cat a.bash
set -e -o pipefail
while read -r line; do
echo "$line"
done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
sleep 1
echo done

$ bash a.bash
1
2
done

The reason for this is that process substitution right now does not
propagate errors.


Well, yes, it is an async command. But errexit has lots of other amusing
traps, like

$ echo $(false)


It's sort of possible to almost make this better
with `|| kill $$` or some variant, and trap handlers, but that's very
clunky and fraught with its own problems.

Therefore, I propose a `set -o substfail` option for the upcoming bash
5.1, which would cause process substitution to propagate its errors
upwards, even if done asynchronously.


Propagate the return value of async processes like this:

wait $! || die "async command failed with return status $?"


You beat me to it. I was just about to suggest wait $! || exit. Indeed, 
I mentioned the same in a recent bug report against wireguard-tools.





It'd certainly make a lot of my scripts more reliable.


The use of errexit is the focus of a long-running holy war. Detractors
would point out a very lengthy list of reasons why it's conceptually
broken by design. Some of those reasons are documented here (including
process substitution): http://mywiki.wooledge.org/BashFAQ/105

I recommend you do NOT claim this feature is a magic panacea that will
make your scripts reliable; instead, just say you would find it useful.



I concur. The scripts I looked at tended heavily towards error handling 
at a distance and were already subject to one or two amusing errexit 
pitfalls.


--
Kerin Millar



Re: process substitution error handling

2020-08-06 Thread Eli Schwartz
On 8/6/20 9:15 AM, k...@plushkava.net wrote:
> You beat me to it. I was just about to suggest wait $! || exit. Indeed,
> I mentioned the same in a recent bug report against wireguard-tools.

So if I understand correctly, you reported the lack of wait $! || exit
in a script, and the script author instead responded by requesting a new
feature in bash that does the same thing, except after a random interval
during another command's execution?

> I concur. The scripts I looked at tended heavily towards error handling
> at a distance and were already subject to one or two amusing errexit
> pitfalls.

lol, I bet we could fix that by adding even more error handling at a
distance.

-- 
Eli Schwartz
Arch Linux Bug Wrangler and Trusted User





Re: Expand first before asking the question "Display all xxx possibilities?"

2020-08-06 Thread Ilkka Virta

On 6.8. 15:59, Chet Ramey wrote:

On 8/6/20 8:13 AM, Ilkka Virta wrote:

I think they meant the case where all the files matching the given
beginning have a longer prefix in common. The shell expands that prefix to
the command line after asking to show all possibilities.


Only if you set the "show-all-if-ambiguous" readline variable explicitly
asking for this behavior. Readline's default behavior is to complete up to
the longest common prefix, then, on the next completion attempt, to note
that there weren't any additional changes to the buffer and ask if the user
wants to see the alternatives. Dan wants a change in the behavior that
variable enables.


Right, sorry.

I do have it set because otherwise there's a step where tab-completion 
only produces a beep, and doesn't do anything useful. I didn't realize it
causes partial completion to be skipped too.



--
Ilkka Virta / itvi...@iki.fi



Re: process substitution error handling

2020-08-06 Thread Kerin Millar

On 06/08/2020 14:57, Eli Schwartz wrote:

On 8/6/20 9:15 AM, k...@plushkava.net wrote:

You beat me to it. I was just about to suggest wait $! || exit. Indeed,
I mentioned the same in a recent bug report against wireguard-tools.


So if I understand correctly, you reported the lack of wait $! || exit
in a script, and the script author instead responded by requesting a new
feature in bash that does the same thing, except after a random interval
during another command's execution?


Well, I wouldn't presume to know whether there is any relationship 
between said report and the feature request under discussion. Briefly, 
the errexit pitfall that affected me can be seen here:


https://github.com/WireGuard/wireguard-tools/blob/v1.0.20200513/src/wg-quick/darwin.bash#L299

I happened to mention that the exit status value of networksetup(8) is 
never checked and that it ought to be, with wait $! being one way of 
doing so. That being said, the proposed solution eschewed the use of 
process substitution altogether.





I concur. The scripts I looked at tended heavily towards error handling
at a distance and were already subject to one or two amusing errexit
pitfalls.


lol, I bet we could fix that by adding even more error handling at a
distance.


--
Kerin Millar



Re: process substitution error handling

2020-08-06 Thread Chet Ramey
On 8/6/20 6:05 AM, Jason A. Donenfeld wrote:
> Hi,
> 
> It may be a surprise to some that this code here winds up printing
> "done", always:
> 
> $ cat a.bash
> set -e -o pipefail
> while read -r line; do
>   echo "$line"
> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
> sleep 1
> echo done
> 
> $ bash a.bash
> 1
> 2
> done
> 
> The reason for this is that process substitution right now does not
> propagate errors. It's sort of possible to almost make this better
> with `|| kill $$` or some variant, and trap handlers, but that's very
> clunky and fraught with its own problems.
> 
> Therefore, I propose a `set -o substfail` option for the upcoming bash
> 5.1, which would cause process substitution to propagate its errors
> upwards, even if done asynchronously.
> 
> Chet - thoughts?

I don't like it, for two reasons:

1. Process substitution is a word expansion, and, with one exception, word
   expansions don't contribute to a command's exit status and
   consequently the behavior of errexit, and this proposal isn't compelling
   enough to change that even with a new option; and

2. Process substitution is asynchronous. I can't think of how spontaneously
   changing $? (and possibly exiting) at some random point in a script when
   the shell reaps a process substitution will make scripts more reliable.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: process substitution error handling

2020-08-06 Thread Jason A. Donenfeld
Hi Chet,

On Thu, Aug 6, 2020 at 4:30 PM Chet Ramey  wrote:
>
> On 8/6/20 6:05 AM, Jason A. Donenfeld wrote:
> > Hi,
> >
> > It may be a surprise to some that this code here winds up printing
> > "done", always:
> >
> > $ cat a.bash
> > set -e -o pipefail
> > while read -r line; do
> >   echo "$line"
> > done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
> > sleep 1
> > echo done
> >
> > $ bash a.bash
> > 1
> > 2
> > done
> >
> > The reason for this is that process substitution right now does not
> > propagate errors. It's sort of possible to almost make this better
> > with `|| kill $$` or some variant, and trap handlers, but that's very
> > clunky and fraught with its own problems.
> >
> > Therefore, I propose a `set -o substfail` option for the upcoming bash
> > 5.1, which would cause process substitution to propagate its errors
> > upwards, even if done asynchronously.
> >
> > Chet - thoughts?
>
> I don't like it, for two reasons:
>
> 1. Process substitution is a word expansion, and, with one exception, word
>expansions don't contribute to a command's exit status and
>consequently the behavior of errexit, and this proposal isn't compelling
>enough to change that even with a new option; and
>
> 2. Process substitution is asynchronous. I can't think of how spontaneously
>changing $? (and possibly exiting) at some random point in a script when
>the shell reaps a process substitution will make scripts more reliable.

Demi (CC'd) points out that there might be security dangers around
patterns like:

while read -r one two three; do
add_critical_thing_for "$one" "$two" "$three"
done < <(get_critical_things)

If get_critical_things returns a few lines but then exits with a
failure, the script will forget to call add_critical_thing_for, and
some kind of door will be held wide open. This is problematic and
arguably makes bash unsuitable for many of the sysadmin things that
people use bash for.
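
To make that concrete, here is a self-contained sketch, with stand-in
definitions for the two hypothetical functions:

get_critical_things() { echo "allow host1 tcp"; false; }
add_critical_thing_for() { echo "added rule: $*"; }

set -e
while read -r one two three; do
   add_critical_thing_for "$one" "$two" "$three"
done < <(get_critical_things)
echo "script continues; the failure above was never noticed"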

Perhaps another, clunkier, proposal would be to add `wait -s` so that
the wait builtin also waits for process substitutions and returns
their exit codes and changes $?. The downside would be that scripts
now need to add a "wait" after all of above such loops, but on the
upside, it's better than the current problematic situation.

Jason



Re: process substitution error handling

2020-08-06 Thread Chet Ramey
On 8/6/20 10:36 AM, Jason A. Donenfeld wrote:
> Hi Chet,
> 
> On Thu, Aug 6, 2020 at 4:30 PM Chet Ramey  wrote:
>>
>> On 8/6/20 6:05 AM, Jason A. Donenfeld wrote:
>>> Hi,
>>>
>>> It may be a surprise to some that this code here winds up printing
>>> "done", always:
>>>
>>> $ cat a.bash
>>> set -e -o pipefail
>>> while read -r line; do
>>>   echo "$line"
>>> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
>>> sleep 1
>>> echo done
>>>
>>> $ bash a.bash
>>> 1
>>> 2
>>> done
>>>
>>> The reason for this is that process substitution right now does not
>>> propagate errors. It's sort of possible to almost make this better
>>> with `|| kill $$` or some variant, and trap handlers, but that's very
>>> clunky and fraught with its own problems.
>>>
>>> Therefore, I propose a `set -o substfail` option for the upcoming bash
>>> 5.1, which would cause process substitution to propagate its errors
>>> upwards, even if done asynchronously.
>>>
>>> Chet - thoughts?
>>
>> I don't like it, for two reasons:
>>
>> 1. Process substitution is a word expansion, and, with one exception, word
>>expansions don't contribute to a command's exit status and
>>consequently the behavior of errexit, and this proposal isn't compelling
>>enough to change that even with a new option; and
>>
>> 2. Process substitution is asynchronous. I can't think of how spontaneously
>>changing $? (and possibly exiting) at some random point in a script when
>>the shell reaps a process substitution will make scripts more reliable.
> 
> Demi (CC'd) points out that there might be security dangers around
> patterns like:
> 
> while read -r one two three; do
> add_critical_thing_for "$one" "$two" "$three"
> done < <(get_critical_things)
> 
> If get_critical_things returns a few lines but then exits with a
> failure, the script will forget to call add_critical_thing_for, and
> some kind of door will be held wide open. This is problematic and
> arguably makes bash unsuitable for many of the sysadmin things that
> people use bash for.

If this is a problem for a particular script, add the usual `wait $!'
idiom and react accordingly. If that's not feasible, you can always
use some construct other than process substitution (e.g., a file).
I don't see how this "makes bash unsuitable for many [...] sysadmin
things."

> 
> Perhaps another, clunkier, proposal would be to add `wait -s` so that
> the wait builtin also waits for process substitutions and returns
> their exit codes and changes $?. The downside would be that scripts
> now need to add a "wait" after all of above such loops, but on the
> upside, it's better than the current problematic situation.

You can already do this. Since process substitution sets $!, you can
keep track of all of the process substitutions of interest and wait
for as many of them as you like. `wait' will return their statuses
and set $? for you.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: process substitution error handling

2020-08-06 Thread Chet Ramey
On 8/6/20 10:48 AM, Chet Ramey wrote:

>> Perhaps another, clunkier, proposal would be to add `wait -s` so that
>> the wait builtin also waits for process substitutions and returns
>> their exit codes and changes $?. The downside would be that scripts
>> now need to add a "wait" after all of above such loops, but on the
>> upside, it's better than the current problematic situation.
> 
> You can already do this. Since process substitution sets $!, you can
> keep track of all of the process substitutions of interest and wait
> for as many of them as you like. `wait' will return their statuses
> and set $? for you.

I should have also mentioned that while bash-5.0 requires you to wait
for these process substitutions one at a time, in between their creation,
bash-5.1 will allow you to save $! and wait for them later:

$ cat x2
: <(sleep 5; exit 1)
P1=$!

: <(sleep 5; exit 2)
P2=$!

: <(sleep 5; exit 3)
P3=$!

wait $P1 ; echo $?
wait $P2 ; echo $?
wait $P3 ; echo $?
$ ../bash-5.1-alpha/bash x2
1
2
3


-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: process substitution error handling

2020-08-06 Thread Jason A. Donenfeld
On Thu, Aug 6, 2020 at 4:49 PM Chet Ramey  wrote:
>
> On 8/6/20 10:36 AM, Jason A. Donenfeld wrote:
> > Hi Chet,
> >
> > On Thu, Aug 6, 2020 at 4:30 PM Chet Ramey  wrote:
> >>
> >> On 8/6/20 6:05 AM, Jason A. Donenfeld wrote:
> >>> Hi,
> >>>
> >>> It may be a surprise to some that this code here winds up printing
> >>> "done", always:
> >>>
> >>> $ cat a.bash
> >>> set -e -o pipefail
> >>> while read -r line; do
> >>>   echo "$line"
> >>> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
> >>> sleep 1
> >>> echo done
> >>>
> >>> $ bash a.bash
> >>> 1
> >>> 2
> >>> done
> >>>
> >>> The reason for this is that process substitution right now does not
> >>> propagate errors. It's sort of possible to almost make this better
> >>> with `|| kill $$` or some variant, and trap handlers, but that's very
> >>> clunky and fraught with its own problems.
> >>>
> >>> Therefore, I propose a `set -o substfail` option for the upcoming bash
> >>> 5.1, which would cause process substitution to propagate its errors
> >>> upwards, even if done asynchronously.
> >>>
> >>> Chet - thoughts?
> >>
> >> I don't like it, for two reasons:
> >>
> >> 1. Process substitution is a word expansion, and, with one exception, word
> >>expansions don't contribute to a command's exit status and
> >>consequently the behavior of errexit, and this proposal isn't compelling
> >>enough to change that even with a new option; and
> >>
> >> 2. Process substitution is asynchronous. I can't think of how spontaneously
> >>changing $? (and possibly exiting) at some random point in a script when
> >>the shell reaps a process substitution will make scripts more reliable.
> >
> > Demi (CC'd) points out that there might be security dangers around
> > patterns like:
> >
> > while read -r one two three; do
> > add_critical_thing_for "$one" "$two" "$three"
> > done < <(get_critical_things)
> >
> > If get_critical_things returns a few lines but then exits with a
> > failure, the script will forget to call add_critical_thing_for, and
> > some kind of door will be held wide open. This is problematic and
> > arguably makes bash unsuitable for many of the sysadmin things that
> > people use bash for.
>
> If this is a problem for a particular script, add the usual `wait $!'
> idiom and react accordingly. If that's not feasible, you can always
> use some construct other than process substitution (e.g., a file).
> I don't see how this "makes bash unsuitable for many [...] sysadmin
> things."
>
> >
> > Perhaps another, clunkier, proposal would be to add `wait -s` so that
> > the wait builtin also waits for process substitutions and returns
> > their exit codes and changes $?. The downside would be that scripts
> > now need to add a "wait" after all of above such loops, but on the
> > upside, it's better than the current problematic situation.
>
> You can already do this. Since process substitution sets $!, you can
> keep track of all of the process substitutions of interest and wait
> for as many of them as you like. `wait' will return their statuses
> and set $? for you.

That doesn't always work:

set -e
while read -r line; do
   echo "$line" &
done < <(echo 1; sleep 1; echo 2; sleep 1; exit 77)
sleep 1
wait $!
echo done

Either way, tagging on `wait $!` everywhere, and hoping it works like
I want feels pretty flimsy. Are you sure you're opposed to set -o
procsuberr that would do the right thing for most common use cases?



Re: process substitution error handling

2020-08-06 Thread Chet Ramey
On 8/6/20 11:31 AM, Jason A. Donenfeld wrote:
> On Thu, Aug 6, 2020 at 4:49 PM Chet Ramey  wrote:
>>
>> On 8/6/20 10:36 AM, Jason A. Donenfeld wrote:
>>> Hi Chet,
>>>
>>> On Thu, Aug 6, 2020 at 4:30 PM Chet Ramey  wrote:

>>>> On 8/6/20 6:05 AM, Jason A. Donenfeld wrote:
>>>>> Hi,
>>>>>
>>>>> It may be a surprise to some that this code here winds up printing
>>>>> "done", always:
>>>>>
>>>>> $ cat a.bash
>>>>> set -e -o pipefail
>>>>> while read -r line; do
>>>>>    echo "$line"
>>>>> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
>>>>> sleep 1
>>>>> echo done
>>>>>
>>>>> $ bash a.bash
>>>>> 1
>>>>> 2
>>>>> done
>>>>>
>>>>> The reason for this is that process substitution right now does not
>>>>> propagate errors. It's sort of possible to almost make this better
>>>>> with `|| kill $$` or some variant, and trap handlers, but that's very
>>>>> clunky and fraught with its own problems.
>>>>>
>>>>> Therefore, I propose a `set -o substfail` option for the upcoming bash
>>>>> 5.1, which would cause process substitution to propagate its errors
>>>>> upwards, even if done asynchronously.
>>>>>
>>>>> Chet - thoughts?
>>>>
>>>> I don't like it, for two reasons:
>>>>
>>>> 1. Process substitution is a word expansion, and, with one exception, word
>>>>    expansions don't contribute to a command's exit status and
>>>>    consequently the behavior of errexit, and this proposal isn't compelling
>>>>    enough to change that even with a new option; and
>>>>
>>>> 2. Process substitution is asynchronous. I can't think of how spontaneously
>>>>    changing $? (and possibly exiting) at some random point in a script when
>>>>    the shell reaps a process substitution will make scripts more reliable.
>>>
>>> Demi (CC'd) points out that there might be security dangers around
>>> patterns like:
>>>
>>> while read -r one two three; do
>>> add_critical_thing_for "$one" "$two" "$three"
>>> done < <(get_critical_things)
>>>
>>> If get_critical_things returns a few lines but then exits with a
>>> failure, the script will forget to call add_critical_thing_for, and
>>> some kind of door will be held wide open. This is problematic and
>>> arguably makes bash unsuitable for many of the sysadmin things that
>>> people use bash for.
>>
>> If this is a problem for a particular script, add the usual `wait $!'
>> idiom and react accordingly. If that's not feasible, you can always
>> use some construct other than process substitution (e.g., a file).
>> I don't see how this "makes bash unsuitable for many [...] sysadmin
>> things."
>>
>>>
>>> Perhaps another, clunkier, proposal would be to add `wait -s` so that
>>> the wait builtin also waits for process substitutions and returns
>>> their exit codes and changes $?. The downside would be that scripts
>>> now need to add a "wait" after all of above such loops, but on the
>>> upside, it's better than the current problematic situation.
>>
>> You can already do this. Since process substitution sets $!, you can
>> keep track of all of the process substitutions of interest and wait
>> for as many of them as you like. `wait' will return their statuses
>> and set $? for you.
> 
> That doesn't always work:

Because you have structured the loop so it's difficult to save the last
asynchronous command you want outside of it? There's an easy fix for that:
decide what your priority is and write code in a way to make that happen.

> 
> set -e
> while read -r line; do
>   echo "$line" &
> done < <(echo 1; sleep 1; echo 2; sleep 1; exit 77)
> sleep 1
> wait $!
> echo done

> Either way, tagging on `wait $!` everywhere, and hoping it works like
> I want feels pretty flimsy. Are you sure you're opposed to set -o
> procsuberr that would do the right thing for most common use cases?

Yes, I don't think there's real consensus on what the right thing is,
and "most common use cases" is already possible with existing features.
There will be more support in bash-5.1 to make certain use cases easier,
but at least they will be deterministic. I don't think introducing more
non-determinism into the shell is helpful.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: process substitution error handling

2020-08-06 Thread Eli Schwartz
On 8/6/20 11:31 AM, Jason A. Donenfeld wrote:
> That doesn't always work:
> 
> set -e
> while read -r line; do
>   echo "$line" &
> done < <(echo 1; sleep 1; echo 2; sleep 1; exit 77)
> sleep 1
> wait $!
> echo done

So instead of your contrived case, write it properly. Check the process
substitution first, and make sure as a bonus you don't run anything if
it failed:

set -e
mapfile -t lines < <(echo 1; sleep 1; echo 2; sleep 1; exit 77)
wait $!

for line in "${lines[@]}"; do
   echo "$line" &
done
sleep 1
wait $!
echo done


(And in bash 5.1 you can remember each &'ed command to wait on later.)

> Either way, tagging on `wait $!` everywhere, and hoping it works like
> I want feels pretty flimsy. Are you sure you're opposed to set -o
> procsuberr that would do the right thing for most common use cases?

You're asking to add broken behavior which does the wrong thing in
bizarre, outrageous ways in many cases.

Your rationale is it works in your specific case, and you have a
mysterious aversion to wait $! which you haven't explained. (Why is it
flimsy? It performs exactly as documented.)

I think the problem here is twofold:

You have an emotional attachment to errexit and you think it solves
problems, so you want to turn everything into it.

Ultimately, you really don't have issues with "flimsy" anything, but you
want bash to be a different language, one with a formal exception model,
strong types, and possibly objects. Such a language would then react to
*programmer errors* by failing to compile, or dumping a traceback of the
call site, and could be used as a beginner-friendly language that
doesn't surprise the newbies.

There are a number of good languages like that. Shell isn't one of them.
Sometimes you need to understand how edge cases work.

Nevertheless, there are ways to code robustly in it. In another
language, you'd run forked commands upfront and save their results to
some sort of object or variable, and only then process it (after
checking the return code or using a function that automatically raises
exceptions on failure).
You can do that in bash too, as I demonstrated with mapfile and wait $!

It isn't really a terrible disadvantage that you need more characters to
type out your intent. Most particularly when your intent is "I want this
command to run async, but I also want it to exit the script at XXX
location if previous async commands have failed". You cannot really get
around explicitly tagging the place where you want to raise errors.
Using an RNG to randomly raise the error automatically at unpredictable
locations isn't a valuable feature to add.

-- 
Eli Schwartz
Arch Linux Bug Wrangler and Trusted User





Re: process substitution error handling

2020-08-06 Thread Kerin Millar

On 06/08/2020 17:21, Eli Schwartz wrote:

On 8/6/20 11:31 AM, Jason A. Donenfeld wrote:

That doesn't always work:

set -e
while read -r line; do
echo "$line" &
done < <(echo 1; sleep 1; echo 2; sleep 1; exit 77)
sleep 1
wait $!
echo done


I wonder why wait $! doesn't do the job here.



So instead of your contrived case, write it properly. Check the process
substitution first, and make sure as a bonus you don't run anything if
it failed:

set -e
mapfile -t lines < <(echo 1; sleep 1; echo 2; sleep 1; exit 77)
wait $!

for line in "${lines[@]}"; do
   echo "$line" &
done
sleep 1
wait $!
echo done


As Jason appears set on using "set -e -o pipefail", here is another 
approach that may be more to his taste:


set -e -o pipefail
{ echo 1; sleep 1; echo 2; sleep 1; exit 77; } | while read -r line; do
echo "$line" &
done
sleep 1
echo done

Of course, the loop will be executed in a subshell, but that can be 
averted by shopt -s lastpipe.
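
For example, with lastpipe in effect the loop body runs in the current
shell, so its side effects survive the pipeline (a sketch; lastpipe
takes effect in scripts because job control is off there):

shopt -s lastpipe
printf '%s\n' 1 2 3 | while read -r line; do count=$((count + 1)); done
echo "$count"   # prints 3; without lastpipe, an empty line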


--
Kerin Millar



Re: process substitution error handling

2020-08-06 Thread Eli Schwartz
On 8/6/20 12:36 PM, k...@plushkava.net wrote:
> On 06/08/2020 17:21, Eli Schwartz wrote:
>> On 8/6/20 11:31 AM, Jason A. Donenfeld wrote:
>>> That doesn't always work:
>>>
>>> set -e
>>> while read -r line; do
>>>     echo "$line" &
>>> done < <(echo 1; sleep 1; echo 2; sleep 1; exit 77)
>>> sleep 1
>>> wait $!
>>> echo done
> 
> I wonder why wait $! doesn't do the job here.

Because `echo "$line" &` sets a new value for $! after the <() did.

More to the point, you want to wait $! *before* running any commands in
the while loop, because if the <() failed, it might not be a good idea
to run those commands.
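
A minimal demonstration (relying on bash-5.1 being able to wait for a
saved procsub pid later, as Chet noted elsewhere in this thread):

: <(exit 3)
procsub=$!   # save the procsub's pid right away
true &       # $! now names this background job instead
wait "$procsub"; echo "procsub status: $?"   # 3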

-- 
Eli Schwartz
Arch Linux Bug Wrangler and Trusted User





Re: Is this a bug?

2020-08-06 Thread Dmitry Goncharov via Bug reports for the GNU Bourne Again SHell
On Thu, Aug 6, 2020 at 1:54 PM George R Goffe  wrote:
> I have several directories on a system with > 300k files. When I use filename 
> completion bash freezes for over a minute depending on the number of files. 
> I'm pretty sure that bash has to read the directory to do the completion but 
> the read appears to be uninterruptible. Is this a bug?

Why don't you run strace (or truss or whatever you have on your
system) and see which syscall bash is blocked in?
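
For example, on Linux, where readdir() is typically backed by the
getdents64 syscall (substitute the pid of the stuck shell):

strace -p "$STUCK_BASH_PID" -e trace=getdents64,openat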

regards, Dmitry



Re: Is this a bug?

2020-08-06 Thread Chet Ramey
On 8/6/20 1:53 PM, George R Goffe wrote:
> Hi,
> 
> I apologize for bothering you with this question.
> 
> I have several directories on a system with > 300k files. When I use filename 
> completion bash freezes for over a minute depending on the number of files. 
> I'm pretty sure that bash has to read the directory to do the completion but 
> the read appears to be uninterruptible. Is this a bug?

Can you tell what system call bash is executing? Some file systems make
the system call underlying readdir() uninterruptible.

In general, the readline filename completion function that calls readdir
only reads a single directory entry at a time, and returns it to a caller.
If the SIGINT causes readdir to return NULL, the function returns normally.
If readdir returns a valid entry, the caller (e.g., rl_completion_matches)
checks for receipt of a signal. That should be enough to terminate the
directory read.

> Again, I apologize for bothering you with this.

No bother.

Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: Is this a bug?

2020-08-06 Thread Valentin Bajrami
Are you trying to run autocompletion on an NFS mount? Are you using `ls'
to autocomplete, and if so, did you run \ls /path/to/dir .. ?

It might be worth mentioning the bash version you are using.


On Thu, Aug 6, 2020 at 19:53, George R Goffe wrote:

> Hi,
>
> I apologize for bothering you with this question.
>
> I have several directories on a system with > 300k files. When I use
> filename completion bash freezes for over a minute depending on the number
> of files. I'm pretty sure that bash has to read the directory to do the
> completion but the read appears to be uninterruptible. Is this a bug?
>
> Again, I apologize for bothering you with this.
>
> Best regards,
>
> George...
>
>


Undocumented for-loop construct

2020-08-06 Thread Klaas Vantournhout
Dear Bash-developers,

Recently I came across a surprising undocumented bash-feature

   $ for i in 1 2 3; { echo $i; };

The usage of curly braces instead of the well-documented do ... done
construct was a complete surprise to me and even led me to open the
following question on Stack Overflow:


https://stackoverflow.com/questions/63247449/alternate-for-loop-construct

The community is unable to find any reference to this feature, except

* a brief slide in some youtube presentation by Stephen Bourne:

https://www.youtube.com/watch?v=2kEJoWfobpA&t=2095
Relevant part starts at 34:55

* and the actual source code of bash and the Bourne Shell V7

Questions:
1) Is there a reason why this is undocumented?
2) Can this become documented?
3) What is the historical background behind this alternative construct?

Thanks in advance,

Klaas


Re: Undocumented for-loop construct

2020-08-06 Thread Dale R. Worley
Klaas Vantournhout  writes:
> Recently I came across a surprising undocumented bash-feature
>
>$ for i in 1 2 3; { echo $i; };
>
> The usage of curly braces instead of the well-documented do ... done
> construct was a complete surprise to me and even led me to open the
> following question on Stack Overflow:

Interesting!  Looking at parse.y, it looks like do ... done can be
replaced with { ... } in 'for' and 'select' statements, but not 'while'
and 'until' statements.  Not clear why that would be, though I haven't
tried extending while/until and recompiling parse.y; maybe it doesn't
work.
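
For what it's worth, the braced 'select' form seems to parse here too:

   select i in 1 2 3; { echo $i; break; }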

Dale



Re: Undocumented for-loop construct

2020-08-06 Thread Lawrence Velázquez
> On Aug 6, 2020, at 10:29 PM, Dale R. Worley  wrote:
> 
> Klaas Vantournhout  writes:
>> Recently I came across a surprising undocumented bash-feature
>> 
>>   $ for i in 1 2 3; { echo $i; };
>> 
>> The usage of curly braces instead of the well-documented do ... done
>> construct was a complete surprise to me and even led me to open the
>> following question on Stack Overflow:
> 
> Interesting!  Looking at parse.y, it looks like do ... done can be
> replaced with { ... } in 'for' and 'select' statements, but not 'while'
> and 'until' statements.  Not clear why that would be

Unless I'm quite mistaken, our very own Oğuz left the only answer
on the OP's Stack Overflow question, explaining that braced forms
of the other compound commands would be syntactically ambiguous.

https://stackoverflow.com/a/63261064
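
(In short: a while or until condition is itself a command list, so in
something like `while true; { echo hi; }' the brace group parses as just
another command of the condition, leaving no way to tell where the
condition ends and the body begins. A for loop's word list cannot
contain commands, so the `{' there is unambiguous.)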

vq