On Tue, Aug 4, 2009 at 12:44 PM, Chet Ramey <chet.ra...@case.edu> wrote:
>> It seems to me that this loop should just wait until the process is
>> 'kill -CONT'ed and keep right on going as if nothing had happened. Is
>> there any reason not to do this?
>
> Ummm...yes. It renders job control useless. If we have the shell
> hang until a stopped child process is continued, why run with job
> control at all? If you want to treat the entire loop as a stoppable
> unit, run it in a subshell.
I hadn't thought of that. And, indeed, just surrounding the loop with parentheses, giving a subshell, does solve my problem. (Well, except that the users typing in these loops are novices, so now I have the problem of trying to get them to surround their loops with parentheses. :-)

I'm still not sure that the original behavior makes sense, as opposed to simply hanging until the child is continued (with the hang interruptible by Control-C). Is there some use case in which this provides a benefit? It surprised me, and I can't recall ever having seen it documented. This scenario is not something that will happen accidentally, since there's really no way to SIGSTOP the child without doing it from another shell, so the prospect of a user ending up in front of a "hung" shell doesn't seem like much of a problem.

Here's my use case, for what it's worth: very novice users use these loops to run a series of jobs, each of which may take hours. It would be handy for me, in an admin role, to occasionally be able to STOP/CONT these jobs as a method of ad hoc priority shuffling (to make one job step in front of another). I can't do that, though, if it breaks their loops.

Mike
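
P.S. For anyone finding this thread in the archives: here is a minimal sketch of the parenthesized version. "long_job" and the glob are hypothetical stand-ins for whatever the users actually run, not the real commands.

    # Run the whole loop in a subshell so it can be paused/resumed as a unit.
    # "long_job" is a hypothetical placeholder for the real per-iteration command.
    ( for f in inputs/*; do
          long_job "$f"
      done )

    # From another shell, an admin can then do something like:
    #   kill -STOP <pid of the running job>   # subshell's wait just blocks
    #   kill -CONT <pid of the running job>   # job resumes, loop carries on

As I understand it, because the subshell runs without job control, a stopped child simply leaves it waiting, and the loop picks up where it left off once the job is continued.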