Re: pwd and prompt don't update after deleting current working directory

2024-07-12 Thread Chet Ramey

On 7/11/24 9:53 PM, David Hedlund wrote:

Thanks, Lawrence! I found this discussion helpful and believe it would be a 
valuable feature to add. Can I submit this as a feature request?


I'm not going to add this. It's not generally useful for interactive
shells, and dangerous for non-interactive shells.

If this is a recurring problem for you, I suggest you write a shell
function to implement the behavior you want and run it from
PROMPT_COMMAND.

That behavior could be as simple as

pwd -P >/dev/null 2>&1 || cd ..
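
That could be wrapped up for PROMPT_COMMAND like so (a sketch; the
function name is arbitrary, and it climbs until it reaches a directory
that still exists):

    check_cwd() {
        # pwd -P fails if the current directory has been removed;
        # move up until we land somewhere that still exists.
        while ! pwd -P >/dev/null 2>&1; do
            cd .. || break
        done
    }
    PROMPT_COMMAND=check_cwd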

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: waiting for process substitutions

2024-07-12 Thread Chet Ramey

On 7/9/24 6:12 AM, Zachary Santer wrote:

On Fri, Jul 5, 2024 at 2:38 PM Chet Ramey  wrote:


On 6/29/24 10:51 PM, Zachary Santer wrote:

so you were then able to wait for each process substitution individually,
as long as you saved $! after they were created. `wait' without arguments
would still wait for all process substitutions (procsub_waitall()), but
the man page continued to guarantee only waiting for the last one.

This was unchanged in bash-5.2. I changed the code to match what the man
page specified in 10/2022, after

https://lists.gnu.org/archive/html/bug-bash/2022-10/msg00107.html


Is what's being reported there undesirable behavior? 


Yes, of course. It shouldn't hang, even if there is a way to work around
it. The process substitution and the subshell where `wait' is running
don't necessarily have a strict parent-child relationship, even if bash
optimizes away another fork for the subshell.



On the other hand, allowing 'wait' without arguments to wait on all
process substitutions would allow my original example to work, in the
case that there aren't other child processes expected to outlive this
pipeline.


So you're asking for a new feature, probably controlled by a new shell
option.


We've discussed this before. `wait -n' waits for the next process to
terminate; it doesn't look back at processes that have already terminated
and been added to the list of saved exit statuses. There is code tagged
for bash-5.4 that allows `wait -n' to look at these exited processes as
long as it's given an explicit set of pid arguments.
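
(Under that bash-5.4 change, a sketch of what would then work:)

    sleep 1 & p1=$!
    sleep 60 & p2=$!
    sleep 5                  # p1 exits; its status is saved internally
    wait -n "$p1" "$p2"      # with explicit pids, returns p1's saved status
    echo "reaped with status $?"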


I read through some of that conversation at the time. Seemed like an
obvious goof. Kind of surprised the fix isn't coming to bash 5.3,
honestly.


Not really, since the original intent was to wait for the *next* process
to terminate. That didn't change when the ability to wait for explicit
pids was added.


And why "no such job" instead of "not a child of this shell"?


Because wait -n takes pid arguments that are part of jobs.



They're similar, but they're not jobs. They run in the background, but you
can't use the same set of job control primitives to manipulate them.
Their scope is expected to be the lifetime of the command they're a part
of, not to run in the background until they're wanted.


Would there be a downside to making procsubs jobs?


If you want to treat them like jobs, you can do that. It just means doing
more work using mkfifo and giving up on using /dev/fd. I don't see it as
being worth the work to do it internally.



Consider my original example:
command-1 | tee >( command-2 ) >( command-3 ) >( command-4 )

Any nontrivial command is going to take more time to run than it took
to be fed its input.


In some cases, yes.


The idea that no process in a process
substitution will outlive its input stream precludes a reading process
substitution from being useful.


It depends on whether or not it can cope with its input (in this case)
file descriptor being invalidated. In some cases, yes, in some cases, no.


When you say "invalidated," are you referring to something beyond the
process in a reading process substitution simply receiving EOF?
Everything should be able to handle that much.


They're pipes, so there are more semantics beyond receiving EOF on the read
side. Writing on a pipe whose reader has gone away, for example, as below.



And never mind
exec {fd}< <( command )
I shouldn't do this?


Sure, of course you can. You've committed to managing the file descriptor
yourself at this point, like any other file descriptor you open with exec.


But then, if I 'exec {fd}<&-' before consuming all of command's
output, I would expect it to receive SIGPIPE and die, if it hasn't
already completed. And I might want to ensure that this child process
has terminated before the calling script exits.


Then save $! and wait for it. The only change we're talking about here is
to accommodate your request to be able to wait for multiple process
substitutions created before you have a chance to save all of the pids.
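
(That is, a sketch of the pattern, with `command' a stand-in:)

    exec {fd}< <( command ); pid=$!   # save the procsub's pid right away
    read -r line <&"$fd"              # consume part of the output
    exec {fd}<&-                      # the writer may now get SIGPIPE
    wait "$pid"                       # 128+13 here if it died of SIGPIPE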




Why should these be different in practice?

(1)
mkfifo named-pipe
child process command < named-pipe &
{
foreground shell commands
} > named-pipe

(2)
{
foreground shell commands
} > >( child process command )


Because you create a job with one and not the other, explicitly allowing
you to manipulate `child' directly?
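
(For example, with (1) the reader becomes a job you can address by
jobspec - a sketch, with `wc -l' standing in for the child command:)

    mkfifo named-pipe
    wc -l < named-pipe &
    {
        echo one; echo two
    } > named-pipe
    wait %1       # jobspec primitives apply: jobs, kill %1, fg %1, ...
    rm named-pipe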


Right, but does it have to be that way? What if the asynchronous
processes in process substitutions were jobs?


If you want them to work that way, take a shot at it. I don't personally
think it's worth the effort.


If you need to capture all the PIDs of all your background processes,
you'll have to launch them one at a time.  This may mean using FIFOs
(named pipes) instead of anonymous process substitutions, in some cases.


Bash is already tracking the pids for all child processes not waited
on, internally. So I imagine it wouldn't be too much work to make that
information available to the script it's running.


So an additional feature request.

Re: waiting for process substitutions

2024-07-12 Thread Robert Elz
Date:Fri, 12 Jul 2024 11:48:15 -0400
From:Chet Ramey 
Message-ID:  <258bcd3a-a936-4751-8e24-916fbeb9c...@case.edu>


  | Not really, since the original intent was to wait for the *next* process
  | to terminate.

There are two issues with that.   The first is "next after what?": one
interpretation would be "the next after the last which was waited upon"
(one way or another).   The other, and the one you seem to imply, is
"the next which terminates after now" - ie: still running when the wait
command is executed.   But that's an obvious race condition, and that's
the second issue, as there is no possible way to know (in the script
which is executing "wait -n") which processes have terminated at that
instant.

Eg: let's assume I have two running bg jobs, one which is going to take
a very long time, the other which will finish fairly soon.

For this e-mail, I'll emulate those two with just "sleep", though one
of them might be a rebuild of firefox, and all its dependencies, from
sources (yes, including rust), which will take some time, and the other
is a rebuild of "true" (/bin/true not the builtin), which probably won't,
as an empty executable file is all that's required.

So, and assuming an implementation of sleep which accepts fractional
seconds:

sleep $(( 5 * 24 * 60 * 60 )) & J1=$!
sleep 0.01 & J2=$!

printf 'Just so the shell is doing something: jobs are %s & %s\n' \
"${J1}" "${J2}"

wait -n

Now which of the two background jobs is that waiting for?  Which do you
expect the script writer intended to wait for?   You can make the 2nd
sleep be "sleep 0" if you want to do a more reasonable test, just make
sure when you test, to get a valid result, you don't interrupt that wait.

The current implementation is lunacy: it cannot possibly have any users,
since without doing a wait the script cannot possibly know what has
finished already, so it can't possibly be explicitly excluding jobs which
just happen to have finished after the last "wait -n" (or other wait).

Of course, in the above simple example, the wait -n could be replaced
by wait "${J2}" which would work just fine, but a real example would
probably have many running jobs, some of which are very quick, and
others which aren't, and some arbitrary ones of the quick ones might
be so quick that they are finished before the script is ready to wait.
Even a firefox build might be that quick, if the options passed to
the top level make happen to contain a make syntax error, and so all
that happens is an error message (Usage:...) and very quick exit.

Please just change this, use the first definition of "next job to
finish" - and in the case when there are already several of them,
pick one, any one - you could order them by the time that bash reaped
the jobs internally, but there's no real reason to do so, as that
isn't necessarily the order the actual processes terminated, just
the order the kernel picked to answer the wait() sys call, when
there are several child zombies ready to be reaped.


  | > Bash is already tracking the pids for all child processes not waited
  | > on, internally. So I imagine it wouldn't be too much work to make that
  | > information available to the script it's running.
  |
  | So an additional feature request.

If it helps, to perhaps provide some consistency, the NetBSD shell has
a builtin:

   jobid [-g|-j|-p] [job]
 With no flags, print the process identifiers of the processes in
 the job.

(-g instead gives the process group, -j the job identifier (%n), and
-p the lead pid (that which was $! when the job was started, which might
also be the process group, but also might not be).   The "job" arg (which
defaults to '%%') can identify the job by any of the methods that wait,
or kill, or "fg" (etc) allow, that is: %% %- %+ %string or a pid ($!)).
Just one "job" arg, and only one option allowed, so there's no temptation
(nor requirement) to attempt to write sh code to parse the output and
work out what is what.  It's a builtin, running it multiple times is
cheaper than any parse attempt could possibly be.
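
For example, as described (NetBSD sh):

   sleep 100 &
   jobid "$!"      # the pids of all processes in that job
   jobid -j "$!"   # its job identifier, e.g. %1
   jobid -p "$!"   # the lead pid - what $! was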

jobid exits with status 2 if there is an argument error; status 1 if,
with -g, the job had no separate process group, or, with -p, there
is no process group leader (should not happen); and otherwise
exits with status 0.

("argument error" includes both things like giving 2 options, or an
invalid (unknown) one, or giving a job arg that doesn't resolve to a
current (running, stopped, or terminated but unwaited) job.   Job
control needs to be enabled (rare in scripts) to get separate process
groups.   The "process group leader" is just $! - has no particular
relationship with actual process groups (and yes, the wording could be better).

That command can be run after each job is created, using $! as the job
arg, and saving the pids, and/or job number (for later execution when
needed) however the script likes,

Mu

Re: waiting for process substitutions

2024-07-12 Thread Greg Wooledge
On Sat, Jul 13, 2024 at 07:40:42 +0700, Robert Elz wrote:
> Please just change this, use the first definition of "next job to
> finish" - and in the case when there are already several of them,
> pick one, any one - you could order them by the time that bash reaped
> the jobs internally, but there's no real reason to do so, as that
> isn't necessarily the order the actual processes terminated, just
> the order the kernel picked to answer the wait() sys call, when
> there are several child zombies ready to be reaped.

This would be greatly preferred, and it's how most people *think*
wait -n currently works.

The common use case for "wait -n" is a loop that tries to process N jobs
at a time.  Such as this one:

greg@remote:~$ cat ~greybot/factoids/wait-n; echo
Run up to 5 processes in parallel (bash 4.3): i=0 j=5; for elem in "${array[@]}"; do (( i++ < j )) || wait -n; my_job "$elem" & done; wait

If two jobs happen to finish simultaneously, the next call to wait -n
should reap one of them, and then the call after that should reap
the other.  That's how everyone wants it to work, as far as I've seen.

*Nobody* wants it to skip the job that happened to finish at the exact
same time as the first one, and then wait for a third job.  If that
happens in the loop above, you'll have only 4 jobs running instead of 5
from that point onward.



[bug #65981] "bash test -v" does not work as documented with "export KEY="

2024-07-12 Thread anonymous
URL:
  

 Summary: "bash test -v" does not work as documented with
"export KEY="
   Group: The GNU Bourne-Again SHell
   Submitter: None
   Submitted: Sat 13 Jul 2024 02:15:01 AM UTC
Category: None
Severity: 3 - Normal
  Item Group: None
  Status: None
 Privacy: Public
 Assigned to: None
 Open/Closed: Open
 Discussion Lock: Any


___

Follow-up Comments:


---
Date: Sat 13 Jul 2024 02:15:01 AM UTC By: Anonymous
The documentation of the
[https://www.gnu.org/software/bash/manual/bash.html#Bash-Conditional-Expressions
Bash-Conditional-Expressions] "-v varname" says:

> -v varname
>True if the shell variable varname is set (has been assigned a value).

I think "export TEST_EQUAL_WITHOUT_VALUE=" does not assign a value to the
varname, does it?

Test case:
> docker run -it --rm bash:latest
> export TEST_EQUAL_WITHOUT_VALUE=
> if test -v TEST_EQUAL_WITHOUT_VALUE; then echo "true"; else echo "false"; fi

The output is "true", but there is no value assigned to it.

That is fine by me. I suggest changing the documentation and removing the
"(has been assigned a value)" part.
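
For comparison, the unset case:

> unset TEST_EQUAL_WITHOUT_VALUE
> if test -v TEST_EQUAL_WITHOUT_VALUE; then echo "true"; else echo "false"; fi

The output here is "false": -v distinguishes set-but-empty from unset.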


Tested with: GNU bash, version 5.2.26(1)-release (x86_64-pc-linux-musl)


___

Reply to this item at:

  

___
Message sent via Savannah
https://savannah.gnu.org/




Re: waiting for process substitutions

2024-07-12 Thread Oğuz
On Saturday, July 13, 2024, Greg Wooledge  wrote:
>
> If two jobs happen to finish simultaneously, the next call to wait -n
> should reap one of them, and then the call after that should reap
> the other.  That's how everyone wants it to work, as far as I've seen.
>
> *Nobody* wants it to skip the job that happened to finish at the exact
> same time as the first one, and then wait for a third job.  If that
> happens in the loop above, you'll have only 4 jobs running instead of 5
> from that point onward.
>
>
It feels like déjà vu all over again. Didn't we already discuss this and
agree that `wait -n' should wait for jobs one by one without skipping any?
Did it not make it to 5.3?


-- 
Oğuz


Re: [bug #65981] "bash test -v" does not work as documented with "export KEY="

2024-07-12 Thread Andreas Kähäri
On Fri, Jul 12, 2024 at 10:15:02PM -0400, anonymous wrote:
> URL:
>   
> 
>  Summary: "bash test -v" does not work as documented with
> "export KEY="
>Group: The GNU Bourne-Again SHell
>Submitter: None
>Submitted: Sat 13 Jul 2024 02:15:01 AM UTC
> Category: None
> Severity: 3 - Normal
>   Item Group: None
>   Status: None
>  Privacy: Public
>  Assigned to: None
>  Open/Closed: Open
>  Discussion Lock: Any
> 
> 
> ___
> 
> Follow-up Comments:
> 
> 
> ---
> Date: Sat 13 Jul 2024 02:15:01 AM UTC By: Anonymous
> The documentation of the
> [https://www.gnu.org/software/bash/manual/bash.html#Bash-Conditional-Expressions
> Bash-Conditional-Expressions] "-v varname" says:
> 
> > -v varname
> >True if the shell variable varname is set (has been assigned a value).
> 
> I think "export TEST_EQUAL_WITHOUT_VALUE=" does not assign a value to the
> varname, does it?


After the "export", the variable has been *set*. The statement
also assigns an empty string to the variable, so it has a value (testing
the variable's value against an empty string would yield a boolean true
result, showing that there is a value to test with).

The "export" is unimportant, it just promotes the shell variable to an
environment variable, which isn't relevant to this issue.
  
If you want to test whether a variable contains only an empty string,
use test -z "$variable". Note that that does not test whether the
variable is *set* though (an unset variable expands to an empty string
too, unless "set -u" is in effect, in which case the expansion provokes
an "unbound variable" diagnostic from the shell).

Andreas


> Test case:
> > docker run -it --rm bash:latest
> > export TEST_EQUAL_WITHOUT_VALUE=
> > if test -v TEST_EQUAL_WITHOUT_VALUE; then echo "true"; else echo "false";
> fi
> 
> The output is "true", but there is no value assigned to it.
> 
> That is fine to me. I suggest to change the documentation and remove the "(has
> been assigned a value)" part.
> 
> 
> Tested with: GNU bash, version 5.2.26(1)-release (x86_64-pc-linux-musl)



-- 
Andreas (Kusalananda) Kähäri
Uppsala, Sweden
