PROMPT_COMMAND is not executed after edit-and-execute-command

2021-03-18 Thread earnestly
When using edit-and-execute-command to edit the command-line I've
noticed that PROMPT_COMMAND is not evaluated upon returning to the
prompt, e.g.:

earnest i ~ PROMPT_COMMAND='echo foobar'
foobar
earnest i ~ set -x
++ echo foobar
foobar
earnest c ~ # use edit-and-execute-command (C-x C-e)
++ fc -e editor
+++ editor /tmp/bash-fc.Xca0Uv

earnest i ~ :
+ :
++ echo foobar
foobar
earnest i ~

The problem this causes for me is that my PS1 is set via PROMPT_COMMAND,
which also sets the terminal title.

If PROMPT_COMMAND is not evaluated, my terminal title remains set to the
command fc -e executes (e.g. editor /tmp/bash-fc.Xca0Uv) instead of
returning to my preferred terminal title.
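
For concreteness, a minimal sketch of the kind of setup in question (my
illustration, not the actual configuration from this mail):

    # Hypothetical prompt_command function; rebuilds PS1 and sets the
    # terminal title via the xterm OSC 2 sequence.
    prompt_command() {
        printf '\033]2;%s\007' "bash: $PWD"   # terminal title
        PS1='\u \w '                          # prompt
    }
    PROMPT_COMMAND=prompt_command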

Is this behaviour intended?



Re: PROMPT_COMMAND is not executed after edit-and-execute-command

2021-03-18 Thread earnestly
On Thu, Mar 18, 2021 at 03:49:33PM -0400, Chet Ramey wrote:
> Since readline prints the prompt as part of redisplay, and it doesn't know
> anything about PROMPT_COMMAND or command execution, it doesn't execute it.

That makes sense but doesn't bode well for my issue.  Perhaps I can find
a way to execute a prompt_command function (which I currently use), or
investigate some other method to work around this entirely.
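
One workaround that occurs to me (an assumption on my part, not anything
confirmed in this thread): since readline reprints PS1 itself on
redisplay, embedding the title sequence directly in PS1 should restore
the title even when PROMPT_COMMAND is not run:

    # Title sequence embedded in PS1; the \[ \] markers tell readline
    # the escape sequence is zero-width.
    PS1='\[\e]2;\u@\h \w\a\]\u \w '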

Thanks for the explanation nevertheless.



Unexpected behaviour when using process substitution with stdout and stderr

2021-07-11 Thread earnestly
GNU bash, version 5.1.8(1)-release (x86_64-pc-linux-gnu)

I have attempted to use process substitution in order to feed the
output of a command into two filters, one for handling stdout and the
other for stderr.

Prior to this I was using POSIX sh and named pipes to achieve this but
decided to try bash with its >() and <() notation.

Perhaps I am misunderstanding how this works because I found them to be
unusable in this context.

What appears to be happening is that output from standard error is being
mixed into the function handling standard output; even more surprisingly,
the xtrace output is also being consumed and filtered.

I don't quite know how to describe what's happening, so I have provided a
small demonstration/reproducer instead:

#!/bin/bash --

generate() {
    for ((i=0; i<10; ++i)); do
        if ((RANDOM % 2)); then
            printf 'generate to stdout\n'
        else
            printf 'generate to stderr\n' >&2
        fi
    done
}

stdout() {
    local line

    while read -r line; do
        printf 'from stdout: %s\n' "$line"
    done
}

stderr() {
    local line

    while read -r line; do
        printf 'from stderr: %s\n' "$line"
    done
}

# Using process substitution.
unexpected() {
    # This is particularly dangerous when the script is executed under
    # `bash -x' as the xtrace output is read as standard input to the
    # `stdout' function.
    generate > >(stdout) 2> >(stderr)
    wait

    # Example output:
    # from stdout: generate to stdout
    # from stdout: generate to stdout
    # from stdout: from stderr: generate to stderr
    # from stdout: from stderr: generate to stderr
    # from stdout: from stderr: generate to stderr
    # from stdout: generate to stdout
    # from stdout: from stderr: generate to stderr
    # from stdout: generate to stdout
    # from stdout: from stderr: generate to stderr
    # from stdout: from stderr: generate to stderr
}

# Using named pipes.
expected() {
    mkfifo a b
    trap 'rm a b' EXIT

    generate > a 2> b &

    stdout < a &
    stderr < b &
    wait

    # Example output:
    # from stdout: generate to stdout
    # from stderr: generate to stderr
    # from stdout: generate to stdout
    # from stderr: generate to stderr
    # from stdout: generate to stdout
    # from stdout: generate to stdout
    # from stderr: generate to stderr
    # from stdout: generate to stdout
    # from stdout: generate to stdout
    # from stderr: generate to stderr
}
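
For what it's worth, a plausible explanation and a possible fix (my own
sketch, not something confirmed in this thread): redirections are
performed left to right, so by the time >(stderr) is forked the shell's
stdout has already been redirected to the pipe feeding the `stdout'
filter; the `stderr' filter inherits that descriptor, and everything it
prints lands on `stdout''s standard input.  Duplicating the original
stdout on a spare descriptor and handing it to the stderr filter avoids
the mixing:

    # Sketch of a workaround, assuming the functions defined above:
    # fd 3 is a copy of the original stdout, taken before the process
    # substitutions are set up, so the `stderr' filter writes to the
    # terminal rather than into the `stdout' filter's pipe.
    workaround() {
        { generate > >(stdout) 2> >(stderr >&3); } 3>&1
        wait
    }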



How does this wait -n work to cap parallelism?

2019-07-29 Thread Earnestly
This mail was spurred on by users in the #bash IRC channel.  It started
after reading an article which introduces an example using 'wait -n' as a
means to provide capped parallelism:


#!/usr/bin/env bash

# number of processes to run in parallel
num_procs=5

# function that processes one item
my_job() {
    printf 'Processing %s\n' "$1"
    sleep "$(( RANDOM % 5 + 1 ))"
}

i=0
while IFS= read -r line; do
    if (( i++ >= num_procs )); then
        wait -n   # wait for any job to complete. New in 4.3
    fi
    my_job "$line" &
done < inputlist
wait # wait for the remaining processes


The question is about how the example works in order to maintain
parallelism capped at num_procs.

Below I've provided a synthetic scenario which hopefully highlights my
(and others) confusion.

The logic is to provide two loops: one generates an initially slow feed
of "work" for the second, which starts "agents" in the background.  The
iteration counter 'i' is compared against 'nproc' (for which I use 3) to
guard the calls to 'wait -n' once 'i' equals or exceeds 'nproc'.

As the feed interval and the backgrounded agents both initially take 2
seconds, only one agent is ever started at a time, one after the other.

A typical process tree in top or htop might look something like this:


bash scriptname
|- bash scriptname (while read)
|  `- bash scriptname (agent)
|     `- sleep 2
`- bash scriptname (slowthenfast)
   `- sleep 2


After some time the value of 'i' will have incremented well beyond the
value of 'nproc'.  It is now that the feed rate speeds up dramatically,
providing more work for the agents.

Because of this, more agents are started while still maintaining the
nproc limit:


bash scriptname
|- bash scriptname
|  |- bash scriptname
|  |  `- sleep 2
|  |- bash scriptname
|  |  `- sleep 2
|  `- bash scriptname
|     `- sleep 2
`- bash scriptname
   `- sleep 0.1


And I have no idea why or how this works.  I hope the list can help
explain this behaviour.

---

My intuition, or assumption, is as follows:

I would expect the if statement in the second loop to always succeed.
It would then call 'wait -n' and wait for the existing agent to end (as
I assume it's the only job running at this point).  Once it ends, a new
agent is started and the loop goes back to the 'wait -n'.

In effect it should keep starting agents one after the other, only ever
one at a time.  E.g.:


agent0 (this is the last agent that ran before the loop speed increased)

while read
(i++ >= nproc) => always true
wait -n => waits for agent0 (as it's the only job?)
agent0 ends

agent1 starts

while read
(i++ >= nproc) => always true
wait -n => waits for agent1 (as it's the only job?)
agent1 ends

agent2 starts

while read
(i++ >= nproc) => always true
wait -n => waits for agent2 (as it's the only job?)
agent2 ends

agent3 starts


But what appears to be happening is this:


agent0 (this is the last agent that ran before the loop speed increased)

while read
(i++ >= nproc) => always true
wait -n => waits for agent0 (as it's the only job?)
agent0 ends

agent1 starts
agent2 starts
agent3 starts


---

#!/bin/bash

nproc=3

agent() {
    printf 'agent: %d: started... (i is %d)\n' "$1" "$2"
    sleep 2
    printf 'agent: %d: finished\n' "$1"
}

slowthenfast() {
    local a=0

    while :; do
        printf '%d\n' "$a"

        if (( a >= 10 )); then
            sleep 0.1
        else
            sleep 2
        fi

        (( ++a ))
    done
}

i=0
slowthenfast | while read -r work; do
    if (( i++ >= nproc )); then
        wait -n
    fi

    agent "$work" "$i" &
done

wait
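
A side note on the reproducer (my own observation, not something raised
in the thread): because the `while read' loop is the downstream element
of a pipeline it runs in a subshell, so the agents are children of that
subshell and the final top-level `wait' has nothing of its own to wait
for.  With bash 4.2 or later the loop can be kept in the main shell:

    # lastpipe runs the final element of a pipeline in the current
    # shell; it only takes effect when job control is off, as in a
    # script.
    shopt -s lastpipe
    slowthenfast | while read -r work; do
        if (( i++ >= nproc )); then
            wait -n
        fi
        agent "$work" "$i" &
    done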



Re: How does this wait -n work to cap parallelism?

2019-07-29 Thread Earnestly
On Mon, Jul 29, 2019 at 02:38:48PM -0400, Greg Wooledge wrote:
> The same happens for my_job 7, and my_job 8.  Each one is preceded by
> a wait -n, so it waits for one of the existing jobs to terminate before
> the new job is launched.

This aspect of the behaviour isn't in question.

Without reiterating too much, the question is about how that cap is
maintained when the 'wait -n' loop only ever sees a single agent while
'i' is incrementing, and the parallelism only starts after 'i' has
exceeded 'nproc'.



Re: How does this wait -n work to cap parallelism?

2019-07-29 Thread Earnestly
On Mon, Jul 29, 2019 at 07:12:42PM +0100, Earnestly wrote:
> The question is about how the example works in order to maintain
> parallelism capped at num_proc.

Thanks to emg on #bash for explaining what is essentially going on.
Bash maintains a list of completed jobs; calling 'wait -n' removes one
of those completed jobs, and the loop then adds another job in its
place.

After the "slow" phase of my synthetic code it just so happens that
these jobs are removed and new ones are added rapidly.

And so jobs are added and removed accordingly, but the number of them is
never allowed to exceed "nproc" because 'wait -n' is always called
thereafter.
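
To illustrate the point (a sketch of my own, not from the discussion):
if several background jobs have already finished, 'wait -n' does not
block; it simply reaps one of the already-terminated jobs and returns:

    sleep 1 & sleep 1 & sleep 1 &   # three short background jobs
    sleep 3                         # by now all three have terminated
    wait -n                         # returns immediately, reaping one
    wait -n                         # likewise; still no blocking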

> Below I've provided a synthetic scenario which hopefully highlights my
> (and others) confusion.

It was because of this that I trapped myself.  Because I was watching
top, I concluded that once a job had ended bash would likewise discard
it immediately.  I hadn't considered that bash might maintain a list of
*completed* jobs from which subsequent 'wait' calls would remove
entries.

Sorry for the noise.