Re: feature suggestion: ability to expand a set of elements of an array or characters of a scalar, given their indices

2024-06-28 Thread Zachary Santer
On Tue, Jun 11, 2024 at 6:48 AM Zachary Santer  wrote:
>
> The (  ) within the parameter expansion would be roughly analogous to
> the right hand side of a compound assignment statement for an indexed
> array. The values found therein would be taken as the indices of array
> elements or characters to expand. Trying to set indices for the
> indices, i.e. "${array[@]( [10]=1 [20]=5 )}", wouldn't make any sense,
> though, so not quite the same construct.

It occurs to me that, if this were to be implemented, arithmetic
evaluation should be performed on the contents of the parentheses when
this is part of an indexed array expansion, since those indices can
only be integers anyway. This would then be the same behavior you get
within
declare -i -a indeces
indeces=( 2#0010 2#1000 )
for instance.

So, "${array[@]( 2#0010 2#1000 )}" would have the same effect as
"${array[@]( "${indeces[@]}" )}", though arithmetic evaluation was
already performed when indeces[@] was assigned a value.

Is "${array[@]( "${indeces[@]}" )}" ugly? Does that matter? It seems
like a good way to write what's happening. I still have to look up
some of the less-commonly-used parameter expansions every time I use
them. I think people would kind of "get" this more readily.
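To make that concrete, here's roughly what "${array[@]( "${indeces[@]}" )}"
would have to be written as in current bash; the contents of array and the
scratch variable selected are just illustrative:

declare -i -a indeces=( 2#0010 2#1000 )
array=( zero one two three four five six seven eight )
# by hand, what "${array[@]( "${indeces[@]}" )}" would expand to
selected=()
for i in "${indeces[@]}"; do
  selected+=( "${array[i]}" )
done
printf '%s\n' "${selected[@]}"   # two, eight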



Re: feature suggestion: ability to expand a set of elements of an array or characters of a scalar, given their indices

2024-06-28 Thread Greg Wooledge
On Fri, Jun 28, 2024 at 08:50:50 -0400, Zachary Santer wrote:
> Is "${array[@]( "${indeces[@]}" )}" ugly? Does that matter? It seems
> like a good way to write what's happening. I still have to look up
> some of the less-commonly-used parameter expansions every time I use
> them. I think people would kind of "get" this more readily.

I'm still wondering when you'd ever use this in a shell script.

The first thing I can think of is "I'm presenting a menu to the user,
from which zero or more items may be selected.  The user's selection
indices are read into an array.  I want to map those selections to another
array to get the filenames-or-whatever-they-are."

In such a script, I would write a loop, retrieve the filenames one at a
time, and process them or append them to a list, depending on what the
script is supposed to do with them.
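Something along these lines, say, where menu_items stands in for the
filenames-or-whatever array and the selection indices come from read -a:

# menu_items is a placeholder for the array backing the menu
read -r -p 'Selections: ' -a selection
files=()
for i in "${selection[@]}"; do
  files+=( "${menu_items[i]}" )   # or process "${menu_items[i]}" right here
done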

The amount of work it would take to support this new syntax seems like it
would exceed the value it adds to the quite rare script that would use it.



Re: Proposal for a New Bash Option: failfast for Immediate Pipeline Failure

2024-06-28 Thread Chet Ramey

On 6/24/24 10:21 AM, ama bamo wrote:


> To address these issues, I propose the introduction of a new option,
> failfast, which would immediately terminate the pipeline if any command in
> the pipeline fails. This would streamline error handling and provide more
> predictable script execution, aligning with user expectations in many
> common use cases.


I don't see general value in killing pipeline processes (using SIGKILL,
possibly after trying SIGTERM first) as soon as one of them fails.

Chet
--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: feature suggestion: ability to expand a set of elements of an array or characters of a scalar, given their indices

2024-06-28 Thread Zachary Santer
On Fri, Jun 28, 2024 at 9:11 AM Greg Wooledge  wrote:
>
> I'm still wondering when you'd ever use this in a shell script.

In my case, I've got a couple of things:

(1)
One script has an array of names for named pipes. It might only need
two of the six for a given invocation, so why create all six every
time you run it? It fills an array of names of only the named pipes it
will need, then it expands that in a call to mkfifo. A common set of
readonly integer variables, serving the same purpose as an enum, is
used to index into this array and related arrays containing fds and
pids. An array of necessary fd indices is built at the same time as the
array of necessary named pipes. However, there's only one index in that
array for each pair of fds that will be exec'd, so in this case I'd just
be replacing an array of necessary named pipe names with an array of
necessary named pipe indices. Not a huge improvement.
(Yes, this is the same script that got me interested in having the
coproc keyword more fully implemented.)

( 2 )
A script that I've written more recently generates some arrays
describing everything it needs to do before it starts making updates.
This allows the script to catch a lot of error conditions before
making any changes, instead of potentially leaving the user in a
partially-updated state. When the script then iterates through the indices
of these arrays, it will sometimes need to reference elements at earlier
indices, possibly more than once. This is handled while generating the
arrays: for each later index where this needs to happen, an array element
lists the indices of the relevant earlier elements as whitespace-delimited
integers.

So, right now, the array I need when I get there is generated like so:
local -a A=()
local -i j
# B[i] lists, as whitespace-delimited integers, the indices of the
# earlier elements needed at this point
for j in ${B[i]}; do
  # strip the leading "${C[i]}/" from each referenced element of C
  A+=( "${C[j]#"${C[i]}/"}" )
done

It's much more natural in this script to call commands with arguments
"${A[@]}" than it is to call those commands multiple times.

Admittedly, in my case, the proposed syntax would leave me doing
local -a A=( "${C[@]( ${B[i]} )}" )
A=( "${A[@]#"${C[i]}/"}" )
which might not be a marked improvement.

If I didn't need that remove matching prefix pattern parameter
expansion, it might've been natural to call the following commands
like
command -- "${C[@]( ${B[i]} )}"

And, yes, I am trying to do some complicated stuff in bash in the most
reasonable way I can find. If you want to act on the output from
external commands, it seems like the only game in town, and running
external commands in general is much more natural in bash than
elsewhere.

Hate to think I'm the only guy here doing interesting stuff like this.

> The amount of work it would take to support this new syntax seems like it
> would exceed the value it adds to the quite rare script that would use it.

I wanted to know if this would be valuable to others. If it's not, then fine.



anonymous pipes in recursive function calls

2024-06-28 Thread Zachary Santer
Was "feature suggestion: ability to expand a set of elements of an
array or characters of a scalar, given their indices"

On Fri, Jun 28, 2024 at 5:29 PM Zachary Santer  wrote:

> ( 2 )
> A script that I've written more recently generates some arrays
> describing everything it needs to do before it starts making updates. [...]

> And, yes, I am trying to do some complicated stuff in bash in the most
> reasonable way I can find. [...]

Speaking of, I finally got a chance to run this thing today. Has a
problem with an anonymous pipe in a recursive function call been
resolved since bash 4.2?


set -o nounset -o noglob +o braceexpand
shopt -s lastpipe

main () {
  # initialize arrays
  recursive_function .
  # loop through arrays
}

recursive_function () {
  local entry_path="${1}"
  local starting_PWD="${PWD}"
  local path
  command this-file |
while IFS='' read -r -d '' path; do
  cd -- "${starting_PWD}/${path}"
  if [[ -r this-file ]]; then
recursive_function "${entry_path}/${path}"
  fi
  # fill arrays
  # there is another anonymous pipe here, as well
done
  #
  if (( PIPESTATUS[0] != 0 )); then
printf '%s\n' "command failed" >&2
error='true'
  fi
  cd -- "${starting_PWD}"
}

main "${@}"


Looks like, at the end of the while loop in the first call to
recursive_function (), I get
/path/to/the/script: line : wait_for: No record of process .

recursive_function () is called from main () and then calls itself
twice from within the while loop in the first call, in my case.

PIPESTATUS[@] isn't doing what I would expect either. Sometimes it's a
one-element array. Sometimes it's two. It was giving me an exit status
of 1 at element 0 and no element 1, when I would expect "command" to
give an exit status of 0, but not every time.

Wound up doing


set -o nounset -o noglob +o braceexpand -o pipefail
shopt -s lastpipe

main () {
  # initialize arrays
  recursive_function .
  # loop through arrays
}

recursive_function () {
  local entry_path="${1}"
  local starting_PWD="${PWD}"
  local path
  if ! \
command this-file |
  {
while IFS='' read -r -d '' path; do
  cd -- "${starting_PWD}/${path}"
  if [[ -r this-file ]]; then
recursive_function "${entry_path}/${path}"
  fi
  # fill arrays
  # there is another anonymous pipe here, as well.
done
true
  }
#
  then
printf '%s\n' "command failed" >&2
error='true'
  fi
  cd -- "${starting_PWD}"
}

main "${@}"


to work around the PIPESTATUS thing, but that hasn't made bash's error
message go away.

Not sure how I could come up with a repeat-by for this thing. It
already worked recursively before I modified it to track everything it
needed to do and then go back and do it. I don't think this error
message was present then. (This script produces a lot of output,
though.) I wasn't referencing PIPESTATUS[0] yet either. No way to run
this in a more recent version of bash.

I can try
while [...] done < <( command this-file )
later, to see if the problem really is bash being unable to wait for the
first element of the pipeline in the first call to recursive_function (),
as it feels like it is.
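For reference, a sketch of that variant, keeping the placeholder names from
the scripts above. With the process substitution, the loop body runs in the
current shell even without lastpipe; the catch is that in bash 4.2 there's
no straightforward way to recover the exit status of the substituted
command (newer bashes can wait for $! after the loop, if I remember right):

recursive_function () {
  local entry_path="${1}"
  local starting_PWD="${PWD}"
  local path
  while IFS='' read -r -d '' path; do
    cd -- "${starting_PWD}/${path}"
    if [[ -r this-file ]]; then
      recursive_function "${entry_path}/${path}"
    fi
    # fill arrays
    # there is another anonymous pipe here, as well
  done < <( command this-file )
  # no PIPESTATUS to check here; see the note above about the exit status
  cd -- "${starting_PWD}"
}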