Re: Checking executability for asynchronous commands

2020-12-27 Thread Markus Elfring
> If you have the pid of an asynchronous command -- and the easiest way to get 
> that pid
> is by referencing $! after it was started -- you can call `wait' with that pid
> to retrieve the status, even if it's already terminated.
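(A minimal sketch of the quoted pattern; `sleep` stands in for an
arbitrary asynchronous command:)

```shell
# Start an asynchronous command; $! holds its pid.
sleep 0.1 &
pid=$!

# ... other work; the child may already have exited by now ...

# `wait' with that pid retrieves the exit status, even after termination.
wait "$pid"
echo "background job $pid exited with status $?"
```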

Would you care if waiting on such identifiers for background processes
is occasionally forgotten?

How much effort would you invest in adding potentially missing calls to
wait?

Regards,
Markus



Re: Checking executability for asynchronous commands

2020-12-27 Thread Eli Schwartz

On 12/27/20 5:01 AM, Markus Elfring wrote:

If you have the pid of an asynchronous command -- and the easiest way to get 
that pid
is by referencing $! after it was started -- you can call `wait' with that pid
to retrieve the status, even if it's already terminated.


Would you care if waiting on such identifiers for background processes
is occasionally forgotten?

How much effort would you invest in adding potentially missing calls to
wait?


Would you care if configuring bash to wait on identifiers of
background processes is occasionally forgotten?


Would you care if checking the status of foreground processes and doing
different things based on success or failure is occasionally forgotten?


Would you care if <anything else you can think of> is occasionally
forgotten?



I'm not sure I understand the question? Writing programs in *any* 
programming language requires attention to detail and effectively 
conveying your need to the programming language. bash is no exception, 
even if people have a terrible habit of treating bash like it should be 
special or different merely because it uses subprocesses a lot, and is 
popular.


A stronger argument must be made for new features rather than merely 
"sometimes people are extremely forgetful, we need a new language 
feature that doesn't fit in well and doesn't behave consistently, so 
they can be forgetful about that instead".


--
Eli Schwartz
Arch Linux Bug Wrangler and Trusted User





Re: Checking executability for asynchronous commands

2020-12-27 Thread Chet Ramey

On 12/27/20 5:01 AM, Markus Elfring wrote:

If you have the pid of an asynchronous command -- and the easiest way to get 
that pid
is by referencing $! after it was started -- you can call `wait' with that pid
to retrieve the status, even if it's already terminated.


Would you care if waiting on such identifiers for background processes
is occasionally forgotten?

How much effort would you invest in adding potentially missing calls to
wait?


It's axiomatic: if you want to make a decision based on the exit status of
any asynchronous process, you need to use `wait' to obtain the status of
that process.
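(Applied to several asynchronous processes at once, that looks like the
following sketch; `true` and `false` stand in for real commands:)

```shell
# Run several commands asynchronously and collect each exit status.
pids=()
for cmd in true false true; do
    "$cmd" &
    pids+=("$!")
done

for pid in "${pids[@]}"; do
    if wait "$pid"; then
        echo "pid $pid succeeded"
    else
        echo "pid $pid failed with status $?"
    fi
done
```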

I don't think "I don't want to do it that way" is a good reason to provide
a different method.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Request to the mailing list

2020-12-27 Thread Saint Michael
I want to suggest a new feature, that may be obvious at this point.
How do I do this?

Philip Orleans


New Feature Request

2020-12-27 Thread Saint Michael
Bash is very powerful for its ability to use all kinds of commands and pipe
information through them. But there is a single thing that is impossible to
achieve except by using files on the hard drive or in /tmp. We need a new
declare -g (global) where a variable would have its contents changed by
subshells and keep them; i.e., all subshells may change the variable, and
the change will not be lost when the subshell exits. Also, a native
semaphore mechanism, so that different code being executed in parallel
would change the variable in an orderly fashion.
I use GNU parallel extensively; basically, my entire business depends on
this technology, and now I need to use a database to pass information
between the functions being executed and the main bash script. This is
basically ridiculous. At some point we need to turn Bash into more than a
shell: a powerful language. Right now I do arbitrary math using embedded
Python, but the variable-passing restriction is a big roadblock.
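(For context, the file-free workaround most scripts use today: parallel
subshells write their results to a pipe, and the parent reads them back.
A sketch:)

```shell
# Parallel workers write results to a pipe; the parent aggregates them.
# No shared variables and no files in /tmp are involved.
total=0
while read -r n; do
    total=$(( total + n ))
done < <(
    for i in 1 2 3 4; do
        ( echo "$i" ) &      # each worker computes and prints a result
    done
    wait
)
echo "total=$total"          # 1+2+3+4 = 10, regardless of completion order
```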
Philip Orleans


Re: New Feature Request

2020-12-27 Thread Eli Schwartz

On 12/27/20 12:38 PM, Saint Michael wrote:

Bash is very powerful for its ability to use all kinds of commands and pipe
information through them. But there is a single thing that is impossible to
achieve except by using files on the hard drive or in /tmp. We need a new
declare -g (global) where a variable would have its contents changed by
subshells and keep them; i.e., all subshells may change the variable, and
the change will not be lost when the subshell exits. Also, a native
semaphore mechanism, so that different code being executed in parallel
would change the variable in an orderly fashion.
I use GNU parallel extensively; basically, my entire business depends on
this technology, and now I need to use a database to pass information
between the functions being executed and the main bash script. This is
basically ridiculous. At some point we need to turn Bash into more than a
shell: a powerful language. Right now I do arbitrary math using embedded
Python, but the variable-passing restriction is a big roadblock.
Philip Orleans



Essentially, you want IPC. But, you do not want to use the filesystem as 
the communications channel for the IPC.


So, what do you propose instead, that isn't the filesystem? How do you 
think your proposed declare -g would work? (There is already declare -g, 
maybe you'd prefer declare --superglobal or something?)


--
Eli Schwartz
Arch Linux Bug Wrangler and Trusted User





Re: New Feature Request

2020-12-27 Thread Saint Michael
Yes, superglobal is great.
Example, from the manual:
" Shared Memory
Shared memory allows one or more processes to communicate via memory that
appears in all of their virtual address spaces. The pages of the virtual
memory is referenced by page table entries in each of the sharing
processes' page tables. It does not have to be at the same address in all
of the processes' virtual memory. As with all System V IPC objects, access
to shared memory areas is controlled via keys and access rights checking.
Once the memory is being shared, there are no checks on how the processes
are using it. They must rely on other mechanisms, for example System V
semaphores, to synchronize access to the memory."

We could allow only strings, or more complex objects, but using the bash
language only, as an internal mechanism; and we also need to define a
semaphore.

Is it doable?
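(For reference, something close to a semaphore is reachable from stock
bash already, via the util-linux flock utility; a sketch, assuming flock
is installed:)

```shell
# Serialize updates to a shared file across parallel subshells with flock(1).
counter=$(mktemp)
lockfile=$(mktemp)
echo 0 > "$counter"

for i in $(seq 1 10); do
    (
        flock -x 9                     # take an exclusive lock on fd 9
        n=$(< "$counter")              # read-modify-write is now atomic
        echo $(( n + 1 )) > "$counter"
    ) 9>"$lockfile" &
done
wait

cat "$counter"                         # prints 10 every time
rm -f "$counter" "$lockfile"
```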

I am not a low-level developer. My days coding assembler are long gone.

Philip Orleans

Reference: https://tldp.org/LDP/tlk/ipc/ipc.html







On Sun, Dec 27, 2020 at 12:50 PM Eli Schwartz 
wrote:

> On 12/27/20 12:38 PM, Saint Michael wrote:
> > Bash is very powerful for its ability to use all kinds of commands and
> > pipe information through them. But there is a single thing that is
> > impossible to achieve except by using files on the hard drive or in
> > /tmp. We need a new declare -g (global) where a variable would have its
> > contents changed by subshells and keep them; i.e., all subshells may
> > change the variable, and the change will not be lost when the subshell
> > exits. Also, a native semaphore mechanism, so that different code being
> > executed in parallel would change the variable in an orderly fashion.
> > I use GNU parallel extensively; basically, my entire business depends
> > on this technology, and now I need to use a database to pass
> > information between the functions being executed and the main bash
> > script. This is basically ridiculous. At some point we need to turn
> > Bash into more than a shell: a powerful language. Right now I do
> > arbitrary math using embedded Python, but the variable-passing
> > restriction is a big roadblock.
> > Philip Orleans
>
>
> Essentially, you want IPC. But, you do not want to use the filesystem as
> the communications channel for the IPC.
>
> So, what do you propose instead, that isn't the filesystem? How do you
> think your proposed declare -g would work? (There is already declare -g,
> maybe you'd prefer declare --superglobal or something?)
>
> --
> Eli Schwartz
> Arch Linux Bug Wrangler and Trusted User
>
>


Re: New Feature Request

2020-12-27 Thread Léa Gris

On 27/12/2020 at 19:30, Saint Michael wrote:

Yes, superglobal is great.
Example, from the manual:
" Shared Memory
Shared memory allows one or more processes to communicate via memory that
appears in all of their virtual address spaces. The pages of the virtual
memory is referenced by page table entries in each of the sharing
processes' page tables. It does not have to be at the same address in all
of the processes' virtual memory. As with all System V IPC objects, access
to shared memory areas is controlled via keys and access rights checking.
Once the memory is being shared, there are no checks on how the processes
are using it. They must rely on other mechanisms, for example System V
semaphores, to synchronize access to the memory."

We could allow only strings or more complex objects, but using bash-language
only, an internal mechanism, and also we need to define a semaphore.

Is it doable?


Maybe you should consider that Bash or shell is not the right tool for 
your needs.


Bash/shell is designed to sequence commands and programs in a very 
linear way and only deals with character streams.


If you need to manipulate complex objects or work with shared resources,
Bash is a very bad choice. If you want to stay with scripting, then, as
you already mentioned using Python, Python is a far better choice for the
features and requirements you describe.



--
Léa Gris




Re: New Feature Request

2020-12-27 Thread Chet Ramey

On 12/27/20 1:30 PM, Saint Michael wrote:


We could allow only strings or more complex objects, but using bash-language
only, an internal mechanism, and also we need to define a semaphore.

Is it doable?


Of course it's doable; all that takes is requirements, definition, and
implementation. The question is ROI: whether or not it's worth the effort
to do that implementation, even whether or not that feature is worth
having in the shell at all. I do not think it measures up.


I am not a low-level developer. My days coding assembler are long gone.


That's ok; bash is not written in assembly language.


--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: Checking executability for asynchronous commands

2020-12-27 Thread Markus Elfring
>> Would you care if waiting on such identifiers for background processes
>> is occasionally forgotten?
>>
>> How much effort would you invest in adding potentially missing calls to
>> wait?
>
> It's axiomatic: if you want to make a decision based on the exit status of
> any asynchronous process, you need to use `wait' to obtain the status of
> that process.
>
> I don't think "I don't want to do it that way" is a good reason to provide
> a different method.

I have another programming concern:
Process identifiers are a system resource. The exit status of a terminated
background process is preserved (in the process table) until that
information is eventually collected by a corresponding wait. How many such
processes can a given system configuration handle?
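(On Linux, the limits in question can be inspected directly; the /proc
path and the ulimit option below are Linux/bash specific:)

```shell
# Kernel-wide ceiling on process ids (and thus on concurrent processes):
cat /proc/sys/kernel/pid_max

# Per-user limit on simultaneous processes, as seen by the shell:
ulimit -u
```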

Regards,
Markus