Re: Revisiting Error handling (errexit)

2022-07-04 Thread Saint Michael
Sounds great to me. I also use Bash for mission-critical processes.
Philip

On Mon, Jul 4, 2022 at 8:22 AM Yair Lenga  wrote:
>
> Hi,
>
> In my projects, I'm using bash to manage large scale jobs. Works very well,
> especially, when access to servers is limited to ssh. One annoying issue is
> the error handling - the limits/shortcomings of the 'errexit', which has
> been documented and discussed to the Nth degree in multiple forums.
>
> Needless to say, trying to extend bash to support try/catch clauses, (like
> other scripting solutions: Python, Groovy, Perl, ...), will be a major
> effort, which may never happen. Instead, I've tried to look into a minimal
> solution that will address the most common pitfall of errexit, where many
> sequences (e.g., series of commands in a function) will not properly
> "break" with 'errexit'. For example:
>
> function foo {
> cat /missing/file   # fails: non-existent file
> action2   # still executed even though the previous command failed
> action3
> }
>
> set -o errexit   # want to catch errors in 'foo'
> if ! foo ; then
> # Error handling for foo failure
> fi
>
> I was able to change Bash source and build a version that supports the new
> option 'errfail' (following the 'pipefail' naming), which will do the
> "right" thing in many cases - including the above - 'foo' will return 1,
> and will NOT proceed to action2. The implementation changes the processing
> of a command list ('{ action1 ; action2 ; ... }') to break out of the list if
> any command returns a non-zero code. That is:
>
> set -o errfail
> { echo BEFORE ; false ; echo AFTER ; }
>
> This will print 'BEFORE' and return 1 (false) when executed under 'errfail'.
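>
> For contrast, a minimal sketch (assuming the patched build behaves as
> described): under plain 'errexit' the failure inside a negated function
> call is ignored, because errexit is suppressed in the condition of 'if';
> under the proposed 'errfail' the list stops at the failing command:
>
> foo() { echo BEFORE; false; echo AFTER; }
>
> set -o errexit
> if ! foo; then echo "foo failed"; fi   # prints BEFORE and AFTER; the failure is lost
>
> set -o errfail                         # proposed option
> if ! foo; then echo "foo failed"; fi   # prints BEFORE, then 'foo failed'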
>
> I'm looking for feedback on this implementation. Will be happy to share the
> code, if there is a chance that this will be accepted into the bash core
> code - I believe it will make it easier to strengthen many production
> systems that use Bash.
>
> To emphasize, this is a minimal proposal, with no intention of expanding it
> into full support for exceptions handling, finally blocks, or any of the
> other features implemented in other (scripting) languages.
>
> Looking for any feedback.
>
> Yair



Re: Light weight support for JSON

2022-08-28 Thread Saint Michael
He has a point, though. To have some of the functionality of jq inside Bash
may be very useful.
If he can supply a patch, why not?
Philip Orleans

On Sun, Aug 28, 2022, 3:22 PM John Passaro  wrote:

> interfacing with an external tool absolutely seems like the correct answer
> to me. a fact worth mentioning to back that up is that `jq` exists. billed
> as a sed/awk for json, it fills all the functions you'd expect such an
> external tool to have and many many more. interfacing from curl to jq to
> bash is something i do on a near daily basis.
>
> https://stedolan.github.io/jq/
>
> On Sun, Aug 28, 2022, 09:25 Yair Lenga  wrote:
>
> > Hi,
> >
> > Over the last few years, JSON data has become an integral part of processing.
> > In many cases, I find myself having to automate tasks that require
> > inspection of a JSON response, and in a few cases, construction of JSON. So
> > far, I've taken one of two approaches:
> > * For simple parsing, using 'jq' to extract elements of the JSON
> > * For more complex tasks, switching to python or Javascript.
> >
> > I wanted to get feedback about the following "extensions" to bash that will
> > make it easier to work with simple JSON objects. To emphasize, the goal is
> > NOT to "compete" with Python/Javascript (and other full-scale languages) -
> > just to make it easier to build bash scripts that cover the very common use
> > case of submitting REST requests with curl (checking results, etc.), and to
> > perform simple processing of JSON files.
> >
> > Proposal:
> > * Minimal - a lightweight "JSON parser" that will convert JSON files to a
> > bash associative array (see below)
> > * Convert a bash associative array to JSON
> >
> > To the extent possible, prefer to borrow from jsonpath syntax.
> >
> > Parsing JSON into an associative array.
> >
> > Consider the following, showing all possible JSON values (boolean, number,
> > string, object and array).
> > {
> > "b": false,
> > "n": 10.2,
> > "s": "foobar",
> > "x": null,
> > "o" : { "n": 10.2,  "s": "xyz" },
> >  "a": [
> >  { "n": 10.2,  "s": "abc", "x": false },
> >  { "n": 10.2,  "s": "def", "x": true }
> >  ]
> > }
> >
> > This should be converted into the following array:
> >
> > -
> >
> > # Top level
> > [_length] = 6           # Number of keys in object/array
> > [_keys] = b n s x o a   # Direct keys
> > [b] = false
> > [n] = 10.2
> > [s] = foobar
> > [x] = null
> >
> > # This is object 'o'
> > [o._length] = 2
> > [o._keys] = n s
> > [o.n] = 10.2
> > [o.s] = xyz
> >
> > # Array 'a'
> > [a._count] =  2   # Number of elements in array
> >
> > # Element a[0] (object)
> > [a.0._length] = 3
> > [a.0._keys] = n s x
> > [a.0.n] = 10.2
> > [a.0.s] = abc
> > [a.0.x] = false
> >
> > -
> >
> > I hope the example above is sufficient. There are a few other items that are
> > worth exploring - e.g., how to store the type (specifically, separating
> > quoted strings from bare values, so that "5.2" is different from 5.2, and
> > "null" is different from null).
> >
> > I will leave the second part to a different post, once I have some
> > feedback. I have a prototype (POC) that I've written in Python that
> > makes it possible to write things like:
> >
> > declare -A foo
> > curl http://www.api.com/weather/US/10013 | readjson foo
> >
> > printf "temperature(F): %.1f Wind(MPH)=%d\n" "${foo[temp_f]}" "${foo[wind]}"
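> >
> > For comparison, a rough approximation that is already possible today with
> > jq (a sketch only: it flattens scalar leaves into dotted keys, skips the
> > _keys/_length bookkeeping, and reuses the placeholder URL above):
> >
> > declare -A foo
> > while IFS=$'\t' read -r key value; do
> >     foo[$key]=$value
> > done < <(curl -s http://www.api.com/weather/US/10013 |
> >          jq -r 'paths(scalars) as $p | [($p | join(".")), (getpath($p) | tostring)] | @tsv')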
> >
> > Yair
> >
>


Re: IFS field splitting doesn't conform with POSIX

2023-04-01 Thread Saint Michael
There is an additional problem with IFS and the read command.

Suppose I have a variable $line with the string "a,b,c,d":
IFS=',' read -r x1 <<< $line
Bash will assign the whole line to x1:
echo $x1
line="a,b,c,d"; IFS=',' read -r x1 <<< $line; echo $x1
a,b,c,d
But if I use two variables:
line="a,b,c,d"; IFS=',' read -r x1 x2 <<< $line; echo "$x1 ---> $x2"
a ---> b,c,d
This is incorrect. If IFS=",", then a read -r statement must assign the
first value to the single variable and disregard the rest, and so on
with (n) variables.
The compelling reason is: I may not know how many values are stored in the
comma-separated list.
GNU AWK, for instance, acts responsibly in the same exact situation:
line="a,b,c,d";awk -F, '{print $1}' <<< $line
a
We need to fix this.
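
For reference, a sketch of two ways to take only the first field with the
current behavior (send the remainder to a throwaway variable, or split into
an array):

line="a,b,c,d"
IFS=',' read -r x1 _ <<< "$line"       # x1=a, the rest lands in _
IFS=',' read -r -a fields <<< "$line"  # fields=(a b c d)
echo "$x1 ${fields[0]}"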





On Sat, Apr 1, 2023, 6:11 PM Mike Jonkmans  wrote:

> On Sat, Apr 01, 2023 at 03:27:47PM -0400, Lawrence Velázquez wrote:
> > On Fri, Mar 31, 2023, at 2:10 PM, Chet Ramey wrote:
> > > kre filed an interpretation request to get the language cleaned up.
> >
> > For those who might be interested:
> >
> > https://austingroupbugs.net/view.php?id=1649
>
> Thanks for the link.
>
> And well done, kre!
>
> --
> Regards, Mike Jonkmans
>
>


Re: FEATURE REQUEST : shell option to automatically terminate child processes on parent death

2023-11-11 Thread Saint Michael
I support this feature.


On Sat, Nov 11, 2023, 11:29 AM Corto Beau  wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -g -O2
> uname output: Linux zinc 6.6.1-arch1-1 #1 SMP PREEMPT_DYNAMIC Wed, 08
> Nov 2023 16:05:38 + x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 5.2 Patch
> Level: 21
> Release Status: release
>
> Description:
> Hi,
>
> I would like to suggest a new shell option to ensure child processes are
> automatically killed when the parent dies.
>
> Right now, it's already possible to emulate this feature by setting a
> trap to kill all child processes on exit (trap "kill 0" EXIT), but
> obviously it doesn't work if the process is terminated by a signal that
> cannot be caught (like SIGKILL).
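>
> (For reference, a sketch of that trap-based workaround; 'listener' here is
> just a hypothetical background job:)
>
> trap 'kill 0' EXIT   # kill the whole process group when the script exits
> listener &
> wait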
>
> On Linux, it can be done with the PR_SET_PDEATHSIG prctl option, which
> delivers a chosen signal to the child when the parent terminates,
> regardless of whether the parent's terminating signal could be caught.
>
> The rationale for this feature is that scripts spawning background
> processes to listen to various events (udev events, window manager
> events, etc) often leave orphan processes behind when terminated
> forcefully.
>
> I've attached a proof-of-concept patch.


Fwd: Question

2017-09-05 Thread Saint Michael
Dear Maintainer

Is there commercial or free software that can take a Bash script and
transparently turn it into a C executable, provided the machine where it
runs has the external commands like awk, etc.?
Something like a Java virtual machine, for Shell.

I think this language is powerful and I want to distribute some tools that
contain critical information.

So far I have been googling and there are only obfuscation tools, not
compilation tools.

Yours

​Philip Orleans


Re: Question

2017-09-11 Thread Saint Michael
Dear Bob
I use Linux. My business is to provide services for problems that I solve, as
you mention, by calling Awk, sed, join, etc., and databases. I allow my
customers to log in to a box that I provide in my datacenter. I cannot
accept that it is impossible to restrict them to only calling my *.sh scripts.
I found a company that has a nice obfuscating and encrypting product for
shell scripts, and I am trying it. I am not worried only about passwords, but
about the overall logic itself.

Many thanks for the time you took to respond. But in this case you come
close to an error. The market needs a tool so we can generate solutions and
also protect our intellectual property. I am not smart enough to write it,
but somebody will.

Yours

Federico

On Mon, Sep 11, 2017 at 2:54 PM, Bob Proulx  wrote:

> Saint Michael wrote:
> > Dear Maintainer
>
> Note that I am not the maintainer.
>
> > Is there a commercial or free software that can take a Bash script and
> > transparently turn it into a C executable, provided the machines where it
> > runs has any of the external commands like awk, etc?
>
> Not as far as I am aware.  And since no one else has spoken up with a
> suggestion then I assume not.
>
> > Something like a Java virtual machine, for Shell.
>
> Yes.  That already exists!  It is the bash shell interpreter itself.
> If you install a JVM then it can interpret Java programs.  If you
> install bash then it can interpret bash programs.  Same thing.
>
> The underlying issue is that shells are by design mostly there to
> invoke external programs.  Therefore most of the activity of the shell
> is to run other programs such as awk, grep, sed, and so forth.  It
> isn't enough to have only the bash program; you also need the entire set of
> other programs that are invoked.  Java by comparison can also execute
> external programs, but since that is not its primary design goal,
> most Java programs do not spend 99% of their code doing so.
>
> > I think this language is powerful and I want to distribute some
> > tools that contain critical information.
> >
> > So far I have been googling and there is only obfuscation tools, not
> > compilation tools.
>
> Making a shell script compiler would be a very hard problem because
> the shell is an interpretive language with a primary task of invoking
> external programs.  It would be very, very hard.  Impossible actually.
> The nature of interpretive languages is that they can do things that
> compiled languages cannot do.  For example interpreted programs can
> generate code on the fly and interpret it.  Such as creating functions
> during runtime.  Such as dynamically creating and executing code based
> upon dynamic input.  Such tasks are easily done using an interpreted
> language but must be done differently when using a compiled language.
> One cannot create a compiler that can compile every capability of an
> interpreted language without itself embedding an interpreter for that
> language.
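>
> (A small illustration of that point, as a sketch: a function whose body is
> built and interpreted at run time.)
>
> body='echo "generated at runtime"'   # could come from dynamic input
> eval "dynfunc() { $body; }"
> dynfunc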
>
> If you restrict an interpretive language down to only features that
> can be compiled then it is always possible to write a compiler for it.
> But then it is for a restricted set of language features as compared
> to the original language.  It is a different language.  Sometimes this
> is an acceptable tradeoff.  But I will hazard a guess that for the
> shell and for programs written for it this would be a quite different
> language and no longer useful.  Possible.  But not useful.
>
> > I think this language is powerful and I want to distribute some
> > tools that contain critical information.
>
> You don't say what critical information you are talking about.  But it
> makes me think things like accounts and passwords that you would not
> want to be available in clear text.  Note that obfuscating them in a
> compiled program does not make them inaccessible.
>
> You also don't say which OS you are using.  Most of us here will
> probably be using a GNU/Linux distribution which includes the shell
> natively or the Apple Mac which includes a working environment.
> Therefore I will assume you are using MS-Windows.  I suggest either
> bundling an entire environment such as MinGW, Bash for Windows, or
> Cygwin, so that all of your external dependencies are satisfied.
> Or using a different language such as C which is natively compiled.
> Or using a restricted set of Perl, Python, Ruby, or other that already
> has a compiler available for that restricted subset.
>
> Bob
>


Request to the mailing list

2020-12-27 Thread Saint Michael
I want to suggest a new feature that may be obvious at this point.
How do I do this?

Philip Orleans


New Feature Request

2020-12-27 Thread Saint Michael
Bash is very powerful for its ability to use all kinds of commands and pipe
information through them. But there is a single thing that is impossible to
achieve except by using files on the hard drive or on /tmp. We need a new
declare -g (global) where a variable would keep contents changed by
subshells; i.e., all subshells may change the variable, and the change will
not be lost when the subshell exits. We also need a native semaphore
mechanism, so that different code being executed in parallel would change
the variable in an orderly fashion.
I use GNU parallel extensively - basically, my entire business depends on
this technology - and right now I need to use a database to pass information
from functions being executed back to the main bash script. This is
basically ridiculous. At some point we need to turn Bash into more than a
shell: a powerful language. Right now I do arbitrary math using embedded
Python, but the variable-passing restriction is a big roadblock.
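
A minimal illustration of the limitation, for reference (changes made in a
subshell are lost when it exits):

counter=0
( counter=5 )                                           # subshell: change is discarded
echo "one line" | while read -r _; do counter=5; done   # pipeline loop is also a subshell
echo "$counter"                                         # still prints 0
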
Philip Orleans


Re: New Feature Request

2020-12-27 Thread Saint Michael
Yes, superglobal is great.
Example, from the manual:
" Shared Memory
Shared memory allows one or more processes to communicate via memory that
appears in all of their virtual address spaces. The pages of the virtual
memory is referenced by page table entries in each of the sharing
processes' page tables. It does not have to be at the same address in all
of the processes' virtual memory. As with all System V IPC objects, access
to shared memory areas is controlled via keys and access rights checking.
Once the memory is being shared, there are no checks on how the processes
are using it. They must rely on other mechanisms, for example System V
semaphores, to synchronize access to the memory."

We could allow only strings, or more complex objects, but using the bash
language only (an internal mechanism), and we would also need to define a
semaphore.

Is it doable?

I am not a low-level developer. My days coding assembler are long gone.

Philip Orleans

Reference: https://tldp.org/LDP/tlk/ipc/ipc.html







On Sun, Dec 27, 2020 at 12:50 PM Eli Schwartz 
wrote:

> On 12/27/20 12:38 PM, Saint Michael wrote:
> > Bash is very powerful for its ability to use all kinds of commands and
> pipe
> > information through them. But there is a single thing that is impossible
> to
> > achieve except using files on the hard drive or on /tmp. We need a new
> > declare -g (global) where a variable would have its contents changed by
> > subshells and keep it. I.e. all subshells may change the variable and
> this
> > will not be lost when the subshell exits. Also, a native semaphore
> > technology, so different code being executed in parallel would change the
> > variable in an orderly fashion.
> > I use GNU parallel extensively, basically, my entire business depends on
> > this technology, and now I need to use a database to pass information
> > between functions being executed, back to the main bash script. This is
> > basically ridiculous. At some point we need to turn Bash into more than a
> > shell, a power language. Now I do arbitrary math using embedded Python,
> but
> > the variable-passing restriction is a big roadblock.
> > Philip Orleans
>
>
> Essentially, you want IPC. But, you do not want to use the filesystem as
> the communications channel for the IPC.
>
> So, what do you propose instead, that isn't the filesystem? How do you
> think your proposed declare -g would work? (There is already declare -g,
> maybe you'd prefer declare --superglobal or something?)
>
> --
> Eli Schwartz
> Arch Linux Bug Wrangler and Trusted User
>
>


Re: New Feature Request

2021-01-04 Thread Saint Michael
In this case, how do I quickly increase the number stored in "foo"?
The file has 1 as its content, and I have a new value to add to it quickly.
Is there an atomic way to read, add, and write a value to "foo"?


On Mon, Jan 4, 2021 at 8:15 AM Greg Wooledge  wrote:

> On Fri, Jan 01, 2021 at 10:02:26PM +0100, Ángel wrote:
> > Yes. In fact, you can already do that using an interface exactly
> > identical to file operations:
> >
> > # Store a string in shared memory with key 'foo'
> > echo "Hello world" > foo
> >
> > # Read value of key foo
> > var="$(< foo)"
> >
> > You only need to use that on a tmpfs filesystem, and it will be stored
> > in memory. Or, if you wanted to persist your shared memory between
> > reboots, or between machines, you could place them instead on a local
> > or networked filesystem.
>
> It should be noted that $(< foo) still forks a subshell. If you want
> your script to be faster, you'll need some other way to read the data from
> the file -- perhaps using the read -r -d '' command, or the mapfile
> command.
>
> Other forms of IPC that are viable in scripts include using a FIFO that's
> held open by both parent and child, or launching the child as a coprocess.
>
>


Re: New Feature Request

2021-01-04 Thread Saint Michael
can you point me to your FAQ?

On Mon, Jan 4, 2021 at 8:39 AM Greg Wooledge  wrote:

> On Mon, Jan 04, 2021 at 08:26:59AM -0500, Saint Michael wrote:
> > In this case, how do I quickly increase the number stored in "foo"?
> > the file has 1 as content, and I have a new value to add to it quickly.
> > Is there an atomic way to read,add, write a value to "foo"?
>
> Nope!
>
> It's almost like bash isn't a real programming language, huh?
>
> (The workaround you're looking for is called "locking".  My FAQ has a
> page about it, but it's pretty basic stuff, intended to prevent two
> instances of a script from running simultaneously, not for distributed
> computing applications.)
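>
> (A minimal sketch of that approach with flock(1) from util-linux, assuming
> the counter lives in the file 'foo':)
>
> {
>     flock -x 9                  # serialize access across processes
>     val=$(< foo)
>     echo "$(( val + 1 ))" > foo
> } 9> foo.lock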
>
>


Re: Feature Request: scanf-like parsing

2021-01-22 Thread Saint Michael
I vote for this new feature.

On Fri, Jan 22, 2021 at 9:16 AM Chet Ramey  wrote:

> On 1/22/21 12:29 AM, William Park wrote:
>
> > But, if data are buried in a mess, then it's very labour-intensive to
> > dig them out.  It might be useful to have a scanf()-like feature, where
> > stdin or a string is read and parsed according to the usual format
> > string.  I guess it would actually be sscanf(), since the 'read' builtin
> > reads a line first.  Since you're dealing with strings, only %s, %c, and
> > %[] are sufficient.
>
> Sounds like a good candidate for a loadable builtin. Anyone want to take
> a shot at it?
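>
> (In the meantime, a rough sscanf-style extraction can be sketched with
> [[ =~ ]] and BASH_REMATCH; the format below is only an example:)
>
> line='user=alice id=42'
> if [[ $line =~ ^user=([[:alnum:]]+)[[:space:]]+id=([[:digit:]]+)$ ]]; then
>     user=${BASH_REMATCH[1]} id=${BASH_REMATCH[2]}
>     echo "$user / $id"
> fi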
>
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
>
>


REQUEST

2024-06-04 Thread Saint Michael
>
> It's time to add floating point variables and math to bash.

It would just make it so much easier to solve business problems without
external calls to bc or Python.
Please, let's overcome the "shell complex". Let's treat bash as a real language.


Re: REQUEST - bash floating point math support

2024-06-05 Thread Saint Michael
The most obvious use of floating-point variables would be to compare
balances and to branch based on whether a balance is lower than a certain
value.
I use:
t=$(python3 -c "import math;print($balance > 0)")
and then:
if [ "$t" == "False" ]; then
echo "Result <= 0 [$t] Client $clname $clid Balance $balance"
fi
There must be a solution without Awk or Python or bc, internal to bash.

On Wed, Jun 5, 2024 at 11:49 AM Robert Elz  wrote:
>
> Date:Wed, 5 Jun 2024 11:09:45 -0400
> From:Greg Wooledge 
> Message-ID:  
>
>   | > to convert floats back into integers again, controlling how
>   | > rounding happens).
>   |
>   | Ironically, that last one is the one we already *do* have.
>
> Yes, I know about printf (and while POSIX doesn't require that floats
> be supported in printf(1), all the implementations I am aware of, do) but:
>
>   | As long as you're OK with "banker's rounding"
>
> that is expressly not "controlling how rounding happens" - applications
> dealing with floats sometimes want round to nearest (which is what is
> happening in printf - the tiebreaker algorithm when up or down are equally
> near might be relevant, but usually isn't), others want round down (towards 0,
> that is, simply discard the fractional part - that's easy in sh with ${v%.*},
> though you might need to deal with "-0" as the result - others round up
> (away from 0) (harder in sh, but achievable), others want round towards
> minint (ie: positives round down, negatives round up), and I suppose round
> towards maxint (the opposite) might occur sometimes too, though I don't
> think I've ever seen a use for that one.
>
> Most of this can be done, with some difficulty sometimes, but they
> really ought to be done with arithmetic functions - in fact, it is
> hard to imagine any real floating point work that can be done without
> the ability to define functions that can be used in an arithmetic
> context.
>
> My whole point is that as simple as "just add float support
> to shell arithmetic" might seem, it wouldn't end there; it is almost
> certainly better to just not go there.   There are plenty of other
> languages that can work with floats - not everything needs to be a shell
> script using only shell primitives.   Use the appropriate tool; don't
> just pick one and make it try to be everything.
>
> kre
>
> I have considered all this as I once thought of adding float arith to
> the shell I maintain, and it took very little of this kind of thought
> process to abandon that without ever writing a line of code for it.
>
>
>



Re: REQUEST - bash floating point math support

2024-06-05 Thread Saint Michael
I think that we should do this in the shell. I mean, it will get done at
some point, in the next decades or centuries. Why not do it now? Let's
compile some C library or allow inline C.

On Wed, Jun 5, 2024, 2:12 PM Greg Wooledge  wrote:

> On Wed, Jun 05, 2024 at 01:31:20PM -0400, Saint Michael wrote:
> > the most obvious use of floating variables would be to compare
> > balances and to branch based on if a balance is lower than a certain
> > value
> > I use:
> > t=$(python3 -c "import math;print($balance > 0)")
> > and the
> > if [ "$t" == "False" ];then
> > echo "Result <= 0 [$t] Client $clname $clid Balance $balance"
> > fi
> > There must be a solution without Awk or Python or BC. Internal to bash
>
> The example you show is just comparing to 0, which is trivial.  If
> the $balance variable begins with "-" then it's negative.  If it's "0"
> then it's zero.  Otherwise it's positive.
>
> For comparing two arbitrary variables which contain strings representing
> floating point numbers, you're correct -- awk or bc would be the minimal
> solution.
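>
> A sketch of both, for reference ($limit here is a hypothetical second value;
> the awk call assumes plain decimal strings):
>
> if [[ $balance == -* || $balance == 0 ]]; then   # sign check, no external tools
>     echo "Result <= 0 Client $clname $clid Balance $balance"
> fi
>
> if awk -v a="$balance" -v b="$limit" 'BEGIN { exit !(a < b) }'; then
>     echo "balance is below limit"
> fi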
>
>


Re: REQUEST - bash floating point math support

2024-06-12 Thread Saint Michael
I think that we should go ahead and do it.

On Wed, Jun 12, 2024, 5:06 PM Zachary Santer  wrote:

> On Thu, Jun 6, 2024 at 6:34 AM Léa Gris  wrote:
> >
> > Le 06/06/2024 à 11:55, Koichi Murase écrivait :
> >
> > > Though, I see your point. It is inconvenient that we cannot pass the
> > > results of arithmetic evaluations to the `printf' builtin. This
> > > appears to be an issue of the printf builtin. I think the `printf'
> > > builtin should be extended to interpret both forms of the numbers, the
> > > locale-dependent formatted number and the floating-point literals.
> >
> > Another way would be to expand string representation of floating-point
> > numbers using the locale.
>
> Parameter transformations could be implemented to serve this purpose.
> Let's say, if var is in the form of a C floating-point literal,
> ${var@F} would expand it to the locale-dependent formatted number, for
> use as an argument to printf or for output directly. And then ${var@f}
> would go the other way, taking var that's in the form of a
> locale-dependent formatted number, and expanding it to a C
> floating-point literal.
>
> In this scenario, any floating point numbers ingested by the bash
> script would be transformed to C floating-point literals before they
> are referenced or manipulated in an arithmetic context, and then they
> would be transformed back into the locale-dependent formatted number
> before being output from the script.
>
> Arithmetic evaluation is performed on the right-hand side of an
> assignment statement assigning to a variable declared with the -i
> integer attribute. A hypothetical -E floating-point variable attribute
> in bash should work the same way. As such, a variable with the -E
> attribute would have to be assigned a value in a format accepted by
> the arithmetic context, i.e. the C floating-point literal format.
>
> So you'd have to do
> declare -E var="${arg@f}"
> in order to safely handle the locale-dependent formatted number arg,
> an argument to the script.
>
> Any floating-point literals within the script would have to be
> formatted as C floating-point literals, at least if they're going to
> be assigned the -E attribute or used in an arithmetic context.
>
> And then you would have to do
> printf '%f' "${var@F}"
> to safely output the value using printf.
>
> Is that unreasonable?
>
> Zack
>
>


Re: REQUEST - bash floating point math support

2024-06-14 Thread Saint Michael
Great idea.

On Fri, Jun 14, 2024 at 3:18 AM Léa Gris  wrote:
>
> Le 14/06/2024 à 03:41, Martin D Kealey écrivait :
> > On Thu, 13 Jun 2024 at 09:05, Zachary Santer  wrote:
> >
> >>
> >> Let's say, if var is in the form of a C floating-point literal,
> >> ${var@F} would expand it to the locale-dependent formatted number, for
> >> use as an argument to printf or for output directly. And then ${var@f}
> >> would go the other way, taking var that's in the form of a
> >> locale-dependent formatted number, and expanding it to a C
> >> floating-point literal.
> >>
> >
> > How about incorporating the % printf formatter directly, like ${var@%f} for
> > the locale-independent format and ${var@%#f} for the locale-specific format?
> >
> > However any formatting done as part of the expansion assumes that the
> > variable holds a "number" in some fixed format, rather than a localized
> > string.
> > Personally I think this would actually be a good idea, but it would be
> > quite a lot bigger project than simply adding FP support.
> >
> > -Martin
>
> Another elegant option would be to expand the existing variables' i flag
> to indicate that the variable is numeric rather than integer.
>
> Then have printf handle argument variables with the numeric flag as
> using the LC_NUMERIC=C floating-point format with dot radix point.
>
> Expanding the existing i flag would also ensure numerical expressions
> would handle the same value format.
>
> The David method: Take down multiple issues with one stone.
>
>
>
> --
> Léa Gris
>



Question that baffles AI (all of them)

2024-06-15 Thread Saint Michael
In this code:
data="'1,2,3,4','5,6,7,8'"

# Define the processing function
process() {
echo "There are $# arguments."
echo "They are: $@"
local job="$1"
shift
local a="$1"
shift
local b="$1"
shift
local remaining="$*"
echo "Arg1: '$a', Arg2: '$b'"
}
process "$data"

How can I get my (a) and (b) arguments right?
The length of both strings is unpredictable.
I want a="1,2,3,4" and b="5,6,7,8".
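
One way to split such a string, as a sketch, is to read on the single quotes
themselves and discard the separator fields:

data="'1,2,3,4','5,6,7,8'"
IFS=\' read -r _ a _ b _ <<< "$data"
echo "Arg1: '$a', Arg2: '$b'"    # Arg1: '1,2,3,4', Arg2: '5,6,7,8'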



Re: REQUEST - bash floating point math support

2024-06-21 Thread Saint Michael
Anybody else with the knowledge to tackle this?
I am not capable of even writing C code correctly.

On Fri, Jun 21, 2024 at 4:22 PM Chet Ramey  wrote:
>
> On 6/21/24 3:59 PM, alex xmb sw ratchev wrote:
>
> >  > If floating point math support is added to bash, I would expect it to
> >  > be able to handle floating point literals in these forms as well.
> >
> > I'm not planning to do this any time soon.
> >
> >
> > sorry my forgetness ... why ?
>
> Because if floating point math support gets added to bash, someone has to
> do the work. It's not a priority for me right now, and no one else is
> offering.
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
>



Fwd: New feature

2024-10-12 Thread Saint Michael
From: Saint Michael 
Date: Sat, Oct 12, 2024 at 9:49 AM
Subject: New feature

The printf command needs a new flag, -e, that would mimic the way the
same flag works with echo.
After using printf, right now I need to launch a second command if I
need to expand the \n escapes into real newlines:

PROCEDURE_INFO=$(echo -e "${PROCEDURE_INFO}")

This step would be redundant if printf had the flag.
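
For reference, a sketch using the existing %b format specifier, which already
expands backslash escapes in its argument:

PROCEDURE_INFO=$(printf '%b' "${PROCEDURE_INFO}")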

Philip Orleans