Re: Likely Bash bug

2021-03-17 Thread David
On Wed, 17 Mar 2021 at 23:05, Greg Wooledge  wrote:
> On Wed, Mar 17, 2021 at 10:59:42AM +0700, Robert Elz wrote:

> >   | Operating system is BionicPup64 8.0.

> > That might.   More importantly is probably whatever package management
> > system it uses.   I have no idea what the "ash" the bug report refers to
> > is (there is an ancient shell of that name, but I cannot imagine any
> > distribution including that, instead of one of its bug fixed and updated
> > successors, like say, dash)

> "ash" is often secret code for "busybox sh".

> In this specific case, I'm not sure *what* it is.  I googled a few things
> and came up with this page:

> http://wikka.puppylinux.com/Ash

That page was last edited in 2011.

The package list for BionicPup64 8.0 (or other versions) can be found here:
https://distrowatch.com/table.php?distribution=puppy&pkglist=true&version=8.0#pkglist

Looks like the provided shells are bash 4.4.18 and busybox 1.29.3

But I wonder if this bug is reproducible even by the reporter.



Re: Possible bug with redirect.

2021-11-01 Thread David
On Tue, 2 Nov 2021 at 02:37, Rikke Rendtorff  wrote:
>
> I'm very new to linux

[...]

> I went to the https://linuxmint.com community website and joined their IRC
> chat to see if they had an explanation and they told me it may be a bug,
> since they couldn't reproduce it for other commands.
>
> P.S. They also told me I may be one of those users that gets myself into
> trouble by using commands in ways that no sane person would ever think of.

Hi, the advice you have quoted above is entirely wrong and seems
completely ignorant of the topic, so I suggest that you
disregard *all* of it.

There are many different communities with different knowledge and
interests, and the replies you get can vary in quality depending on
who happens to notice a request for assistance at any given moment.

You can find a good resource of quality information on shell command
use here:
  http://mywiki.wooledge.org/

This (bug-bash) mailing list is intended for actual bug reports, not
maybe-bugs by folks who don't know what they are doing.
If you have questions in future, the best place to ask them is
another mailing list run by this community:
  https://lists.gnu.org/mailman/listinfo/help-bash

And just in case you are not subscribed to this mailing list, are you
aware that the other aspects of your question have already been
answered? If not, you can read them here:
  https://lists.gnu.org/archive/html/bug-bash/2021-11/msg00010.html



Re: Unclosed quotes on heredoc mode

2021-11-23 Thread David
On Wed, 24 Nov 2021 at 14:36, Martijn Dekker  wrote:

> There's a regularly updated mirror of the bash repo here:
> https://github.com/bminor/bash/

Or if you care about software freedom you might prefer:
  https://git.savannah.gnu.org/cgit/bash.git



Re: Long variable value get corrupted sometimes

2022-02-16 Thread David
On Wed, 16 Feb 2022 at 19:38, Daniel Qian  wrote:

> I encountered a problem that long variable value get corrupted sometimes.

> A UTF-8 encoded file containing a lot of Chinese characters, file size ~35K.

> FOO=$(cat /tmp/foo.txt)

Hi, this looks like something that was recently fixed, perhaps
you can try this patch:
  https://savannah.gnu.org/patch/?10035
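Aside: bash can also capture a file's contents without running `cat` at all,
via the `$(< file)` form. Whether that sidesteps this particular corruption is
untested guesswork on my part, but it may be worth a try while waiting for the
patch. A sketch:

```shell
# Two ways to read a whole file into a variable; $(< file) is a
# bash-specific form that reads the file with no external process.
tmp=$(mktemp)
printf 'some demo text\nsecond line\n' > "$tmp"

FOO=$(cat "$tmp")   # the form used in the report
BAR=$(< "$tmp")     # bash shortcut, no cat process forked

rm -f "$tmp"
[ "$FOO" = "$BAR" ] && echo "contents match"
```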



Re: the "-e" command line argument is not recognized

2022-02-16 Thread David
On Wed, 16 Feb 2022 at 19:51, Viktor Korsun  wrote:

> produced output:
> ./get_env.sh
> -q
> -w
>
> -r
> -t
> -y
>
> expected output:
> ./get_env.sh
> -q
> -w
> -e
> -r
> -t
> -y

Hi, this behaviour is well known and has been widely discussed.
You can search the web for "printf vs echo bash" and you
will find plenty of information.
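For example, here is a minimal sketch of the behaviour being reported (not the
reporter's actual script): bash's echo parses a leading -e as an option, so
the argument itself disappears, while printf never treats arguments after the
format string as options.

```shell
# echo consumes -e as an option; printf prints it literally.
echoed=$(echo -e)           # empty: -e was taken as an option
printed=$(printf '%s' -e)   # the literal string -e

echo "echo produced:   '$echoed'"
echo "printf produced: '$printed'"
```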



Re: Possible bug in bash

2022-05-13 Thread David
On Fri, 13 May 2022 at 18:18, flyingrhino  wrote:

> Before opening the bug I looked online for if-then-else vs [[ and no
> proper information was available, definitely not to the extent you explain 
> here.

Have a look here:

http://mywiki.wooledge.org/BashPitfalls#cmd1_.26.26_cmd2_.7C.7C_cmd3

> This is very useful and rare knowledge and the effort
> you took to explain this is highly appreciated.

Have a thorough look around at:
http://mywiki.wooledge.org/BashPitfalls
http://mywiki.wooledge.org/BashFAQ
http://mywiki.wooledge.org/BashGuide

and you will find a lot of useful knowledge that is
explained with considerable effort by many people.
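To spell out the particular pitfall linked above, here is a minimal sketch:
`a && b || c` is not an if/else, because c runs when *either* a or b fails.

```shell
# "a && b || c" runs c when EITHER a or b fails, unlike if/else.
result=$(true && false || echo "c ran")
echo "$result"    # c ran, even though the first command succeeded

# The unambiguous spelling:
if true; then
    false         # a failing then-branch no longer triggers the else
else
    echo "never printed"
fi
echo "done"
```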



Re: Revisiting Error handling (errexit)

2022-07-04 Thread David
On Mon, 4 Jul 2022 at 22:22, Yair Lenga  wrote:

> I've tried to look into a minimal
> solution that will address the most common pitfall of errexit, where many
> sequences (e.g., series of commands in a function) will not properly
> "break" with 'errexit'. For example:
>
> function foo {
> cat /missing/file   # e.g.: cat non-existing file.
> action2   # Executed even if action 1 fail.
> action3
> }
>
> set -oerrexit   # want to catch errors in 'foo'
> if ! foo ; then
> # Error handling for foo failure
> fi

On Tue, 5 Jul 2022 at 04:34, Yair Lenga  wrote:

> Thanks for taking the time to review my post. I do not want to start a
> thread about the problems with ERREXIT. Instead, I'm trying to advocate for
> a minimal solution.

Here's the minimal solution that I would use.

function foo {
cat /missing/file || return 1
action2   # not executed if action 1 fails
action3 || return 2
}

foo
foo_exitstatus=$?
case "$foo_exitstatus" in
0 ) echo "no error"
;;
1 ) echo "handle first error"
;;
2 ) echo "handle a different error"
;;
esac



Re: [bash 4] 'test -v 1' is never true

2022-11-27 Thread David
On Mon, 28 Nov 2022 at 00:01, Alejandro Colomar  wrote:
> On 11/27/22 12:41, Alexey wrote:
> > On 2022-11-26 21:45, Alejandro Colomar wrote:

> > I could suggest you to use for clarity another construction:
> > [[ ${1+isset} ]] || echo "not set"

> > Here "isset" is just for readability. You could place any other string 
> > literal
> > there.

> I actually find that very confusing.  What feature is it using?  I couldn't 
> find
> it with `info bash | less` then `/\+`.

See:
https://www.gnu.org/software/bash/manual/bash.html#Shell-Parameter-Expansion
- Under heading: ${parameter:+word}
- The parameter is 1
- The word is isset
- The colon ':' is optional, explained by the sentences beginning with
"Omitting the colon" ..



Re: declare XXX=$(false);echo $?

2022-12-02 Thread David
On Fri, 2 Dec 2022 at 21:29, Ulrich Windl
 wrote:

> Surprisingly "declare XXX=$(false);echo $?" outputs "0" (not "1")
> There is no indication in the manual page that "declare" ignores
> the exit code of commands being executed to set values.

The above is not surprising at all.

'declare' is a builtin command. It succeeded.

$ help declare | tail -n 3
Exit Status:
Returns success unless an invalid option is supplied or a variable
assignment error occurs.

'man bash' explains this comprehensively.

The return value is 0 unless an invalid option is encountered, an
attempt is made to define a function using ``-f foo=bar'', an attempt
is made to assign a value to a readonly variable, an attempt is made
to assign a value to an array variable without using the compound
assignment syntax (see Arrays above), one of the names is not a valid
shell variable name, an attempt is made to turn off readonly status
for a readonly variable, an attempt is made to turn off array status
for an array variable, or an attempt is made to display a non-existent
function with -f.

Also, $(false) probably does not produce the value
of $XXX that you expect. Try running 'echo $(false)'.
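A quick way to see both points: splitting the declaration from the assignment
preserves the command substitution's exit status.

```shell
# declare's status reflects declare itself, not $(false).
declare XXX=$(false); s1=$?   # 0: declare succeeded

declare YYY                   # declare first...
YYY=$(false); s2=$?           # ...then assign: $(false)'s status survives

echo "declare form: $s1, plain assignment: $s2"
```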



Re: Segmentation Fault in bash --posix

2023-01-20 Thread David
On Sat, 21 Jan 2023 at 09:24, N R  wrote:

> I ran into a segmentation fault running bash --posix. Here are the
> steps to reproduce :
> bash-5.1$ echo () {
> > echo test
> > }
> bash-5.1$ echo
> Even though I'm not sure what is causing this seg fault, I'm sure it is
> not the normal/expected behaviour.

That is the normal, expected behaviour.

What do you expect this to do:
$ foo() { foo ; } ; foo

This code instructs the function foo to call itself again
and again without limit forever.

So it tries to do that until the process entirely fills its call
stack with return addresses and runs out of memory, at which point
the operating system kills the process.

That's not bash's fault. That's the programmer's fault.
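If the intent was to wrap echo, the recursion is avoided by calling the
builtin explicitly inside the function body. A sketch, guessing at that
intent:

```shell
# A wrapper named echo must use "builtin echo" (or printf) inside,
# otherwise it calls itself forever until the stack is exhausted.
echo() { builtin echo "wrapped:" "$@"; }

out=$(echo test)     # no recursion: the body uses the builtin
builtin echo "$out"

unset -f echo        # remove the wrapper again
```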



Re: Vulnerability Report(No SPF Record)

2023-02-16 Thread David
On Fri, 17 Feb 2023 at 07:42, Greg Wooledge  wrote:
> On Thu, Feb 16, 2023 at 07:21:14PM -, Syed Maaz wrote:

> > I am a security researcher,I have found this vulnerability related to 
> > your website bash-hackers.org.

> Just for the record, "bash-hackers.org" is a third-party web site, not
> affiliated with the Free Software Foundation or with Chet Ramey.  I'm
> not sure who operates it, but I bet there's a decent chance they'll see
> your messages here.  So, you're mostly in the right place -- just keep
> in mind that it's not an officially supported site.

When I enter "bash-hackers.org" into Firefox it loads
https://wiki.bash-hackers.org/ and at the bottom of that page there
is a contact web form and, adjacent to it, an "imprint" link giving
the name of the person who declares responsibility for "site content
and design". That looks like a more direct point of contact than
taking a punt on waiting for some response here.



Re: error message lacks useful debugging information

2023-10-10 Thread David
On Fri, 6 Oct 2023 at 01:20, Ángel  wrote:
> On 2023-10-04 at 20:05 -0400, Dave Cigna wrote:

> > Here's how I encountered the problem. You might not be able to
> > reproduce
> > it on your machine, but that doesn't mean that it's not a bug with
> > bash:
> >
> > download:  candle_1.1.7.tar.gz
> > from: https://github.com/Denvi/Candle
> > Extract to the folder of your choosing. cd to that folder and execute
> > the
> > bash command:
> >
> > ../Candle
>
> For the record, the above program is a 32-bit ELF executable. The most
> likely caue for the error in this case is not having the /lib/ld-
> linux.so.2 interpreter it requests.

Offtopic, but in case it helps Dave Cigna, I have successfully built and
run Candle on a 64bit Debian 11 by doing something like:

somedir=/where/you/want/candle
cd "$somedir"
git clone https://github.com/Denvi/Candle
cd "$somedir/Candle"
# pass the prefix to cmake itself; a bare shell variable assignment
# on its own line is not seen by cmake
cmake -DCMAKE_INSTALL_PREFIX="$somedir/Candle/install" src
make

It might be necessary to install additional required development
packages, in my case this included:
qtbase5-dev
libqt5opengl5-dev
libqt5serialport5-dev



Re: BUG: Colorize background of whitespace

2023-10-25 Thread David
On Wed, 25 Oct 2023 at 11:51, Greg Wooledge  wrote:
> On Wed, Oct 25, 2023 at 10:29:32AM +0200, Holger Klene wrote:

> > Description:
> > The initial bash background is hardcoded to some default (e.g. black) and 
> > cannot be colorized by printing "transparent" tabs/newlines with 
> > ANSI-ESC-codes.
> > Only after a vertical scrollbar appears, the whitespace beyond the window 
> > hight will get the proper background color.
>
> Terminals have colors (foreground and background).  Bash does not.  Bash
> is just a command shell.
>
> > Repeat-By:
> > run the following command line:
> > clear; seq 50; printf '\e[36;44m\nsome colored\ttext with\ttabs\e[m\n'
> > Play with the parameter to seq, to keep the line within the first screen or 
> > move it offscreen.
> >
> > Reproduced in:
> > - in Konsole on Kubuntu 23.04
> > - in the git bash for windows mintty 3.6.1
> > - in WSL cmd window on Windows 11
>
> I ran this command in xterm (version 379) and rxvt-unicode (9.30) on
> Debian, but I'm not sure what I'm supposed to be seeing.

The bug being reported is that the '\t' characters have a black
background if the 'seq' argument is low enough that its lines 1 and 2
remain visible when the command is run. But if the 'seq' argument is
made big enough that (at least) lines 1 and 2 both scroll off the top
of the terminal window, then the '\t' characters get the expected
blue background.

I see this in Debian 11, both in 'lxterminal' under LXDE and in the
virtual consoles:
[david@kablamm ~]$ echo $BASH_VERSION
5.1.4(1)-release
[david@kablamm ~]$ cat /etc/debian_version
11.8



Re: Code Execution in Mathematical Context

2019-06-06 Thread David
On Thu, 6 Jun 2019 at 03:40, Ilkka Virta  wrote:
> On 5.6. 17:05, Chet Ramey wrote:
> > On 6/4/19 3:26 PM, Ilkka Virta wrote:
>
> >>$ echo "$(( 'a[2]' ))"
> >>bash: 'a[2]' : syntax error: operand expected (error token is "'a[2]' ")
> >
> > The expression between the parens is treated as if it were within double
> > quotes, where single quotes are not special.
>
> I did put the double-quotes around the $((...))

Hi Ilkka Virta

In case of any confusion...

Please note no-one is talking about putting double quotes around $((...))
which would be "$((...))"

Regarding $((...)) when Chet refers above to "the expression between the parens"
he means whatever is between the parentheses, in this case the three dots.

If I understand correctly, Chet is saying there that $((...)) is
parsed as if it were written $(("...")), and therefore any single
quotes inside the parentheses are not special.
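A short check of that reading, with a hypothetical array `a`:

```shell
# Inside $(( )) single quotes are not special, so the unquoted array
# reference works while the quoted form is rejected as an operand.
a=(10 20 30)
val=$(( a[2] ))
echo "$val"               # 30: unquoted array reference is fine
# echo "$(( 'a[2]' ))"    # syntax error: operand expected (as reported)
```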



Re: T/F var expansion?

2019-07-29 Thread David
On Mon, 29 Jul 2019 at 14:18, L A Walsh  wrote:
>
> Is there a T/F var expansion that does:

[...]

>
> Sorry if this is already implemented in some form, I'm just
> not remembering there being one.

Your message is not a bug report. The list
  help-b...@gnu.org
exists to help with questions like this.



Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread David
On Thu, 11 Jul 2024 at 22:14, David Hedlund  wrote:

> When a directory is deleted while the user is inside it, the terminal
> should automatically return to the parent directory.

Hi, I disagree, and I think that if you understand better why this
occurs, you will understand why knowledgeable users disagree, and you
will change your opinion.

This is a fundamental aspect of how Unix-like operating systems work,
and it will not be changed because it is very useful in other situations.
It occurs because of the designed behaviour of the 'unlink' system call.
You can read about that in 'man 2 unlink'.

>  Expected behaviour
> When a directory is deleted while the user is inside it, the terminal
> should automatically return to the parent directory.

>  Actual behaviour
> The terminal remains in the deleted directory's path, even though the
> directory no longer exists.

Your final phrase there "the directory no longer exists" is incorrect.

The directory does still exist. The 'rm' command did not destroy it.
Any processes that have already opened it can continue to use it.
The terminal is one of those processes.

Deleting any file (including your directory, because directories have
file-like behaviour in this respect, same as every other directory entry)
just removes that file object from its parent directory entries. It does
not destroy the file in any way. That means that no new processes
can access the file, because now there's no normal way to discover
that it exists, because it no longer appears in its parent directory entries.
But any process that already has the file open can continue to use it.

So your directory does not cease to exist until nothing is using it, and
even then it is not destroyed, merely forgotten entirely.

Here's more explanation:
  https://en.wikipedia.org/wiki/Rm_(Unix)#Overview
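This is easy to observe, on Linux at least, with a throwaway mktemp
directory:

```shell
# The shell keeps using its deleted working directory: pwd still
# reports the old path, but no new process can enter the directory.
d=$(mktemp -d)
still=$(cd "$d" && rmdir "$d" && pwd)   # remove our own cwd, then pwd
echo "after rmdir, pwd says: $still"

cd "$d" 2>/dev/null || echo "and the path can no longer be entered"
```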



LINENO is affected by sync

2021-09-01 Thread David Collier
Version:

GNU bash, version 5.0.3(1)-release (arm-unknown-linux-gnueabihf)

Raspberry Pi using Raspbian.

Installed from repo?

LINENO goes backwards when sync is run

echo "== At this point \$LINENO has correctly counted about
2800 lines=test @ 2832 $LINENO"
echo "=== Running 'sync' makes bash lose a few counts from \$LINENO - in
this case about ten - no idea how to fix it. ==="
sync
echo
"test
@ 2835 $LINENO"


Re: LINENO is affected by sync

2021-09-01 Thread David Collier
greg - I'm sorry - I assume there is a proper place for me to post
follow-up info, can you let me know where it is?

I could try for a short script, but this thing is a bit like herding eels.
I narrowed it down to a single line containing 'sync' - but as you say
that's clearly impossible.
And to my embarrassment the bug remains unchanged when I comment-out
the word sync.

!!!Here is a code segment ( I have removed a few == signs so it
fits across my screen. ):

echo " At this point \$LINENO has correctly counted about
2800 lines=test @ 2838 $LINENO"
echo "=== Something makes bash lose a few counts from \$LINENO - in this
case about nine - no idea how to fix it. ==="

if ! filesOrSubdirsPresent "${rootOfDummyFsToInstallAPADN}/usr/sbin/*"
then :
if ${G_verbose} ; then echo "Skipping  ${targetAPARN} - nothing found";
fi
else :
echo "Installing: ${targetAPARN}*"

# Do we really want the '-p' option here??? - it corrupts ownership of
directories!
( cp -dpR "${rootOfDummyFsToInstallAPADN}"/usr/sbin/* ${cpDestFlag}
 "${targetAPARN}" )

# Not sure we can know that there are no .sh or .pl scripts already
there but with execute off.
# but we install so many we can't go round naming them one-by-one.
#
setExecutabilityOfScriptsInEntireBranch  "${targetAPARN}"   # All of
our scripts in /usr/sbin do have .sh extensions.
echo
"===test
@ +16  $LINENO"
sync

fi
echo
"==test
@ +20  $LINENO"

!!!here is the output - it is identical if I put a pound
character in front of 'sync':

= At this point $LINENO has correctly counted about
2800 lines=test @ 2838 2838
=== Something makes bash lose a few counts from $LINENO - in this case
about nine - no idea how to fix it. ===
Installing: /usr/sbin/*
===test
@ +16  2854
===test
@ +20  2849

!!!

As you can see, four lines further on, LINENO has gone down by 5,
making it 9 too small.
I would fiddle around with line endings, but since it literally goes
backwards I don't think it's going to be anything simple like that.
This happens again at intervals - wrong by different small numbers I think
- weirdly enough the next time it happens is also near a 'sync' operation -
which is what misled me.

Anyway - apologies for the red herring any suggestion welcome.
Do tell me if there is a proper place to put this info.


On Wed, Sep 1, 2021 at 4:01 PM Greg Wooledge  wrote:

> On Wed, Sep 01, 2021 at 10:36:21AM +0100, David Collier wrote:
> > Version:
> >
> > GNU bash, version 5.0.3(1)-release (arm-unknown-linux-gnueabihf)
> >
> > Raspberry Pi using Raspbian.
> >
> > Installed from repo?
> >
> > LINENO goes backwards when run sync
> >
> > echo "== At this point \$LINENO has correctly counted
> about
> > 2800 lines=test @ 2832 $LINENO"
> > echo "=== Running 'sync' makes bash lose a few counts from \$LINENO - in
> > this case about ten - no idea how to fix it. ==="
> > sync
> > echo
> >
> "test
> > @ 2835 $LINENO"
>
> There is no plausible reason an external command (sync or any other
> external command) would be able to change the value of LINENO, which is
> an internal shell variable.
>
> Can you post a brief but complete script which demonstrates the problem?
> I suspect whatever issue you're seeing is caused by something else, not
> by the "sync" command.
>


Re: LINENO is affected by sync

2021-09-02 Thread David Collier
Not sure I can go with that analysis.
To put it politely I don't think you've looked at the code and output in
enough detail.
Ignore the first 'trace' line - it just happens to be there.
The substantive issue - LINENO going backwards - occurs across four source
lines, two of which are blank, and in which the only 'active ingredient' is
a 'fi'
 So unless the function calls are managing some 'delayed action' I can't
see how they can be involved.

I suspect that if I work hard I can get the example down to one empty line,
but I'll need to put my money where my mouth is on that.

I will put some effort into minimising the content of an example - but
starting with 2800 lines that could be much effort.

Thanks

David

On Wed, Sep 1, 2021 at 6:09 PM Greg Wooledge  wrote:

> On Wed, Sep 01, 2021 at 04:52:29PM +0100, David Collier wrote:
> > greg - I'm sorry - I assume there is a proper place for me to post
> > follow-up info, can you let me know where it is?
>
> On the bug-bash mailing list is fine.  If the script is too big to
> post on bug-bash, then it's not useful for debugging anyway.  We'd
> need something smaller that we can actually wrap our heads around.
>
> > I could try for a short script, but this thing is a bit like herding
> eels.
> > I narrowed it down to a single line containing 'sync' - but as you say
> > that's clearly impossible.
> > And to my embarrassment the bug remains unchanged when I comment-out
> > the word sync.
>
> OK, good, that's what I would expect.
>
> > !!!Here is a code segment ( I have removed a few == signs so
> it
> > fits across my screen. ):
> >
> > echo " At this point \$LINENO has correctly counted about
> > 2800 lines=test @ 2838 $LINENO"
> > echo "=== Something makes bash lose a few counts from \$LINENO - in this
> > case about nine - no idea how to fix it. ==="
> >
> > if ! filesOrSubdirsPresent "${rootOfDummyFsToInstallAPADN}/usr/sbin/*"
> > then :
> > if ${G_verbose} ; then echo "Skipping  ${targetAPARN} - nothing
> found";
> > fi
> > else :
> > echo "Installing: ${targetAPARN}*"
> >
> > # Do we really want the '-p' option here??? - it corrupts ownership
> of
> > directories!
> > ( cp -dpR "${rootOfDummyFsToInstallAPADN}"/usr/sbin/* ${cpDestFlag}
> >  "${targetAPARN}" )
> >
> > # Not sure we can know that there are no .sh or .pl scripts already
> > there but with execute off.
> > # but we install so many we can't go round naming them one-by-one.
> > #
> > setExecutabilityOfScriptsInEntireBranch  "${targetAPARN}"   # All of
> > our scripts in /usr/sbin do have .sh extensions.
> > echo
> >
> "===test
> > @ +16  $LINENO"
> > sync
> >
> > fi
> > echo
> >
> "==test
> > @ +20  $LINENO"
>
> I'm guessing filesOrSubdirsPresent and
> setExecutabilityOfScriptsInEntireBranch are functions.
>
> > As you can see, four lines further on, and LINENO has gone down by 5 -
> > making it 9 too small
>
> My first guess is that it has something to do with those function calls.
> Either the fact that bash is calling a function *at all*, or something
> that happens specifically in one of those functions, might be throwing
> off the count.
>
> IIRC you said you were using bash 5.0, so here's a dumb little test with
> bash-5.0 on my system:
>
>
> unicorn:~$ cat foo
> #!/usr/local/bin/bash-5.0
>
> f() {
>   : this is f
> }
>
> echo "Point A: line #$LINENO"
> f
> echo "Point B: line #$LINENO"
> unicorn:~$ ./foo
> Point A: line #7
> Point B: line #9
>
>
> This one passes the test, so it's probably not something like "all
> functions break LINENO in the caller in bash 5.0".  That would have been
> too obvious.
>
> You should look at setExecutabilityOfScriptsInEntireBranch (since that's
> the last function call before the problem is observed).  Maybe something
> funny happens in there.  You might be able to comment out that function
> call and see if that makes the problem vanish.  If it does, then you can
> try to pinpoint what part of the function is triggering the problem.
>
> If it doesn't, then I'm at a loss.
>


Interesting bug

2022-02-12 Thread David Hobach

Dear all,

I think I found a rather interesting bug:
```
#!/bin/bash

function badCode {
echo "bad code executed"
}

function testCode {
#pick some existing file, nonexisting works too though
echo "/etc/passwd"
}

function tfunc {
  local foo=
  foo="$(testCode)" || {echo "foo";}
  cat "$foo" || {
badCode
case $? in
 *)
   exit 1
esac
}
}

echo "Finished."
```
(I also attached it.)

I guess 99% of programmers would expect either "Finished" to be printed or some
syntax error. In fact, however, the `badCode` function is executed and
"Finished" is never printed.
This is a nice one to hide bad code...

Output:
```
cat: '': No such file or directory
bad code executed
```

Affected bash versions:
Debian 11: GNU bash, version 5.1.4(1)-release (x86_64-pc-linux-gnu)
Fedora 32: GNU bash, version 5.0.17(1)-release (x86_64-redhat-linux-gnu)
(Probably more, these were the only two I tested.)

Happy bug hunting!

Best Regards
David



Re: Interesting bug

2022-02-12 Thread David Hobach

I guess 99% of programmers would either expect "Finished" to be printed or some 
syntax error.


Well, 99% of shell programmers will (hopefully ;-) ) put a blank between "{" and 
"echo" in the line

foo="$(testCode)" || {echo "foo";}


Yes, the interesting part is that depending on which space you accidentally forget, you'll either 
get the expected "Finished" or "bad code executed".
foo="$(testCode)" || {echo "foo"; } # --> bad code
foo="$(testCode)" || { echo "foo";} # --> Finished

I guess it closes the function and {echo is interpreted as a string or so, but 
that is probably not all (testCode, e.g., is never executed).

A syntax error would be nice instead.
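For what it's worth, part of why no error appears can be seen directly: `{` is
only a reserved word when it stands alone, so `{echo` is parsed as an ordinary
(almost certainly nonexistent) command word rather than opening a brace group.
A sketch:

```shell
# "{" opens a brace group only as a standalone word; "{echo" is just
# an ordinary command name that happens not to exist.
type "{echo" >/dev/null 2>&1 || echo "'{echo' is not a known command"
{ echo "'{ echo' starts a real brace group"; }
```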

Shellcheck at least gets this unless you do something weird and more obvious 
such as
```
#!/bin/bash

function badCode {
echo "bad code executed"
}

function testCode {
#pick some existing file
echo "/etc/passwd"
}

function tfunc {
  local foo=
  foo="$(testCode)" || "{echo" "foo";}
  cat "$foo" || {
badCode
case $? in
 *)
   exit 1
esac
}

echo "Finished."
```

It's also interesting that this - in contrast to the original example - 
triggers a syntax error:
```
#!/bin/bash

function badCode {
echo "bad code executed"
}

function testCode {
#pick some existing file
echo "/etc/passwd"
}

function tfunc {
  local foo=
  foo="$(testCode)" || {echo "foo";}
  cat "$foo" || {
badCode
case $? in
 *)
   exit 1
esac
} } #<-- only difference to the original example

echo "Finished."
```




Re: Interesting bug

2022-02-12 Thread David Hobach

P.S.: Also, if you remove the case/esac from the original example, it'll result 
in a syntax error. So why can the case/esac be used to ignore the syntax error?




Re: Interesting bug

2022-02-12 Thread David Hobach

Thanks a lot for the detailed explanations, much appreciated!

So essentially it's no bug - just a rather uncommon choice of keywords.

I still don't agree with that choice since it can be abused to somewhat hide 
code in pull requests to less experienced maintainers, but oh well... there's 
probably enough fun to be had with unicode et al already. And I guess the more 
modern tastes are relatively recent in the history of bash.




Bash reference manual feedback

2022-09-11 Thread David Apps
8 A Programmable Completion Example


The -o bashdefault option brings in the rest of the "Bash default"
completions – possible completion that Bash adds to the default
Readline set.


18. Perhaps change "possible completion" to "possible completions".

h3 9.2 Bash History Builtins


When any of the -w, -r, -a, or -n options is used,
if filename is given, then it is used as the history file.


19. Perhaps change this to: 
"When you use the filename argument with the -w, -r, -a, or -n options, 
Bash uses that file as the history file."


h4 9.3.3 Modifiers


If new is is null, each matching old is deleted.


20. Perhaps change "is is" to "is".

h3 10.1 Basic Installation


bash-4.2$ ./configure --help


21. Perhaps remove the prompt string.

h3 10.3 Compiling For Multiple Architectures


If you have to use a make that does not supports the VPATH variable,


22. Perhaps change "supports" to "support".

h3 10.5 Specifying the System Type


but need to determine by the type of host Bash will run on.


23. Perhaps change "need" to "needs" or "need to determine by" to 
"determines from".


h3 10.8 Optional Features


libintl library instead ofthe version in lib/intl.


24. Perhaps change "ofthe" to "of the".


* Bash implements the ! keyword to negate the return value of a
pipeline (see {Pipelines}). Very useful when an if statement needs to
act only if a test fails. The Bash ‘-o pipefail’ option to set will
cause a pipeline to return a failure status if any command fails.


25. Should this be 2 separate list items?


Commands specified with an RETURN trap are executed before execution


26. Perhaps change "an" to "a".

Thank you.

David



Bash reference manual feedback

2022-09-17 Thread David Apps

Thank you very much for your message.

I am not surprised that I have misunderstood this item. I have much to 
learn.



* the shell does not print a warning message if an attempt is made to
use a quoted compound assignment as an argument to declare (declare -a
foo=’(1 2)’). Later versions warn that this usage is deprecated


I was so surprised to find the "’" character (Unicode 0x2019) here that 
I assumed that it should be some other character (perhaps "'"). I have 
not managed to get the warning message, so I still do not understand 
this item properly.


Thank you.

David



echo "${HOME#$*/}" segfaults

2010-12-01 Thread David Rochberg
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
-DSHELL -DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include
-I../bash/lib   -g -O2 -Wall
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.1
Patch Level: 5
Release Status: release

Description:

echo "${HOME#$*/}" segfaults

Here's a stack trace from a -g -O0 build of bash 4.1

  $ echo "${HOME#$*/}"

  Program received signal SIGSEGV, Segmentation fault.
  0x00454ef1 in quote_string (string=0x0) at subst.c:3490
  3490if (*string == 0)
  (gdb) where
  #0  0x00454ef1 in quote_string (string=0x0) at subst.c:3490
  #1  0x0045c85b in param_expand (string=0x715c68 "$*/",
sindex=0x7fffdb64, quoted=8, expanded_something=0x0,
contains_dollar_at=0x7fffdb54,
  quoted_dollar_at_p=0x7fffdb60,
had_quoted_null_p=0x7fffdb58, pflags=0) at subst.c:7228
  #2  0x0045d93d in expand_word_internal
(word=0x7fffdc10, quoted=8, isexp=1, contains_dollar_at=0x0,
expanded_something=0x0) at subst.c:7802
  #3  0x0045450b in call_expand_word_internal
(w=0x7fffdc10, q=8, i=1, c=0x0, e=0x0) at subst.c:3129
  #4  0x004549fd in expand_string_for_rhs (string=0x715c68
"$*/", quoted=8, dollar_at_p=0x0, has_dollar_at=0x0) at subst.c:3315
  #5  0x0045694f in getpattern (value=0x715c68 "$*/",
quoted=1, expandpat=1) at subst.c:4277
  #6  0x00456d41 in parameter_brace_remove_pattern
(varname=0x75f0a8 "HOME",
  value=0xa23388
"\001/\001h\001o\001m\001e\001/\001r\001o\001c\001h\001b\001e\001r\001g",
patstr=0x773f58 "$*/", rtype=35, quoted=1) at subst.c:4411
  #7  0x0045c298 in parameter_brace_expand
(string=0x757888 "${HOME#$*/}", indexp=0x7fffde6c, quoted=1,
pflags=0, quoted_dollar_atp=0x7fffdfa0,
  contains_dollar_at=0x7fffdf94) at subst.c:6990
  #8  0x0045c99f in param_expand (string=0x757888
"${HOME#$*/}", sindex=0x7fffdfa4, quoted=1,
expanded_something=0x0,
  contains_dollar_at=0x7fffdf94,
quoted_dollar_at_p=0x7fffdfa0, had_quoted_null_p=0x7fffdf98,
pflags=0) at subst.c:7314
  #9  0x0045d93d in expand_word_internal (word=0xa202a8,
quoted=1, isexp=0, contains_dollar_at=0x7fffe0d4,
expanded_something=0x0) at subst.c:7802
  #10 0x0045dfc5 in expand_word_internal (word=0xa8b948,
quoted=0, isexp=0, contains_dollar_at=0x7fffe168,
expanded_something=0x7fffe16c)
  at subst.c:7933
  #11 0x0045f725 in shell_expand_word_list
(tlist=0xa84f48, eflags=31) at subst.c:8893
  #12 0x0045f9df in expand_word_list_internal
(list=0x9eda48, eflags=31) at subst.c:9010
  #13 0x0045f04a in expand_words (list=0x9eda48) at subst.c:8632
  #14 0x0043ad49 in execute_simple_command
(simple_command=0x71fd88, pipe_in=-1, pipe_out=-1, async=0,
fds_to_close=0x764d08) at execute_cmd.c:3645
  #15 0x00435a49 in execute_command_internal
(command=0xa23108, asynchronous=0, pipe_in=-1, pipe_out=-1,
fds_to_close=0x764d08) at execute_cmd.c:730
  #16 0x00435221 in execute_command (command=0xa23108) at
execute_cmd.c:375
  #17 0x00421d75 in reader_loop () at eval.c:152
  #18 0x0041fbed in main (argc=1, argv=0x7fffe4a8,
env=0x7fffe4b8) at shell.c:749



Bash

2011-04-27 Thread David Barnet
I am new to Ubuntu and Linux. I am trying to install the Bash shell and I
get this error and cannot find a solution. When I type make install I
get the error below. If you can give me a direction to head in to correct
this, I would appreciate it. Thanks

Dave

mkdir -p -- /usr/local/share/man/man1
mkdir: cannot create directory `/usr/local/share/man/man1': Permission
denied
make: *** [installdirs] Error 1
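The "Permission denied" above is not bash-specific: /usr/local is root-owned on Ubuntu. Two standard ways around it for an autoconf-style build are installing as root, or configuring a prefix you own. A minimal sketch, assuming a normal source build; the $HOME/local prefix is just an example:

```shell
# Option 1: system-wide install (requires root):
#   sudo make install
# Option 2: rebuild with a user-writable prefix, then install without root:
#   ./configure --prefix="$HOME/local" && make && make install
# With option 2 the directory that failed above lives under your home:
prefix="$HOME/local"
mkdir -p "$prefix/share/man/man1"
ls -d "$prefix/share/man/man1"
```

With the second option, remember to put "$HOME/local/bin" on PATH so the new bash is found before /bin/bash.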





Bash associative arrays are slow, don't scale

2011-05-04 Thread David Kuehling
Hi,

I'm using a bash script (bash 4.1) to parse debian package indices (kind
of a mirroring script).  This stores all package fields for every
package into one big hash table.

That's about 300,000 hash array elements for Debian sid's "Sources"
index file.  Unfortunately performance doesn't scale at all.  It needs
about 2m30s for parsing that Sources file, and exactly the same time
after completely re-writing the parser using a sed preprocessor script.

Looking at the hashlib.c source code it's somewhat apparent why it's
that slow: bash's hash tables implement open hashing with 64 hash chains,
so performance for search and insert is linear in the number N of stored
elements as soon as N is much larger than 64.  Insert in particular is
extremely slow, as it has to scan the full chain from start to end.

Are there any plans to change that?  Would be helpful if the man page
stated that bash arrays are O(N) in performance for large N.  Had I
known that, I wouldn't have relied on using bash at all for such kind of
script.

cheers,

David
-- 
GnuPG public key: http://user.cs.tu-berlin.de/~dvdkhlng/dk.gpg
Fingerprint: B17A DC95 D293 657B 4205  D016 7DEF 5323 C174 7D40


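The chain-scan cost the poster describes is easy to reproduce. A minimal, self-contained sketch (the key names and the value of n are arbitrary) that populates an associative array; timing it for increasing n on bash 4.1 shows the super-linear growth:

```shell
#!/bin/bash
# Populate an associative array with n distinct keys.  On bash 4.1's
# fixed 64 hash chains, each insert scans a chain of length ~n/64, so
# total time grows roughly quadratically in n.  Compare `time` for
# n=5000 vs n=10000 to see the effect.
declare -A table
n=5000
for ((i = 0; i < n; i++)); do
    table["key$i"]=$i
done
echo "${#table[@]}"
```

The same loop against a plain indexed array scales linearly, which is one way to confirm the hash-chain explanation.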


edit-and-execute-command is appropriately named, weird

2011-05-24 Thread David Thomas
Hi all,

In using bash over the years, I've been quite happy to be able to hit
ctrl-x ctrl-e to pull up an editor when my input has grown too
complicated.

When using read -e for input, however, the behavior I find makes a lot
less sense: the input line is still opened in an editor, but the
result is not processed by read - it is executed by bash.  While a
means to escape to the shell can certainly be useful it seems like it
should generally be done in a more controlled environment.  It is much
harder to reason about my scripts when arbitrary variables might be
overwritten just because I asked for input.  In any case I'm not sure
it makes sense to conflate this with the "open this line in an editor"
command.

I'm curious as to the reasoning was behind this behavior (if it was
intentional), and whether I'm likely to break anything if I "fix" it.

Thanks!



Re: edit-and-execute-command is appropriately named, weird

2011-05-27 Thread David Thomas
Hi Chet,

Thank you for the response, and the attempt at assistance.

I was unaware of the POSIX specifications relating to editing modes.
After reading the specs, however, I don't think they conflict with
what I propose.  While the description of the [count]v command does
say that it executes the commands in the temporary file, this cannot
be required to apply to line editing in the read builtin when run with
the -e option, as the -e option is not described in the specification
of read at all and so its behavior is up to the developers here to
define.

Note that I am not proposing "edit and keep in buffer" semantics (as
you provided an example of below, and which would clearly conflict
with the standard), but rather "edit and accept" which results in
conformant behavior at the command prompt when the accepted line is
then processed as a shell command.

In the short term, I was able to get the behavior I want by overriding
fc in the script in question, but I still think default behavior is
ugly.

Thanks again for the response, and I'm interested in your further thoughts.

- David


On Thu, May 26, 2011 at 9:09 AM, Chet Ramey  wrote:
> On 5/23/11 1:05 PM, David Thomas wrote:
>> Hi all,
>>
>> In using bash over the years, I've been quite happy to be able to hit
>> ctrl-x ctrl-e to pull up an editor when my input has grown too
>> complicated.
>>
>> When using read -e for input, however, the behavior I find makes a lot
>> less sense: the input line is still opened in an editor, but the
>> result is not processed by read - it is executed by bash.  While a
>> means to escape to the shell can certainly be useful it seems like it
>> should generally be done in a more controlled environment.  It is much
>> harder to reason about my scripts when arbitrary variables might be
>> overwritten just because I asked for input.  In any case I'm not sure
>> it makes sense to conflate this with the "open this line in an editor"
>> command.
>
> That editing command exists because Posix standardizes it for vi editing
> mode. (Posix declined to standardize emacs editing mode at all.)  It was
> useful to expose the same behavior when in emacs mode.
>
> If you want an `edit-in-editor' command that just replaces the readline
> editing buffer with the edited result, you should be able to write a shell
> function to do that and use `bind -x' to make it available on any key
> sequence you want (modulo the current restrictions on bind -x, of course).
>
> Something like this:
>
> edit_in_editor()
> {
>        typeset p
>        local TMPF=/tmp/readline-buffer
>
>        p=${READLINE_POINT}
>        rm -f $TMPF
>        printf "%s\n" "$READLINE_LINE" > "$TMPF"
>        ${VISUAL:-${EDITOR:-emacs}} $TMPF && READLINE_LINE=$(< $TMPF)
>        rm -f $TMPF
>        READLINE_POINT=0        # or p or ${#READLINE_LINE} or ...
> }
>
> Salt to taste.
>
> Chet
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>                 ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
>



Re: edit-and-execute-command is appropriately named, weird

2011-05-31 Thread David Thomas
Oh, I wasn't asking you to do it, I was volunteering to.  I just
wanted to be sure there wasn't some overriding reason it was done the
way it was, and that there wouldn't be too many people relying on the
present behavior.  As it is, I think I'll be taking a swing at it once
my home internet is hooked up (which should be any day now...).

On Sat, May 28, 2011 at 3:42 PM, Chet Ramey  wrote:
> On 5/27/11 6:20 PM, David Thomas wrote:
>> Hi Chet,
>>
>> Thank you for the response, and the attempt at assistance.
>>
>> I was unaware of the POSIX specifications relating to editing modes.
>> After reading the specs, however, I don't think they conflict with
>> what I propose.  While the description of the [count]v command does
>> say that it executes the commands in the temporary file, this cannot
>> be required to apply to line editing in the read builtin when run with
>> the -e option, as the -e option is not described in the specification
>> of read at all and so its behavior is up to the developers here to
>> define.
>
> Quite true: it doesn't strictly apply to read -e or to emacs editing mode
> at all.  However, the specification is well-suited to being implemented
> using `fc', which is also specified by Posix.  It's also never seemed
> important enough to disable that particular command when running
> `read -e'.
>
>> Note that I am not proposing "edit and keep in buffer" semantics (as
>> you provided an example of below, and which would clearly conflict
>> with the standard), but rather "edit and accept" which results in
>> conformant behavior at the command prompt when the accepted line is
>> then processed as a shell command.
>
> That's probably true, but not quite how it works.  The contents of the
> edited file are executed directly rather than being placed back into the
> editing buffer and accepted because that's how `fc' does its job.  It
> would take some work to redo the implementation.
>
> It might be an interesting exercise to reimplement that editing function
> as you suggest, and it might even be more closely Posix-conformant, but
> I'm not inclined to do it right now.  There are other, higher-priority
> issues.  Maybe someday.
>
> In the meantime, the function I provided does what you would like at the
> cost of a single extra keystroke.
>
> Chet
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>                 ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
>



Memory leak with associative arrays

2011-10-04 Thread David Parks
Version: GNU bash, version 4.2.8(1)-release (x86_64-pc-linux-gnu) on Ubuntu
10.04

If I set an associative array, as in: 

MYARRAY["something"]="Goobledygook"

Then I set that same variable name again later (and so on in a loop). The
earlier variables are never released from memory, before I re-set a given
item I MUST call unset on the item first, possibly storing the value to a
temp variable if it's needed (such as is the case with a counter, for
example).

The scripts below demonstrate a case which exposes this memory leak, and one
that works around it with a store/unset/set operation, respectively.

--

#!/bin/bash
#Memory leak exposed in associative array
while true; do
declare -A DB
i=0
while (( i < 20 )); do
DB["static"]="Any old gobbledygook"
if (( $i == 10 )); then echo 100k; fi
i=$(($i+1))
done
echo "Clearing DB";
unset DB
done

--

#!/bin/bash
#Memory leak worked around with store/unset/set operation
while true; do
declare -A DB
DB["static"]="Any old gobbledygook"
i=0
while (( i < 20 )); do
TEMP=${DB["static"]}
unset DB["static"]
DB["static"]=$TEMP
if (( $i == 10 )); then echo "100k, DB val is: ${DB["static"]}"; fi
i=$(($i+1))
done
echo "Clearing DB";
unset DB
done






RE: Memory leak with associative arrays

2011-10-14 Thread David Parks
Great stuff, amazingly fast - I'm really surprised!
Thanks for that.
Dave


-Original Message-
From: bug-bash-bounces+davidparks21=yahoo@gnu.org
[mailto:bug-bash-bounces+davidparks21=yahoo@gnu.org] On Behalf Of Chet
Ramey
Sent: Tuesday, October 04, 2011 5:29 PM
To: David Parks
Cc: bug-bash@gnu.org; chet.ra...@case.edu
Subject: Re: Memory leak with associative arrays

On 10/4/11 2:48 PM, David Parks wrote:
> Version: GNU bash, version 4.2.8(1)-release (x86_64-pc-linux-gnu) on 
> Ubuntu
> 10.04
> 
> If I set an associative array, as in: 
> 
> MYARRAY["something"]="Goobledygook"
> 
> Then I set that same variable name again later (and so on in a loop). 
> The earlier variables are never released from memory, before I re-set 
> a given item I MUST call unset on the item first, possibly storing the 
> value to a temp variable if it's needed (such as is the case with a 
> counter, for example).

Thanks for the report.  This will be fixed in the next bash release.  I've
attached a patch you can use to test.

Chet

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/




bash-4.2: poor order of conditionals

2011-10-26 Thread David Binderman


Hello there,

I just ran the static analysis tool cppcheck over the source
code of bash-4.2

It said

[bash-4.2/builtins/mkbuiltins.c:554]: (style) Array index i is used before 
limits check

The source code is

  while (buffer[i] != '\n' && i < file_size)
i++;

You might be better off with

  while (i < file_size && buffer[i] != '\n')
i++;


Regards

David Binderman


Re: '>; ' redirection operator [was: [1003.1(2008)/Issue 7 0000530]: Support in-place editing in sed (-iEXTENSION)]

2011-12-22 Thread David Korn
cc: ebl...@redhat.com bug-bash@gnu.org d...@vger.kernel.org 
miros-disc...@mirbsd.org
Subject: Re: '>;' redirection operator [was: [1003.1(2008)/Issue 7 530]: 
Support  in-place editing in sed  (-iEXTENSION)]


> On 12/22/2011 08:39 AM, David Korn wrote:
> > Subject: Re: Re: [1003.1(2008)/Issue 7 530]: Support in-place editing in sed (-iEXTENSION)
> >
> > There are many commands other than sed that want the output to replace
> > an input file.  That is why I added the >; redirection operator to ksh93.
> >
> > With >; you can do
> > sed -e s/foo/bar/ file >; file
> > to do in place sed.  The >; operator generates the output in a temporary file
> > and moves the file to the original file only if the command terminates
> > with 0 exit status.
> 
> On 12/22/2011 16:04, Eric Blake wrote:
> I agree that engineering a single fix into the shell that can apply to
> multiple situations, rather than chasing down a set of applications to
> add an in-place editing option to each, is a much more flexible and
> powerful approach.  Can we get buy-in from other shell developers to
> support '>;' as an atomic temp-file replacement-on-success idiom, if
> POSIX were to standardize the existing practice of ksh93 as the basis?
> 
> I assume on the ksh implementation that the temp file is discarded if
> the command (simple or compound) feeding the redirection failed?  If the
Yes.
> redirection is used on a simple command, is there any shorthand for
> specifying that the destination name on success also be fed as an
> argument to the command, to avoid the redundancy of having to type
> 'file' both before and after the '>;' operator?  I assume that this is
> like any other redirection operator, where an optional fd number can be
> prepended, as in '2>; file' to collect stderr and overwrite file on
> success?  What happens if there is more than one '>;' redirection in the
> same command, and both target the same end file (whether or not by the
> same file name)?  What happens if the command succeeds, but the rename
> of the temp file to the destination fails?  Are there clobber ('>|') or
> append ('>>') variants?
No, it only works if the file specified with >; is a regular file.
And how could it know which command argument to use for the name?  For example, with
cat foo bar >; /dev/fd/2
how would it know whether to use foo or bar?

If there is more than one >; redirection, then ksh will create a temporary
file for each of them, but only the last one will get standard output.
Thus the first one will be replaced by an empty file and the second one
will get the output if successful.  Thus
command ... >; f1 >; f2
is equivalent to
command ... > f1 >; f2


There are no clobber or append variants.  Append doesn't wipe anything out,
so it is not needed.

I also added the operator <>; which is the same as <> except that the file
is truncated on close.  Thus,
tail -s 100 file <>; file
will remove the first 100 lines of a file and will not require extra
temporary disk space.
> 

David Korn
d...@research.att.com
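For shells without >;, the same replace-on-success semantics can be approximated with a temporary file and a rename. A rough sketch; the "replace" helper name and its interface are invented here for illustration and are not part of any shell:

```shell
# replace TARGET CMD [ARGS...]: run CMD with stdout going to a temp
# file, and rename the temp file over TARGET only if CMD exits 0.
replace() {
    target=$1; shift
    tmp=$(mktemp "${target}.XXXXXX") || return 1
    if "$@" > "$tmp"; then
        mv -- "$tmp" "$target"       # rename is atomic within one filesystem
    else
        status=$?
        rm -f -- "$tmp"              # command failed: discard, keep original
        return "$status"
    fi
}

cd "$(mktemp -d)" || exit 1
printf 'foo\n' > file
replace file sed 's/foo/bar/' file
cat file
```

Unlike the redirection-level >;, this requires naming the file twice (once for the helper, once as the command's input), which is exactly the redundancy Eric asks about in the thread.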



Re: Edit vs delete a running script. Why difference?

2012-03-28 Thread David Thomas
On Jan 18, 5:22 am, Greg Wooledge  wrote:
> On Wed, Jan 18, 2012 at 01:19:20PM +0900, Teika Kazura wrote:
> > If the
> > entire script is read at invocation, then why should / does
> > modification affect? Is it a bug?
>
> The entire script *isn't* read at invocation.  Bash just reads a little
> bit at a time, as needed.

Interestingly, on many Linux systems this allows one to check
(crudely) how far a script has progressed by looking at
/proc/<pid>/fdinfo/255.


Bug with print-completions-horizontally

2012-09-26 Thread David Kaasen
(I posted this to gnu.bash.bug on 2012-08-30 with message id 
k1nbtu$6g3$1...@orkan.itea.ntnu.no, but it never appeared)


Hello!

I have got tired of an old bug with print-completions-horizontally. 
When that variable is set, and one of the matches is longer than half 
of the terminal width, all the matches are printed on a single line 
separated by a lot of spaces. Also, in that case, the internal pager 
doesn't work.


I have seen this bug consistently on all platforms and terminals since 
bash version 2.something.


The following patch to bash 4.2 seems to fix this:

diff -aNru bash-4.2_a/lib/readline/complete.c bash-4.2_b/lib/readline/complete.c
--- bash-4.2_a/lib/readline/complete.c  2011-01-16 21:32:57.0 +0100
+++ bash-4.2_b/lib/readline/complete.c  2012-08-29 16:21:19.0 +0200
@@ -1463,7 +1463,7 @@
  /* Have we reached the end of this line? */
  if (matches[i+1])
{
- if (i && (limit > 1) && (i % limit) == 0)
+ if (i % limit == 0)
{
  rl_crlf ();
  lines++;

With best regards from
David Kaasen.



why does errexit exist in its current utterly useless form?

2012-12-14 Thread matei . david
I recently worked on a project involving many bash scripts, and I've been 
trying to use errexit to stop various parts of a script as soon as anything 
returns a non-0 return code. As it turns out, this is an utterly useless 
endeavour. In asking this question on this forum, I hope somebody out there can 
help me, who understands bash, POSIX, and why decisions were made to arrive at 
the current situation.

To recapitulate, errexit is turned on by "set -e" or "set -o errexit". This is 
what TFM says about it:

"Exit immediately if a pipeline (see Pipelines), which may consist of a single 
simple command (see Simple Commands), a subshell command enclosed in 
parentheses (see Command Grouping), or one of the commands executed as part of 
a command list enclosed by braces (see Command Grouping) returns a non-zero 
status. The shell does not exit if the command that fails is part of the 
command list immediately following a while or until keyword, part of the test 
in an if statement, part of any command executed in a && or || list except the 
command following the final && or ||, any command in a pipeline but the last, 
or if the command’s return status is being inverted with !. A trap on ERR, if 
set, is executed before the shell exits. This option applies to the shell 
environment and each subshell environment separately (see Command Execution 
Environment), and may cause subshells to exit before executing all the commands 
in the subshell."

Let's leave pipelines aside, because that adds more complexity to an already 
messy problem. So we're talking just simple commands.

My initial gripe about errexit (and its man page description) is that the 
following doesn't behave as a newbie would expect it to:

set -e
f() {
  false
  echo "NO!!"
}
f || { echo "f failed" >&2; exit 1; }

Above, "false" usually stands for some complicated command, or part of a 
sequence of many commands, and "echo NO!!" stands for a statement releasing a 
lock, for instance. The newbie assumes that the lock won't be released unless 
executing f goes well. Moreover, the newbie likes many error messages, hence 
the extra message in the main script.

Running it, you get:

NO!!

First of all, f is called as the LHS of ||, so we don't want the entire shell 
to crash if f returns non-0. That much, a not entirely dumb newbie can 
understand. But, lo and behold, NO!! gets printed. Do you see this explained in 
TFM, because I don't.

Question 1: Is this a bug in the manual itself?

As the hours of debugging pass by, the newbie learns about shells and subshells 
and subprocesses and what not. Also, apparently that one can see the current 
shell settings with $- or $SHELLOPTS. So the newbie changes f to:

f() {
  echo $-
  false
  echo "NO!!"
}

You get:

ehB
NO!!

This is now getting confusing: errexit seems to be active as bash executes f, 
but still it doesn't stop.

Question 2: Is this a bug in how $- is maintained by bash?

Next, the newbie thinks, oh, I'll just set errexit again inside f. How about:

f() {
  set -e
  echo $-
  false
  echo "NO!!"
}

You get:

ehB
NO!!

At this point, the newbie thinks, perhaps errexit isn't working after all.

Question 3: Under the current design (which I consider flawed), why doesn't 
bash at least print a warning that errexit is not active, when the user tries 
to set it?

As even more hours pass by, the newbie learns things about various other 
shells, POSIX mode, standards, etc. Useful things, but arguably useless for the 
task at hand. So, from what I, the newbie, gathered so far...

One can work around this by using && to connect all statements in f, or using 
"|| return 1" after each of them. This is ok if f is 2 lines, not if it's 200.

I also learned one can actually write a tiny function which tests if the ERR 
signal is active, and if it is not, to executed the invoking function (f) in a 
different shell, passing the entire environment, including function defs, with 
typeset. This is really awkward, but possible. However, it only works for 
functions, not for command lists run in a subshell, as in:

( false; echo "NO!!" ) || { echo "failed" >&2; exit 1; }

The common suggestion I see on various forums is- don't use errexit. I now 
understand this from a user perspective, and that's why I call errexit "utterly 
useless" in the subject. But, if I may ask, why is bash in this position?

Question 4: Back to the original f, why did bash (or even POSIX) decide to 
protect the "false" statement? Protecting f is clearly necessary, for otherwise 
|| would be useless. But why the "false"?

Question 4a (perhaps the same): TFM says: "the shell does not exit if the 
command that fails is part of the command list immediately following a while". 
Why protect commands in such a list other than the last?

And independent of the question(s) 4, the last one:

Question 5: Even assuming bash/POSIX decides to protect lists of commands where 
only the last is tested, why does bash entirely disable the errexit option?  
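For what it's worth, the ||-suppression described above does not cross process boundaries, so running the function body under a fresh bash -e restores the fail-fast behavior the newbie expected. A minimal sketch using the same false/echo skeleton:

```shell
# The child shell has its own errexit, and the caller's `if !` context
# cannot disable it, so `false` aborts the child before the echo runs.
f() {
    bash -e -c '
        false
        echo "NO!!"
    '
}
if ! f; then
    echo "f failed" >&2
fi
```

The cost is a fork per guarded block, and variables set in the child are lost, so this is a workaround rather than a fix for set -e itself.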

Re: why does errexit exist in its current utterly useless form?

2012-12-15 Thread matei . david
On Friday, December 14, 2012 6:23:41 PM UTC-5, Eric Blake wrote:
> Short answer: historical compatibility.  'set -e' has been specified to
> behave the way it did 30 years ago in one reference implementation, and
> while you can argue till you are blue in the face that the reference
> implementation is not intuitive, you have to at least appreciate that
> having a standard means you are likely to get the same (non-intuitive)
> behavior among several competing shell implementations that all strive
> to conform to POSIX.

Thanks for the reply. I understand the benefits of a standard. In this case, it 
seems to me that the problem we're talking about- stopping a script as soon as 
a command returns an unexpected non-0 code- is a very basic feature that many 
could benefit from, if implemented right. I'm trying to understand whether or 
not fixing this problem requires changing the standard or not.

My question 5 is about whether the standard itself requires that subsequent 
attempts to set errexit should be ignored, even assuming that errexit should be 
turned off once in a while for the sake of the standard. The alternative is 
that this is just a historical decision of bash that could be mended without 
breaking compliance.

If indeed the standard requires all further attempts to set errexit be ignored 
(which I think is a terrible idea), wouldn't it be a good idea to provide in 
bash another option doing the same thing, but correctly? Something like set -o 
strong_errexit, something that anybody writing a new script can choose to use 
or not, of course understanding that it is a bashism, but the right kind of 
bashism.

Also, my questions 1-3 relate to the current implementation...


Re: why does errexit exist in its current utterly useless form?

2012-12-15 Thread matei . david
On Saturday, December 15, 2012 5:23:04 PM UTC-5, Chet Ramey wrote:
> There is already a proposal for a new option similar to what you want; you
> can read the discussion at
> 
> http://austingroupbugs.net/view.php?id=537

Thank you for all the references, I'll have a look!


eval doesn't close file descriptor?

2013-02-11 Thread matei . david
With the script below, I'd expect any fd pointing to /dev/null to be closed 
when the second llfd() is executed. Surprisingly, fd 3 is closed, but fd 10 is 
now open, pointing to /dev/null, as if eval copied it instead of closing it. Is 
this a bug?

Thanks,
M


$ bash -c 'llfd () { ls -l /proc/$BASHPID/fd/; }; x=3; eval "exec 
$x>/dev/null"; llfd; eval "llfd $x>&-"'
total 0
lrwx-- 1 matei matei 64 Feb 11 18:36 0 -> /dev/pts/2
lrwx-- 1 matei matei 64 Feb 11 18:36 1 -> /dev/pts/2
lrwx-- 1 matei matei 64 Feb 11 18:36 2 -> /dev/pts/2
l-wx-- 1 matei matei 64 Feb 11 18:36 3 -> /dev/null
lr-x-- 1 matei matei 64 Feb 11 18:36 8 -> /proc/4520/auxv
total 0
lrwx-- 1 matei matei 64 Feb 11 18:36 0 -> /dev/pts/2
lrwx-- 1 matei matei 64 Feb 11 18:36 1 -> /dev/pts/2
l-wx-- 1 matei matei 64 Feb 11 18:36 10 -> /dev/null
lrwx-- 1 matei matei 64 Feb 11 18:36 2 -> /dev/pts/2
lr-x-- 1 matei matei 64 Feb 11 18:36 8 -> /proc/4520/auxv
$ bash --version
GNU bash, version 4.2.24(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ 


Re: eval doesn't close file descriptor?

2013-02-12 Thread Matei David
Ok, but I see the same behaviour when eval runs in a subshell:

$ bash -c 'llfd () { echo "pid:$BASHPID" >&2; ls -l /proc/$BASHPID/fd/ >&2;
}; x=3; eval "exec $x>/dev/null"; llfd; echo | eval "llfd $x>&-"'
[same output, fd 10 open, pointing to /dev/null, even though it's a
subshell]

$ bash -c 'llfd () { echo "pid:$BASHPID" >&2; ls -l /proc/$BASHPID/fd/ >&2;
}; x=3; eval "exec $x>/dev/null"; llfd; echo | llfd 3>&-'
[not the same output; no fds pointing to /dev/null, as expected]

When eval is run by the main shell and I want $x closed, I can just do
'eval "exec $x>&-"'. However, I cannot do that with eval runs in a
subshell. In my script I needed $x open for other processes in that
pipeline.


On a different but related note, I hate having to do eval to manipulate an
fd stored in a variable. Why doesn't 'llfd $x>&-' work, especially since
'llfd >&$x' works just fine... so by the time >& is handled, the variable
substitutions seem to be done already.


On Tue, Feb 12, 2013 at 2:13 AM, Pierre Gaston wrote:

>
>
> On Tue, Feb 12, 2013 at 1:54 AM,  wrote:
>
>> With the script below, I'd expect any fd pointing to /dev/null to be
>> closed when the second llfd() is executed. Surprisingly, fd 3 is closed,
>> but fd 10 is now open, pointing to /dev/null, as if eval copied it instead
>> of closing it. Is this a bug?
>>
>> Thanks,
>> M
>>
>>
>> $ bash -c 'llfd () { ls -l /proc/$BASHPID/fd/; }; x=3; eval "exec
>> $x>/dev/null"; llfd; eval "llfd $x>&-"'
>> total 0
>> lrwx-- 1 matei matei 64 Feb 11 18:36 0 -> /dev/pts/2
>> lrwx-- 1 matei matei 64 Feb 11 18:36 1 -> /dev/pts/2
>> lrwx-- 1 matei matei 64 Feb 11 18:36 2 -> /dev/pts/2
>> l-wx-- 1 matei matei 64 Feb 11 18:36 3 -> /dev/null
>> lr-x-- 1 matei matei 64 Feb 11 18:36 8 -> /proc/4520/auxv
>> total 0
>> lrwx-- 1 matei matei 64 Feb 11 18:36 0 -> /dev/pts/2
>> lrwx-- 1 matei matei 64 Feb 11 18:36 1 -> /dev/pts/2
>> l-wx-- 1 matei matei 64 Feb 11 18:36 10 -> /dev/null
>> lrwx-- 1 matei matei 64 Feb 11 18:36 2 -> /dev/pts/2
>> lr-x-- 1 matei matei 64 Feb 11 18:36 8 -> /proc/4520/auxv
>> $ bash --version
>> GNU bash, version 4.2.24(1)-release (x86_64-pc-linux-gnu)
>> Copyright (C) 2011 Free Software Foundation, Inc.
>> License GPLv3+: GNU GPL version 3 or later <
>> http://gnu.org/licenses/gpl.html>
>>
>> This is free software; you are free to change and redistribute it.
>> There is NO WARRANTY, to the extent permitted by law.
>> $
>>
>
> Note that the same happens without using eval:
> $ llfd 3>&-
> total 0
> lrwx-- 1 pgas pgas 64 Feb 12 08:00 0 -> /dev/pts/0
> lrwx-- 1 pgas pgas 64 Feb 12 08:00 1 -> /dev/pts/0
> l-wx-- 1 pgas pgas 64 Feb 12 08:00 10 -> /dev/null
> lrwx-- 1 pgas pgas 64 Feb 12 08:00 2 -> /dev/pts/0
> lrwx-- 1 pgas pgas 64 Feb 12 08:00 255 -> /dev/pts/0
>
> But you need to consider what process you are examining, you use a
> function and you examine the file descriptors of the process where this
> function runs.
>
> A function runs in the same process as the parent shell, if it simply
> closes 3 then there will be no more fd opened on >/dev/null in the parent
> shell when the function returns
> So what bash does is a little juggling with the file descriptors, moving 3
> temporarily to be able to restore it.
>

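Pierre's fd-juggling point can be seen without /proc at all. A small sketch (fd 3 and the message strings are arbitrary): because f runs in the current shell, `f 3>&-` only closes fd 3 for the duration of the call, with bash parking the caller's fd 3 on a high descriptor (the fd 10 seen in the listings above) and restoring it afterwards:

```shell
# With fd 3 closed for the call, the >&3 redirection fails and the
# fallback message prints; after f returns, fd 3 works again.
exec 3> /dev/null
f() { echo hi 2>/dev/null >&3 || echo "fd 3 closed inside f"; }
f 3>&-
echo ok >&3 && echo "fd 3 still open afterwards"
```

The 2>/dev/null comes before >&3 so that the "Bad file descriptor" diagnostic from the failed duplication is suppressed rather than printed.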

Re: eval doesn't close file descriptor?

2013-02-12 Thread Matei David
Wow, thanks, I didn't know that. So this syntax achieves a bit more than I
asked for- it allows you to get a new unused file descriptor, right? It
seems that the only useful way to use a non-closing form (>&-, <&-) is with
exec, as in 'exec {new_fd}>&2'. (Why would I want the fd in a variable
otherwise.)

Too bad the "natural" syntax 'llfd $x>&-' doesn't work, but I guess this
will do.


On Tue, Feb 12, 2013 at 11:12 AM, Greg Wooledge  wrote:

> On Tue, Feb 12, 2013 at 11:07:06AM -0500, Matei David wrote:
> > On a different but related note, I hate having to do eval to manipulate
> an
> > fd stored in a variable. Why doesn't 'llfd $x>&-' work, especially since
> > 'llfd >&$x' works just fine... so by the time >& is handled, the variable
> > substitutions seem to be done already.
>
>   Each redirection that may be preceded by a file descriptor number may
>   instead be preceded by a word of the form {varname}.  In this case,
>   for each redirection operator except >&- and <&-, the shell will
>   allocate a file descriptor greater than 10 and assign it to varname.
>   If >&- or <&- is preceded by {varname}, the value of varname defines
>   the file descriptor to close.
>
> This was added in Bash 4.1.
>
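A quick illustration of the {varname} form Greg quotes (bash 4.1 or newer; the temp file comes from mktemp, everything else is arbitrary):

```shell
tmpfile=$(mktemp)
exec {fd}>"$tmpfile"     # bash allocates a free fd (>= 10) into $fd
echo hello >&$fd         # use the fd through the variable
exec {fd}>&-             # the {varname}>&- form closes the fd stored in $fd
result=$(cat "$tmpfile")
rm -f "$tmpfile"
echo "$result"
```

In particular, this means the thread's motivating case needs no eval at all on 4.1+: `llfd {x}>&-` closes the descriptor stored in $x directly.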


Re: eval doesn't close file descriptor?

2013-02-12 Thread Matei David
So in other words, you're saying I should use '... | eval "exec $x>&-;
llfd"' instead of '... | eval "llfd $x>&-"'. This way the subshell won't be
assuming I might use $x later. That works, but I still find it
counterintuitive that with the original syntax the subshell doesn't realize
there's nothing left to execute after $x>&-.

Also, I tried your other suggestions. The second one 'llfd () { bash -c
 }' works, but the other 'llfd () ( ... )' doesn't! I tried to
understand why...

Looking at this:
$ bash -c 'llfd () { echo "pid:$BASHPID" >&2; ls -gG /proc/$BASHPID/fd >&2;
}; f () { llfd; (llfd); bash -c "echo pid:\$\$ >&2; ls -gG /proc/\$\$/fd
>&2"; }; x=3; exec 4>/tmp/fd_4; coproc cat; eval "exec $x>/tmp/fd_3"; llfd;
echo | eval "f $x>&-"'
pid:14920
total 0
lrwx-- 1 64 Feb 12 14:03 0 -> /dev/pts/2
lrwx-- 1 64 Feb 12 14:03 1 -> /dev/pts/2
lrwx-- 1 64 Feb 12 14:03 2 -> /dev/pts/2
l-wx-- 1 64 Feb 12 14:03 3 -> /tmp/fd_3
l-wx-- 1 64 Feb 12 14:03 4 -> /tmp/fd_4
l-wx-- 1 64 Feb 12 14:03 60 -> pipe:[5010928]
lr-x-- 1 64 Feb 12 14:03 63 -> pipe:[5010927]
lr-x-- 1 64 Feb 12 14:03 8 -> /proc/4520/auxv
pid:14924
total 0
lr-x-- 1 64 Feb 12 14:03 0 -> pipe:[5007145]
lrwx-- 1 64 Feb 12 14:03 1 -> /dev/pts/2
l-wx-- 1 64 Feb 12 14:03 10 -> /tmp/fd_3
lrwx-- 1 64 Feb 12 14:03 2 -> /dev/pts/2
l-wx-- 1 64 Feb 12 14:03 4 -> /tmp/fd_4
lr-x-- 1 64 Feb 12 14:03 8 -> /proc/4520/auxv
pid:14926
total 0
lr-x-- 1 64 Feb 12 14:03 0 -> pipe:[5007145]
lrwx-- 1 64 Feb 12 14:03 1 -> /dev/pts/2
l-wx-- 1 64 Feb 12 14:03 10 -> /tmp/fd_3
lrwx-- 1 64 Feb 12 14:03 2 -> /dev/pts/2
l-wx-- 1 64 Feb 12 14:03 4 -> /tmp/fd_4
lr-x-- 1 64 Feb 12 14:03 8 -> /proc/4520/auxv
pid:14928
total 0
lr-x-- 1 64 Feb 12 14:03 0 -> pipe:[5007145]
lrwx-- 1 64 Feb 12 14:03 1 -> /dev/pts/2
lrwx-- 1 64 Feb 12 14:03 2 -> /dev/pts/2
l-wx-- 1 64 Feb 12 14:03 4 -> /tmp/fd_4
lr-x-- 1 64 Feb 12 14:03 8 -> /proc/4520/auxv


... there seem to be not 2 but 3(!) types of file descriptors:
1. fds which are copied across both subshells and exec; like 4
2. fds which are not copied across subshells; like 60&63
3. fds which are copied across subshells, but not exec; like 10

I knew about types 1&2, but not about type 3. Apparently with your first
suggestion, fd 10 is created and survives a subshell creation. Is this
correct??


On Tue, Feb 12, 2013 at 11:40 AM, Pierre Gaston wrote:

>
>
> On Tue, Feb 12, 2013 at 6:07 PM, Matei David wrote:
>
>> Ok, but I see the same behaviour when eval runs in a subshell:
>>
>> $ bash -c 'llfd () { echo "pid:$BASHPID" >&2; ls -l /proc/$BASHPID/fd/
>> >&2; }; x=3; eval "exec $x>/dev/null"; llfd; echo | eval "llfd $x>&-"'
>> [same output, fd 10 open, pointing to /dev/null, even though it's a
>> subshell]
>>
>
> eval runs in a subshell, but it's the same thing inside this subshell.
> eg you could have: echo | { eval "llfd $x>&-"; echo blah >&3; }
>
> Bash could optimize this once it realizes there's only one command, but
> it's probably not that simple to implement.
>
> Try with a function that spawns a subshell eg:
> llfd () (  echo "pid:$BASHPID" >&2; ls -l /proc/$BASHPID/fd/ >&2; )
>
> or llfd () { bash -c 'ls -l /proc/$$/fd' ; }
>
>
>


Re: eval doesn't close file descriptor?

2013-02-12 Thread Matei David
Hi Chet,

Conceptually, the availability of a single flag "close-on-exec" (BTW, is
there a way to check those flags in bash or using /proc?) should create
only 2, not 3 types of file descriptors- that's what I find confusing. Does
subprocess creation as in '(llfd)' entail an execve() call? My guess is
that no, because the "extended environment" like arrays and function
definitions would not survive it, so you'd have to copy them by hand.
Either way:

If subprocess creation DOES NOT perform execve(), then I don't understand
fd's of type 2, like 60&63: they should continue to exist in a subshell.
If subprocess creation DOES perform execve(), then I don't understand fd's
of type 3: they should not exist in a subshell.

My best guess here is that type 2 is non-standard? Meaning that they are
closed by bash during subprocess creation, even though there is no
execve()? I only came across those when trying to use coproc... I wonder
why this decision was made.


Generally speaking, it took me quite some time to figure out how to
properly create a "double pipe" without using any intermediate files or
fifos. The concept is very easy: in -> tee -> two parallel, independent
pipes -> join -> out. A first complication is the limited pipe capacity,
and the possibility of one side getting stuck if the other stops pulling in
data. I then wrote a cat-like program which doesn't block on stdout, but
keeps reading stdin, buffering in as much data as needed. I used it at the
very end of either pipe. But then I had to create the actual processes, and
then I stumbled upon all these issues with coproc and file descriptors. You
leave the wrong one open and the thing gets stuck... I wish there was a
howto on this subject.

Thanks,
M




On Tue, Feb 12, 2013 at 2:50 PM, Chet Ramey  wrote:

> On 2/12/13 2:07 PM, Matei David wrote:
>
> > ... there seem to be not 2 but 3(!) types of file descriptors:
> > 1. fds which are copied across both subshells and exec; like 4
> > 2. fds which are not copied across subshells; like 60&63
> > 3. fds which are copied across subshells, but not exec; like 10
> >
> > I knew about types 1&2, but not about type 3. Apparently with your first
> > suggestion, fd 10 is created and survives a subshell creation. Is this
> > correct??
>
> Yes, file descriptors used to save the state of other file descriptors
> are set close-on-exec.  `man fcntl' for a description.
>
> Chet
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRUc...@case.edu
> http://cnswww.cns.cwru.edu/~chet/
>
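
Regarding the earlier question about inspecting those flags: on Linux, /proc/PID/fdinfo/FD exposes a "flags" field (octal) that has included O_CLOEXEC (02000000) since kernel 3.1. A sketch, assuming Linux /proc, that locates the saved descriptor bash creates for an undoable redirection and checks the flag:

```shell
exec 3>/dev/null

probe() {
    # While this function runs with 3>&-, bash parks the saved copy of
    # fd 3 on a descriptor >= 10 and marks it close-on-exec.
    local fd
    for fd in /proc/$$/fd/*; do
        fd=${fd##*/}
        [ "$fd" -ge 10 ] || continue
        flags=$(awk '/^flags:/ {print $2}' "/proc/$$/fdinfo/$fd")
        break
    done
}

probe 3>&-
echo "flags of the saved fd: $flags"
if [ $(( 8#$flags & 8#2000000 )) -ne 0 ]; then
    echo "close-on-exec is set"
fi
```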


Re: eval doesn't close file descriptor?

2013-02-13 Thread Matei David
Thank you for the explanation.


On Tue, Feb 12, 2013 at 8:32 PM, Chet Ramey  wrote:

> On 2/12/13 11:40 AM, Pierre Gaston wrote:
> > On Tue, Feb 12, 2013 at 6:07 PM, Matei David 
> wrote:
> >
> >> Ok, but I see the same behaviour when eval runs in a subshell:
> >>
> >> $ bash -c 'llfd () { echo "pid:$BASHPID" >&2; ls -l /proc/$BASHPID/fd/
> >>> &2; }; x=3; eval "exec $x>/dev/null"; llfd; echo | eval "llfd $x>&-"'
> >> [same output, fd 10 open, pointing to /dev/null, even though it's a
> >> subshell]
> >>
> >
> > eval runs in a subshell, but it's the same thing inside this subshell.
> > eg you could have: echo | { eval "llfd $x>&-"; echo blah >&3; }
> >
> > Bash could optimize this once it realizes there's only one command, but
> > it's probably not that simple to implement.
>
> The basic flow is like this for any builtin command or shell function that
> has a redirection (let's choose 'llfd 3>&-').
>
> 1.  The redirection is performed in the current shell, noting that it
> should be `undoable'.  That takes three steps:
>
> 1a. In this case, since fd 3 is in use, we dup it (to fd 10) and mark fd
> 10 as close-on-exec.  We add a separate redirection to an internal
> list  that basically says "close fd 10".  Then we add another
> redirection to the front of the same internal list that says "dup fd
> 10 back to fd 3".  Let's call this list "redirection_undo_list".  We
> will use it to restore the original state after the builtin or
> function completes.
>
> 1b. Take the first redirection from step 1a and add it to a separate
> internal list that will clean up internal redirections in the case
> that exec causes the redirections to be preserved, and not undone.
> Let's call this list "exec_redirection_undo_list".
>
> 1c. Perform the redirection.  Here, that means close fd 3.
>
> [perform step 1 for each redirection associated with the command]
>
> 2.  If we're running the exec builtin, throw away the list from 1a.  If
> we're not running the exec builtin, throw away the list from 1b.  Save
> a handle to the list we didn't discard.
>
> 3.  Run the function or builtin.
>
> 4.  Take the list saved in step 2 and perform the redirections to
> restore the previous state.  Here, that means we dup fd 10 back to fd
> 3, then close fd 10.
>
> If you look at the steps, it should be clear why fd 10 is still open when
> llfd executes.
>
> Bash `cheats' when running builtins or shell functions in pipelines or
> other subshells.  It knows it's already going to be in a child process
> when it performs the redirections, so it doesn't bother setting up the
> structures to undo them.
>
> Chet
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRUc...@case.edu
> http://cnswww.cns.cwru.edu/~chet/
>
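
The save/undo cycle laid out in steps 1-4 can be watched from a script (Linux /proc assumed; the /tmp file names are arbitrary):

```shell
exec 3>/dev/null

# List this shell's open fds into a file; ls runs as a child, so what it
# reports is the parent shell's descriptor table.
check() { ls "/proc/$$/fd" > "$1"; }

check /tmp/fds_during.txt 3>&-   # step 1c: fd 3 is closed for the call
check /tmp/fds_after.txt         # step 4: fd 3 has been restored
```

Comparing the two listings shows fd 3 absent during the first call (with the saved copy parked on a higher descriptor) and present again afterwards.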


Re: eval doesn't close file descriptor?

2013-02-13 Thread Matei David
Another thing I tried was to open a fd with <() and use it later in a shell
function. Surprisingly, the fd disappears (is closed) if the shell executes
something unrelated in subshell:

$ bash -c 'xxx () { echo "got arg: $1" >&2; ls -gG /proc/$BASHPID/fd >&2;
(echo "subprocess" >&2); ls -gG /proc/$BASHPID/fd >&2; cat $1; }; . lfd.sh;
xxx <(echo "hi")'
got arg: /dev/fd/63
total 0
lrwx-- 1 64 Feb 13 12:28 0 -> /dev/pts/9
lrwx-- 1 64 Feb 13 12:28 1 -> /dev/pts/9
lrwx-- 1 64 Feb 13 12:28 2 -> /dev/pts/9
lr-x-- 1 64 Feb 13 12:28 63 -> pipe:[5474849]
lr-x-- 1 64 Feb 13 12:28 8 -> /proc/4520/auxv
subprocess
total 0
lrwx-- 1 64 Feb 13 12:28 0 -> /dev/pts/9
lrwx-- 1 64 Feb 13 12:28 1 -> /dev/pts/9
lrwx-- 1 64 Feb 13 12:28 2 -> /dev/pts/9
lr-x-- 1 64 Feb 13 12:28 8 -> /proc/4520/auxv
cat: /dev/fd/63: No such file or directory
$

Note how 63 is open before '(echo)' and closed after. Is this expected?


On Wed, Feb 13, 2013 at 12:06 PM, Matei David  wrote:

> Thank you for the explanation.
>
>
> On Tue, Feb 12, 2013 at 8:32 PM, Chet Ramey  wrote:
>
>> On 2/12/13 11:40 AM, Pierre Gaston wrote:
>> > On Tue, Feb 12, 2013 at 6:07 PM, Matei David 
>> wrote:
>> >
>> >> Ok, but I see the same behaviour when eval runs in a subshell:
>> >>
>> >> $ bash -c 'llfd () { echo "pid:$BASHPID" >&2; ls -l /proc/$BASHPID/fd/
>> >>> &2; }; x=3; eval "exec $x>/dev/null"; llfd; echo | eval "llfd $x>&-"'
>> >> [same output, fd 10 open, pointing to /dev/null, even though it's a
>> >> subshell]
>> >>
>> >
>> > eval runs in a subshell, but it's the same thing inside this subshell.
>> > eg you could have: echo | { eval "llfd $x>&-"; echo blah >&3; }
>> >
>> > Bash could optimize this once it realizes there's only one command, but
>> > it's probably not that simple to implement.
>>
>> The basic flow is like this for any builtin command or shell function that
>> has a redirection (let's choose 'llfd 3>&-').
>>
>> 1.  The redirection is performed in the current shell, noting that it
>> should be `undoable'.  That takes three steps:
>>
>> 1a. In this case, since fd 3 is in use, we dup it (to fd 10) and mark fd
>> 10 as close-on-exec.  We add a separate redirection to an internal
>> list  that basically says "close fd 10".  Then we add another
>> redirection to the front of the same internal list that says "dup fd
>> 10 back to fd 3".  Let's call this list "redirection_undo_list".  We
>> will use it to restore the original state after the builtin or
>> function completes.
>>
>> 1b. Take the first redirection from step 1a and add it to a separate
>> internal list that will clean up internal redirections in the case
>> that exec causes the redirections to be preserved, and not undone.
>> Let's call this list "exec_redirection_undo_list".
>>
>> 1c. Perform the redirection.  Here, that means close fd 3.
>>
>> [perform step 1 for each redirection associated with the command]
>>
>> 2.  If we're running the exec builtin, throw away the list from 1a.  If
>> we're not running the exec builtin, throw away the list from 1b.  Save
>> a handle to the list we didn't discard.
>>
>> 3.  Run the function or builtin.
>>
>> 4.  Take the list saved in step 2 and perform the redirections to
>> restore the previous state.  Here, that means we dup fd 10 back to fd
>> 3, then close fd 10.
>>
>> If you look at the steps, it should be clear why fd 10 is still open when
>> llfd executes.
>>
>> Bash `cheats' when running builtins or shell functions in pipelines or
>> other subshells.  It knows it's already going to be in a child process
>> when it performs the redirections, so it doesn't bother setting up the
>> structures to undo them.
>>
>> Chet
>>
>> --
>> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>>  ``Ars longa, vita brevis'' - Hippocrates
>> Chet Ramey, ITS, CWRUc...@case.edu
>> http://cnswww.cns.cwru.edu/~chet/
>>
>
>
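
One workaround for the disappearing <() descriptor, hedged as just one approach: drain the pipe before anything that forks a subshell, so its early closing no longer matters. Names follow the report above:

```shell
xxx() {
    local data
    data=$(cat "$1")            # consume /dev/fd/NN while it is still open
    ( echo "subprocess" >&2 )   # unrelated subshell runs afterwards
    printf '%s\n' "$data"
}

xxx <(echo "hi")                # -> hi  (plus "subprocess" on stderr)
```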


read builtin. input processes improperly inheriting IFS setting

2013-07-26 Thread David H.
From: David H.
To: bug-bash@gnu.org,b...@packages.debian.org
Subject: read builtin. input processes improperly inheriting IFS setting

Configuration Information [Automatically generated, do not change]:
Machine: i486
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i486'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i486-pc-linux-gnu'
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
-DSHELL -DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include
-I../bash/lib  -D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector
--param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wall
uname output: Linux mybox 3.2.0-3-686-pae #1 SMP Thu Jun 28 08:56:46
UTC 2012 i686 GNU/Linux
Machine Type: i486-pc-linux-gnu

Bash Version: 4.2
Patch Level: 45
Release Status: release

Description:

When the read builtin is prefixed by an IFS setting, for field
splitting, and the input involves an unquoted variable expanded inside
a herestring, command substitution, or process substitution, the
contents of the variable are split by that IFS setting before being
read. This results in all input being stored in the first variable.

It is my understanding that redirection patterns like these should
be considered independent processes and not subject to the environment
settings of the target command.

The expected behavior appears if a heredoc or compound command is
used. ksh also shows the expected behavior for all attempted patterns.

Quoting the input variable also worked in all cases.

Repeat-By:

# The test string:
$ echo $instring
root:x:0:0:root:/root:/bin/bash

# Gives incorrect (unexpected) output:
$ ( IFS=: read -a strings < <( echo $instring ) ; printf '[%s]\n'
"${strings[@]}" )
[root x 0 0 root /root /bin/bash]

# Gives expected output
$ ( IFS=: read -a strings ; printf '[%s]\n' "${strings[@]}" ) < <(
echo $instring )
[root]
[x]
[0]
[0]
[root]
[/root]
[/bin/bash]


# Some other patterns that fail in the same way:
$ IFS=: read -a strings <<<$instring
$ IFS=: read -a strings <<<$( echo "$instring" )
$ IFS=: read -a strings <<<"$( echo $instring )"
$ IFS=: read -a strings <<<$( IFS="" ; echo $instring )

# Other patterns that appear to work properly:
$ IFS=: read a b c d e f g <<END
> $instring
> END

$ IFS=: read a b c d e f g < <( IFS=""; echo $instring )
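
As the report notes, quoting the expansion sidesteps the problem entirely. With a quoted herestring, only read's own IFS does any splitting:

```shell
instring='root:x:0:0:root:/root:/bin/bash'

# Quoting the herestring keeps $instring from being word-split by the
# temporary IFS before read runs; only read's field splitting applies.
IFS=: read -r -a strings <<< "$instring"
printf '[%s]\n' "${strings[@]}"
```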



i++ cause bad return code when result is 1

2013-08-18 Thread David Lehmann
% uname -a

Linux dph1d1ods13 2.6.32-279.2.1.el6.x86_64 #1 SMP Thu Jul 5 21:08:58 EDT
2012 x86_64 x86_64 x86_64 GNU/Linux


% bash --version

GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)


The ((i++)) fails only when the result is 1.  When the result is 0 or 2, it
does not fail.  This is a problem when 'set -e'.


### i++ fails when 'i' becomes one.
% i=0
% echo $?
0
% (( i++ ))
% echo $?  ### this should not fail
1
% (( i++ ))
% echo $?
0
% (( i++ ))
% echo $?
0


failed grep should cause subshell to exit

2013-08-18 Thread David Lehmann
### should never see done


% uname -a

Linux x 2.6.32-279.2.1.el6.x86_64 #1 SMP Thu Jul 5 21:08:58 EDT 2012
x86_64 x86_64 x86_64 GNU/Linux

% bash --version

GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)



(

set -ex


echo hello >x

grep hello x

! grep hello x

echo $?


### should not see "done" because set -e

echo done

)


Re: i++ cause bad return code when result is 1

2013-08-26 Thread David Lehmann
My issue is that the resulting behavior in Exercise 1 does not make sense.

The resulting value of i should have no bearing on the exit code.  If the
addition succeeded, the expression should return 0 (success).  If i was not
an integer (e.g. i=hello), then I expect (( i++ )) to return a non-zero
error code.

...IMHO, of course.



On Mon, Aug 19, 2013 at 7:40 AM, Greg Wooledge  wrote:

> On Mon, Aug 19, 2013 at 05:50:31AM +0200, Chris Down wrote:
> > On 2013-08-18 16:57, David Lehmann wrote:
> > > The ((i++)) fails only when the result is 1.  When the result is 0 or
> 2, it
> > > does not fail.  This is a problem when 'set -e'.
> >
> > This is normal and expected. If the value returned in an (( expression
> is zero,
> > then the exit code is 1. Since you're using a postincrement, zero is
> returned,
> > and then i is incremented to 1.
>
> In fact it's one of my sample cases on
> http://mywiki.wooledge.org/BashFAQ/105
> Note that whether it "works" depends on which version of bash is being
> used.
>
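
A sketch of the behavior described above, plus two set -e-safe alternatives (pre-increment, or a plain arithmetic assignment):

```shell
i=0
if (( i++ )); then         # post-increment: expression value is 0 -> status 1
    echo "not reached"
fi

j=0
(( ++j ))                  # pre-increment: expression value is 1 -> status 0

k=0
k=$(( k + 1 ))             # plain assignment: exit status is always 0

echo "i=$i j=$j k=$k"      # -> i=1 j=1 k=1
```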


Re: failed grep should cause subshell to exit

2013-08-26 Thread David Lehmann
Andreas,

I expected the '!' to reverse the exit code, such that if the grep return 0
(success), the expression would return 1 (failure); if the grep returned
non-zero (failure), the expression would return 0 (success).   i.e. I
expected the '!' to behave like it does in C.

-David





On Mon, Aug 19, 2013 at 3:43 AM, Andreas Schwab  wrote:

> David Lehmann  writes:
>
> > ! grep hello x
>
> ! causes the shell to ignore -e.
>
> Andreas.
>
> --
> Andreas Schwab, SUSE Labs, sch...@suse.de
> GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE  1748 E4D4 88E3 0EEA B9D7
> "And now for something completely different."
>
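
Andreas's point can be checked directly: a pipeline beginning with ! never trips set -e, whatever the resulting status:

```shell
set -e

! grep hello /dev/null   # grep fails (no match); '!' turns that into status 0
! true                   # status 1 here, but '!' pipelines never trip set -e

echo "done"              # -> done
```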


C-style escapes within substitution expansion

2014-03-07 Thread David Sines
Configuration Information [Automatically generated, do not change]:
Machine: i686
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i686'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i686-pc-linux-gnu'
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
-DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib   -g -O2
uname output: Linux perseus 3.0.0-1 #1 Thu Aug 11 12:53:44 BST 2011
i686 GNU/Linux
Machine Type: i686-pc-linux-gnu

Bash Version: 4.3
Patch Level: 0
Release Status: release

Description:
When invoked as sh, bash 4.3.0 doesn't interpret C-style escapes
within double-quoted substitution expansions ("${var/$'what'/ever}").

Repeat-By:

  bash -c "a=-@- ; echo \${a/\$'\\x40'/+} ; echo \"\${a/\$'\\x40'/+}\""
  -+-
  -+-
  sh   -c "a=-@- ; echo \${a/\$'\\x40'/+} ; echo \"\${a/\$'\\x40'/+}\""
  -+-
  -@-



bash-4.3 bug report

2014-04-14 Thread David Binderman
Hello there,

 [bind.c:2238]: (style) Array index 'j' is used before limits check.

Source code is

  for (j = 0; invokers[j] && j < 5; j++)

Suggest new code

  for (j = 0; (j < 5) && (invokers[j] != NULL); j++)

Regards

David Binderman


  


RE: bash-4.3 bug report

2014-04-14 Thread David Binderman
Hello there,


> But my point remains to the original poster: a patch
> without justification is unlikely to be applied. Document WHY you think
> the existing code is a bug, not just HOW to fix it, for your patch to be
> usefully considered.

Standard software engineering practice is to look before leaping.
This means always check the array index before use.
The static analyser implements that standard practice.

The code in question, independent of whether it works ok or not,
does its work in a non-standard way when the standard way
is easy to achieve and has some possible benefits for robustness,
as well as being easier on the eye to the experienced code reviewer.

Anyone experienced looking at the code will always need to examine it
more closely to find out why it's a good idea in this case to use an array
index and *then* sanity check its value.


Regards

David Binderman

  


Empty sub-string expansion bug

2006-10-24 Thread David Purdy
Configuration Information [Automatically generated, do not change]:
Machine: i486
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i486'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i486-pc-linux-gnu'
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL
-DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include -I../bash/lib   -g -O2
uname output: Linux lnxdavid 2.6.17-2-686 #1 SMP Wed Sep 13 16:34:10 UTC 2006
i686 GNU/Linux
Machine Type: i486-pc-linux-gnu

Bash Version: 3.1
Patch Level: 17
Release Status: release

Description:
Substring operations that are meant to return an empty string ""
sometimes return character "\177" instead.

Repeat-By:
A=""
B="${A:0}"
touch "/tmp/test/TEST${A:0}"

-> touch: cannot touch `/tmp/test/TEST\177': No such file or directory

touch "/tmp/test/TEST$B"

-> touch: cannot touch `/tmp/test/TEST': No such file or directory

The example above is contrived, here is another (more realistic) 
example:

A="abc"
touch "/tmp/test/TEST${A:3}"

-> touch: cannot touch `/tmp/test/TEST\177': No such file or directory


Fix:
Bash script work-around: Use an intermediate variable until the bug is
fixed.
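
On unaffected (patched) bash versions, both the direct expansion and the intermediate-variable work-around yield an empty string, which is easy to check:

```shell
A="abc"
B="${A:3}"      # offset equals the string length: expands to ""
C="${A:1:1}"    # offset 1, length 1: "b"
echo "B='$B' C='$C'"   # -> B='' C='b'
```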


___
Bug-bash mailing list
Bug-bash@gnu.org
http://lists.gnu.org/mailman/listinfo/bug-bash


Problem with declare -a using a subcommand that parses data containing ctrl-A characters.

2007-01-05 Thread David Anderson
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux
Compiler: gcc -I/usr/src/packages/BUILD/bash-3.1
-L/usr/src/packages/BUILD/bash-3.1/../readline-5.1
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
-DCONF_OSTYPE='linux' -DCONF_MACHTYPE='x86_64-suse-linux'
-DCONF_VENDOR='suse' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
-DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib   -O2
-fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -D_GNU_SOURCE
-DRECYCLES_PIDS -Wall -pipe -g -fbranch-probabilities
uname output: Linux mahler 2.6.16.21-0.8-smp #1 SMP Mon Jul 3 18:25:39
UTC 2006 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-suse-linux

Bash Version: 3.1
Patch Level: 17
Release Status: release

Description:
Not sure if this is a change to how things are to be done or a 
bug. I've attached my test files and original bbug1 file.
I think the ctrl-as are giving my mailer a fit so bashbug didn't
send. Sorry if I'm including the same info twice.

When creating an array and populating it with a subcommand's
output, there has been a change between versions
2.05b.0(1) and 3.1.17(1)
such that if your subcommands generate/parse ctrl-A data
the array is not populated as before.

ie:

t='mhost110.2.2.12/10.2.2.11TCP000ALIVE2010.2.2.1110.2.2.121
mhost1/dev/ttyS1TTY38400349350ALIVE002204541645384(null)(null)99
'
declare -a COM=($(echo $t | head -n1 | awk -F '{print $1,$2,$3}'))

does not work under 3.1.17(1) but does under 2.05b.0(1)

Both versions work the same without the ctrl-As
ie:


t1='mhost1:10.2.2.12/10.2.2.11:TCP:0:0:0:ALIVE:2:0:0:0:0:0:0:0:0:0:10.2.2.11:10.2.2.12:1
mhost1:/dev/ttyS1:TTY:38400:349:350:ALIVE:0:0:2:0:0:0:0:20:48892:48812:(null):(null):99'

declare -a COMC=($(echo $t1 | head -n1 | awk -F: '{print $1,$2,$3}'))


Repeat-By:


#!/bin/bash

#
# Test created by David B. Anderson dbanders @ gmail dot com
#

echo "Testing with: " 
bash --version

echo " "
echo "Testing bash array declare using the output of a subshell that
contains control A characters."
t='mhost110.2.2.12/10.2.2.11TCP000ALIVE2010.2.2.1110.2.2.121
mhost1/dev/ttyS1TTY38400349350ALIVE002204541645384(null)(null)99
'

declare -a COM=($(echo $t | head -n1 | awk -F'' '{print $1,$2,$3}'))
echo "COM 0: should be 'mhost1'  is '${COM[0]}'"
echo "COM 1: should be '10.2.2.12/10.2.2.11' is '${COM[1]}'"
echo "COM 2: should be 'TCP' is '${COM[2]}'"

if [ "x${COM[0]}" != "xmhost1" ]; then
echo "TEST: FAILED"
else
echo "TEST: PASSED"
fi

echo ""
echo "Testing bash array declare using the output of a subshell that
is the same as above with ':' instead of control-A characters."
t1='mhost1:10.2.2.12/10.2.2.11:TCP:0:0:0:ALIVE:2:0:0:0:0:0:0:0:0:0:10.2.2.11:10.2.2.12:1
mhost1:/dev/ttyS1:TTY:38400:349:350:ALIVE:0:0:2:0:0:0:0:20:48892:48812:(null):(null):99'

declare -a COMC=($(echo $t1 | head -n1 | awk -F: '{print $1,$2,$3}'))
echo "COMC 0: should be 'mhost1'  is '${COMC[0]}'"
echo "COMC 1: should be '10.2.2.12/10.2.2.11' is '${COMC[1]}'"
echo "COMC 2: should be 'TCP' is '${COMC[2]}'"

if [ "x${COMC[0]}" != "xmhost1" ]; then
echo "TEST: FAILED"
else
echo "TEST: PASSED"
fi
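
The mailer stripped the literal ctrl-A bytes from the listings above, which is why the awk -F argument looks empty. With $'\001' (ANSI-C quoting for ctrl-A) the same pipeline can be reproduced without raw control characters; a sketch with placeholder field values standing in for the stripped originals:

```shell
# The real record used literal ctrl-A bytes; $'\001' recreates one in a
# mail-safe way.  Field values here are partly made up.
t=$'mhost1\00110.2.2.12/10.2.2.11\001TCP\0010\0010\0010\001ALIVE'

declare -a COM=($(echo "$t" | head -n1 | awk -F $'\001' '{print $1,$2,$3}'))

echo "COM 0: '${COM[0]}'"
echo "COM 1: '${COM[1]}'"
echo "COM 2: '${COM[2]}'"
```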



strange bash state adds forward slash '/' after tab-completed executables

2007-03-12 Thread David Emerson
Configuration Information [Automatically generated, do not change]:
Machine: i486
OS: linux-gnu
Compiler: gcc
Compilation 
CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i486' -DCONF_OSTYPE='linux-gnu' 
-DCONF_MACHTYPE='i486-pc-linux-gnu' -DCONF_VE$
uname output: Linux scooter 2.6.18-4-scooter #1 SMP Sun Mar 4 04:03:09 
PST 2007 i686 GNU/Linux
Machine Type: i486-pc-linux-gnu

Bash Version: 3.1
Patch Level: 17
Release Status: release

Description:
see reproduction steps

Repeat-By:
1. type `/lost
2. it should tab-complete to `/lost+found/
3. add the other backtick and execute this nonsense: `/lost+found/`
4. now we are in a strange state!
5. type egr
6. ...it completes to 'egrep/'

Note: this only works if you tab-complete the forward-slash after 
`/directory/ ... if you type `/bin/` then it will not get to this 
strange state.

Fix: ???




___
Bug-bash mailing list
Bug-bash@gnu.org
http://lists.gnu.org/mailman/listinfo/bug-bash


Re: \w in PS1 hopelessly confuses bash in workdir ending in \

2008-08-06 Thread David Kastrup
Chet Ramey <[EMAIL PROTECTED]> writes:

>> Machine Type: i486-pc-linux-gnu
>> 
>> Bash Version: 3.2
>> Patch Level: 39
>> Release Status: release
>> 
>> Description:
>> 
>> \w in PS1 prompt string confuses bash when ending in \
>> 
>>  The standard prompt setting in ubuntu is
>> PS1="${debian_chroot:+($debian_chroot)[EMAIL PROTECTED]:\w\$"
>> 
>> which seems harmless enough.  However, if you do
>> 
>> mkdir /tmp/chaos\\ ; cd /tmp/chaos\\
>> 
>> the prompt display hopelessly confuses bash.  At first it displays
>> nothing at all, then with repeated entries of RET fragments of color
>> ANSI sequences appear, like
>> 
>> [EMAIL PROTECTED]:/tmp/xxx$ mkdir /tmp/chaos\\ ; cd /tmp/chaos\\
>> ]0;[EMAIL PROTECTED]: /tmp/[EMAIL PROTECTED]:/tmp/chaos\$ 
>> ]0;[EMAIL PROTECTED]: /tmp/[EMAIL PROTECTED]:/tmp/chaos\$ 
>
> Since bash doesn't output any of that by default, I suspect you have
> something in PROMPT_COMMAND that tries to write to an xterm title bar
> and is confused by the escape at the end of the prompt string.

Bingo.  At my home machine, I don't have this effect.  This comes (for
new users) from

/etc/skel/.bashrc

where we have

# If this is an xterm set the title to [EMAIL PROTECTED]:dir
case "$TERM" in
xterm*|rxvt*)
PROMPT_COMMAND='echo -ne "\033]0;[EMAIL PROTECTED]: ${PWD/$HOME/~}\007"'
;;
*)
;;
esac

And this is in the package bash-3.2-0ubuntu18

It would appear that the trailing backslash in combination with echo -e
combines with the backslash of \007 and leaves a literal 007 afterwards.

[EMAIL PROTECTED]:/tmp/xxx\$ echo "\007"
\007
[EMAIL PROTECTED]:/tmp/xxx\$ echo -e "\007"

[EMAIL PROTECTED]:/tmp/xxx\$ echo "${PWD/$HOME/~}\007"
/tmp/xxx\\007
[EMAIL PROTECTED]:/tmp/xxx\$ echo -e "${PWD/$HOME/~}\007"
/tmp/xxx\007

Since PWD can contain backslashes at arbitrary positions, echo -e is
clearly inappropriate here.
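A sketch of the usual fix: replace `echo -e` with `printf`, so escape sequences are interpreted only in the format string, never in expanded data. The variable names mirror the skel snippet above; this is illustrative, not the actual Ubuntu patch, and it deliberately skips the `~` abbreviation of `$HOME` for simplicity:

```shell
# printf expands \033 and \007 in its *format* string only; $PWD is
# passed as a %s argument, so backslashes in the directory name are
# printed literally instead of being re-interpreted.
PROMPT_COMMAND='printf "\033]0;%s@%s: %s\007" "$USER" "$HOSTNAME" "$PWD"'
```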

-- 
David Kastrup, Kriemhildstr. 15, 44793 Bochum




'read' primitive

2008-08-21 Thread David Lütolf
From: dlutolf
To: bug-bash@gnu.org,[EMAIL PROTECTED]
Subject: [50 character or so descriptive subject here (for reference)]

Configuration Information [Automatically generated, do not change]:
Machine: i486
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i486'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i486-pc-linux-gnu'
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr$
uname output: Linux dlutolf 2.6.24-16-generic #1 SMP Thu Apr 10 13:23:42
UTC 2008 i686 GNU/Linux
Machine Type: i486-pc-linux-gnu

Bash Version: 3.2
Patch Level: 39
Release Status: release

Description:
'read' does not properly set the variable when the line ends with a
space character

Repeat-By:
~$ echo "foo " > bar
~$ read foo < bar
~$ echo "-$foo-"
-foo-

the output should of course be: -foo -

Fix:
include end-of-line spaces in variables when reading a file
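For what it's worth, this is POSIX-specified behaviour rather than a bug: read splits the line on IFS and discards leading/trailing IFS whitespace. Emptying IFS for the single read keeps the spaces (a sketch):

```shell
# `read` strips leading/trailing IFS whitespace by design; with IFS
# emptied just for this one command, the trailing space survives.
printf 'foo \n' > /tmp/bar.$$
IFS= read -r foo < /tmp/bar.$$
echo "-$foo-"        # prints: -foo -
rm -f /tmp/bar.$$
```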





Fwd: Re: Bash bug --- 1st parameter substitued for /InstallShield/ that is passed to the main entry pt of program

2008-09-16 Thread david bone

 Paul,
 Thank you for the quick response. I hope this is now the correct place to respond
to your email regarding my misunderstanding.
 I'm answering my own questions from your response so that others might
also learn from them:
 Yes, this was my ignorance of the wildcard substitution triggered by * and
pathname expansion. Your correction works.

 Interestingly, in the example given, the second comment /*there*/ on the cmd
line (p3) was not substituted but left as-is, as shown by the output.

 From your comments, I read the bash manual on Pathname Expansion.
 This raises some questions on pattern matching and how bash does its 
substitution particularly with my example:

The manual states:
  "if no matching file names are found, and the shell option nullglob is 
disabled, the word is left unchanged"
The 2nd comment is being handled in the same way as the 1st comment and there 
is no /*there*/ directory.

So where is /InstallShield/ coming from? Well, lo and behold, there is an
/InstallShield/ directory which the /*hi*/ pattern matches.
Again, thank you for the kind URL pointer on how to deal with secondary responses.
Dave


>> On Monday, September 15, 2008, at 09:50PM, "Paul Jarc" <[EMAIL PROTECTED]> 
>> wrote:
>>>david bone <[EMAIL PROTECTED]> wrote:
>>>> The parameters passed to the ``main entry'' of a program substitutes
>>>> the first parameter "/*hi*/" which is a c++ type comment as
>>>> /InstallShield/
>>>
>>>It's not a bug.  Check the man page or info documentation for the
>>>description of "Pathname Expansion".  If you want to suppress the
>>>expansion, you can use single or double quotes:
>>>
>>>/yacco2/bin/o2linker_debug '/*hi*/' /yacco2/compiler/grammars/yacco2.fsc 
>>>"/*there*/"
>>>
>>>
>>>paul
>>>
>>>
>
>




Bug in array populating does not respect quotes

2009-09-24 Thread David Martin
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
-DCONF_VENDOR='pc' -DLOCALEDI$
uname output: Linux bristol 2.6.31 #10 SMP Thu Sep 10 17:59:29 CEST
2009 x86_64 GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.0
Patch Level: 33 (debian bash-4.0-7)
Release Status: release

Description:
When populating an array from a string in a variable does not
handle quotes.

Repeat-By:

~$ declare -a samplearray
~$ samplearray=( x y 'z k')
~$ echo ${samplearray[2]}
z k
~$ samplestring="x y 'z k'"
~$ samplearray=( $samplestring )
~$ echo ${samplearray[2]}
'z




Re: Bug in array populating does not respect quotes

2009-09-25 Thread David Martin
Thank you for all and sorry for the noise, you were right.

David.

On Thu, Sep 24, 2009 at 6:38 PM, Chris F.A. Johnson
 wrote:
> On Thu, 24 Sep 2009, David Martin wrote:
>
>> Configuration Information [Automatically generated, do not change]:
>> Machine: x86_64
>> OS: linux-gnu
>> Compiler: gcc
>> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
>> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
>> -DCONF_VENDOR='pc' -DLOCALEDI$
>> uname output: Linux bristol 2.6.31 #10 SMP Thu Sep 10 17:59:29 CEST
>> 2009 x86_64 GNU/Linux
>> Machine Type: x86_64-pc-linux-gnu
>>
>> Bash Version: 4.0
>> Patch Level: 33 (debian bash-4.0-7)
>> Release Status: release
>>
>> Description:
>>         When populating an array from a string in a variable does not
>> handle quotes.
>>
>> Repeat-By:
>>
>> ~$ declare -a samplearray
>> ~$ samplearray=( x y 'z k')
>> ~$ echo ${samplearray[2]}
>> z k
>> ~$ samplestring="x y 'z k'"
>> ~$ samplearray=( $samplestring )
>
> eval "samplearray=( $samplestring )"
>
>> ~$ echo ${samplearray[2]}
>> 'z
>
> --
>   Chris F.A. Johnson, webmaster         <http://woodbine-gerrard.com>
>   ===
>   Author:
>   Shell Scripting Recipes: A Problem-Solution Approach (2005, Apress)
>
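The eval workaround quoted above re-parses the whole string as shell code, which is risky with untrusted input. When the goal is just to save and restore an array, a sketch using declare -p keeps the quoting intact while only ever eval'ing shell-generated text:

```shell
# Serialize the array itself rather than flattening it to a string;
# declare -p emits a reusable assignment with quoting preserved.
samplearray=( x y 'z k' )
saved=$(declare -p samplearray)

unset samplearray
eval "$saved"                 # restores the array exactly
echo "${samplearray[2]}"      # prints: z k
```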




Failed bash -r command returns 0 exit status

2010-05-24 Thread Pitt, David
From: david.p...@anz.com
To: bug-bash@gnu.org
Subject: Failed bash -r command returns 0 exit status

Configuration Information [Automatically generated, do not change]:
Machine: sparc
OS: solaris2.10
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='sparc'
-DCONF_OSTYPE='solaris2.10' -DCONF_MACHTYPE='sparc-sun-solaris2.10'
-DCONF_VENDOR='sun' -DLOCALEDIR='/home/pittd1/local/share/locale'
-DPACKAGE='bash' -DSHELL  -DHAVE_CONFIG_H -DSOLARIS   -I.  -I.
-I./include -I./lib -I./lib/intl -I/tmp/bash-4.1/lib/intl  -g -O2
uname output: SunOS dfeo001z 5.10 Generic_141414-02 sun4u sparc
SUNW,SPARC-Enterprise
Machine Type: sparc-sun-solaris2.10

Bash Version: 4.1
Patch Level: 0
Release Status: release

Description:
Prohibited restricted shell command doesn't always return a
non-zero exit status.

Executing "/bin/ls" under a restricted shell returns a non-zero
exit status, as expected.

However, executing "/bin/ls && /bin/ls" under a restricted shell
returns a zero exit status. This is not expected (at least not by
me!). A zero exit status is returned with any list of commands,
e.g. "/bin/ls && :".

I need to know whether a command list executed under a restricted
shell succeeded or failed.


Repeat-By:
$ bash --restricted -c "/bin/ls"; echo $?
bash: /bin/ls: restricted: cannot specify `/' in command names
1
$ bash --restricted -c "/bin/ls && /bin/ls"; echo $?
bash: /bin/ls: restricted: cannot specify `/' in command names
bash: /bin/ls: restricted: cannot specify `/' in command names
0
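Pending a fix, one hedged workaround is to avoid putting the restricted commands into a single -c list at all, running each one in its own restricted invocation and aggregating the statuses:

```shell
# Each command gets its own restricted shell, so every refusal is
# reported through that invocation's exit status (1, as shown in the
# single-command transcript above).
status=0
for cmd in '/bin/ls' '/bin/ls'; do
  bash --restricted -c "$cmd" || status=1
done
echo "$status"   # 1: both commands were refused
```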

Regards,

David Pitt | Developer/Designer
TI SD Risk Systems | Technology Solution Delivery | OTSS
> Level 3 Core A 833 Bourke Street Docklands VIC 3008
> Australia and New Zealand Banking Group Ltd | www.anz.com
> 
> 


RE: Failed bash -r command returns 0 exit status

2010-05-24 Thread Pitt, David
Hi Dave,

> As for "/bin/ls && /bin/ls", since neither command runs neither one fails, 
> either.  'Tis a bit of a head-scratcher, though.

Failure to run can in no way be construed as success, as is shown by:

$ bash --restricted -c "/bin/ls"; echo $?
bash: /bin/ls: restricted: cannot specify `/' in command names
1

It's the combination of commands that causes problems. Chet Ramey has 
acknowledged this as a bug.

In the meantime, can anyone suggest a workaround?

Regards,

David Pitt | Developer/Designer
TI SD Risk Systems | Technology Solution Delivery | OTSS
>Level 3 Core A 833 Bourke Street Docklands VIC 3008
>Australia and New Zealand Banking Group Ltd | www.anz.com
>

-Original Message-
From: dave.rutherf...@gmail.com [mailto:dave.rutherf...@gmail.com] On Behalf Of 
Dave Rutherford
Sent: Monday, 24 May 2010 6:26 PM
To: Pitt, David
Cc: bug-bash@gnu.org
Subject: Re: Failed bash -r command returns 0 exit status

On Mon, May 24, 2010 at 02:48, Pitt, David  wrote:
>        However executing "/bin/ls && /bin/ls" under a restricted shell 
> returns a zero exit
>        status. This is not expected (at least not by me!). Zero exit 
> status is returned with
>        any list of commands, e.g. "/bin/ls && :".

That one would, since the second command is 'true'. Replace it with 'false' and 
you should see an exit status of 1.

As for "/bin/ls && /bin/ls", since neither command runs neither one fails, 
either.  'Tis a bit of a head-scratcher, though.

Dave

"This e-mail and any attachments to it (the "Communication") is, unless 
otherwise stated, confidential,  may contain copyright material and is for the 
use only of the intended recipient. If you receive the Communication in error, 
please notify the sender immediately by return e-mail, delete the Communication 
and the return e-mail, and do not read, copy, retransmit or otherwise deal with 
it. Any views expressed in the Communication are those of the individual sender 
only, unless expressly stated to be those of Australia and New Zealand Banking 
Group Limited ABN 11 005 357 522, or any of its related entities including ANZ 
National Bank Limited (together "ANZ"). ANZ does not accept liability in 
connection with the integrity of or errors in the Communication, computer 
virus, data corruption, interference or delay arising from or in respect of the 
Communication."



RE: Failed bash -r command returns 0 exit status

2010-05-24 Thread Pitt, David
Thanks Chet!


David Pitt | Developer/Designer
TI SD Risk Systems | Technology Solution Delivery | OTSS
>Level 3 Core A 833 Bourke Street Docklands VIC 3008
>Australia and New Zealand Banking Group Ltd | www.anz.com
>

-Original Message-
From: Chet Ramey [mailto:chet.ra...@case.edu] 
Sent: Monday, 24 May 2010 11:23 PM
To: Pitt, David
Cc: bug-bash@gnu.org; chet.ra...@case.edu
Subject: Re: Failed bash -r command returns 0 exit status

On 5/24/10 2:48 AM, Pitt, David wrote:

> Bash Version: 4.1
> Patch Level: 0
> Release Status: release
> 
> Description:
> Prohibited restricted shell command doesn't always return 
> non-zero exit
> status.
> 
> Executing "/bin/ls" under a restricted shell returns a 
> non-zero exit
> status, as expected.
> 
> However executing "/bin/ls && /bin/ls" under a restricted 
> shell returns a zero exit
> status. This is not expected (at least not by me!). Zero exit 
> status is returned with
> any list of commands, e.g. "/bin/ls && :".
>  
> I need to know whether a command list executed under a 
> restricted shell
> succeeded or failed.

I will tighten up the return status when restricted commands fail for
the next version of bash.

Chet

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRUc...@case.edu
http://cnswww.cns.cwru.edu/~chet/




bash(1): please fix a few typos and some style stuff

2010-07-30 Thread David Prévot
Package: bash
Version: 4.1-3
Severity: wishlist
Tags: patch upstream

Hi Chet, Matthias,

While updating the French translation of Bash manual pages (for the
manpages-fr-extra package), I spotted a few typos and other style stuff.
Please find attached a patch against the version shipped in the source
package also available online [1] (bash-manpage.patch) and a patch
against the version shipped in the binary debian package
(bash-manpage-debian.patch with a pair of added modifications).

[1] ftp://ftp.cwru.edu/pub/bash/bash.1.gz

Please also consider modifying constructs that po4a does not support, such as:

> .if t \fB|  &  ;  (  )  <  >  space  tab\fP
> .if n \fB|  & ; ( ) < > space tab\fP

around line 470 to something equivalent like:

> .if t .ss 24
> \fB|  & ; ( ) < > space tab\fP
> .if t .ss 12

The same remark applies to the copyright around line 50 and some other
places in the document. I'm willing to try and help with this if
you are willing to consider it. Note that we would be glad if you decided to
ship the French translation of the manual pages directly upstream.

Cheers

David

--- bash.1	2010-07-29 15:42:42.0 -0400
+++ bash-orig.1	2010-07-29 15:34:02.0 -0400
@@ -410,7 +410,7 @@
 .PP
 .B Bash
 attempts to determine when it is being run with its standard input
-connected to a a network connection, as if by the remote shell
+connected to a network connection, as by the remote shell
 daemon, usually \fIrshd\fP, or the secure shell daemon \fIsshd\fP.
 If
 .B bash
@@ -929,7 +929,7 @@
 below).
 The file descriptors can be utilized as arguments to shell commands
 and redirections using standard word expansions.
-The process id of the shell spawned to execute the coprocess is
+The process ID of the shell spawned to execute the coprocess is
 available as the value of the variable \fINAME\fP_PID.
 The \fBwait\fP
 builtin command may be used to wait for the coprocess to terminate.
@@ -1192,7 +1192,7 @@
 In the context where an assignment statement is assigning a value
 to a shell variable or array index, the += operator can be used to
 append to or add to the variable's previous value.
-When += is applied to a variable for which the integer attribute has been
+When += is applied to a variable for which the \fIinteger\fP attribute has been
 set, \fIvalue\fP is evaluated as an arithmetic expression and added to the
 variable's current value, which is also evaluated.
 When += is applied to an array variable using compound assignment (see
@@ -1352,13 +1352,13 @@
 This variable is read-only.
 .TP
 .B BASHPID
-Expands to the process id of the current \fBbash\fP process.
+Expands to the process ID of the current \fBbash\fP process.
 This differs from \fB$$\fP under certain circumstances, such as subshells
 that do not require \fBbash\fP to be re-initialized.
 .TP
 .B BASH_ALIASES
 An associative array variable whose members correspond to the internal
-list of aliases as maintained by the \fBalias\fP builtin
+list of aliases as maintained by the \fBalias\fP builtin.
 Elements added to this array appear in the alias list; unsetting array
 elements cause aliases to be removed from the alias list.
 .TP
@@ -1814,7 +1814,7 @@
 with value
 .if t \f(CWt\fP,
 .if n "t",
-it assumes that the shell is running in an emacs shell buffer and disables
+it assumes that the shell is running in an Emacs shell buffer and disables
 line editing.
 .TP
 .B FCEDIT
@@ -2219,8 +2219,8 @@
 not arrive.
 .TP
 .B TMPDIR
-If set, \fBBash\fP uses its value as the name of a directory in which
-\fBBash\fP creates temporary files for the shell's use.
+If set, \fBbash\fP uses its value as the name of a directory in which
+\fBbash\fP creates temporary files for the shell's use.
 .TP
 .B auto_resume
 This variable controls how the shell interacts with the user and
@@ -2595,7 +2595,7 @@
 expanded and that value is used in the rest of the substitution, rather
 than the value of \fIparameter\fP itself.
 This is known as \fIindirect expansion\fP.
-The exceptions to this are the expansions of ${!\fIprefix\fP*} and
+The exceptions to this are the expansions of ${\fB!\fP\fIprefix\fP\fB*\fP} and
 ${\fB!\fP\fIname\fP[\fI@\fP]} described below.
 The exclamation point must immediately follow the left brace in order to
 introduce indirection.
@@ -2655,7 +2655,7 @@
 .TP
 ${\fIparameter\fP\fB:\fP\fIoffset\fP\fB:\fP\fIlength\fP}
 .PD
-\fBSubstring Expansion.\fP
+\fBSubstring Expansion\fP.
 Expands to up to \fIlength\fP characters of \fIparameter\fP
 starting at the character specified by \fIoffset\fP.
 If \fIlength\fP is omitted, expands to the substring of
@@ -2689,7 +2689,7 @@
 .TP
 ${\fB!\fP\fIprefix\fP\fB@\fP}
 .PD
-\fBNames matching prefix.\fP
+\fBNames matching prefix\fP.
 Expands to the names of variables whose names begin with \fIprefix\fP,
 separated by the first character of the
 .SM
@@ -2703,7 +2703,7 @@
 .TP
 ${\fB!\fP\fIname\fP[\fI*\fP]}
 .PD
-\fBList of array key

Wildcard expansion in $PATH

2005-02-14 Thread David Lechnyr
It would be nice if one day BASH could recognize wildcards in the PATH 
statement.  E.g.,

export PATH=/sbin:/usr/sbin:/usr/pkg/*/bin
While it may seem unnecessary to some, it is becoming a more frequently 
demanded feature.  E.g., a new feature of the MANPATH variable as of 
man 1.5p is:  "MANPATH items can now be set with an asterisk, which 
indicates that all possible dir-names are to be searched."  Such a 
feature for BASH would be quite nifty :-)
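There is no such patch as far as I know, but the effect can be approximated in ~/.profile by expanding the glob once at login. A sketch (/usr/pkg is just the example tree from above; a demo variable is used so the snippet doesn't clobber the real PATH):

```shell
# Expand the glob once and append each matching bin directory; the
# [ -d ] guard skips the unexpanded pattern when nothing matches.
newpath=/sbin:/usr/sbin
for d in /usr/pkg/*/bin; do
  [ -d "$d" ] && newpath=$newpath:$d
done
echo "$newpath"   # assign to PATH in a real profile
```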

If there is a patch for this, I'd appreciate anyone pointing me in the 
appropriate direction.

Thanks,
- David



bash crashed after experimenting with $TERM

2005-05-02 Thread David Kaasen
Configuration Information [Automatically generated, do not change]:
Machine: i686
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i686' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i686-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib 
-I/store/include -g -O2
uname output: Linux hagbart.nvg.ntnu.no 2.4.29 #1 Tue Jan 25 14:20:00 CET 2005 
i686 athlon i386 GNU/Linux
Machine Type: i686-pc-linux-gnu

Bash Version: 2.04
Patch Level: 0
Release Status: release

Description:
   Bash crashed when experimenting with setting $TERM to various
   values (for trying with the "screen" command).

Repeat-By:
   The "screen" command is aliased as follows: alias screen='case "$TERM" in 
(rxvt*|xterm*) export TERM=xterm ; echo xterm ; ;; (*) export TERM=vt100 ; echo 
vt100 ; ;; esac ; screen'
   It's version 3.09.11 (FAU) 14-Feb-02.
   This is a screendump from the session:
   """
   [EMAIL PROTECTED] kaasen]$ echo $TERM
   xterm
   [EMAIL PROTECTED] kaasen]$ screen
   xterm
   [screen is terminating]
   [EMAIL PROTECTED] kaasen]$ export TERM=abc
   [EMAIL PROTECTED] kaasen]$ screen
   vt100
   (~/.bash_profile)
   Screen begynner 2005-05-02 09.57.10
   [EMAIL PROTECTED] kaasen]$ exit

   [screen is terminating]
   [EMAIL PROTECTED] kaasen]$ export TERM=rxvt-colour

   malloc: unknown:0: assertion botched
   free: called with already freed block argument
   last command: export TERM=rxvt-colour
   Stopping myself...
   """




Vorbildliche Aktion

2005-05-15 Thread david . lin
Read it for yourself:
http://www.npd.de/npd_info/deutschland/2004/d1204-24.html




Re: question on retrieving map(-A) value w/indirect name

2018-03-17 Thread David Margerison
On 17 March 2018 at 20:40, L A Walsh  wrote:
>
> I seebut that begs the question, how do you access an array's
> members using a var holding the array's name?
>
> I wanted to be able to do something like have a set of
> values in an assoc. map, and pass the name to a generic
> processing routine with the map name as a param, like:
>
> sub processSrvState() {
>my stat=${1:?}
>if [[ ${!stat[cur_up]} == ${!stat[max_up]} &&
>  ${!stat[cur_down]} == ${!stat[max_down]} ]]; then
>  ...
>fi

processSrvState() {
  local cur_up="$1[cur_up]"
  local max_up="$1[max_up]"
  if [[ "${!cur_up}" == "${!max_up}" ]] ; then
echo ok
  fi
}

declare -A foo=([cur_up]=11 [max_up]=11)

processSrvState foo

# note that the array name must not conflict with any keys
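On bash 4.3 and later, a nameref expresses the same indirection more directly and avoids building "name[key]" strings by hand. A sketch (the collision caveat still applies if the caller's array is itself named stat):

```shell
# local -n makes `stat` an alias for the array whose name is passed
# in $1 (bash 4.3+ only).
processSrvState() {
  local -n stat=$1
  if [[ ${stat[cur_up]} == "${stat[max_up]}" ]]; then
    echo ok
  fi
}

declare -A foo=([cur_up]=11 [max_up]=11)
processSrvState foo   # prints: ok
```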



Re: question on retrieving map(-A) value w/indirect name

2018-03-17 Thread David Margerison
On 17 March 2018 at 11:50, L A Walsh  wrote:
>
> I'm a bit confused ...
> If I have assoc-array:
>
>  declare -A foo=([one]=11 [two]=22)
>
> and am passing name in another var, like "fee"
>
>  fee=foo
>
> I tried echoing the val:
>
>  echo ${!fee[one]}
>
> but got nothing -- tried a few other syntaxes.

I think this does what you want ...

$ declare -A foo=([one]=11 [two]=22)
$ fee=foo[one]
$ echo "${!fee}"
11



Re: PDF documentation output not readable (3.2.6 GNU Parallel)

2018-05-10 Thread David Margerison
On 11 May 2018 at 07:35, Quentin L'Hours  wrote:
> On 2018-05-10 02:24 PM, Greg Wooledge wrote:
>> Oh... well, it's not in the manual (bash(1)).  It's in this
>> other thing called the "Bash Reference Manual", apparently at
>> 
>
> This is the html version of the info documentation if I'm not mistaken, You
> can see it from your terminal with:
> info bash 'Basic Shell Features' 'Shell Commands' 'GNU Parallel'

Yes, on Debian stable I can see that text after installing the bash-doc package
and running:
  info -f bash -n "GNU Parallel".



Re: suggestion for improvement - help pwd

2018-06-02 Thread David Margerison
On 3 June 2018 at 09:54, Chet Ramey  wrote:
> On 6/2/18 2:19 PM, jefg...@protonmail.com wrote:
>> Dear Sir or Madam,
>>
>> I'd like to make a somewhat meticulous suggestion for improvement in the 
>> output of 'help pwd'.
>> On line 6, the word 'directory' is not properly indented.
>
> It looks fine to me:

And it looks broken to me.

I tested on

$ echo $BASH_VERSION
4.3.30(1)-release

interactively, and then by running this command

$ help pwd >filename

Viewing the created file in an editor that makes tab and space
characters individually visible reveals that the line in question is
indented using a mixture of spaces and tabs.

Whereas every other line is indented by space characters only.

So in many situations, depending on font and tab-width settings,
line 6 will therefore fail to match the indentation of the other lines.

It looks like adding four extra leading spaces to that line would
make it align correctly in all cases.

I hope this helps :)



Looking for key-bound auto-completed arg buffer in readline/bash.

2019-07-08 Thread David Weeks

Hello All,

I first wrote the help-b...@gnu.org list, thinking this was already a 
feature, but I've not found it.


On occasion, I have to clean up some seriously broken filenames, full of 
control chars, code points, and whatnot, filenames that my perl script 
gives up on.  That script cleans 99%, and I could forever tweak it for 
yet-another-exception-case, but they are really random.  So I fix them 
by hand.


Because they are full of PITA chars, I use auto-complete to uh, 
auto-complete them.  Which is a PITA, cause I'm having to escape and 
otherwise manually code point these PITA chars.  So if once I've 
auto-completed the first case for mv -nv, I'd like to repeat it, and 
then fix the copy as needed.  Right now, I have to do the exact same 
thing twice, to get something that's already on hand, the just 
auto-completed arg.


So that variable is in memory, and if we had as a feature, a persistent 
buffer that logged those completions, that was bound to a key-sequence, 
say C-TAB, then once I've picked/auto-completed the filename, I can 
C-TAB to paste/yank in the second instance, suitable for renaming the 
PITA filename.  (Remember too, these are one and only commands, so 
history is useless.)


This doesn't seem like it would be hard to do (famously dangerous 
words), and present it here.
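Something close to this can already be scripted with bind -x, which exposes the edit buffer as READLINE_LINE (bash 4+). A sketch that duplicates the last word of the line — i.e. the just-completed filename — on a key press; binding it to C-t is only an example choice:

```shell
# Duplicate the last word of the current command line, so a
# tab-completed filename can be repeated and then edited in place.
_dup_last_word() {
  local last=${READLINE_LINE##* }
  READLINE_LINE+=" $last"
  READLINE_POINT=${#READLINE_LINE}
}
# Bind only in interactive shells (C-t normally runs transpose-chars).
if [[ $- == *i* ]]; then
  bind -x '"\C-t": _dup_last_word'
fi
```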



David Weeks
Information Developer

--
Making technology useful.




patchset to optionally disable function exports

2014-09-25 Thread David Galos
I understand that some people might find function exports useful, but
there is also some utility in being able to turn it off.

I've added a configure flag, --disable-function-export which prevents
bash from attempting to parse environment variables that look like
functions upon startup. The default behavior is still to allow
function exports. You can note that this actually adds symmetry with
array exports for which there is already a flag that does the same
thing.
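For context, a quick demonstration of the mechanism this flag would guard: an exported function travels through the environment as a variable whose value begins with "() {", which is exactly the prefix the STREQN check in the patch looks for (the environment variable's name differs between bash releases, so the grep below is only indicative):

```shell
# Export a function and show that a child bash re-imports it from the
# environment.
greet() { echo "hello $1"; }
export -f greet

bash -c 'greet world'        # prints: hello world
env | grep -a greet          # shows the "() {"-style encoding
```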

Let me know if there is interest in this, or what I might need to do
further to get this accepted.


Thanks,
Dave
From 9f8f691304329618556c8c33dfc0b30cd10fcf26 Mon Sep 17 00:00:00 2001
From: David Galos 
Date: Thu, 25 Sep 2014 13:27:43 -0400
Subject: [PATCH 1/4] add a flag to enable or disable function import on
 startup

---
 configure.in | 6 ++
 variables.c  | 4 
 2 files changed, 10 insertions(+)

diff --git a/configure.in b/configure.in
index d7e0998..c48bf07 100644
--- a/configure.in
+++ b/configure.in
@@ -186,6 +186,7 @@ opt_single_longdoc_strings=yes
 opt_casemod_attrs=yes
 opt_casemod_expansions=yes
 opt_extglob_default=no
+opt_function_export=yes
 
 dnl options that affect how bash is compiled and linked
 opt_static_link=no
@@ -206,6 +207,7 @@ if test $opt_minimal_config = yes; then
 	opt_net_redirs=no opt_progcomp=no opt_separate_help=no
 	opt_multibyte=yes opt_cond_regexp=no opt_coproc=no
 	opt_casemod_attrs=no opt_casemod_expansions=no opt_extglob_default=no
+	opt_function_export=no
 fi
 
 AC_ARG_ENABLE(alias, AC_HELP_STRING([--enable-alias], [enable shell aliases]), opt_alias=$enableval)
@@ -241,6 +243,7 @@ AC_ARG_ENABLE(single-help-strings, AC_HELP_STRING([--enable-single-help-strings]
 AC_ARG_ENABLE(strict-posix-default, AC_HELP_STRING([--enable-strict-posix-default], [configure bash to be posix-conformant by default]), opt_strict_posix=$enableval)
 AC_ARG_ENABLE(usg-echo-default, AC_HELP_STRING([--enable-usg-echo-default], [a synonym for --enable-xpg-echo-default]), opt_xpg_echo=$enableval)
 AC_ARG_ENABLE(xpg-echo-default, AC_HELP_STRING([--enable-xpg-echo-default], [make the echo builtin expand escape sequences by default]), opt_xpg_echo=$enableval)
+AC_ARG_ENABLE(function-export, AC_HELP_STRING([--enable-function-export], [allow bash to treat certain environment variables as functions]), opt_function_export=$enableval)
 
 dnl options that alter how bash is compiled and linked
 AC_ARG_ENABLE(mem-scramble, AC_HELP_STRING([--enable-mem-scramble], [scramble memory on calls to malloc and free]), opt_memscramble=$enableval)
@@ -333,6 +336,9 @@ fi
 if test $opt_casemod_expansions = yes; then
 AC_DEFINE(CASEMOD_EXPANSIONS)
 fi
+if test $opt_function_export = yes; then
+AC_DEFINE(FUNCTION_EXPORT)
+fi
 
 if test $opt_memscramble = yes; then
 AC_DEFINE(MEMSCRAMBLE)
diff --git a/variables.c b/variables.c
index 92a5a10..f33f66c 100644
--- a/variables.c
+++ b/variables.c
@@ -349,7 +349,11 @@ initialize_shell_variables (env, privmode)
 
   /* If exported function, define it now.  Don't import functions from
 	 the environment in privileged mode. */
+#if defined (FUNCTION_EXPORT)
   if (privmode == 0 && read_but_dont_execute == 0 && STREQN ("() {", string, 4))
+#else
+  if (0)
+#endif
 	{
 	  string_length = strlen (string);
 	  temp_string = (char *)xmalloc (3 + string_length + char_index);
-- 
2.1.0.rc2.206.gedb03e5

From 0f6618bfd1a714b10205b20bb682e4de21a2a7f0 Mon Sep 17 00:00:00 2001
From: David Galos 
Date: Thu, 25 Sep 2014 13:41:35 -0400
Subject: [PATCH 2/4] merge configure.in changes to configure.ac

---
 configure.ac | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/configure.ac b/configure.ac
index 97e8e04..2b116f8 100644
--- a/configure.ac
+++ b/configure.ac
@@ -193,6 +193,7 @@ opt_casemod_expansions=yes
 opt_extglob_default=no
 opt_dircomplete_expand_default=no
 opt_globascii_default=no
+opt_function_export=yes
 
 dnl options that affect how bash is compiled and linked
 opt_static_link=no
@@ -213,7 +214,7 @@ if test $opt_minimal_config = yes; then
 	opt_net_redirs=no opt_progcomp=no opt_separate_help=no
 	opt_multibyte=yes opt_cond_regexp=no opt_coproc=no
 	opt_casemod_attrs=no opt_casemod_expansions=no opt_extglob_default=no
-	opt_globascii_default=no
+	opt_globascii_default=no opt_function_export=no
 fi
 
 AC_ARG_ENABLE(alias, AC_HELP_STRING([--enable-alias], [enable shell aliases]), opt_alias=$enableval)
@@ -251,6 +252,7 @@ AC_ARG_ENABLE(single-help-strings, AC_HELP_STRING([--enable-single-help-strings]
 AC_ARG_ENABLE(strict-posix-default, AC_HELP_STRING([--enable-strict-posix-default], [configure bash to be posix-conformant by default]), opt_strict_posix=$enableval)
 AC_ARG_ENABLE(usg-echo-default, AC_HELP_STRING([--enable-usg-echo-default], [a synonym for --enable-xpg-echo-default]), opt_xpg_echo=$enableval)
 AC_ARG_ENABLE(xpg-echo-default, AC_HELP_STRING([--enable-xpg-echo-default], [make the echo builtin expand escape sequences by default]), opt_xpg_echo=$enableval)

Re: REGRESSION: shellshock patch rejects valid function names

2014-09-29 Thread David Korn
I fixed the bug in ksh that allows you delete a special builtin.



On Mon, Sep 29, 2014 at 5:25 PM, Dan Douglas  wrote:

> Just a few points to add.
>
> On Monday, September 29, 2014 04:29:52 PM Stephane Chazelas wrote:
> > 2014-09-29 09:04:00 -0600, Eric Blake:
> > [...]
> > > > "The function is named fname; the application shall ensure that it
> is a
> > > > name (see XBD Name) and that it is not the name of a special built-in
> utility."
> > > >
> > > >
>
> http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_05
>
> This doesn't normally matter because POSIX requires special builtins to
> take
> precedence over functions during command search, so even if you have such a
> function defined it is impossible to call. Bash doesn't use the correct
> search
> order however.
>
> Mksh has the reverse bug. It allows defining the function (wrongly) but
> then
> calls the special builtin anyway (correctly).
>
> Another bug is in ksh93 whose `builtin` allows disabling special builtins
> (which according to the manual, shouldn't work).
>
> $ ksh -c 'builtin -d set; function set { echo test; }; set'
> test
>
> Bash's "enable" correctly disallows that.
>
> > I agree the requirement is on the application, and I can't see
> > why POSIX should force a shell to reject a function whose name
> > doesn't contain a valid identifier.
> > ...
>
> Another thing you can do in bash is bypass its command name check by using
> a
> null zeroth word.
>
> $ { function } { echo test; }; <() }; }
> test
>
> Ordinarily } would be uncallable, but apparently since bash only checks the
> command name of the first word, calling with e.g. `<() }` or `$() }` works.
>
> --
> Dan Douglas
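Dan's point about bash's command-search order is easy to observe directly. A small sketch (bash-specific, default non-POSIX mode; POSIX search order would find the special builtin first):

```shell
#!/bin/bash
# In default (non-POSIX) mode, bash finds functions before builtins,
# so a function named after the special builtin `set` shadows it.
set() { echo "function set"; }
set                # runs the function, not the builtin
unset -f set       # remove the function to get the builtin back
```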


The restricted shell can be easily circumvented.

2015-04-04 Thread David Bonner
Bash Bug Report
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' -DCONF_VENDOR='p
uname output: Linux LFS-BUILD 3.16.0-23-generic #31-Ubuntu SMP Tue Oct 21 17:56:17 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.3
Patch Level: 30
Release Status: release

Description:
The restricted shell opened by calling rbash or bash with the -r or --restricted option can be easily circumvented with the command 'chroot / bash', making the restricted shell useless because anyone can get out of it with this command.

Repeat-By:
1: Open a restricted shell
2: Test with 'cd ..'
3: Use 'chroot / bash'
4: Test that you are no longer restricted with 'chroot / bash'
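For reference, the restrictions that -r actually imposes can be checked without root; a quick sketch (exact error wording varies between versions). Note that `chroot` itself normally requires root privileges:

```shell
# Each of these is refused in a restricted shell with a
# "restricted" error and a nonzero exit status.
bash -r -c 'cd /tmp'            # changing directory is refused
bash -r -c 'echo hi > /tmp/x'   # output redirection is refused
bash -r -c '/bin/ls'            # command names containing / are refused
```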
  

first trap call behaviour

2015-07-28 Thread David Waddell
Hi
Just a quick query re. the behavior of trap when called from a function;
not sure if it's a bug, an inconsistency, or intentional.

Basically it seems (without set -o errtrace)

-  an ERR trap can be set from within a function when no ERR trap is 
currently defined.

-  ERR trap can then not be changed or cleared unless cleared from 
global scope (ie cannot be cleared within a function).

-  With set -o errtrace, the subsequent calls do succeed in changing 
the trap, as might be expected.

I'm just puzzled at the fact that I can set the ERR trap within a function 
the first time, but not subsequently.

   Example script and output below

Thanks
David



Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-redhat-linux-gnu' 
-DCONF_VENDOR='redhat' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL 
-DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib  -D_GNU_SOURCE -DRECYCLES_PIDS 
-DDEFAULT_PATH_VALUE='/usr/local/bin:/usr/bin'  -O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
--param=ssp-buffer-size=4 -grecord-gcc-switches   -m64 -mtune=generic
uname output: Linux oam02.bfs.openwave.com 3.10.0-229.el7.x86_64 #1 SMP Thu Jan 
29 18:37:38 EST 2015 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-redhat-linux-gnu

Bash Version: 4.2
Patch Level: 46
Release Status: release

Example script and output :

  oam02$ cat test.sh
#!/bin/bash

trap_1() {
   trap  'echo this is trap 1' ERR
}

trap_off() {
   trap ''  ERR
}

trap_2() {
   trap 'echo this is  trap 2'  ERR
}


trap_1
trap -p ERR

trap_2
trap -p ERR

trap_off
trap_2
trap -p ERR

trap ""  ERR
trap_2
trap -p ERR

oam02$ ./test.sh
trap -- 'echo this is trap 1' ERR
trap -- 'echo this is trap 1' ERR
trap -- 'echo this is trap 1' ERR
trap -- 'echo this is  trap 2' ERR
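For comparison, the variant the report describes with `set -o errtrace` does let the second function replace the trap; a sketch:

```shell
#!/bin/bash
set -o errtrace     # ERR trap is inherited by shell functions

trap_1() { trap 'echo this is trap 1' ERR; }
trap_2() { trap 'echo this is trap 2' ERR; }

trap_1
trap -p ERR         # trap 1 is installed
trap_2
trap -p ERR         # with errtrace, trap 2 has replaced it
```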


Bogus value of variable BASH

2016-02-08 Thread David Hunt
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
-DSHELL -DHAVE_CONFIG_H   -I.  -I../. -I.././include -I.././lib
-D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector --param=ssp-buffer-size=4
-Wformat -Werror=format-security -Wall
uname output: Linux unknown 3.13.0-77-generic #121-Ubuntu SMP Wed Jan
20 10:50:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.3
Patch Level: 11
Release Status: release

Description:
On my notebook running Ubuntu 14.04.3 LTS /bin/sh points to dash, not 
bash.
To get sh behaviour from bash I use the command `exec -a sh /bin/bash'.
When I do so bash sets BASH to /bin/sh, which it demonstrably is not.
The situation:
none@unknown:~$ ls -l /bin/bash /bin/sh
-rwxr-xr-x 1 root root 1021112 Oct  7  2014 /bin/bash
lrwxrwxrwx 1 root root   4 Feb 19  2014 /bin/sh -> dash

Repeat-By:
The demonstration:
none@unknown:~$ exec -a sh /bin/bash
sh-4.3$ [ "$BASH" -ef "/proc/$$/exe" ] || declare -p BASH
declare -- BASH="/bin/sh"

Fix:
I'm clueless.



Re: Bogus value of variable BASH

2016-02-12 Thread David Hunt
I'm not trying to write scripts that rely on the value of BASH. I only
discovered the discrepancy while investigating the potential usefulness
of various variables set by the shell. Perhaps I just don't get the
point of having the variable in the first place.

Also I'm not very fond of Ubuntu's pointing /bin/sh to dash rather
than bash. That seems sacrilegious, in my opinion.

David Hunt

On 2/9/16, Chet Ramey  wrote:
> On 2/8/16 7:09 PM, David Hunt wrote:
>
>> Bash Version: 4.3
>> Patch Level: 11
>> Release Status: release
>>
>> Description:
>>  On my notebook running Ubuntu 14.04.3 LTS /bin/sh points to dash, not
>> bash.
>>  To get sh behaviour from bash I use the command `exec -a sh /bin/bash'.
>>  When I do so bash sets BASH to /bin/sh, which it demonstrably is not.
>
> Bash sets the BASH variable from $0.  If $0 is a full pathname, it's used
> directly, otherwise it's looked up in $PATH.  That algorithm is not
> foolproof, and one way to fool it is to disguise the program name.
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
>
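The $0-based lookup Chet describes is easy to observe; a sketch (assumes `sh` resolves somewhere in $PATH):

```shell
#!/bin/bash
# Run bash under a disguised argv[0]; BASH is then derived from the
# disguise ("sh", looked up in PATH), not from the real binary.
( exec -a sh bash -c 'echo "BASH=$BASH"' )   # e.g. BASH=/usr/bin/sh
echo "$BASH"                                 # this shell's undisguised path
```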



Bug in syntax checking causes unintended running of a function

2016-04-18 Thread David Maas
Hi! I found a bug in bash, I've checked versions 4.1 (centos 6.7), 4.2
(centos 7.2), and 4.3.30 (from the ftp site). The bug is that if you do a
double parenthesis math thing with the wrong syntax, the script runs the
function during what I assume is syntax checking. Demonstration script is
attached.


-- Script --

#!/bin/bash
#Should be avg=((avg+6))

function neverrunme
{
avg=0
avg=(($avg+6))
echo -n "This function was never called. Bash version:"
/bin/bash --version | head -1
}

echo "Welcome to this demonstration."

-- Output --


[dm5284@juphub ~]$ ./test-file.sh
./test-file.sh: line 7: syntax error near unexpected token `('
./test-file.sh: line 7: `avg=(($avg+6))'
This function was never called. Bash version:GNU bash, version
4.2.46(1)-release (x86_64-redhat-linux-gnu)
./test-file.sh: line 10: syntax error near unexpected token `}'
./test-file.sh: line 10: `}'


test-file.sh
Description: Bourne shell script


Bug in syntax checking causes unintended running of a function

2016-04-19 Thread David Maas
Running the echo and other contents of the function really doesn't seem
like the correct behavior. If the function isn't called, then its contents
shouldn't be executed.

Hypothetically, what if the author was partway through writing a backup
script that removes backed up data? The behavior of bash in this instance
could cause a serious problem.

- David

On Mon, Apr 18, 2016 at 8:38 PM, konsolebox > wrote:

> On Tue, Apr 19, 2016 at 3:52 AM, David Maas  > wrote:
> > Hi! I found a bug in bash, I've checked versions 4.1 (centos 6.7), 4.2
> > (centos 7.2), and 4.3.30 (from the ftp site). The bug is that if you do a
> > double parenthesis math thing with the wrong syntax, the script runs the
> > function during what I assume is syntax checking. Demonstration script is
> > attached.
> >
> >
> > -- Script --
> >
> > #!/bin/bash
> > #Should be avg=((avg+6))
> >
> > function neverrunme
> > {
> > avg=0
> > avg=(($avg+6))
> > echo -n "This function was never called. Bash version:"
> > /bin/bash --version | head -1
> > }
> >
> > echo "Welcome to this demonstration."
> >
> > -- Output --
> >
> >
> > [dm5284@juphub ~]$ ./test-file.sh
> > ./test-file.sh: line 7: syntax error near unexpected token `('
> > ./test-file.sh: line 7: `avg=(($avg+6))'
> > This function was never called. Bash version:GNU bash, version
> > 4.2.46(1)-release (x86_64-redhat-linux-gnu)
> > ./test-file.sh: line 10: syntax error near unexpected token `}'
> > ./test-file.sh: line 10: `}'
>
> It didn't run the function. The function-syntax-checking scope simply
> ended in `avg=(($avg+6))`.
>
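For completeness, the assignment the script intended is spelled with arithmetic expansion, not bare double parentheses:

```shell
#!/bin/bash
avg=0
avg=$((avg + 6))   # arithmetic expansion: assigns 6
# or, as an arithmetic command rather than an expansion:
(( avg += 6 ))     # avg is now 12
echo "$avg"
```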


Re: Bug in syntax checking causes unintended running of a function

2016-04-20 Thread David Maas
So if you really want my opinion, the shell should be aware that it's in a
function. You could possibly implement this by keeping track of the parent
pid. Another solution would be to not check the syntax of the function
until the function is actually run. I wouldn't do strict posix solely
because that's what PHP does, and doing that makes php programs stop on
errors that really should be recoverable. That's not the most wonderful
reason, however. Regarding comments that it's not a function, of course it's
a function! It says function at the beginning. Nagging implementation
problems do not make it not a function.

The argument that syntax errors should actually be checked beforehand is a
good one, however that's not how bash is currently implemented in non-posix
mode. This is definitely a bug, because:
- The script errors out about a syntactically valid ending }
- The script is confused about what is and isn't a function
- This issue can lead to behavior the programmer did not anticipate.

I really feel that programming languages shouldn't have "gotchas" - things
that work contrary to the way you would expect something universal (like a
function with a scope) to work. Anyway if you guys don't want to fix it, I
doubt that many people would notice, but it is definitely a bug!

Thank you for discussing it,
David

On Wed, Apr 20, 2016 at 12:55 AM, konsolebox  wrote:

> On Tue, Apr 19, 2016 at 10:45 PM, David Maas  wrote:
> > Running the echo and other contents of the function really doesn't seem
> like
> > the correct behavior. If the function isn't called, then its contents
> > shouldn't be executed.
>
> Choose: Should the shell stop execution or not?  Can you give a theory how
> a
> shell can make sure that an ending brace is the real ending brace of a
> function
> when a syntax error happens?  (In all possible cases.)
>
> > Hypothetically, what if the author was partway through writing a backup
> > script that removes backed up data? The behavior of bash in this instance
> > could cause a serious problem.
>
> That's bad scripting practice IMO.  You don't test script you just wrote
> with
> real data.  Syntax errors only happen once, unless you don't fix them right
> away, or if you don't know how to use `eval` when you _have_ to use it.
> (Please avoid quoting this obvious thing about `eval` again.)
>


Re: Bug in syntax checking causes unintended running of a function

2016-04-20 Thread David Maas
Incidentally, is it possible that somehow )) is simply interpreted the same
as } in this situation? It would also explain the perceived behavior.

On Wed, Apr 20, 2016 at 12:55 AM, konsolebox  wrote:

> On Tue, Apr 19, 2016 at 10:45 PM, David Maas  wrote:
> > Running the echo and other contents of the function really doesn't seem
> like
> > the correct behavior. If the function isn't called, then its contents
> > shouldn't be executed.
>
> Choose: Should the shell stop execution or not?  Can you give a theory how
> a
> shell can make sure that an ending brace is the real ending brace of a
> function
> when a syntax error happens?  (In all possible cases.)
>
> > Hypothetically, what if the author was partway through writing a backup
> > script that removes backed up data? The behavior of bash in this instance
> > could cause a serious problem.
>
> That's bad scripting practice IMO.  You don't test script you just wrote
> with
> real data.  Syntax errors only happen once, unless you don't fix them right
> away, or if you don't know how to use `eval` when you _have_ to use it.
> (Please avoid quoting this obvious thing about `eval` again.)
>


Re: Bug in syntax checking causes unintended running of a function

2016-04-20 Thread David Maas
Fair enough.

On Wed, Apr 20, 2016 at 8:44 AM, Greg Wooledge  wrote:

> On Wed, Apr 20, 2016 at 08:30:48AM -0700, David Maas wrote:
> > So if you really want my opinion, the shell should be aware that it's in
> a
> > function.
>
> Agreed, unless it's really hard to do.
>
> > You could possibly implement this by keeping track of the parent
> > pid.
>
> Nonsense.  Function calls do not create a child process.  But the
> issue here has nothing to do with function calls in the first place.
> It's about parsing the script.
>
> You wanted this:
>
> a() {
>   an egregious syntax error
>   b
> }
>
> c
>
> To be parsed like this:
>
> c
>
> But bash parsed it like this:
>
>   b
> }
> c
>
> > Another solution would be to not check the syntax of the function
> > until the function is actually run.
>
> Then how do you know where the function definition ends?  You have to
> parse the function definition at the time you are reading that piece
> of the script.
>
> Could the parser's error handling be improved?  Certainly.  But I won't
> be the one to write it, and I don't feel it's fair to demand that Chet
> write it either.  At some point, the script programmer has to take
> responsibility for the script's errors.
>


Process substitution can leak CTLESC (0x01) in output

2017-02-16 Thread David Simmons

[ Re-sending... it doesn't look like this went through the first time. ]

Bash uses 0x01 (CTLESC) and 0x7F (CTLNUL) bytes within command word 
strings that are passed around internally.  If either of these bytes 
appear in the parser input, they are escaped with an extra 0x01 
(CTLESC), but such escaping is reverted before final use.


When I use ANSI-C quoting to represent these bytes in a process 
substitution context, they appear to be CTLESC-encoded twice in their 
journey through bash.  For example, 7F becomes 01 7F which becomes 01 01 
01 7F, then decoded once as 01 7F before final use.  This leads to 
spurious 0x01 bytes.


Example:

--%< snip %<--
#!/bin/bash

# Expected output: 01
# Actual output:   01 01 (bad)
od -An -t x1 <(echo -n $'\x01')

# Expected output: 7f
# Actual output:   01 7f (bad)
od -An -t x1 <(echo -n $'\x7F')

# Expected output: 01
# Actual output:   01 (good)
echo -n $'\x01' | od -An -t x1

# Expected output: 7f
# Actual output:   7f (good)
echo -n $'\x7F' | od -An -t x1
--%< snip %<--

I've tested this on both 4.3.46(1)-release as provided by my OS vendor, 
and a 4.4.12(21)-release which I built from source.


David




pwd and prompt don't update after deleting current working directory

2024-07-11 Thread David Hedlund

Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -g -O2 -flto=auto -ffat-lto-objects -flto=auto 
-ffat-lto-objects -fstack-protector-strong -Wformat 
-Werror=format-security -Wall
uname output: Linux blues-System-Product-Name 5.15.0-113-generic 
#123+11.0trisquel30 SMP Wed Jun 26 05:33:28 UTC 2024 x86_64 x86_64 
x86_64 GNU/Linux

Machine Type: x86_64-pc-linux-gnu

Bash Version: 5.1
Patch Level: 16
Release Status: release

[Originally submitted to 
https://github.com/mate-desktop/mate-terminal/issues/459. A user replied 
"I don't think this should concern MATE-Terminal, it's a question of the 
shell itself IMO." - 
https://github.com/mate-desktop/mate-terminal/issues/459#issuecomment-276438]


When a directory is deleted while the user is inside it, the terminal 
should automatically return to the parent directory.


 Expected behaviour
When a directory is deleted while the user is inside it, the terminal 
should automatically return to the parent directory.


```
user@domain:~/test$ mkdir ~/test && cd ~/test && touch foo && ls
foo
user@domain:~/test$ rm -r ~/test
user@domain:~/$
```

 Actual behaviour
The terminal remains in the deleted directory's path, even though the 
directory no longer exists.


 Steps to reproduce the behaviour
Create a new directory and navigate into it:

```
user@domain:~/test$ mkdir ~/test && cd ~/test && touch foo && ls
foo
```

Delete the directory while still inside it:
```
user@domain:~/test$ rm -r ~/test
```

 MATE general version
1.26.0

 Package version
1.26.0

 Linux Distribution
Trisquel 11


Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread David Hedlund



On 2024-07-12 00:54, Lawrence Velázquez wrote:

On Thu, Jul 11, 2024, at 6:08 PM, David Hedlund wrote:

 Expected behaviour
When a directory is deleted while the user is inside it, the terminal
should automatically return to the parent directory.

```
user@domain:~/test$ mkdir ~/test && cd ~/test && touch foo && ls
foo
user@domain:~/test$ rm -r ~/test
user@domain:~/$
```

Why do you expect this behavior?  Other shells and utilities typically
do what bash does -- i.e., nothing.

(You're free to argue that bash *should* behave this way, but that's
a feature request, not a bug report.  And having bash automatically
update its working directory based on filesystem changes would open
up its own can of worms.)

Thanks, Lawrence! I found this discussion helpful and believe it would 
be a valuable feature to add. Can I submit this as a feature request?




Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread David Hedlund

On 2024-07-12 04:19, David wrote:

On Thu, 11 Jul 2024 at 22:14, David Hedlund  wrote:


When a directory is deleted while the user is inside it, the terminal
should automatically return to the parent directory.

Hi, I disagree, and I think if you understand better why this occurs, you
will understand why knowledgable users will disagree, and you will
change your opinion.

This is a fundamental aspect of how Unix-like operating systems work,
and it will not be changed because it is very useful in other situations.
It occurs because of the designed behaviour of the 'unlink' system call.
You can read about that in 'man 2 unlink'.


 Expected behaviour
When a directory is deleted while the user is inside it, the terminal
should automatically return to the parent directory.
 Actual behaviour
The terminal remains in the deleted directory's path, even though the
directory no longer exists.

Your final phrase there "the directory no longer exists" is incorrect.

The directory does still exist. The 'rm' command did not destroy it.
Any processes that have already opened it can continue to use it.
The terminal is one of those processes.

Deleting any file (including your directory, because directories have
file-like behaviour in this respect, same as every other directory entry)
just removes that file object from its parent directory entries. It does
not destroy the file in any way. That means that no new processes
can access the file, because now there's no normal way to discover
that it exists, because it no longer appears in its parent directory entries.
But any process that already has the file open can continue to use it.

So your directory does not cease to exist until nothing is using it, and
even then it is not destroyed, merely forgotten entirely.

Here's more explanation:
   https://en.wikipedia.org/wiki/Rm_(Unix)#Overview

I understand. So the feature request should be an option "-b" (for 
bounce out of the directory when deleted) for example?
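The unlink behaviour explained above can be seen from the shell; a small Linux-specific sketch (relies on /proc):

```shell
#!/bin/bash
# An open file survives unlinking: the name is gone, the object is not.
tmp=$(mktemp)
exec 3<>"$tmp"          # hold the file open on fd 3
rm "$tmp"               # unlink it: no new process can find it by name
echo "still here" >&3   # the open descriptor still works
cat "/proc/$$/fd/3"     # on Linux, this re-opens the same object
```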


Re: pwd and prompt don't update after deleting current working directory

2024-07-16 Thread David Hedlund



On 2024-07-12 15:10, Chet Ramey wrote:

On 7/11/24 9:53 PM, David Hedlund wrote:

Thanks, Lawrence! I found this discussion helpful and believe it 
would be a valuable feature to add. Can I submit this as a feature 
request?


I'm not going to add this. It's not generally useful for interactive
shells, and dangerous for non-interactive shells.

If this is a recurring problem for you, I suggest you write a shell
function to implement the behavior you want and run it from
PROMPT_COMMAND.

That behavior could be as simple as

pwd -P >/dev/null 2>&1 || cd ..

Do you think that it would be appropriate to submit this feature request 
to the developers of the rm command instead.


For comparison, caja (file manager in MATE) is stepping back as many 
directories as needed when it is located in a directory that is deleted 
in bash or caja.
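Chet's one-liner can be wrapped into something slightly more robust; a sketch (the function name is my own) that climbs until it reaches a directory that still exists:

```shell
# Run before each prompt: if the physical working directory has been
# unlinked, `pwd -P` fails, and we step up until it succeeds again.
_cwd_rescue() {
  while ! pwd -P >/dev/null 2>&1; do
    cd .. || break    # logical cd uses $PWD, so it works from a deleted dir
  done
}
PROMPT_COMMAND=_cwd_rescue
```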





Re: pwd and prompt don't update after deleting current working directory

2024-07-16 Thread David Hedlund



On 2024-07-16 15:31, Lawrence Velázquez wrote:

On Tue, Jul 16, 2024, at 3:47 AM, David Hedlund wrote:

Do you think that it would be appropriate to submit this feature request
to the developers of the rm command instead.

How would this help?  The rm utility cannot change the working directory of the 
shell that invokes it, or of any other process.  Even if it could, that 
wouldn't help you if a different utility/application did the unlinking.

(Never mind that there are no canonical "developers of the rm command".  GNU is 
not the only implementation in the world.)


I appreciate your input. To be honest, I'm currently juggling multiple 
tasks and don't have the necessary bandwidth to fully consider this 
particular issue at the moment. Let's table this discussion for now.




For comparison, caja (file manager in MATE) is stepping back as many
directories as needed when it is located in a directory that is deleted
in bash or caja.

Behavior that is appropriate for GUI applications is not necessarily 
appropriate for CLI utilities, and vice versa.  The comparison is inapt.



Re: pwd and prompt don't update after deleting current working directory

2024-07-19 Thread David Hedlund

On 2024-07-19 07:30, Martin D Kealey wrote:


TL;DR: what you are asking for is unsafe, and should never be added to 
any published version of any shell.


On Tue, 16 Jul 2024 at 17:47, David Hedlund  wrote:

Do you think that it would be appropriate to submit this feature
request to the developers of the rm command instead.

Given that I mistakenly suggested a "-b" (for bounce out of the 
directory when deleted) option (which is not feasible for the rm 
command), a more appropriate feature request for the rm command 
developers might be: consider implementing an option for rm that prevents 
its execution when the current working directory (pwd) is within the 
target directory slated for removal. This safety feature could help 
prevent accidental deletion of directories users are currently navigating.


However, I should note that I'm not particularly proficient in bash 
scripting, so there are likely downsides or complications to this 
suggestion that I haven't considered.




This suggestion hints at some serious misunderstandings.

Firstly, under normal circumstances two processes cannot interfere 
with each others' internal states (*1) - and yes, every process has a 
/separate/ current directory as part of its internal state.


/Most/ of that internal state is copied from its parent when it 
starts, which gives the illusion that the shell is changing things in 
its children, but in reality, it's setting their starting conditions, 
and cannot influence them thereafter.


Secondly, /most/ commands that you type into a shell are separate 
programs, not part of the shell. Moreover, the /terminal/ is a 
separate program from the shell, and they can only interact through 
the tty byte stream.


Thirdly, the kernel tracks the current directory on behalf of each 
process. It tracks the directory by its identity, /not/ by its name. 
(*2) This means that you can do this:


    $ mkdir /tmp/a
    $ cd /tmp/a
    $ mv ../a ../b
    $ /bin/pwd
    /tmp/b

Note that as an efficiency measure, the built-in `pwd` command and 
the expansion `$PWD` give the answer cached by the most recent `cd`, 
so this should be considered unreliable:


    $ pwd
    /tmp/a
    $ cd -P .
    $ pwd
    /tmp/b

For comparison, caja (file manager in MATE) is stepping back as
many directories as needed when it is located in a directory that
is deleted in bash or caja.


Comparing programs with dissimilar purposes is, erm, unconvincing.

Caja's /first/ purpose is to display information about a filesystem.
To make this more comprehensible to the user, it focuses on one 
directory at a time. (*3)


Critically, every time you make a change, it shows you the results 
before you can make another change.


That is pretty much the opposite of a shell.

Bash (like other shells) is primarily a scripting language and a 
command line interface, whose purpose is to invoke other commands 
(some of which may be built-ins (*4)). The shell is supposed to /do/ 
things /without/ showing you what's happened. If you want to see the 
new state of the system, you ask it to run a program such as `pwd` 
or `ls` to show you. (*5)


Currently if a program is invoked in an unlinked current directory, 
most likely it will complain but otherwise do nothing.
But if the shell were to surreptitiously change directory, a 
subsequent command invoked in an unexpected current directory could 
wreak havoc, including deleting or overwriting the wrong files or 
running the wrong programs, and with no guarantee that there will be 
any warning indications.


All that said, if you want to risk breaking your own system, feel free 
to add the relevant commands to `PROMPT_COMMAND` as suggested by 
other folk.


-Martin

*1: Okay, yes there are debugging facilities, but unless the target 
program is compiled with debugging support, attempting to change the 
internal state of the other program stands a fair chance of making it 
crash instead. You certainly wouldn't want "rm" to cause your 
interactive shell to crash. And there are signals, most of which 
default to making the target program /intentionally /crash.


*2: Linux's /proc/$pid/cwd reconstructs the path upon request. Only 
when it's deleted does it save the old path with "(deleted)" appended.


*3: It's not even clear that this focal directory is the kernel-level 
current directory of the Caja process, but it probably is. I would 
have to read the source code to verify this.


*4: mostly /regular/ built-ins that behave as if they were separate 
programs; not to be confused with /special/ built-ins, which can do 
things to the shell's internal state.


*5: Even if the shell's prompt includes its current directory - which 
isn't the default - it could be out of date by the time the user 
presses /enter/ on their next command.


Re: fg via keybind modifies tty settings

2024-10-13 Thread David Moberg
A new issue popped up with these changes. After second time of suspend and
foreground with binding, the tty will not react to control keys but instead
print them on the prompt.

Repeat-By:

| start vim
  ctrl-z (to suspend)
  ctrl-a (bound to `fg` to bring to foregound)
  ctrl-z (to suspend a 2nd time)
  # broken state (but I don't see anything changed in the output of `stty -a`)
  # ctrl-a prints ^A (instead of executing fg)
  # up arrow prints ^[[A (instead of invoking history)

Issue 1: control codes are printed to prompt
Issue 2: typing `echo` tab completes correctly to `echo`, but this
debug print is printed:
```
bash: DEBUG warning: cpl_reap: deleting 33287
```

My environment: I built bash for the first time today with `configure` and
`sudo make install`. The bash version that I just built is `GNU bash,
version 5.3.0(1)-beta (x86_64-pc-linux-gnu)` (launched from a terminal
running my previous bash version, `GNU bash, version 5.2.21(1)-release
(x86_64-pc-linux-gnu)`).




Den lör 12 okt. 2024 kl 10:04 skrev Lawrence Velázquez :

> On Sat, Oct 12, 2024, at 2:22 AM, David Moberg wrote:
> > So this kind of bugfix is not enough to ever trigger a new release on its
> > own? (sounds wise)
>
> No, but Chet does periodically release particularly important fixes
> as patches against the most recent release.  (Those make up most of
> the commits on the "master" branch.)
>
> > Is there any cadence to releases
>
> Not that I'm aware of.
>
> > or any other reason to expect a release at
> > some point in the future?
>
> I believe Chet is getting ready to release 5.3 sooner than later.
>
> --
> vq
>


Re: fg via keybind modifies tty settings

2024-10-11 Thread David Moberg
Has anyone been able to take a more deep look? Where in the bash source
code would this happen?

I learned that fish resets these kinds of settings quite often.

Den tis 24 sep. 2024 16:21Chet Ramey  skrev:

> On 9/20/24 7:23 PM, David Moberg wrote:
>
> > Bash Version: 5.2
> > Patch Level: 21
> > Release Status: release
> >
> > Description:
> >  When a process/job is suspended, foregrounded via ctrl-a as a
> > keybinding for fg, and then
> >  suspended again, the tty will be in a surprising state where no
> > input is seen (-echo)
>
> When the shell starts a job with `fg', it fetches the tty settings so it
> can restore them if the job exits or stops due to a signal. That way, a
> job that modifies the terminal settings, then crashes, doesn't leave the
> terminal in an unusable state.
>
> When you run a command from a readline key binding, it still runs in the
> context of readline obtaining a line from the keyboard -- it's just another
> key binding, like C-f. Readline doesn't reset the tty settings to run the
> binding, and bash doesn't reset them to run the command.
>
> When `fg' runs and starts the job, the shell fetches the tty settings as
> usual, but they are the tty settings readline uses when it's reading input.
> It assumes those are the `normal' settings.
>
> When vim stops due to the SIGTSTP, the shell restores what it thinks are
> the normal tty settings -- the ones it fetched after readline modified
> them. The difference you see between the `working' and `broken' settings
> is what readline does so it can read input.
>
> I will see if this can be changed by having `fg' detect whether it's
> being run from a key binding and not fetch the terminal settings in that
> case.
>
> Chet
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
>


Bash debugging.

2024-10-05 Thread David Shuman
I have been working on several items as I am using bash to configure
systems.

I started by wanting to log the output of my scripts.  Then I added a
prefixed-message construct so that detailed logs could be summarized
without extraneous debugging information.  (I have written an extract
program in C in the past that could probably also be written in bash,
reading a line at a time from the log file.)  The code is expected to
be modified to establish project-oriented bash environment settings
for the execution of the configuration scripts.  I have a function
that prints the bash reverse function trace and the accumulated
arguments table (BASH_ARGV).  The function is user-callable.  It can
also be issued as the first statement of a function (a function-trace
option).  Most importantly, the reverse trace is in theory (currently
being tested) the initial processing for trap statements.
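A minimal sketch of such a reverse function trace, using bash's
built-in call-stack arrays (the function name here is hypothetical,
not the actual bf_ code):

```shell
#!/usr/bin/env bash
# extdebug makes bash populate BASH_ARGC/BASH_ARGV for each call frame.
shopt -s extdebug

# Print one line per stack frame: level, function, caller's line,
# argument count, and source file -- in the spirit of the #D# lines.
show_trace() {
    local i
    echo "#D# LVL FUNCTION LINENO ARGC SOURCE"
    for ((i = 0; i < ${#FUNCNAME[@]}; i++)); do
        echo "#D# $i ${FUNCNAME[i]} ${BASH_LINENO[i]} ${BASH_ARGC[i]:-0} ${BASH_SOURCE[i]:-}"
    done
}

outer() { inner "$@"; }
inner() { show_trace; }

outer D 132
```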

Does this sound like a potential section of a bash handbook?

As a teaser of the content, an example of the current bash reverse
function trace follows:

#X# 2 132 bf_showTRACE 2 D 132

#D# $0=bash/functions/bf_showTRACE.sh
#D# $1=D
#D# $2=132

#D# BASH_SUBSHELL=0 TRAP={none}

#D# lCommand=bf_showTRACE
#D# lOperandCnt=2
#D# lOperand=D 132

#D# Function Back Trace
#D# LVL FUNCTION LINENO ARGC SOURCE

#D# - bf_showTRACE 132 2 bash/functions/bf_showTRACE.sh

#D# 0 bf_showTRACE 132 2 bash/functions/bf_showTRACE.sh
#D# 1 main 0 1 bash/functions/bf_showTRACE.sh
#D# 2

#D# ARGV
#D# NO VALUE
#D# 0 132
#D# 1 D
#D# 2 -X
#D# 3

The first (#X#) line represents an abbreviated function trace without
backtrace data:
#X# {FuncLvl} {LineNo} {FuncName} {No Args} {Args}

The next lines #D# $x={value} are debug lines that probably should be
eliminated.

The following lines start with one of these prefixes in place of #?#:
#D# - when the function is user-invoked
#F# - when the function is called as the first executable line of a
function
#T# - when the function is called from a trap statement.
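The #T# case can be sketched with an ERR trap: inside the trap,
BASH_COMMAND names the command that triggered it (again a hedged
illustration, not the actual code):

```shell
# Report the trapped command in the #T# style; BASH_COMMAND holds the
# command that caused the trap to fire.
trap 'echo "#T# lCommand=$BASH_COMMAND"' ERR
false    # fails, so the ERR trap fires and reports "false"
true     # keep the overall exit status clean
```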

Display non-array BASH data
#?# BASH_SUBSHELL= ... TRAP=

Display function name (D/F) or trapped command (T)
#?# lCommand=...

Display Argument count and values
  #?# lOperandCnt=...
  #?# lOperands=...

Function Back Trace
Headings
#?# Function Back Trace
#?# LVL FUNCTION LINENO ARGC SOURCE

Invoked Function or Trapped Statement
  Possibly this should only appear for a trapped statement,
  since for #D# and #F# it is redundant with the next line.
#?# - bf_showTRACE 132 2 bash/functions/bf_showTRACE.sh

BackTrace of Function Calls
  The current Function
#?# 0 bf_showTRACE 132 2 bash/functions/bf_showTRACE.sh

  a Parent script invoked by bash (main)
#?# 1 main 0 1 bash/functions/bf_showTRACE.sh

  A dummy source file
#?# 2

Argument trace
Headings
#?# ARGV
#?# NO VALUE

Values
  for function 0 above, the ARGC value (2) says these are its
arguments (in reverse order)
#?# 0 132
#?# 1 D

  for function 1 above, the ARGC value (1) says this is its argument
#?# 2 -X

  appears to be a dummy unused argument
#?# 3
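The reverse ordering of BASH_ARGV shown above can be reproduced
directly (hypothetical helper name; requires extdebug):

```shell
# With extdebug enabled, bash pushes each frame's arguments onto
# BASH_ARGV in reverse order, which is why 132 appears before D above.
shopt -s extdebug

dump_argv() {
    local i
    echo "#?# NO VALUE"
    for ((i = 0; i < ${#BASH_ARGV[@]}; i++)); do
        echo "#?# $i ${BASH_ARGV[i]}"
    done
}

dump_argv D 132
```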


Let me know what you think.

David Shuman

