Associative array assignment crashes Bash interpreter

2014-02-02 Thread ben
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL 
-DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include -I../bash/lib  
-D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat 
-Werror=format-security -Wall
uname output: Linux franklin 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 
07:38:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.2
Patch Level: 45
Release Status: release

Description:
The included scripts generate "division by zero" and "recursion level
exceeded" errors. Confirmed in Bash 4.2.45, 4.1.5, and 4.1.2.

Repeat-By:

--------------------
#!/bin/bash

declare -A x
x[/etc/hadoop/conf/core-default.xml]=x

for n in n/core-default.xml
do
    f="/etc/hadoop/conf/`basename $n`"
    if [ -n "${x[$f]}" ]; then for m in ${foo[$n]}; do echo; done
    fi
done
--------------------

--------------------
#!/bin/bash

declare -A x
x[/etc/hadoop/conf/core-default.xml]=x

for n in on/core-default.xml
do
    f="/etc/hadoop/conf/`basename $n`"
    if [ -n "${x[$f]}" ]; then for m in ${foo[$n]}; do echo; done
    fi
done
--------------------
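
A minimal sketch of what appears to be the underlying trigger: foo was
never declared associative, so bash evaluates its subscript as an
arithmetic expression, and the value of n mentions the name n again,
so the evaluation recurses:

  n='n/core-default.xml'
  echo "${foo[$n]}"    # expression recursion level exceeded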




checkwinsize should be the default

2007-12-18 Thread ben
Configuration Information [Automatically generated, do not change]:
Machine: i486
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i486' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i486-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL 
-DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include -I../bash/lib   -g -O2
uname output: Linux cerise 2.6.22 #3 PREEMPT Thu Dec 13 08:16:24 PST 2007 i686 
GNU/Linux
Machine Type: i486-pc-linux-gnu

Bash Version: 3.1
Patch Level: 17
Release Status: release

Description:

When an xterm is resized while a job is running, Bash does not notice
unless the shell option "checkwinsize" is set. This behavior is rarely
(never?) desirable when Bash is being used with readline editing
enabled.


Repeat-By:

Open an xterm and run bash interactively. Type a command that wraps
past the end of the line, for example:

  echo "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

Use C-p to view the previous line in the history. Notice that the line
is printed correctly. Use C-n to clear the line.

Run a program that takes some time, such as "sleep 30", and, during
that time, use your window manager to resize the terminal window to
have more columns.

Use C-p C-p to view the line again. Notice that the text is garbled
and useless. Curse under your breath.


Fix:

Bash should default to setting checkwinsize whenever it is started
with readline editing enabled. The bash documentation need not be
updated, as it currently says nothing about checkwinsize's default
state. Question E11 can be removed from the FAQ.
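
In the meantime, the one-line workaround for ~/.bashrc:

  shopt -s checkwinsize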




build failure with enable-static-link

2016-09-16 Thread Ben
Hello,

bash-4.4 doesn't build with --enable-static-link. Building with an
additional --without-bash-malloc makes it work.

/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/libc.a(malloc.o):
In function `free':
(.text+0x5f20): multiple definition of `free'
./lib/malloc/libmalloc.a(malloc.o):/home/benoit/bash/bash-4.4/lib/malloc/malloc.c:1273:
first defined here
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/libc.a(malloc.o):
In function `malloc':
(.text+0x5160): multiple definition of `malloc'
./lib/malloc/libmalloc.a(malloc.o):/home/benoit/bash/bash-4.4/lib/malloc/malloc.c:1258:
first defined here
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/libc.a(malloc.o):
In function `realloc':
(.text+0x6400): multiple definition of `realloc'
./lib/malloc/libmalloc.a(malloc.o):/home/benoit/bash/bash-4.4/lib/malloc/malloc.c:1266:
first defined here
collect2: ld returned 1 exit status
make: *** [bash] Error 1

-- 
Benoît Dejean



PS0 issue with escape sequence

2016-09-16 Thread Ben
Hello,

Using bash-4.4, setting PS0 to '\[\033[1;36m\]started at
\t\[\033[0m\]\n' makes it output PS0 with a non-printable \x01\x02
prefix and suffix.

0000  01 02 73 74 61 72 74 65  64 20 61 74 20 30 33 3a  |..started at 03:|
0010  31 38 3a 30 37 01 02 0a                           |18:07...|
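
A sketch of a workaround: bash writes PS0 directly instead of passing
it through readline's redisplay, so the \[ \] invisible-text markers
(which prompt expansion turns into the \x01/\x02 bytes above) do no
work there and can simply be dropped:

  PS0='\e[1;36mstarted at \t\e[0m\n'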

-- 
Benoît Dejean



Re: Bash source repository

2011-05-29 Thread Ben Pfaff
"Bradley M. Kuhn"  writes:

> The new repository contains everything that the current
> Savannah one does, but I put much more effort into making
> commits fine-grained, rather than merely importing the public
> releases blindly.  (For example, I did 'git mv' where it was
> obvious a move occurred, so that changes in file movements were
> properly tracked historically).

Git doesn't track file movement in any way; it just tracks file
content.  It is "git diff" and other commands for viewing history
that figure out that a file moved, based on the changes that
occurred.  It is a waste of time to use "git mv" if some other
way is easier.
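
A throwaway-repo sketch (file names made up) showing that a plain mv
is detected just as well as "git mv":

  repo=$(mktemp -d) && cd "$repo" && git init -q
  seq 1 100 > oldname.txt
  git add oldname.txt && git commit -qm 'add oldname.txt'
  mv oldname.txt newname.txt        # no "git mv" anywhere
  git add -A && git commit -qm 'move the file'
  git log -1 -M --name-status      # reports: R100  oldname.txt  newname.txt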
-- 
Ben Pfaff 
http://benpfaff.org




Re: Built-in printf Sits Awkwardly with UDP.

2011-07-21 Thread Ben Pfaff
Andre Majorel  writes:

> On 2011-07-20 14:34 +0100, Ralph Corderoy wrote:
>> If standard output is a log file, log entries could remain
>> latent for a very long time.
>
> The buffering mode we really want is buffered with a forced
> flush at reasonable intervals, E.G. one second after the last
> write. Unfortunately, there's no such thing in stdio.

That sounds like a bad design for a log file anyhow.  Log entries
should get flushed quickly, otherwise you lose the last few log
entries before a segfault or signal and have fewer clues about
the cause.
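
For a stdio-based producer one can't modify, a sketch of one common
workaround (assumes GNU coreutils; the program name is made up):

  # line-buffer stdout so each entry is flushed as soon as it is written
  stdbuf -oL ./server >> server.log 2>&1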
-- 
Ben Pfaff 
http://benpfaff.org



Re: Parallelism a la make -j / GNU parallel

2012-05-05 Thread Ben Pfaff
Colin McEwan  writes:

> I frequently find myself these days writing shell scripts, to run on
> multi-core machines, which could easily exploit lots of parallelism (eg. a
> batch of a hundred independent simulations).
>
> The basic parallelism construct of '&' for async execution is highly
> expressive, but it's not useful for this sort of use-case: starting up 100
> jobs at once will leave them competing, and lead to excessive context
> switching and paging.

Autotest testsuites, which are written in Bourne shell, do this
if you pass in a make-like "-j" option.  Have you had a look
at how they are implemented?
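
For comparison, a plain-shell sketch (not Autotest's actual code; the
worker command is made up) that caps the number of concurrent jobs:

  #!/bin/bash
  maxjobs=4
  for i in $(seq 1 100); do
      ./simulate "$i" &               # hypothetical worker
      # block while $maxjobs children are still running
      while (( $(jobs -rp | wc -l) >= maxjobs )); do
          sleep 0.1
      done
  done
  wait                                # collect the stragglers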



Buffer overflow bug in Bash

2013-12-19 Thread Ben Okopnik
Hi -

Here's a couple of scripts, stripped down to bare bones and tested on
several recent bash versions; both cause a crash, with the following errors
in all cases:

./borked1: line 6: n/core-default.xml: expression recursion level exceeded
(error token is "n/core-default.xml")
./borked2: line 6: on/core-default.xml: division by 0 (error token is
"-default.xml")

Scripts (these differ by only one character, 'for n in n/' vs. 'for n in
on/'):

--------------------
#!/bin/bash

declare -A x
x[/etc/hadoop/conf/core-default.xml]=x

for n in n/core-default.xml
do
    f="/etc/hadoop/conf/`basename $n`"
    if [ -n "${x[$f]}" ]; then for m in ${foo[$n]}; do echo; done
    fi
done
--------------------

--------------------
#!/bin/bash

declare -A x
x[/etc/hadoop/conf/core-default.xml]=x

for n in on/core-default.xml
do
    f="/etc/hadoop/conf/`basename $n`"
    if [ -n "${x[$f]}" ]; then for m in ${foo[$n]}; do echo; done
    fi
done
--------------------


Best regards,
-- 
Ben Okopnik


Segfault in bash 3.0

2005-11-14 Thread Ben Cotterell
bash version 3.0 crashes with a segmentation fault when running this
script:

#!/bin/bash
# Running this script crashes bash version 3.0 (and 2.05b.0(1))

function foo
{
local w
local c
c=(${w[@]} and stuff)
echo ${c[@]}
}

w=/tmp
foo

I tried to be helpful and investigate the bug. Here is my fix:

--- variables.c 2005-11-11 23:40:53.288909032 +
+++ ../org/bash-3.0/variables.c 2004-07-04 18:57:26.0 +0100
@@ -1643,12 +1643,6 @@
 things like `x=4 local x'. */
   if (was_tmpvar)
 var_setvalue (new_var, savestring (tmp_value));
-  else
-{
-  char *value = (char *)xmalloc (1); /* like do_assignment_internal */
-  value[0] = '\0';
-  var_setvalue (new_var, value);
-}
 
   new_var->attributes = exported_p (old_var) ? att_exported : 0;
 }

Discussion of fix follows:

The crash is caused directly by a NULL string turning up in a place
where the worst that was expected was an empty string. It's when we're
expanding the ${w[@]} expression that we hit the problem.

So there seem to be two ways to fix it:

1. Why is the string NULL, should it be empty? Make sure it's never
NULL.

This seems most likely to be the right fix. There's code in a few places
that does this kind of thing (do_assignment_internal and assign_in_env).

The problem only seems to be caused when a global variable is shadowed
by an uninitialized local, and then used in a ${w[@]} expression.

The call to make_new_variable, in make_local_variable, variables.c: 1641
is where it gets set up. If we make sure value is an empty string at
this point, we should fix the problem.

This seems to work OK, and is the fix I've gone for.

2. Handle a NULL string in make_bare_word and make_word_flags.

This also works, and is low risk, although it could affect performance I
suppose... this code is probably on a lot more code paths.




[Patch] Fix QNX build error

2006-09-23 Thread Ben Gardner

There is an error on line 29 of variables.c in bash-3.1.
It should read:
#include <sys/netmgr.h>
not
#include <netmgr.h>

Attached is a patch that fixes this, although it would be easier to
fix it manually.

Ben
--- bash-3.1/variables.c2005-11-12 20:22:37.0 -0600
+++ bash-3.1.orig/variables.c   2006-09-18 10:07:21.0 -0500
@@ -26,7 +26,7 @@
 
 #if defined (qnx)
 #  if defined (qnx6)
-#include <netmgr.h>
+#include <sys/netmgr.h>
 #  else
 #include <sys/vc.h>
 #  endif /* !qnx6 */


Re: Memory leak when catting(/sedding/...) large binary files with backticks

2008-05-12 Thread Ben Taylor

Chet Ramey wrote:

[EMAIL PROTECTED] wrote:

Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-redhat-linux-gnu' 
-DCONF_VENDOR='redhat' -DLOCALEDIR='/usr/share/locale' 
-DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include 
-I./lib  -D_GNU_SOURCE  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 
-fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 
-mtune=generic
uname output: Linux pmpc983.npm.ac.uk 2.6.24.4-64.fc8 #1 SMP Sat Mar 
29 09:15:49 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

Machine Type: x86_64-redhat-linux-gnu

Bash Version: 3.2
Patch Level: 33
Release Status: release

Description:
Using echo `cat ...` on a large binary file causes lots of memory to
be used (fine), but if you ctrl-c while it's running it doesn't die
properly and doesn't return used memory when finished. Originally
found by screwing up a sed command (can also reproduce the bug using
sed rather than cat) while trying to rename a group of files.


Repeat-By:
Every time
1. Find large binary data file for test (mine is ~3.2GB)
2. echo `cat filename`
3. Ctrl-C previous command while running (doesn't terminate)

4. When step 2 eventually returns it does not release memory


I'm not sure what you mean by `doesn't return used memory', but if you
mean a process's size as reported by ps or similar, that does not
indicate a memory leak.  A memory leak is memory that has been
allocated by a program to which it retains no handles.

malloc acts as a cache between an application and the kernel.  Memory
obtained from the kernel using malloc may, under some circumstances, be
returned to the kernel upon free, but this may not always be possible.
Memory that is not returned to the kernel by freeing pages or using sbrk
with a negative argument is retained and used to satisfy future requests.

I ran your test using valgrind to check for memory leaks (but with only
a 330 MB file), and it reported no leaks after ^C.

Chet

I was just going on what ps reported, but I assumed it was leaking on 
the basis that the memory did not report as "free" until I kill -9'd the 
relevant bash process (just kill didn't work). Once it'd been done a 
couple of times so most of the memory was consumed, it definitely had an 
adverse effect on performance - even other simple bash commands took 
several seconds to return a result, which I assume was because they were 
fighting for memory. The affected bash also didn't show any 
sub-processes using ps -auxf (shouldn't it have shown a cat process if 
it was still holding resources?).


If you guys on here reckon it's not a bug that's fine - I admit I'm not 
exactly au fait with the inner workings of bash, maybe it's working as 
intended. I just figured since it was eating my memory and not making 
that memory available to other programs when it was ^C'd (as you would 
do when you realised you'd inadvertently catted or sedded a 3GB binary 
file) that I'd report it.
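
To re-run the kind of check Chet describes, a sketch (file path made
up):

  valgrind --leak-check=full bash -c 'echo $(cat /tmp/big.bin) > /dev/null'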


Ben




functions, process substitution, bad file descriptor

2009-02-27 Thread Ben Hyde
I ran into a problem using process substitution.  A much reduced
version is shown below.  The function f2 has the problem; the function
f1 does not.  Are there some facts about the life cycle of the files
created by process substitution that I don't appreciate?  - ben

bash-3.2$ ls -l /tmp/foo
-rwxr-xr-x  1 bhyde  wheel  105 Feb 27 09:13 /tmp/foo

bash-3.2$ cat /tmp/foo
#!/bin/bash
f1(){
  cat $1
  date
}
f2(){
  date
  cat $1
}
cat <(echo hi)
f1 <(echo bye)
f2 <(echo l8r)

bash-3.2$ bash --version
GNU bash, version 3.2.17(1)-release (i386-apple-darwin9.0)
Copyright (C) 2005 Free Software Foundation, Inc.

bash-3.2$ /tmp/foo
hi
bye
Fri Feb 27 09:18:45 EST 2009
Fri Feb 27 09:18:45 EST 2009
cat: /dev/fd/63: Bad file descriptor
bash-3.2$ 
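
A sketch of a possible workaround for affected versions, assuming the
substituted file only survives until the function's first command
completes: consume it before doing anything else.

  f2 () {
      local content
      content=$(cat "$1")     # read /dev/fd/NN first
      date
      printf '%s\n' "$content"
  }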





Re: functions, process substitution, bad file descriptor

2009-02-28 Thread Ben Hyde

On Feb 27, 2009, at 4:02 PM, Chet Ramey wrote:

> Ben wrote:
>> I ran into a problem using process substitution
>
> This will be fixed in the next version.

thank you!

> ``The lyf so short, the craft so long to lerne.'' - Chaucer






words in COMPWORDS vs. words in COMPREPLY

2010-07-21 Thread Ben Pfaff
I'm trying to learn how bash completion works, so that I can
write completion functions for some utilities.

As an experiment, I wrote the following trivial completion.  It
is intended to report that the completions for the current word
are exactly the contents of the current word:

_test () {
COMPREPLY=(${COMP_WORDS[COMP_CWORD]})
}
complete -F _test test

I expected that, with the above, typing "test", followed by a
word, followed by TAB, would cause bash just to insert a space.
This is often what happens, but I've found some exceptions that I
do not yet understand.  For example, consider the following:
test x=

When I press TAB, I expected this to expand to:
test x=
followed by a space.

With Debian's bash 4.1-3 (on which bash --version reports
"version 4.1.5(1)-release"), this actually expands as:
test x==
followed by a space.

With Debian's bash 3.2-4 ("bash 3.2.39(1)-release"), this expands
as:
test x=x=
followed by a space.

Can anyone explain to me what bash is doing here?  I am trying to
write completion code for a utility that accepts some arguments
of the form "key1=value,key2=value,...", and this seemingly odd
behavior is making life difficult.
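
For what it's worth, "=" is in the default COMP_WORDBREAKS, so
readline and the COMP_WORDS array can disagree about where the
current word begins; a debugging sketch that logs what the completion
function actually receives (log path made up):

  _debug_complete () {
      {
          printf 'COMP_CWORD=%s\n' "$COMP_CWORD"
          printf 'COMP_WORDS:'; printf ' [%s]' "${COMP_WORDS[@]}"; echo
          printf 'COMP_WORDBREAKS=%q\n' "$COMP_WORDBREAKS"
      } >> /tmp/comp-debug.log
      COMPREPLY=()
  }
  complete -F _debug_complete test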

Thanks,

Ben.
-- 
Ben Pfaff 
http://benpfaff.org



Re: Verbatim pasting

2010-08-10 Thread Ben Pfaff
Andre Majorel  writes:

> Binding printable ASCII characters to readline functions is
> convenient but it can bite you when you paste text into a shell.

This also bites me from time to time when I cut-and-paste a
command from an editor window into a bash terminal window.  If
the line that I cut-and-paste happens to begin with a tab
character, then I get a message
Display all 3110 possibilities? (y or n)
and readline continues to interpret the rest of the line, so if
the line contains 'y' or a space before the first 'n' I get at
least one screenful of completions.

It's mildly annoying.
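
For what it's worth, newer readline releases (7.0, as shipped with
bash 4.4) can make pasted text inert; the ~/.inputrc line is:

  set enable-bracketed-paste on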
-- 
"Sanity is not statistical."
--George Orwell




bash feature request: pushd -v, popd -v

2005-07-15 Thread Ben Horowitz
Hi,

I love bash, and I've been using it for a number of years.  Recently,
I worked with software engineers who used tcsh primarily, where I grew
to appreciate one feature of tcsh: the ability to use the commands
pushd -v, and popd -v.

As you know, when the bash pushd and popd commands are successful,
they print the directory stack.  In tcsh, one can additionally issue
the command pushd -v, which is like the bash commands pushd followed
by dirs -v.  This feature appears not to be available in bash.

  tcsh> pushd -v /tmp
  0   /tmp
  1   /

I find this to be a good feature of tcsh because I find that the
output of dirs without the -v argument can get cluttered, especially
when there are many directories on the stack.

So, I'd like to request that this feature be added to bash, if such an
addition is feasible.  Obviously I'm going to keep using bash
regardless of whether this request is feasible. :)
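
In the meantime, a sketch of wrapper functions that approximate the
tcsh behaviour (they shadow the builtins in the current shell only):

  pushd () { builtin pushd "$@" > /dev/null && dirs -v; }
  popd ()  { builtin popd  "$@" > /dev/null && dirs -v; }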

Thanks,
Ben Horowitz

P.S. Here is version information - it is possible that this feature
has been added to a newer version of bash than I have:

$ bash --version
GNU bash, version 2.05b.0(1)-release (i386-pc-linux-gnu)
Copyright (C) 2002 Free Software Foundation, Inc.

$ uname --all
Linux [machine name omitted] 2.6.12.1 #2 SMP Thu Jun 23 14:01:21 PDT
2005 i686 GNU/Linux

$ gcc --version
gcc (GCC) 3.4.0




wait -n shouldn't collect multiple processes

2019-03-23 Thread Ben Elliston
In bash 4.4.19, wait -n will collect the exit statuses of multiple
processes if more than one has terminated -- not just one:

bje@bapbop:~$ sleep 10 & sleep 10 & sleep 10 & sleep 10 &
[1] 13296
[2] 13297
[3] 13298
[4] 13299
bje@bapbop:~$ wait -n
[1]   Done                    sleep 10
[2]   Done                    sleep 10
[3]-  Done                    sleep 10
[4]+  Done                    sleep 10

This makes it impossible to wait for the completion of any process and
then individually collect the exit status of each command. I think
wait -n should be guaranteed to only return one of the available
(terminated) processes. If there are multiple, they can be collected
by calling wait -n multiple times or calling wait without '-n'.
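
A sketch of the finer-grained alternative (save each $! and wait on
specific PIDs to get per-process statuses):

  pids=()
  for i in 3 2 1; do
      sleep "$i" &
      pids+=($!)
  done
  for pid in "${pids[@]}"; do
      wait "$pid"
      echo "pid $pid exited with status $?"
  done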

Ben



Re: wait -n shouldn't collect multiple processes

2019-03-23 Thread Ben Elliston
On Sat, Mar 23, 2019 at 11:48:33AM -0400, Chet Ramey wrote:

> What's your goal here? If you want to associate an exit status with
> a process, you're going to have to save $! and wait for each process
> in turn.

My goal is to run a small process pool where upon one process
completes, another one is started immediately. If I start (say) 10
processes and then wait on the first, I may have chosen the longest
running process.

Ben




Re: wait -n shouldn't collect multiple processes

2019-03-24 Thread Ben Elliston
On Sun, Mar 24, 2019 at 11:29:41AM -0400, Chet Ramey wrote:

> > My goal is to run a small process pool where upon one process
> > completes, another one is started immediately. If I start (say) 10
> > processes and then wait on the first, I may have chosen the longest
> > running process.
> 
> OK. This sounds like a candidate for a SIGCHLD trap. You're not interested
> in a particular process's exit status, so you can keep a count of running
> processes and start new ones out of a trap handler.

Ah, but I *am* interested in exit status. Imagine trying to implement
something akin to 'make -j' in bash. Make will stop if any of the
child processes fail.
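
A sketch of that pattern (needs bash 4.3+ for wait -n; the worker
command is made up): keep $maxjobs children running, start a
replacement as each one exits, and abort on the first failure,
roughly like 'make -j':

  #!/bin/bash
  maxjobs=10 running=0
  for task in $(seq 1 100); do
      ./simulate "$task" &
      if (( ++running >= maxjobs )); then
          wait -n || { echo 'a job failed, aborting' >&2; exit 1; }
          (( running-- ))
      fi
  done
  while (( running-- > 0 )); do      # drain the remainder
      wait -n || exit 1
  done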

Cheers, Ben





Re: wait -n shouldn't collect multiple processes

2019-03-25 Thread Ben Elliston
On Mon, Mar 25, 2019 at 10:49:32AM -0400, Chet Ramey wrote:

> This demonstrates that, despite what I said earlier, `wait -n' reaps
> one process at a time and returns its exit status.

Thanks a lot. Can I suggest that a small tweak be made to the
documentation to make this a bit clearer?

Cheers,
Ben




Re: wait -n shouldn't collect multiple processes

2019-03-25 Thread Ben Elliston
On Mon, Mar 25, 2019 at 04:53:02PM -0400, Chet Ramey wrote:

> "wait  waits  for any job to terminate and returns its exit status"
> 
> Doesn't that imply a single job?

Not as clearly as saying "wait waits for a single job to terminate"
:-) I guess I'm thinking that an explanation about the interactive vs
non-interactive behaviour might be useful.

Cheers, Ben

