Unexpected tilde expansion

2016-06-01 Thread Christian
Hello,

I have been playing around with tilde expansion lately and I think I
have discovered a case where the tilde is expanded even though,
according to the documentation, it should not be.

When running:

$ x=~

x is set to the current user's home directory as expected.

$ echo $x

/home/christian

But when executing

$ echo x=~

x=/home/christian

is returned. This word looks like a variable assignment, where the
expansion would be expected, but it is not one; for that reason there
should be no expansion, since the tilde is neither at the beginning of
the word nor part of an actual variable assignment.

sh and zsh both return the expected x=~

sh -c "echo x=~"

x=~

zsh -c "echo x=~"

x=~

bash -c "echo x=~"

x=/home/christian
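For what it's worth, quoting any part of the word suppresses the expansion; a quick illustration (the expanded path of course differs per user):

```shell
echo x=~     # bash expands the tilde: x=/home/<user>
echo 'x=~'   # quoted: prints x=~
echo x=\~    # escaped tilde: prints x=~
```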


Is my understanding of the manual incorrect, or is there some hidden
interaction with echo?

Greetings,

Christian Steinhaus





Fix module loading for OpenBSD

2022-12-08 Thread Christian Weisgerber
Dynamic loading of modules is broken on OpenBSD:

bash-5.2$ enable finfo
bash:/usr/local/lib/bash/finfo: undefined symbol 'sh_optind'
bash:/usr/local/lib/bash/finfo: undefined symbol 'sh_optarg'
bash: enable: finfo: not a shell builtin

This is trivially fixed; configure simply needs to add -rdynamic to
the build flags.

The FreeBSD entry is also bizarrely obsolete.  a.out hasn't been
a thing forever; freebsdelf must have been experimental if it ever
even existed; freebsd[3-9] fails to capture freebsd10 and above--
the oldest supported FreeBSD branch is 12.x right now.  All this
also leaves bash without functioning module loading when built on
a recent FreeBSD.  A simple "freebsd*) LOCAL_LDFLAGS=-rdynamic ;;"
entry would make more sense.


--- configure.ac.orig   Thu Dec  8 16:56:15 2022
+++ configure.ac	Thu Dec  8 16:56:38 2022
@@ -1197,16 +1197,17 @@
 dnl FreeBSD-3.x can have either a.out or ELF
 case "${host_os}" in
 freebsd[[3-9]]*)
	if test -x /usr/bin/objformat && test "`/usr/bin/objformat`" = "elf" ; then
LOCAL_LDFLAGS=-rdynamic # allow dynamic loading
fi ;;
 freebsdelf*)   LOCAL_LDFLAGS=-rdynamic ;;  # allow dynamic loading
 dragonfly*)LOCAL_LDFLAGS=-rdynamic ;;  # allow dynamic loading
+openbsd*)  LOCAL_LDFLAGS=-rdynamic ;;  # allow dynamic loading
 midnightbsd*)  LOCAL_LDFLAGS=-rdynamic ;;  # allow dynamic loading
 esac
 
 case "$host_cpu" in
 *cray*)	LOCAL_CFLAGS="-DCRAY" ;; # shell var so config.h can use it
 esac
 
 case "$host_cpu-$host_os" in
-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



loadables/finfo: fix time_t printing

2022-12-08 Thread Christian Weisgerber
loadables/finfo.c uses the %ld format string to print time_t values.
This is wrong on OpenBSD, where time_t is long long on all architectures.

I suggest %lld and a cast to long long.
Alternatively, %jd and intmax_t could be used.

--- examples/loadables/finfo.c.orig Mon Jun 29 16:56:32 2020
+++ examples/loadables/finfo.c  Thu Dec  8 16:24:34 2022
@@ -328,17 +328,17 @@
if (flags & OPT_ASCII)
printf("%s", ctime(&st->st_atime));
else
-   printf("%ld\n", st->st_atime);
+   printf("%lld\n", (long long)st->st_atime);
} else if (flags & OPT_MTIME) {
if (flags & OPT_ASCII)
printf("%s", ctime(&st->st_mtime));
else
-   printf("%ld\n", st->st_mtime);
+   printf("%lld\n", (long long)st->st_mtime);
} else if (flags & OPT_CTIME) {
if (flags & OPT_ASCII)
printf("%s", ctime(&st->st_ctime));
else
-   printf("%ld\n", st->st_ctime);
+   printf("%lld\n", (long long)st->st_ctime);
} else if (flags & OPT_DEV)
printf("%lu\n", (unsigned long)st->st_dev);
else if (flags & OPT_INO)
-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Building loadables depends on main build

2023-05-07 Thread Christian Weisgerber
Building the loadable modules depends on files created during the
main build.  However, the Makefile doesn't record any such dependency.
Running for instance "make -j10 all loadables" will fail due to a
lack of enforced sequencing.

A straightforward fix would be to make the "loadables" target depend
on ".made":

--- Makefile.in.orig
+++ Makefile.in
@@ -803,7 +803,7 @@ $(srcdir)/configure:	$(srcdir)/configure.ac $(srcdir)/
 reconfig: force
sh $(srcdir)/configure -C
 
-loadables:
+loadables: .made
cd $(LOADABLES_DIR) && $(MAKE) $(MFLAGS) all
 
 #newversion:   mkversion
-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



test -v difference between bash 5.1 and 5.2

2023-08-29 Thread Christian Schneider

Hi all,

Not sure if this is intended or not, but in bash 5.2-p15 one of our scripts 
is broken. It is related to test -v, which checks whether a variable is 
set, used together with arrays.


I condensed it to the following example:
#!/bin/bash

declare -A foo
foo=(["a"]="b" ["c"]="d")
declare -a bar
bar=("a" "b" "c")
declare -a baz
baz=("foo" "bar")
for i in "${baz[@]}" ; do
    echo $i
    if [ ! -v "$i"[@] ] ; then
        echo "$i not set"
    fi
done


with bash 5.2-p15 the output of this script is
foo
foo not set
bar

so, it doesn't work with associative arrays.
Instead, with 5.1-p16 the output is
foo
bar

so, in 5.1-p16 test -v with associative array works, but not with 5.2-p15.

I don't know if this is an intended change, so could you please take a look?
If it is intended, can you please explain why it was changed, and whether
there is an alternative for associative arrays?


BR, Christian

--
-
RADIODATA GmbH
Newtonstr. 18
12489 Berlin
Germany

Homepage:  www.radiodata.biz

USt_IdNr.: DE 195663499
WEEE-Reg.-Nr.: DE 63967380

Sitz der Gesellschaft: Berlin
Registergericht:   Amtsgericht Charlottenburg HRB  Nr.: 67865
Geschäftsführer:   Hans-Joachim Langermann, Malte Langermann



Re: test -v difference between bash 5.1 and 5.2

2023-08-30 Thread Christian Schneider
Hi all, thanks for your answers. To be honest, I couldn't follow all of 
your explanations, but the gist is that it's intentional and not a bug.
We will use the ${#foo[@]} > 0 test, which is probably the right one in 
our situation.
Background: the test is in shared code, and we are doing some sanity 
checks on the data passed to the shared code.


BR, Christian


On 29.08.23 at 20:19, Chet Ramey wrote:

On 8/29/23 11:56 AM, Kerin Millar wrote:


One hopes that the shell programmer knows what variable types he's
using, and uses the appropriate constructs.


Some elect to source shell code masquerading as configuration data (or 
are using programs that elect to do so). Otherwise, yes, definitely.


If you find yourself needing to know whether a variable is an associative
array, an indexed array, or a scalar, check the output of the @a variable
transformation.
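Chet's suggestion can be sketched as follows (requires bash 4.4 or later for the @a transformation; the variable names here are made up for illustration):

```shell
declare -A assoc=([a]=b [c]=d)   # associative array
declare -a indexed=(a b c)       # indexed array
scalar=hello                     # plain scalar

echo "${assoc@a}"    # A  (associative)
echo "${indexed@a}"  # a  (indexed)
echo "${scalar@a}"   # empty: no array attribute

# Version-independent check for "has at least one element":
(( ${#assoc[@]} > 0 )) && echo "assoc is non-empty"
```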






human-friendly ulimit values?

2024-02-21 Thread Christian Convey
When setting memory-size limits via "ulimit", users have to manually
convert from their intuitive units.

E.g., for limiting virtual memory to 8 gigabytes, the invocation is "ulimit
-v 8388608", rather than something like "ulimit -v 8gb".

If I were to submit a patch for this, is there any chance of it getting
accepted?
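In the meantime the conversion can be scripted; a hypothetical helper (the name to_kblocks and its suffix handling are my own invention, not an existing bash feature) that converts a human-readable size into the 1024-byte units that ulimit -v expects:

```shell
# Convert "8G", "512M", "64K", or a plain byte count into the
# 1024-byte blocks that `ulimit -v` takes as its argument.
to_kblocks() {
    local n=${1%[KkMmGg]*}                     # strip the unit suffix
    case $1 in
        *[Kk]*) echo "$n" ;;                   # kilobytes: already in units
        *[Mm]*) echo $(( n * 1024 )) ;;        # megabytes
        *[Gg]*) echo $(( n * 1024 * 1024 )) ;; # gigabytes
        *)      echo $(( n / 1024 )) ;;        # plain bytes
    esac
}

# ulimit -v "$(to_kblocks 8G)"    # equivalent to: ulimit -v 8388608
```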


Re: human-friendly ulimit values?

2024-02-21 Thread Christian Convey
On Wed, Feb 21, 2024 at 12:17 PM Andreas Schwab  wrote:
...

> Or ulimit -v $((8*1024*1024))
>

Good point.  If a user remembers that "-v" implicitly works in 1024-byte
units, that's a good shortcut.


Allocating new fds through redirection

2011-07-20 Thread Christian Ullrich
Configuration Information [Automatically generated, do not change]:
Machine: i386
OS: freebsd8.1
Compiler: cc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i386' 
-DCONF_OSTYPE='freebsd8.1' -DCONF_MACHTYPE='i386-portbld-freebsd8.1' 
-DCONF_VENDOR='portbld' -DLOCALEDIR='/usr/local/share/locale' -DPACKAGE='bash' 
-DSHELL  -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib  -I/usr/local/include 
-O -pipe -march=prescott
uname output: FreeBSD infra-build.traditionsa.lu 8.1-20101216-SNAP FreeBSD 
8.1-20101216-SNAP #0: Thu Dec 16 09:27:37 UTC 2010 
r...@public-build.traditionsa.lu:/usr/obj/usr/src/sys/GENERIC  i386
Machine Type: i386-portbld-freebsd8.1

Bash Version: 4.1
Patch Level: 10
Release Status: release

Description:
bash's actual behavior differs from what the manual says regarding
the allocation of new file descriptors by redirection:

Each redirection that may be preceded by a file descriptor
number may instead be preceded by a word of the form
{varname}. In this case, for each redirection operator
except >&- and <&-, the shell will allocate a file
descriptor greater than 10 and assign it to {varname}.
If >&- or <&- is preceded by {varname}, the value of
varname defines the file descriptor to close.

On this system, the first fd created by this procedure is
number 10, which is not greater than 10.

This is probably more of a documentation bug than a bash bug.


Repeat-By:
exec {NEW}>&1
echo $NEW

Fix:
If it's a documentation bug, "greater than 9".

If it's a bash bug,

--- bash-4.1/redir.c.orig   2011-07-20 11:48:56.0 +0200
+++ bash-4.1/redir.c2011-07-20 11:49:05.0 +0200
@@ -57,7 +57,7 @@
 #  include "input.h"
 #endif

-#define SHELL_FD_BASE  10
+#define SHELL_FD_BASE  11

 int expanding_redir;






Small markup error in bash.1

2012-08-04 Thread Christian Weisgerber
This turned up when I compared the output of groff(1) and mandoc(1)
(http://mdocml.bsd.lv/).

--- doc/bash.1.orig Sat Aug  4 21:34:54 2012
+++ doc/bash.1  Sat Aug  4 21:35:13 2012
@@ -2271,7 +2271,7 @@ The value of \fIp\fP determines whether or not the fra
 included.
 .IP
 If this variable is not set, \fBbash\fP acts as if it had the
-value \fB$\(aq\enreal\et%3lR\enuser\et%3lU\ensys\t%3lS\(aq\fP.
+value \fB$\(aq\enreal\et%3lR\enuser\et%3lU\ensys\et%3lS\(aq\fP.
 If the value is null, no timing information is displayed.
 A trailing newline is added when the format string is displayed.
 .PD 0
-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Bash completion buggy if -o nospace, -o filenames and -W used together.

2005-08-18 Thread Christian Boltz
Hello,

I've found a bug in bash autocompletion...


Configuration Information [Automatically generated, do not change]:
Machine: i586
OS: linux
Compiler: gcc -I/usr/src/packages/BUILD/bash-3.0 
-L/usr/src/packages/BUILD/bash-3.0/../readline-5.0
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i586' 
-DCONF_OSTYPE='linux' -DCONF_MACHTYPE='i586-suse-linux' 
-DCONF_VENDOR='suse' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' 
-DSHELL -DHAVE_CONFIG_H  -I.  -I. -I./include -I./lib   -O2 -march=i586 
-mcpu=i686 -fmessage-length=0 -Wall -g -D_GNU_SOURCE -Wall -pipe -g 
-fbranch-probabilities
uname output: Linux cboltz 2.6.11.4-21.7-default #1 Thu Jun 2 14:23:14 
UTC 2005 i686 i686 i386 GNU/Linux
Machine Type: i586-suse-linux

Bash Version: 3.0
Patch Level: 16
Release Status: release

Description:
Bash completion is buggy if -o nospace, -o filenames and -W are
used together. See the commented example below.

If you leave one of the options out, the problem doesn't occur.
Also, -o dirnames instead of -o filenames works fine.

Repeat-By:

("" means pressing the "tab" key)
# complete -d -o nospace -o filenames -W '--foo=' foo
# foo 
dir1/ dir2/  --foo=
# foo --f   # results in "--foo\="
# foo --foo\=   # <-- why does this backslash appear?
--foo=# It prevents dirname autocompletion :-(
# foo --foo=# <-- manually removed the backslash,
dir1/ dir2/  --foo=   # now it works :-)


It would be nice if you could fix that problem ;-)


Regards,

Christian Boltz
-- 
2-5 Sep 2005: wine festival in Insheim
With the Landjugend: Liquid, AH-Band and Deafen Goblins
More info: www.Landjugend-Insheim.de


___
Bug-bash mailing list
Bug-bash@gnu.org
http://lists.gnu.org/mailman/listinfo/bug-bash


3.2: po/ru.po encoding error

2006-10-14 Thread Christian Weisgerber
msgfmt will abort with an error about illegal UTF-8 encoding when
processing ru.po.  The Russian text in the file is actually encoded
in KOI8-R.

--- po/ru.po.orig   Sat Oct 14 19:29:17 2006
+++ po/ru.poSat Oct 14 19:29:29 2006
@@ -12,7 +12,7 @@ msgstr ""
 "Last-Translator: Evgeniy Dushistov <[EMAIL PROTECTED]>\n"
 "Language-Team: Russian <[EMAIL PROTECTED]>\n"
 "MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Type: text/plain; charset=KOI8-R\n"
 "Content-Transfer-Encoding: 8bit\n"
 "Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && 
n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2);\n"
 
-- 
Christian "naddy" Weisgerber  [EMAIL PROTECTED]




problems cross-compling bash-3.2

2007-08-27 Thread Christian Boon

Hello,

I want to cross-compile bash-3.2 for my PXA255 ARM processor, and it is 
working, although I can't get job control working.


I configure bash with:

bash-3.2]$ ./configure  --host=arm-xscale-linux-gnueabi 
--without-bash-malloc --enable-job-control


In the output I see:

checking if opendir() opens non-directories... configure: WARNING: 
cannot check opendir if cross compiling -- defaulting to no
checking whether ulimit can substitute for getdtablesize... configure: 
WARNING: cannot check ulimit if cross compiling -- defaulting to no
checking to see if getenv can be redefined... configure: WARNING: cannot 
check getenv redefinition if cross compiling -- defaulting to yes
checking if getcwd() will dynamically allocate memory... configure: 
WARNING: cannot check whether getcwd allocates memory when 
cross-compiling -- defaulting to no
checking for presence of POSIX-style sigsetjmp/siglongjmp... configure: 
WARNING: cannot check for sigsetjmp/siglongjmp if cross-compiling -- 
defaulting to missing

missing
checking whether or not strcoll and strcmp differ... configure: WARNING: 
cannot check strcoll if cross compiling -- defaulting to no

checking for standard-conformant putenv declaration... yes
checking for standard-conformant unsetenv declaration... yes
checking for printf floating point output in hex notation... configure: 
WARNING: cannot check printf if cross compiling -- defaulting to no
checking if signal handlers must be reinstalled when invoked... 
configure: WARNING: cannot check signal handling if cross compiling -- 
defaulting to no
checking for presence of necessary job control definitions... configure: 
WARNING: cannot check job control if cross-compiling -- defaulting to 
missing

missing
checking for presence of named pipes... configure: WARNING: cannot check 
for named pipes if cross-compiling -- defaulting to missing

missing
checking whether termios.h defines TIOCGWINSZ... no
checking whether sys/ioctl.h defines TIOCGWINSZ... yes
checking for TIOCSTAT in sys/ioctl.h... no
checking for FIONREAD in sys/ioctl.h... yes
checking whether WCONTINUED flag to waitpid is unavailable or available 
but broken... configure: WARNING: cannot check WCONTINUED if cross 
compiling -- defaulting to no

checking for speed_t in sys/types.h... no
checking whether getpw functions are declared in pwd.h... yes


In config.h I see:

/* Define if job control is unusable or unsupported. */
#define JOB_CONTROL_MISSING 1

Do I need some other packages to enable job control?
Also, when I remove the line:

#define JOB_CONTROL_MISSING 1

it doesn't work:

$ builtin bg
bash: builtin: bg: not a shell builtin

How can I make job control work?
I don't know if this is the right mailing list for this; otherwise,
please let me know which one is.


thanks in advance,

Chris.






Re: problems cross-compling bash-3.2

2007-08-27 Thread Christian Boon

Mike Frysinger wrote:

On Monday 27 August 2007, Christian Boon wrote:
  

i want to cross compile bash-3.2 for my pxa255 arm processor and its
working although i can't get job control working.



cross-compiling bash is known to be broken as it'll mix your host signal defs 
into the target binary

-mike
  

Is there an earlier version that isn't broken?

Chris





Re: problems cross-compling bash-3.2

2007-08-27 Thread Christian Boon

Chet Ramey wrote:
cross-compiling bash is known to be broken as it'll mix your host signal defs 
into the target binary



This is no longer true; bash-3.2 builds the signal list at invocation time
rather than compile time when cross-compiling.

Chet

  

So what do I need to get job control working?

Chris





Re: problems cross-compling bash-3.2

2007-08-27 Thread Christian Boon

Chet Ramey wrote:

Christian Boon wrote:
  

Hello,

i want to cross compile bash-3.2 for my pxa255 arm processor and its
working although i can't get job control working.



It tells you it won't be able to:

  

checking for presence of necessary job control definitions... configure:
WARNING: cannot check job control if cross-compiling -- defaulting to
missing
missing



You can look at the definition of BASH_SYS_JOB_CONTROL_MISSING in
aclocal.m4 and manually check which, if any, of the conditions is
not met.

Chet

  

This is from aclocal.m4:

AC_DEFUN(BASH_SYS_JOB_CONTROL_MISSING,
[AC_REQUIRE([BASH_SYS_SIGNAL_VINTAGE])
AC_MSG_CHECKING(for presence of necessary job control definitions)
AC_CACHE_VAL(bash_cv_job_control_missing,
[AC_TRY_RUN([
#include <sys/types.h>
#ifdef HAVE_SYS_WAIT_H
#include <sys/wait.h>
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include <signal.h>

Chris





Re: problems cross-compling bash-3.2

2007-08-28 Thread Christian Boon

Christian Boon wrote:

Chet Ramey wrote:

Christian Boon wrote:
 

Hello,

i want to cross compile bash-3.2 for my pxa255 arm processor and its
working although i can't get job control working.



It tells you it won't be able to:

 
checking for presence of necessary job control definitions... 
configure:

WARNING: cannot check job control if cross-compiling -- defaulting to
missing
missing



You can look at the definition of BASH_SYS_JOB_CONTROL_MISSING in
aclocal.m4 and manually check which, if any, of the conditions is
not met.

Chet

  

This is from aclocal.m4:

AC_DEFUN(BASH_SYS_JOB_CONTROL_MISSING,
[AC_REQUIRE([BASH_SYS_SIGNAL_VINTAGE])
AC_MSG_CHECKING(for presence of necessary job control definitions)
AC_CACHE_VAL(bash_cv_job_control_missing,
[AC_TRY_RUN([
#include 
#ifdef HAVE_SYS_WAIT_H
#include 
#endif
#ifdef HAVE_UNISTD_H
#include 
#endif
#include 

Chris


Does anybody know what is going wrong or what is missing?

thanks in advance,

Chris





Re: problems cross-compling bash-3.2

2007-08-29 Thread Christian Boon

Chet Ramey wrote:

Christian Boon wrote:

  

This is from aclocal.m4:

AC_DEFUN(BASH_SYS_JOB_CONTROL_MISSING,
[AC_REQUIRE([BASH_SYS_SIGNAL_VINTAGE])
AC_MSG_CHECKING(for presence of necessary job control definitions)
AC_CACHE_VAL(bash_cv_job_control_missing,
[AC_TRY_RUN([
#include 
#ifdef HAVE_SYS_WAIT_H
#include 
#endif
#ifdef HAVE_UNISTD_H
#include 
#endif
#include 

Chris

  

Does anybody know what is going wrong or what is missing?



Bash does not configure in job control when cross-compiling.  You can
remove the JOB_CONTROL_MISSING define from config.h and try building
without it, but there's no guarantee that the necessary capabilities
are present on the target system.  The macro you partially quoted above
encapsulates the required functionality.  If building without
JOB_CONTROL_MISSING defined doesn't result in a binary with job control
working, the tests in the macro give you a hint about where to look.

Chet



  

Removing JOB_CONTROL_MISSING from config.h doesn't work,
but when I do:

export bash_cv_job_control_missing=present

job control compiles and works.
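Putting the thread together, the working recipe is roughly the following (a sketch assembled from Chris's messages; the host triplet and flags are taken from his configure call, and the cache variable preseeds the test that configure cannot run when cross-compiling):

```shell
# Preseed the autoconf cache variable so configure does not default
# job control to "missing" when it cannot run the test program:
export bash_cv_job_control_missing=present
./configure --host=arm-xscale-linux-gnueabi \
    --without-bash-malloc --enable-job-control
```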

Chris





INIT: Id "s0" respawning too fast: disabled for 5 minutes

2007-09-12 Thread Christian Boon

Hi,

I updated bash-3.2 to patch level 25 and I get the following error:

INIT: Entering runlevel: 1
INIT: Id "s0" respawning too fast: disabled for 5 minutes
INIT: no more processes left in this runlevel

my inittab entry is:

s0:123:respawn:/sbin/getty -L -n -l /bin/bash ttyS0 115200 linux

getty is part of busybox version 1.6.1

When I used bash 3.2 without applying any patches, it worked without any 
problems.


Do you know what the problem could be?

my platform is an ARM PXA255 with Linux kernel 2.6.22

Chris






Home key doesn't work properly on prompts with ANSI escapes

2007-10-29 Thread Christian Schubert
Configuration Information [Automatically generated, do not change]:
Machine: i686
OS: linux-gnu
Compiler: i686-pc-linux-gnu-gcc
Compilation 
CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i686' -DCONF_OSTYPE='linux-gnu' 
-DCONF_MACHTYPE='i686-pc-linux-gnu' -DCONF_VENDOR='pc' 
-DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H   -I.  
-I. -I./include -I./lib   -Os -march=pentium-m -pipe -fomit-frame-pointer
uname output: Linux apexomobil 2.6.23 #56 SMP PREEMPT Mon Oct 15 21:52:48 CEST 
2007 i686 Genuine Intel(R) CPU T2500 @ 2.00GHz GenuineIntel GNU/Linux
Machine Type: i686-pc-linux-gnu

Bash Version: 3.2
Patch Level: 17
Release Status: release

Description:
With PS1='\e[32m$\e[m', when typing something at the prompt and then
pressing Home, the visible cursor is not positioned after the 
prompt ($) but behind the 8th typed character. The real cursor
is in the correct position, so when editing, the visible and real
command lines differ.

Example:

at the given prompt type e.g.: cd /home/apexo/
result:

$cd /home/apexo/
 ^ cursor is now here

now typing: echo  yields:
echoes "cd /home/apexo/"

I encountered this problem on the first 3.2 version I tested (which is 
quite some time ago now ... 3.1 doesn't have this bug)
  

Repeat-By:
PS1='\e[32m$\e[m'
type at least 11 characters
press Home
-> cursor is not where it is supposed to be, editing/deleting
characters now messes up the displayed command line (i.e. it
doesn't reflect what's in the buffer)




Re: Home key doesn't work properly on prompts with ANSI escapes

2007-10-29 Thread Christian Schubert
Sorry for not reading the documentation ... it just used to work and I didn't 
expect it to break :)

Thanks for clarifying things.

Regards,
Christian


On Monday 29 October 2007 18:26:06 Bob Proulx wrote:
> Christian Schubert wrote:
> > PS1='\e[32m$\e[m'
>
> Non-printing characters need to be bracketed with \[...\] to inform bash
> that they do not display.  Please see the bash documentation in the
> section on "PROMPTING".
>
> \[ begin  a sequence of non-printing characters, which could
>be used to embed a terminal  control  sequence  into  the
>prompt
> \] end a sequence of non-printing characters
>
> I believe that will fix your problem.
>
> Bob
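Applied to the prompt from the original report, Bob's bracketing looks like this (a sketch; the color values are unchanged from the report):

```shell
# \[ and \] mark the enclosed escape sequences as zero-width for
# readline, so its cursor arithmetic stays correct:
PS1='\[\e[32m\]$\[\e[m\]'
```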





-e does not work inside subshell when using || or && operator outside

2008-08-06 Thread Christian Jaeger
(This is a resend because the gnu.org ml server didn't accept the
sender email address that was given by my mailing setup; the first
attempt has also been sent to [EMAIL PROTECTED])

Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL 
-DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include -I../bash/lib   -g -O2 -Wall
uname output: Linux novo 2.6.26 #9 SMP Sat Jul 19 20:29:30 CEST 2008 x86_64 
GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 3.2
Patch Level: 39
Release Status: release

Description:
Inside subshells, -e handling is turned off or ignored if the || or &&
operator is used outside.

This is reminiscent of the bug shown at
http://lists.gnu.org/archive/html/bug-bash/2008-01/msg00080.html

I always thought that the meaning of ( ) is to just fork, or
run the subshell body as if it were run with bash -c '..' but
without the overhead of exec'ing a new shell in the forked
process. But these examples show that ( ) does not just fork.

Coming from a Perl background, I can even sort of understand
the behaviour in the older bug report above, namely that a
false-returning ( .. ) does not exit the shell, so as to let
one examine $? manually (like $@ in Perl after eval {
.. }), although I'm not exactly convinced this is such a good
idea (running if ( .. ); then echo $?; fi would still allow
checking for the reason in $?).

But now in this case I *have* to rely on the above weird
behaviour to be able to use -e at all, which makes the code
ugly, and one needs to get *two* things right to propagate
errors correctly (do not use || if one wants to check for the
error manually, and if one doesn't, *still* write an if
statement to check $? explicitly).

Repeat-By:
$ bash -c '( set -e ; false; echo "hu?" ) || echo "false"'
hu?
# expecting "false"
$ bash -c '( set -e ; false; echo "hu?" ); if [ $? -ne 0 ]; then echo 
"false"; fi'
false

# Also:
$ bash -c '( set -e ; false; echo "hu?" ) && echo "false"'
hu?
false
# expecting no output

Fix:
I don't know how to fix this in a backwards-compatible way
(if that's a concern), but if there is no way, I suggest
documenting it clearly, preferably in a section on
"quirks" or similar.






Re: -e does not work inside subshell when using || or && operator outside

2008-08-06 Thread Christian Jaeger
Sorry, I've now downloaded the list archives and by searching them 
realized that the same bug has been reported twice in the last 1.5 
years. I should have done that first; someone in the #bash IRC channel 
actually pointed me to the other bug (the link to the ml archive), and 
somehow from that point on I stopped thinking to check for duplicates.


I don't have the energy for checking the
http://www.opengroup.org/onlinepubs/009695399/utilities/set.html link 
posted in response to one of the earlier reports right now. Maybe later.


Thanks anyway-
Christian.





bash 4.x filters out environmental variables containing a dot in the name

2009-06-25 Thread Christian Krause
Configuration Information [Automatically generated, do not change]:
Machine: i386
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i386'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i386-redhat-linux-gnu'
-DCONF_VENDOR='redhat' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
-DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib  -D_GNU_SOURCE
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m32 -march=i586 -mtune=generic
-fasynchronous-unwind-tables
uname output: Linux noname 2.6.29.4-167.fc11.i686.PAE #1 SMP Wed May 27
17:28:22 EDT 2009 i686 i686 i386 GNU/Linux
Machine Type: i386-redhat-linux-gnu

Bash Version: 4.0
Patch Level: 16
Release Status: release

Description:
During the compilation of the linux kernel (configured to user mode
linux) I've discovered the following problem of bash 4.0 (which is now
delivered by Fedora 11):

If an environment variable contains a "." in its name, the new bash
4.0 filters it out. It will not be visible via "set", nor will it be
inherited by executed sub-shells or processes.

This behavior explains the compile problem:
- the kernel's make system uses make variables like
"CPPFLAGS_vmlinux.lds"
- they are exported via "export" in the Makefile
- subsequent make processes are started (via bash, since SHELL=/bin/bash
is set in the Makefile)
- they don't get the variable inherited, because the bash filters it out
- the build will fail since the called Makefile expects this variable

I've installed for testing purposes bash 3.2 on this system and the
problems were gone.


Repeat-By:

Fedora 10 (GNU bash, version 3.2.39(1)-release (i386-redhat-linux-gnu))
$ env foo.foo=bar bash
$ set |grep foo
foo.foo=bar
$ cat /proc/$$/environ |grep foo\.foo
Binary file (standard input) matches
$ bash
$ set |grep foo
foo.foo=bar

Fedora 11 (GNU bash, version 4.0.16(1)-release (i386-redhat-linux-gnu))
$ env foo.foo=bar bash
$ set |grep foo
$ cat /proc/$$/environ |grep foo\.foo
Binary file (standard input) matches
$ bash
$ set |grep foo
$ cat /proc/$$/environ |grep foo\.foo
$

This shows that in F11:
* the environment variable foo.foo is not added to the bash's list of
environment variables accessible via "set"
* the variable is still in the process's environment
* the variable will be filtered out when the bash executes a subsequent
process

This change may also affect other make-based build systems which rely on
the fact that a make variable which is exported via the keyword "export"
in the Makefile is accessible in nested calls of make.


Best regards,
Christian




Re: bash 4.x filters out environmental variables containing a dot in the name

2009-06-26 Thread Christian Krause
Hi Chet,

Thanks for the answers. The problem is that this behavior of bash
creates real problems elsewhere, probably with a larger impact.
Before asking the kernel developers to change parts of the Linux kernel's
build system, I'd like to be sure whether bash-4.x's behavior is correct
or not. Please see my comments below:

Chet Ramey wrote:
> Mike Frysinger wrote:
>> On Thursday 25 June 2009 19:17:38 Chet Ramey wrote:
>>> Christian Krause wrote:
>>>> Bash Version: 4.0
>>>> Patch Level: 16
>>>> Release Status: release
>>>>
>>>> Description:
>>>> During the compilation of the linux kernel (configured to user mode
>>>> linux) I've discovered the following problem of bash 4.0 (which is now
>>>> delivered by Fedora 11):
>>>>
>>>> If an environmental variable contains a "." in its name, the new bash
>>>> 4.0 filters out these variables. They will not be visible via "set" nor
>>>> will they be inherited to executed sub-shells or processes.
>>> Such strings are invalid shell or environment variable names.  It was a
>>> bug in bash-3.2 that it created invalid variables from the initial
>>> environment.
>> and it's a bug that bash-4 is filtering them.  not allowing them to be used 
>> in 
>> the shell is fine (echo ${vmlinux.lds}), but removing them from the 
>> environment and thus not allowing other applications to leverage them is 
>> not.  
> 
> It's not a bug.  Posix explicitly restricts environment variable names
> to consist of uppercase letters, lowercase letters, digits, and

As far as I interpret the standard (IEEE Std 1003.1, 2004 Edition), the
general definition for environment variables is something like this:

- names must not contain "="
- if it should be portable, the names should only contain characters
from the portable character set (which includes ".")

Sure, there is a restriction that variables used by the shell (and the
utilities described in the standard) should only contain the characters
you described.

However, since not all programs belong to this set, I don't see an
explicit statement which denies the usage of e.g. "." in environmental
variables in general.

> underscores.  There is no provision for variables with invalid names
> that don't exactly exist and are just passed down to applications in
> their environment.  The environment is constructed from variables with
> the export attribute set (another thing Posix explicitly states); things
> that aren't valid variables don't get in there.

Hm, I'm not sure I can agree to this. The problem is, that for other
programs names of environmental variables containing a "." are not
invalid (although they are invalid for the bash).

As far as I understand bash's behavior, environment variables that
were put into the process environment while exec'ing a shell are
exported to subsequent processes of this shell without being
explicitly "exported".
So, if bash passes variables contained in its process environment to
sub-processes anyway (without any explicit "export"), I would argue it
would be more natural for bash not to filter them. If any other
process execs itself, the new process image will have the same
environment variables set (unless execle is used)...

Given all of these facts I still tend to say that the bash shouldn't
filter them...
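The pass-through being debated can be observed directly. This is a
sketch (the variable name is invented); the middle command is exactly
where behavior varies between bash versions, since bash 4.x stripped
dotted names from the environment it handed to children:

```shell
# env(1) happily places a name containing a dot into the environment,
# shell naming rules notwithstanding; a second env shows it survives a
# plain exec chain.
env 'my.var=hello' env | grep '^my\.var='
# Put a shell in the middle: whether my.var still reaches the child's
# environment is the filtering question discussed in this thread.
env 'my.var=hello' bash -c 'env | grep "^my\.var=" || echo filtered'
```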


Best regards,
Christian




Re: bash 4.x filters out environmental variables containing a dot in the name

2009-07-20 Thread Christian Krause
Hi Chet,

Chet Ramey wrote:
> Posix also says that "variables" are inherited from the environment.  That
> word has a very specific meaning, as was reiterated during the $@ and set -u
> discussion.  The same "variables" language is used when Posix talks about
> creating the environment for shell execution environments.
> 
> The question is whether "tolerant" just means that the shell doesn't display
> a warning message about the assignment, as it does when you use an invalid
> variable name in an assignment statement, or exit with a variable assignment
> error, or dump core with a seg fault, as in many historical versions of sh.
> It may or may not also mean that the shell passes inherited invalid variable
> names to child processes.
> 
> It seems, though, that there might be enough use for me to try and make it
> work.  While I'm not wild about creating yet another class of variable,
> there might be a way to do it simply.

What's the current status of this issue? I'd like to offer my help  for
testing the fix since I have a strong interest to get the problem (linux
kernel (UML) can't be compiled properly with bash 4.x) resolved.

Thank you very much in advance!


Best regards,
Christian




4.0 patch 25 breaks with gcc 3.3

2009-07-30 Thread Christian Weisgerber
Bash-4.0 official patch 25 adds a section that looks to the compiler
like a nested C comment.  Obviously somebody recognized this and
added #if 0 ... #endif around the whole comment.

Alas, GCC 3.3 still errors out:

cc -c  -DHAVE_CONFIG_H -DSHELL -I/usr/local/include -I. -I../.. -I../.. -I../../include -I../../lib   -O2 -pipe glob.c
glob.c:1023:69: missing terminating ' character

Apparently GCC 3.4 and later versions handle this as intended.

[I'm not subscribed to bug-bash.]
-- 
Christian "naddy" Weisgerber  na...@mips.inka.de




$() parsing still broken

2009-09-18 Thread Christian Weisgerber
Even in the latest bash, 4.0.33, $() parsing is still broken:

$ bash -c 'echo $(echo \|)'
bash: -c: line 0: unexpected EOF while looking for matching `)'
bash: -c: line 1: syntax error: unexpected end of file

And yes, this is bash built with GNU bison, not Berkeley yacc.
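For reference, this is how the command behaves on a bash whose parser
fix actually took effect (as worked out later in this thread, the BSD
ports were regenerating the parser with the wrong yacc):

```shell
# The escaped '|' inside $() should pass through literally; the broken
# parser instead reported an unexpected EOF looking for matching ')'.
bash -c 'echo $(echo \|)'
# prints: |
```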

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de




Re: $() parsing still broken

2009-09-19 Thread Christian Weisgerber
Andreas Schwab:

> > Even in the latest bash, 4.0.33, $() parsing is still broken:
> >
> > $ bash -c 'echo $(echo \|)'
> > bash: -c: line 0: unexpected EOF while looking for matching `)'
> > bash: -c: line 1: syntax error: unexpected end of file
> 
> This has been fixed with patch 1, are you sure you are running the
> patched version?

Yes, I am.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de




Re: $() parsing still broken

2009-09-20 Thread Christian Weisgerber
Chet Ramey:

> Christian Weisgerber wrote:
> > Even in the latest bash, 4.0.33, $() parsing is still broken:
> > 
> > $ bash -c 'echo $(echo \|)'
> > bash: -c: line 0: unexpected EOF while looking for matching `)'
> > bash: -c: line 1: syntax error: unexpected end of file
> > 
> > And yes, this is bash built with GNU bison, not Berkeley yacc.
> 
> The rest of the information from bashbug would help, since I can't
> reproduce it:

--->
Machine: amd64
OS: freebsd7.2
Compiler: cc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='amd64' -DCONF_OSTYPE='freebsd7.2' -DCONF_MACHTYPE='amd64-portbld-freebsd7.2' -DCONF_VENDOR='portbld' -DLOCALEDIR='/usr/local/share/locale' -DPACKAGE='bash' -DSHELL  -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib  -I/usr/local/include -O2 -fno-strict-aliasing -pipe
uname output: FreeBSD lorvorc.mips.inka.de 7.2-STABLE FreeBSD 7.2-STABLE #0: Sun Sep 13 14:00:52 CEST 2009 na...@lorvorc.mips.inka.de:/usr/obj/usr/src/sys/GENERIC  amd64
Machine Type: amd64-portbld-freebsd7.2

Bash Version: 4.0
Patch Level: 33
Release Status: release
<---

--->
Machine: alpha
OS: openbsd4.6
Compiler: cc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='alpha' -DCONF_OSTYPE='openbsd4.6' -DCONF_MACHTYPE='alpha-unknown-openbsd4.6' -DCONF_VENDOR='unknown' -DLOCALEDIR='/usr/local/share/locale' -DPACKAGE='bash' -DSHELL  -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib  -I/usr/local/include -O2 -pipe
uname output: OpenBSD kemoauc.mips.inka.de 4.6 GENERIC#223 alpha
Machine Type: alpha-unknown-openbsd4.6

Bash Version: 4.0
Patch Level: 33
Release Status: release
<---

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de




Re: $() parsing still broken

2009-09-20 Thread Christian Weisgerber
Chet Ramey:

> I suppose the only real variable is the revision of bison:

2.4.1 and 2.3 on my FreeBSD and OpenBSD box, respectively.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de




Re: $() parsing still broken

2009-09-20 Thread Christian Weisgerber
Chet Ramey:

> >> I suppose the only real variable is the revision of bison:
> > 
> > 2.4.1 and 2.3 on my FreeBSD and OpenBSD box, respectively.
> 
> Try 1.875 and see if the problems go away.

Red herring.  I found the problem, it is embarrassingly stupid, and
Andreas was right.  The fault lies with the official FreeBSD and
OpenBSD ports.

Since bash 4's $() parsing is broken if you build the parser with
BSD yacc, the ports now specify the use of bison by passing YACC=bison
to the configure script.  Running "bison -d parse.y" creates
parse.tab.c rather than y.tab.c, so y.tab.c is never regenerated,
and the parser fix from patch 001 is effectively not applied.

We just need to use YACC="bison -y".

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de




Wrong AC_TRY_COMPILE idiom

2017-09-25 Thread Christian Weisgerber
I'm forwarding this bug report by Robert Nagy ,
which also concerns bash 4.4:

>
Unbreak autoconf checks with clang by not using nested functions
in the checks.

Someone clearly did not read the autoconf documentation because
using the following functions with a function declaration inside
the body will end up declaring a function inside a function.

- AC_TRY_COMPILE( [], [ int main() { return 0; } ],
- AC_LANG_PROGRAM([[]], [[int main (void) { return 0; }]])],
- AC_TRY_LINK([], [int main (void) { return 0; }],

Result:

int
main ()
{
int main (void) { return 0; }
  ;
  return 0;
}

Nested functions are a gcc extension which is not supported by
clang.

test.c:4:17: error: function definition is not allowed here
int main (void) { return 0; }
^
1 error generated.

This causes tests to fail in the configure scripts resulting in
missing compile and link time flags from the builds.

This resulted in weird behaviour of several software, like gnome
hanging completely due to gtk+3 not being built properly.

This change introduces the following fixes:

- remove the int main() declaration from AC_TRY_COMPILE, AC_LANG_PROGRAM, and
  AC_TRY_LINK bodies, as the macro already supplies one and people misused them

- change to use AC_LANG_SOURCE when needed, in case a complete source block is
  specified

<

Here's the trivial patch for bash 4.4:

--- configure.ac.orig   Wed Sep  7 22:56:28 2016
+++ configure.acMon Sep 25 19:03:03 2017
@@ -808,7 +808,7 @@
 AC_CACHE_VAL(bash_cv_strtold_broken,
[AC_TRY_COMPILE(
[#include <stdlib.h>],
-   [int main() { long double r; char *foo, bar; r = strtold(foo, 
&bar);}],
+   [long double r; char *foo, bar; r = strtold(foo, &bar);],
bash_cv_strtold_broken=no, bash_cv_strtold_broken=yes,
[AC_MSG_WARN(cannot check for broken strtold if cross-compiling, 
defaulting to no)])
]

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



glibc [BZ #22145]: {p,t}ty fds and mount namespaces

2017-10-09 Thread Christian Brauner
Hi,

We've received a bug report against glibc [1] relating to {p,t}ty file
descriptors from devpts mounts in different mount namespaces.  In case
ttyname{_r}() detects that the path for a pty slave file descriptor (e.g.
/dev/pts/4) does not exist in the caller's mount namespace, or that the
path exists by pure chance (because someone has e.g. opened five {p,t}tys
in the current mount namespace) but in fact refers to a different device,
then ttyname{_r}() will set/return ENODEV. On Linux the caller can treat
this as a hint that the {p,t}ty file descriptor's path does not exist in
the current mount namespace. However, the case where the path for the
{p,t}ty file descriptor does exist in the current mount namespace,
although the {p,t}ty fd belongs to a devpts mount in another mount
namespace, seems to confuse bash such that it fails to initialize job
control correctly. This at least is my current analysis of the problem.
A common scenario where this happens is with /dev/console in containers.
Usually container runtimes/managers will call openpty() on a ptmx device in the
host's mount namespace to safely allocate a {p,t}ty master-slave pair since they
can't trust the container's devpts mount after the container's init binary has
started (potentially malicious fuse mounts and what not).  The slave {p,t}ty fd
will then usually be sent to the container and bind-mounted over the container's
/dev/console which in this scenario is simply a regular file. This is especially
common with unprivileged containers where mknod() syscalls are not possible. In
this scenario ttyname{_r}() will correctly report that /dev/console does in fact
refer to a {p,t}ty device whose path exists in the current mount namespace but
whose origin is a devpts mount in a different mount namespace. Bash however
seems to not like this at all and fails to initialize job control correctly. In
case you have lxc available this is simply reproducible by creating an
unprivileged container and calling lxc-execute -n  -- bash.  If
you could look into this and whether that makes sense to you it'd be greatly
appreciated.

Fwiw, zsh does not seem to run into a problem here.

Thanks
Christian

[1]: https://sourceware.org/bugzilla/show_bug.cgi?id=22145



Re: glibc [BZ #22145]: {p,t}ty fds and mount namespaces

2017-10-10 Thread Christian Brauner
On Tue, Oct 10, 2017 at 5:44 PM, Chet Ramey  wrote:
> On 10/9/17 10:37 AM, Christian Brauner wrote:
>
>> A common scenario where this happens is with /dev/console in containers.
>> Usually container runtimes/managers will call openpty() on a ptmx device in 
>> the
>> host's mount namespace to safely allocate a {p,t}ty master-slave pair since 
>> they
>> can't trust the container's devpts mount after the container's init binary 
>> has
>> started (potentially malicious fuse mounts and what not).  The slave {p,t}ty 
>> fd
>> will then usually be sent to the container and bind-mounted over the 
>> container's
>> /dev/console which in this scenario is simply a regular file. This is 
>> especially
>> common with unprivileged containers where mknod() syscalls are not possible. 
>> In
>> this scenario ttyname{_r}() will correctly report that /dev/console does in 
>> fact
>> refer to a {p,t}ty device whose path exists in the current mount namespace 
>> but
>> whose origin is a devpts mount in a different mount namespace. Bash however
>> seems to not like this at all and fails to initialize job control correctly. 
>> In
>> case you have lxc available this is simply reproducible by creating an
>> unprivileged container and calling lxc-execute -n  -- bash.  
>> If
>> you could look into this and whether that makes sense to you it'd be greatly
>> appreciated.
>
> Bash doesn't try to open /dev/console. It will, however, try to open
> /dev/tty and, if that fails, call ttyname() to get the pathname of a
> terminal device to pass to open(). The idea is that if you're started
> without a controlling terminal, the first terminal device you open becomes
> your controlling terminal. However, if that fails, job control will
> eventually be disabled -- you can't have job control without a controlling
> terminal.
>
> Under the circumstances described in the original bug report, bash attempts
> to use stderr as its controlling terminal (having already called isatty and
> been told that it's a terminal), discovers that it cannot set the process
> group on that, and disables job control. If you can't set the process group
> on what you think is your controlling terminal, you're not going to be able
> to do job control, period.

Right, this is what I found confusing at first: there was in fact a
controlling terminal, but bash didn't initialize job control. Now I
think I have traced it down to some programs not being able to
setsid(), or not calling setsid() before exec()ing bash in the child
after a fork(). This is what is causing job control to fail.

Thanks Chet!
Christian

>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: Bash patches format

2018-05-30 Thread Christian Weisgerber
Marty E. Plummer:

> Maintainers, I'd really like to hear your thoughts on this matter. If
> the diffs are produced as -p1 unified diffs, then downstreams who do
> convert from -p0 context won't have to, and distros who work around it
> won't either.

Speaking in my capacity as the OpenBSD packager for bash, either
way is fine.  We use the upstream patches as provided ("distribution
patches").  These are applied with -p0 by default, but it's utterly
trivial to specify -p1 if necessary.  The choice of context vs.
unified diffs is immaterial; patch(1) handles both formats.
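A toy demonstration of the strip count (file name and hunk invented
for illustration); the same mechanics apply to the distribution
patches, just with -p0 paths:

```shell
# Create a one-line file plus a git-style hunk for it, then apply the
# hunk with the strip count matching its a/ b/ path prefixes.
printf 'hello\n' > file.txt
cat > fix.diff <<'EOF'
--- a/file.txt
+++ b/file.txt
@@ -1 +1 @@
-hello
+goodbye
EOF
patch -p1 < fix.diff   # -p1 strips the leading a/ and b/ components
cat file.txt
```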

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



The loadables are built during install

2018-09-07 Thread Christian Weisgerber
There is an issue in the build framework of bash 4.4.23 (and 5.0-alpha):
"make all" does not build examples/loadables.
"make install" however recurses into examples/loadables and, since
the loadable modules aren't there, proceeds to build them before
installation.

Shouldn't the ".made" target have "loadables" as a prerequisite?

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



examples/loadables/finfo.c type problems

2018-09-07 Thread Christian Weisgerber
Compiling examples/loadables/finfo.c (bash 4.4.23, 5.0-alpha) on
OpenBSD produces various warnings about ill-matched types:

--->
finfo.c:325:20: warning: format specifies type 'long' but the argument has type 'time_t' (aka 'long long') [-Wformat]
printf("%ld\n", st->st_atime);
        ~~~     ^~~~~~~~~~~~
        %lld
finfo.c:330:20: warning: format specifies type 'long' but the argument has type 'time_t' (aka 'long long') [-Wformat]
printf("%ld\n", st->st_mtime);
        ~~~     ^~~~~~~~~~~~
        %lld
finfo.c:335:20: warning: format specifies type 'long' but the argument has type 'time_t' (aka 'long long') [-Wformat]
printf("%ld\n", st->st_ctime);
        ~~~     ^~~~~~~~~~~~
        %lld
finfo.c:339:18: warning: format specifies type 'int' but the argument has type 'ino_t' (aka 'unsigned long long') [-Wformat]
printf("%d\n", st->st_ino);
        ~~     ^~~~~~~~~~
        %llu
finfo.c:341:34: warning: format specifies type 'long' but the argument has type 'ino_t' (aka 'unsigned long long') [-Wformat]
printf("%d:%ld\n", st->st_dev, st->st_ino);
           ~~~                 ^~~~~~~~~~
           %llu
5 warnings generated.
<---

I was thinking about how to fix those, but then I noticed existing bad
casts in the code, e.g.,

printf("%ld\n", (long) st->st_size);

which potentially truncate values.

I don't know if the example loadables are considered to be more
than, well, rough examples, so I'm uncertain if this should even
be considered a problem.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



5.0alpha: tests/test1.sub is unportable

2018-09-08 Thread Christian Weisgerber
I'm not sure what the new tests/test1.sub in bash 5.0alpha is
intended to test, but it fails on OpenBSD because /dev/fd/* are
actual character devices there, so test -p /dev/fd/6 will always
be unsuccessful.
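The check in question can be sketched like this. On Linux, where
/dev/fd/* are symlinks into /proc, the test succeeds when fd 0 really
is a pipe; on OpenBSD /dev/fd/* are character devices, so -p can never
succeed there:

```shell
# Feed a pipe into a group command and ask test -p whether fd 0,
# reached through /dev/fd, looks like a FIFO.
: | { test -p /dev/fd/0 && echo "pipe" || echo "not a pipe"; }
```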

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Re: examples/loadables/finfo.c type problems

2018-09-09 Thread Christian Weisgerber
Chet Ramey:

> > printf("%ld\n", (long) st->st_size);
> > 
> > which potentially truncate values.
> 
> Pretty much all the systems bash runs on these days have 64-bit longs.
> How big a file do you have? But the fix is the same as above.

32-bit platforms (IA-32, ARMv7) are still around.  And BSD has had
64-bit off_t on 32-bit architectures for about a quarter century.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



bash 5.0 ships with very old autoconf macros

2019-02-12 Thread Christian Weisgerber
The bash 5.0 release still ships with very old autoconf macros to
detect gettext.  In aclocal.m4, the copy of gettext.m4 and the
supporting lib-link.m4 are from gettext-0.12 dating from 2003.

In particular, the included version of AC_LIB_LINKFLAGS_BODY cannot
detect shared libraries on OpenBSD.  (It checks for *.so; OpenBSD
only has fully numbered libraries: *.so.0.0, etc.)

These macros should be updated to newer versions from a recent
release of gettext.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



5.0: CPPFLAGS doesn't propagate to loadables

2019-02-12 Thread Christian Weisgerber
There is a small omission in bash 5.0's build infrastructure:
The CPPFLAGS variable doesn't propagate to the actual compiler flags
used to build the loadables.  In practice, this means that the seq
loadable will fail to build on operating systems that have libintl
outside the default paths (e.g. OpenBSD with GNU gettext in /usr/local):

cc -fPIC -DHAVE_CONFIG_H -DSHELL -DDEV_FD_STAT_BROKEN -O2 -pipe -Wno-parentheses -Wno-format-security -I. -I.. -I../.. -I../../lib -I../../builtins -I.  -I../../include -I/usr/obj/bash-5.0.2/bash-5.0 -I/usr/obj/bash-5.0.2/bash-5.0/lib  -I/usr/obj/bash-5.0.2/bash-5.0/builtins  -c -o seq.o seq.c
In file included from seq.c:32:
In file included from ../../bashintl.h:30:
../../include/gettext.h:27:11: fatal error: 'libintl.h' file not found
# include <libintl.h>
          ^~~~~~~~~~~
1 error generated.

Trivial fix:

Index: examples/loadables/Makefile.in
--- examples/loadables/Makefile.in.orig
+++ examples/loadables/Makefile.in
@@ -76,7 +76,7 @@ INTL_BUILDDIR = ${LIBBUILD}/intl
 INTL_INC = @INTL_INC@
 LIBINTL_H = @LIBINTL_H@
 
-CCFLAGS = $(DEFS) $(LOCAL_DEFS) $(LOCAL_CFLAGS) $(CFLAGS)
+CCFLAGS = $(DEFS) $(LOCAL_DEFS) $(LOCAL_CFLAGS) $(CPPFLAGS) $(CFLAGS)
 
 #
 # These values are generated for configure by ${topdir}/support/shobj-conf.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Crash when moving full-width glyphs across lines

2019-12-16 Thread Christian Dürr
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -DDEFAULT_PATH_VALUE='/usr/local/sbin:/usr/local/bin:/usr/bin' -DSTANDARD_UTILS_PATH='/usr/bin' -DSYS_BASHRC='/etc/bash.bashrc' -DSYS_BASH_LOGOUT='/etc/bash.bash_logout' -DNON_INTERACTIVE_LOGIN_SHELLS -Wno-parentheses -Wno-format-security
uname output: Linux archhq 5.4.2-arch1-1 #1 SMP PREEMPT Thu, 05 Dec 2019 12:29:40 + x86_64 GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 5.0
Patch Level: 11
Release Status: release

Description:
Bash will crash when moving full-width unicode glyphs like `こ` across line
boundaries.

Repeat-By:
Paste `https://こんにち` into bash and add whitespace before it until it is
in the next line. Then start deleting that whitespace until it is on the
previous line again. It should crash as soon as only `https://` is on the
original line.



Re: y.tab.c inclusion within the source tree

2014-09-28 Thread Christian Weisgerber
Mark Goldfinch:

> Can someone clarify to me why y.tab.c is included within the bash source
> tree if it is generated from parse.y?
> 
> If one looks in the FreeBSD ports tree, they're deliberately taking the
> initiative to touch parse.y to ensure that y.tab.c is always rebuilt.

They also have a dependency on the bison port, because parse.y does
not build correctly with FreeBSD's yacc(1).  You end up with a bash
that has broken $(...) parsing.  Same issue on OpenBSD, where the
port doesn't touch parse.y because there is no need to.

> If y.tab.c's timestamp ends up being newer than parse.y,

Why would this happen?

> a patch which (correctly) only patches parse.y,

... will cause parse.y to have a newer timestamp.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Re: Patch file bash42-049 is broken

2014-09-28 Thread Christian Weisgerber
Deron Meranda:

> I was wondering if anybody was going to address the problem with 4.2 patch
> 49 ?
> 
> It is still corrupted on the FTP server.  There are a few lines that appear
> to have been deleted out of the middle of the patch file.

Indeed.

> Not only is there a critical line of code missing, but the 'patch'
> command will also fail when used with the --fuzz=0 option -- which is
> something that rpmbuild (Fedora, etc) uses.

That's GNU patch.  OpenBSD's patch just fails with it.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Help output has bad indentation

2015-01-21 Thread Christian Weisgerber
The output of "help " suffers from various indentation problems.
E.g., an excerpt from "help history":

...
  -aappend history lines from this session to the history file
  -nread all history lines not already read from the history file
  -rread the history file and append the contents to the history
list
  -wwrite the current history to the history file
and append them to the history list

  -pperform history expansion on each ARG and display the result
without storing it in the history list
  -sappend the ARGs to the history list as a single entry
...

The problem is that the documentation in builtins/*.def is written
starting in column 1 and includes tab characters, but the help
output is then indented by BASE_INDENT (4) characters.

Affects at least 4.2.x, 4.3.x.
-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Re: Help output has bad indentation

2015-01-21 Thread Christian Weisgerber
Christian Weisgerber:

> The output of "help " suffers from various indentation problems.

PS:
I ran *.def through expand(1), which is one way to fix the problem,
but this also reveals that some help texts run over the 80-column
limit when indented by four characters: mapfile, read, test, ...
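A toy reproduction of the tab-stop problem (the file name is
invented); the same shift is what mangles the real .def help texts:

```shell
# A help line written with a tab against column 1 aligns as intended,
# but after the 4-column BASE_INDENT the tab lands on a different tab
# stop, so the expanded alignment changes.
printf '  -a\tappend history lines\n' > help.def
expand help.def                       # as written in the .def file
sed 's/^/    /' help.def | expand     # as indented by the help builtin
```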

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Bash does not exit on non-interactive "Bad substitution" errors

2015-08-04 Thread Christian Neukirchen
Hi,

I noticed that the following script keeps executing (and outputs
"oops") in Bash 4.3.39(1)-release and 2.05b.13(1)-release, unlike
dash-0.5.8, mksh-R51, busybox sh v1.23.2, ksh-2012.08.01, and
zsh-5.0.8:

echo ${x!y}
echo oops

According to IEEE Std 1003.1, 2.8.1 Consequences of Shell Errors
"An expansion error is one that occurs when the shell expansions
defined in wordexp are carried out (for example, "${x!y}", because '!'
is not a valid operator)" and should result in "Shall exit" from a
non-interactive script.

In particular, this also happens when the expansion error occurs in a
line consisting of an "exec"-statement, where evaluation usually *never*
continues (e.g. bash correctly exits when the command is not found).

I think to avoid further damage due to badly set variables, bash
should exit in this case as well.
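The report boils down to this two-liner. Whether "oops" appears
depends on the bash version and on posix mode; POSIX 2.8.1 calls for
the non-interactive shell to exit at the expansion error:

```shell
# ${x!y} is an invalid expansion; a conforming non-interactive shell
# exits here instead of reaching the second echo.
bash -c 'echo ${x!y}; echo oops'
echo "child exit status: $?"
```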

Thanks,
-- 
Christian Neukirchenhttp://chneukirchen.org



Add -L to kill builtin to be more compatible to procps version of kill

2015-10-01 Thread Christian Ehrhardt
Hi,
I read and worked on a fix reported to ubuntu regarding this issue.
https://bugs.launchpad.net/hundredpapercuts/+bug/1488939

Eventually it came down to the bash builtin vs. the procps version of
kill having different options.
I proposed a solution making it more similar to the procps version in
a way I considered not too invasive.

But since I don't know the bash strategy/guidelines regarding
compatibility across various platforms, I wanted to submit it here as
a request for comments as well.

No mailer set up yet on this machine, let's hope gmail doesn't hack
this into pieces :-)

Kind Regards,
Christian
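What the patch makes symmetrical, sketched with the existing -l
spelling (-L would behave identically after the change):

```shell
# Map a signal number back to its name; 15 is SIGTERM everywhere.
kill -l 15
# prints: TERM
```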
From: Christian Ehrhardt 

Fixing some confusion of the bash builtin kill not behaving as the procps
kill which one can see in the manpages by adding a -L option mapping to
the already existing code behind -l.

Signed-off-by: Christian Ehrhardt 
---

[diffstat]
 builtins/kill.def |8 
 doc/bashref.html  |6 +++---
 doc/bashref.info  |   16 
 doc/bashref.texi  |   10 +-
 4 files changed, 20 insertions(+), 20 deletions(-)

[diff]
=== modified file 'builtins/kill.def'
--- builtins/kill.def	2014-03-03 22:52:05 +
+++ builtins/kill.def	2015-09-30 13:28:20 +
@@ -22,7 +22,7 @@
 
 $BUILTIN kill
 $FUNCTION kill_builtin
-$SHORT_DOC kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
+$SHORT_DOC kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l|-L [sigspec]
 Send a signal to a job.
 
 Send the processes identified by PID or JOBSPEC the signal named by
@@ -32,8 +32,8 @@
 Options:
   -s sig	SIG is a signal name
   -n sig	SIG is a signal number
-  -l	list the signal names; if arguments follow `-l' they are
-	assumed to be signal numbers for which names should be listed
+  -l|-L	list the signal names; if arguments follow `-l' they are
+		assumed to be signal numbers for which names should be listed
 
 Kill is a shell builtin for two reasons: it allows job IDs to be used
 instead of process IDs, and allows processes to be killed if the limit
@@ -107,7 +107,7 @@
 {
   word = list->word->word;
 
-  if (ISOPTION (word, 'l'))
+  if (ISOPTION (word, 'l') || ISOPTION (word, 'L'))
 	{
 	  listing++;
 	  list = list->next;

=== modified file 'doc/bashref.html'
--- doc/bashref.html	2014-04-07 22:47:44 +
+++ doc/bashref.html	2015-09-30 13:27:40 +
@@ -9838,7 +9838,7 @@
 kill
 
  kill [-s sigspec] [-n signum] [-sigspec] jobspec or pid
-kill -l [exit_status]
+kill -l|-L [exit_status]
 
 
 Send a signal specified by sigspec or signum to the process
@@ -9847,8 +9847,8 @@
 SIGINT (with or without the SIG prefix)
 or a signal number; signum is a signal number.
 If sigspec and signum are not present, SIGTERM is used.
-The `-l' option lists the signal names.
-If any arguments are supplied when `-l' is given, the names of the
+The `-l' or `-L' options list the signal names.
+If any arguments are supplied when `-l' or `-L' are given, the names of the
 signals corresponding to the arguments are listed, and the return status
 is zero.
 exit_status is a number specifying a signal number or the exit

=== modified file 'doc/bashref.info'
--- doc/bashref.info	2014-04-07 22:47:44 +
+++ doc/bashref.info	2015-09-30 13:27:41 +
@@ -6700,20 +6700,20 @@
 
 `kill'
   kill [-s SIGSPEC] [-n SIGNUM] [-SIGSPEC] JOBSPEC or PID
-  kill -l [EXIT_STATUS]
+  kill -l|-L [EXIT_STATUS]
 
  Send a signal specified by SIGSPEC or SIGNUM to the process named
  by job specification JOBSPEC or process ID PID.  SIGSPEC is either
  a case-insensitive signal name such as `SIGINT' (with or without
  the `SIG' prefix) or a signal number; SIGNUM is a signal number.
  If SIGSPEC and SIGNUM are not present, `SIGTERM' is used.  The
- `-l' option lists the signal names.  If any arguments are supplied
- when `-l' is given, the names of the signals corresponding to the
- arguments are listed, and the return status is zero.  EXIT_STATUS
- is a number specifying a signal number or the exit status of a
- process terminated by a signal.  The return status is zero if at
- least one signal was successfully sent, or non-zero if an error
- occurs or an invalid option is encountered.
+ `-l' or `-L' options list the signal names.  If any arguments are
+ supplied when `-l' or `-L' are given, the names of the signals
+ corresponding to the arguments are listed, and the return status
+ is zero.  EXIT_STATUS is a number specifying a signal number or the
+ exit status of a process terminated by a signal.  The return status
+ is zero if at least one signal was successfully sent, or non-zero if
+ an error occurs or an invalid option is encountered.
 
 `wait'
   wait [-n] [JOBSPEC or PID ...]

=== modified 

Re: Add -L to kill builtin to be more compatible to procps version of kill

2015-11-09 Thread Christian Ehrhardt
Hi everybody,
there was no further response since then.

I checked if the patch would cleanly apply to the bash git as of today.
Other than a 7 line offset it does.

I didn't find a savannah bug page for bash to open a more formal request.

So I wanted to ask if you need more to consider the former patch for
inclusion?



On Thu, Oct 1, 2015 at 8:29 PM, Chet Ramey  wrote:

> On 9/30/15 10:07 AM, Christian Ehrhardt wrote:
> > Hi,
> > I read and worked on a fix reported to ubuntu regarding this issue.
> > https://bugs.launchpad.net/hundredpapercuts/+bug/1488939
> >
> > Eventually it came down to the bash builtin vs. the procps version
> > of kill having different options.
> > I proposed a solution making it more similar to the procps version
> > in a way I considered not too invasive.
>
> It shouldn't be too bad to make -L equivalent to -l.
>
> Chet
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRUc...@case.edu
> http://cnswww.cns.cwru.edu/~chet/
>


Parse error with consecutive case statements inside $()

2016-03-29 Thread Christian Franke

Configuration Information [Automatically generated, do not change]:
Machine: i686
OS: cygwin
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash.exe' -DCONF_HOSTTYPE='i686' -DCONF_OSTYPE='cygwin' -DCONF_MACHTYPE='i686-pc-cygwin' -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -DRECYCLES_PIDS   -I. -I/usr/src/bash-4.3.42-4.i686/src/bash-4.3 -I/usr/src/bash-4.3.42-4.i686/src/bash-4.3/include -I/usr/src/bash-4.3.42-4.i686/src/bash-4.3/lib  -DWORDEXP_OPTION -ggdb -O2 -pipe -Wimplicit-function-declaration -fdebug-prefix-map=/usr/src/bash-4.3.42-4.i686/build=/usr/src/debug/bash-4.3.42-4 -fdebug-prefix-map=/usr/src/bash-4.3.42-4.i686/src/bash-4.3=/usr/src/debug/bash-4.3.42-4
uname output: CYGWIN_NT-6.1-WOW Alien2 2.4.1(0.293/5/3) 2016-01-24 11:24 i686 Cygwin

Machine Type: i686-pc-cygwin

Bash Version: 4.3
Patch Level: 42
Release Status: release

Description:
If consecutive case statements are inside of $(...), the parser
misinterprets the first ')' of the second case statement as the end of
the command substitution.


The possibly related patch bash43-042 is already included in this version.


Repeat-By:

$ cat bug.sh
x=$(
  case 1 in
1) echo 1
  esac
  case 2 in
2) echo 2
  esac
)
echo "$x"


$ bash -xv bug.sh
x=$(
  case 1 in
1) echo 1
  esac
  case 2 in
2) echo 2
bug.sh: command substitution: line 13: syntax error: unexpected end of file
bug.sh: line 7: syntax error near unexpected token `esac'
bug.sh: line 7: ` esac'


$ dash bug.sh
1
2


Workarounds:
- append semicolon behind first 'esac', or
- insert any command line between the case statements, or
- use `...` instead of $(...)
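The first workaround, applied to the reproduction above:

```shell
# A ';' after the first esac keeps the affected parser from taking the
# next ')' as the end of the command substitution; a fixed bash prints
# both lines either way.
x=$(
  case 1 in
    1) echo 1
  esac;
  case 2 in
    2) echo 2
  esac
)
echo "$x"
```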


--
Christian Franke




Bash bind bug: infinite loop with memory leak in Readline

2016-08-10 Thread Christian Klomp
Hi,

I found a problem with the binding of key sequences to macros that
results in bash allocating all the memory until the process is killed.
It happens when a key is mapped to itself, e.g., `bind '"3":"3"'`. Now
when '3' is typed the while loop inside the readline_internal_charloop
function will never end.

Besides manually, the flaw can be triggered via programs like expect,
screen and tmux, and works over SSH. When scripted it can result in
significant resource usage which might ultimately result in a denial
of service (although impact can be considered low since users need to
be authenticated and mitigation should be fairly easy, e.g., via
ulimit).

As far as I can tell every distribution and every Bash version is
affected by this (oldest I tried is Damn Small Linux 0.4.10 with Bash
2.05b) and it also happens on the new Windows Ubuntu subsystem.

I've written a preliminary patch that prevents triggering the flaw.
The first part prevents adding problematic macros and the second part
will escape the loop if a problematic key mapping occurs (I'm not sure
this second part is necessary assuming the first part properly
prevents adding any problematic mapping, but I've included it since
I'm not sure whether there are other ways to add mappings). So the
underlying problem is not addressed by the patch.

It also doesn't catch every case, e.g., `bind '"3":"123"'` will still
result in an infinite loop but in that case accumulation of memory is
insignificant.

Since I am not familiar with the internals of Bash/Readline and I am
not a C programmer, I have refrained from digging further.

Regards,
Christian Klomp


# Bash 4.3.46:
--- ../bash-4.3/lib/readline/bind.c	2013-04-06 23:46:38.0 +0200
+++ lib/readline/bind.c	2016-08-09 21:55:56.500772211 +0200
@@ -313,6 +313,14 @@
  const char *keyseq, *macro;
  Keymap map;
 {
+  /* don't add macro if input is the same as output
+ in order to avoid infinite loop. */
+  if (*keyseq == *macro)
+{
+  _rl_init_file_error ("keyseq cannot be the same as macro");
+  return -1;
+}
+
   char *macro_keys;
   int macro_keys_len;

--- ../bash-4.3/lib/readline/readline.c	2016-08-09 17:14:28.0 +0200
+++ lib/readline/readline.c	2016-08-09 21:55:08.596984135 +0200
@@ -957,6 +957,14 @@
 {
   rl_executing_keyseq[rl_key_sequence_length] = '\0';
   macro = savestring ((char *)map[key].function);
+
+  /* avoid infinite loop if key and macro are equal. */
+  if (key == *macro)
+{
+  _rl_abort_internal ();
+  return -1;
+}
+
   _rl_with_macro_input (macro);
   return 0;
 }


# Bash 4.4-beta-2 (only line numbers differ):
--- ../bash-4.4-beta2/lib/readline/bind.c	2016-01-25 16:33:57.0 +0100
+++ lib/readline/bind.c	2016-08-09 22:06:42.602407370 +0200
@@ -337,6 +337,14 @@
  const char *keyseq, *macro;
  Keymap map;
 {
+  /* don't add macro if input is the same as output
+ in order to avoid infinite loop. */
+  if (*keyseq == *macro)
+{
+  _rl_init_file_error ("keyseq cannot be the same as macro");
+  return -1;
+}
+
   char *macro_keys;
   int macro_keys_len;

--- ../bash-4.4-beta2/lib/readline/readline.c	2016-04-20 21:53:52.0 +0200
+++ lib/readline/readline.c	2016-08-09 22:07:58.824249663 +0200
@@ -996,6 +996,14 @@
 {
   rl_executing_keyseq[rl_key_sequence_length] = '\0';
   macro = savestring ((char *)map[key].function);
+
+  /* avoid infinite loop if key and macro are equal. */
+  if (key == *macro)
+{
+  _rl_abort_internal ();
+  return -1;
+}
+
   _rl_with_macro_input (macro);
   return 0;
 }



Re: Bash bind bug: infinite loop with memory leak in Readline

2016-09-22 Thread Christian Klomp
2016-09-19 3:41 GMT+02:00 Chet Ramey :
> Yes, you've triggered an infinite loop with the key binding.  One of the
> strengths of macros is that the expansion is not simply a string -- it can
> be used as shorthand for a complex key sequence, so simply disallowing
> the general case is not really an option.  I don't think anyone is going
> to deliberately bind a key to itself, so I'm not sure putting code in to
> solve that specific case, while leaving the general case unsolved, is
> worthwhile.
>
> Maybe the thing to do is to abort at some maximum macro nesting level, sort
> of like bash does with $FUNCNEST.
>

Yes, I agree that my code is too generic and the case a bit
far-fetched. There indeed appear to be more issues. For instance
binding "loop" to "looper" will also trigger it. Still quite unlikely
but more in the direction of something a user might expect to work.

Limiting the nesting level sounds like a straightforward solution.
Although I'm not sure whether exposing a configurable level is
necessary. Do people actually nest their macros so deeply that no
reasonably safe upper bound can be set?



Re: Magnitude of Order "For Loop" performance deltas based on syntax change

2016-09-24 Thread Christian Franke

Tom McCurdy wrote:

Bash Version: 4.3
Patch Level: 11
Release Status: release

Description:
Two nearly identical "For Loop" setups have large deltas in
performance. See test program below. Was confirmed in IRC chat room by
multiple users. The difference is very noticeable with around 100,000
input values.


The script was originally used to take an input file where the first
line determined how many int values were to follow. The remaining
lines each had values that were sorted, and then the smallest
difference between any two values was found.


Repeat-By:

Using script below comment/uncomment each method to test.
On my machine
Method-1 finishes in around 2 seconds
Method-2 finishes in around 72 seconds


I tested the scripts on Win10 with the current Cygwin x64 version of 
bash (4.3.46(7)):


Method-1: ~4 seconds
Method-2: ~24 seconds

Further observations:

- Method-1 could be slowed down significantly by modifications of the 
first two assignments in the for loop:


1) Swap current/next assignments:

 next=${Pi[$i+1]}
 current=${Pi[$i]}

Result: ~26 seconds

2) Do these assignments with 'let' statements instead:

 let current=Pi[i]
 let next=Pi[i+1]

Result: ~25 seconds

3) Do both:

 let next=Pi[i+1]
 let current=Pi[i]

Result: ~49 seconds


- Method-2 could be sped up significantly if the order of the array
accesses is reversed.

There is at least some (IMO unexpected) performance degradation if
arrays are not accessed sequentially.
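A hypothetical reconstruction of the comparison (the original test
script is not reproduced in this message; the array name Pi, the size
n, and the loop body are assumptions based on the description):

```shell
# Micro-benchmark sketch: access order to consecutive array elements.
# Bash indexed arrays historically cache the last-referenced element,
# which is why sequential access is much faster.
n=5000
Pi=()
for (( i=0; i<n; i++ )); do Pi[i]=$i; done

time for (( i=0; i<n-1; i++ )); do
  current=${Pi[$i]}      # "Method-1" order: current element first
  next=${Pi[$i+1]}
done

time for (( i=0; i<n-1; i++ )); do
  next=${Pi[$i+1]}       # swapped order: next element first
  current=${Pi[$i]}
done
echo "last pair: $current $next"
```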



Thanks,
Christian




4.4: crash in redir10 test; use after free?

2016-11-01 Thread Christian Weisgerber
Running the bash 4.4 regression test suite on OpenBSD/amd64, I noticed
a crash in the redir tests.  Specifically, running redir10.sub with
bash 4.4 causes it to die with a bus error most of the time.

Program terminated with signal 10, Bus error.

#0  0x1c9ad0634009 in find_pipeline (pid=97028, alive_only=1,
jobp=0x7f7ea514) at jobs.c:1481
1481  if (p->pid == pid && ((alive_only == 0 && PRECYCLED(p) == 0) || PALIVE(p)))
(gdb) p last_procsub_child
$1 = (PROCESS *) 0x1c9d2b698ca0
(gdb) p *last_procsub_child
$2 = {next = 0xdfdfdfdfdfdfdfdf, pid = -538976289, status = -538976289,
  running = -538976289,
  command = 0xdfdfdfdfdfdfdfdf }
(gdb) p /x last_procsub_child->pid 
$3 = 0xdfdfdfdf

This looks like a use after free() since OpenBSD's malloc fills
some of the freed memory with 0xdf.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Re: 4.4: crash in redir10 test; use after free?

2016-11-02 Thread Christian Weisgerber
Chet Ramey:

> > Running the bash 4.4 regression test suite on OpenBSD/amd64, I noticed
> > a crash in the redir tests.  Specifically, running redir10.sub with
> > bash 4.4 causes it to die with a bus error most of the time.
> 
> Thanks for the report.  I can't reproduce this,

Here's the backtrace:

#0  0x0d78f3634009 in find_pipeline (pid=11813, alive_only=1,   
jobp=0x7f7f61b4) at jobs.c:1481 
#1  0x0d78f36340f5 in find_process (pid=11813, alive_only=1,
jobp=0x7f7f61b4) at jobs.c:1506
#2  0x0d78f3637c53 in waitchld (wpid=-1, block=0) at jobs.c:3531
#3  0x0d78f363795c in sigchld_handler (sig=20) at jobs.c:3411
#4  <signal handler called>
#5  0x0d7b431cc78b in ofree (argpool=0xd7be5448350, p=0xd7b4d731360)
at /usr/src/lib/libc/stdlib/malloc.c:1085
#6  0x0d7b431ccc8b in free (ptr=0xd7b0d3aa3e0)
at /usr/src/lib/libc/stdlib/malloc.c:1416
#7  0x0d78f3633a97 in discard_pipeline (chain=0xd7b0d3aa3e0) at jobs.c:1232
#8  0x0d78f364a3c5 in process_substitute (string=0xd7afbf56490 "echo x",
open_for_read_in_child=0) at subst.c:5812

* In process_substitute(), discard_pipeline(last_procsub_child)
  is called.
* discard_pipeline() frees last_procsub_child.
* free() is interrupted by a signal.
* The signal handler eventually calls find_pipeline(), which accesses
  the just-freed memory last_procsub_child points to.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



$$'...' parsing bug?

2017-01-30 Thread Christian Weisgerber
This came up on comp.unix.shell: 
There appears to be a parsing problem in bash where the sequence
$$'...' is treated as $'...', and $$"..." as $"...", when inside
$(...).

$ echo 'x\nx'
x\nx
$ echo $'x\nx'
x
x 
$ echo $$'x\nx'
86293x\nx
$ echo $(echo $'x\nx')
x x
$ echo $(echo $$'x\nx')
x x

This is with bash 4.4.12... but a quick check shows the same behavior
with 4.2.37.
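Outside $(...), the tokenizer behaves as documented: $$ expands to the
shell PID, and the immediately following '...' is an ordinary
single-quoted string. A minimal check of that baseline:

```shell
# $$ expands to the PID; the adjacent '...' stays single-quoted, so
# the backslash-n is literal (no ANSI-C $'...' expansion).
v=$$'x\nx'
case $v in
  "$$"'x\nx') echo "literal backslash-n preserved" ;;
  *)          echo "unexpected expansion" ;;
esac
```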
-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



bash 3.2 fails to detect bzero in configure stage

2007-08-08 Thread Hans-Christian Egtvedt
K 1
| #define HAVE_SELECT 1
| #define HAVE_TCGETPGRP 1
| #define HAVE_UNAME 1
| #define HAVE_ULIMIT 1
| #define HAVE_WAITPID 1
| #define HAVE_RENAME 1
| #define HAVE_BCOPY 1
| /* end confdefs.h.  */
| /* Define bzero to an innocuous variant, in case <limits.h> declares bzero.
|    For example, HP-UX 11i <limits.h> declares gettimeofday.  */
| #define bzero innocuous_bzero
| 
| /* System header to define __stub macros and hopefully few prototypes,
| which can conflict with char bzero (); below.
| Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
| <limits.h> exists even on freestanding compilers.  */
| 
| #ifdef __STDC__
| # include <limits.h>
| #else
| # include <assert.h>
| #endif
| 
| #undef bzero
| 
| /* Override any GCC internal prototype to avoid an error.
|Use char because int might match the return type of a GCC
|builtin and then its argument prototype would still apply.  */
| #ifdef __cplusplus
| extern "C"
| #endif
| char bzero ();
| /* The GNU C library defines this for functions which it implements
| to always fail with ENOSYS.  Some functions are actually named
| something starting with __ and the normal name is an alias.  */
| #if defined __stub_bzero || defined __stub___bzero
| choke me
| #endif
| 
| int
| main ()
| {
| return bzero ();
|   ;
|   return 0;
| }
configure:12901: result: no

-- 
Best regards,
Hans-Christian Egtvedt



___
Bug-bash mailing list
Bug-bash@gnu.org
http://lists.gnu.org/mailman/listinfo/bug-bash


Re: bash 3.2 fails to detect bzero in configure stage

2007-08-08 Thread Hans-Christian Egtvedt
On Wed, 2007-08-08 at 10:39 +0200, Hans-Christian Egtvedt wrote:
> I am using uClibc 0.9.29 and GCC 4.1.2 to compile bash 3.2 (with the 17
> patches released on web). And I have a problem that configure does not
> detect bzero properly, and thus resulting in a crash later when
> compiling.

I am a bit unsure whether it is uClibc's fault or bash's configure, but
I have included a patch for bash which fixes my problem.

I just removed the check and code for bzero in bash. After grepping
around in the source code, I saw that numerous changes have already
been made to use memset instead.



-- 
Mvh
Hans-Christian Egtvedt
diff -upr bash-3.2.orig/config.h.in bash-3.2/config.h.in
--- bash-3.2.orig/config.h.in	2006-09-12 22:00:54.0 +0200
+++ bash-3.2/config.h.in	2007-08-08 14:15:12.0 +0200
@@ -504,9 +504,6 @@
 /* Define if you have the bcopy function.  */
 #undef HAVE_BCOPY
 
-/* Define if you have the bzero function.  */
-#undef HAVE_BZERO
-
 /* Define if you have the confstr function.  */
 #undef HAVE_CONFSTR
 
diff -upr bash-3.2.orig/configure.in bash-3.2/configure.in
--- bash-3.2.orig/configure.in	2006-09-26 17:05:45.0 +0200
+++ bash-3.2/configure.in	2007-08-08 14:14:36.0 +0200
@@ -702,7 +702,7 @@ AC_CHECK_FUNCS(dup2 eaccess fcntl getdta
 AC_REPLACE_FUNCS(rename)
 
 dnl checks for c library functions
-AC_CHECK_FUNCS(bcopy bzero confstr fnmatch \
+AC_CHECK_FUNCS(bcopy confstr fnmatch \
 		getaddrinfo gethostbyname getservbyname getservent inet_aton \
 		memmove pathconf putenv raise regcomp regexec \
 		setenv setlinebuf setlocale setvbuf siginterrupt strchr \
diff -upr bash-3.2.orig/CWRU/misc/sigstat.c bash-3.2/CWRU/misc/sigstat.c
--- bash-3.2.orig/CWRU/misc/sigstat.c	2002-04-17 19:41:40.0 +0200
+++ bash-3.2/CWRU/misc/sigstat.c	2007-08-08 14:11:36.0 +0200
@@ -86,7 +86,7 @@ int	sig;
 init_signames()
 {
 	register int i;
-	bzero(signames, sizeof(signames));
+	memset(signames, 0, sizeof(signames));
 
 #if defined (SIGHUP)		/* hangup */
   	signames[SIGHUP] = "SIGHUP";
diff -upr bash-3.2.orig/lib/sh/oslib.c bash-3.2/lib/sh/oslib.c
--- bash-3.2.orig/lib/sh/oslib.c	2001-12-06 19:26:21.0 +0100
+++ bash-3.2/lib/sh/oslib.c	2007-08-08 14:11:44.0 +0200
@@ -170,23 +170,6 @@ bcopy (s,d,n)
 }
 #endif /* !HAVE_BCOPY */
 
-#if !defined (HAVE_BZERO)
-#  if defined (bzero)
-#undef bzero
-#  endif
-void
-bzero (s, n)
- char *s;
- int n;
-{
-  register int i;
-  register char *r;
-
-  for (i = 0, r = s; i < n; i++)
-*r++ = '\0';
-}
-#endif
-
 #if !defined (HAVE_GETHOSTNAME)
 #  if defined (HAVE_UNAME)
 #include <sys/utsname.h>


Bash 2.05b patch for 896776 - (CVE-2014-6271) ?

2014-09-26 Thread Jean-Christian de Rivaz

Hello,

While this can seem completely obsolete, I still have machines running
bash 2.05b (Debian etch). I worry about upgrading to bash 3.x because of
some backward compatibility issues.
Is there any reason why there was no patch for bash 2.05b? The test
command below shows that the bug also affects this version:


j$ bash --version
GNU bash, version 2.05b.0(1)-release (i386-pc-linux-gnu)
Copyright (C) 2002 Free Software Foundation, Inc.
j$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test

Best Regards,

Jean-Christian



Re: Bash 2.05b patch for 896776 - (CVE-2014-6271) ?

2014-09-26 Thread Jean-Christian de Rivaz

Le 26. 09. 14 16:47, Chet Ramey a écrit :

On 9/26/14, 4:53 AM, Jean-Christian de Rivaz wrote:

Hello,

While this can seem completely obsolete, I still have machines running bash
2.05b (Debian etch). I worry about upgrading to bash 3.x because of some
backward compatibility issue.
It there any reason why there was no patch for bash 2.05b ? The test
command below show that the bug also affect this version:

j$ bash --version
GNU bash, version 2.05b.0(1)-release (i386-pc-linux-gnu)
Copyright (C) 2002 Free Software Foundation, Inc.
j$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test

Here's one.  Two, actually, one for each CVE.


Hi Chet,

Applied without problem, and they fixed the issues, as far as I can
test.


$ bash --version
GNU bash, version 2.05b.0(1)-release (i386-pc-linux-gnu)
Copyright (C) 2002 Free Software Foundation, Inc.
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

Thank you very much for those patches :-)

Best Regards,

Jean-Christian



Re: Bash 2.05b patch for 896776 - (CVE-2014-6271) ?

2014-09-26 Thread Jean-Christian de Rivaz

Le 26. 09. 14 18:55, Steve Simmons a écrit :

These patches build and run without problem in our initial bash2 tests. However, I notice 
that both the version number reported by ./bash --version and doing ./bash followed by 
echo $BASH_VERSION both report "2.05b.0(1)-release". All versions that I've 
tested of bash3 and bash4 report their patchlevel in the third field. If I manually 
update patchlevel.h to change from 0 to 9, the version is reported as 
'2.05b.((1)-release'. Bug?

Steve
I have observed that too. I searched the source code and found that
while the mkversion.sh script has a patchlevel option, the Makefile.in
does not seem to be designed to use that feature prior to the 3.x
versions.

I wonder if adding a patch level to the version string might break code
that parses it to work around some compatibility issues. On my side, I
am fine with a sticky version string for that old setup.

Jean-Christian




Re: demonstration of CVE-2014-7186 ShellShock vulnerability

2014-09-27 Thread Jean-Christian de Rivaz

Le 27. 09. 14 07:53, Eric Blake a écrit :

[...]

So, to FULLY test whether you are still vulnerable to ShellShock, we
must come up with a test that proves that NO possible function body
assigned to a valid shell variable name can EVER cause bash to invoke
the parser without your consent.  For that, I use this (all on one line,
even if my mailer wrapped it):

$ bash -c "export f=1 g='() {'; f() { echo 2;}; export -f f; bash -c
'echo \$f \$g; f; env | grep ^f='"

which is sufficient to test that both normal variables and functions can
both be exported, AND show you whether there is a collision in the
environment.  Ideally, you would see the following result (immune to
shell-shock):

1 () {
2
f=1

It may also be possible that your version of bash decided to prohibit
function exports (either permanently, or at least by default while still
giving you a new knob to explicitly turn them back on), in which case
you'd see something like this (also immune to shell-shock):

1 () {
f: bash: f: command not found
f=1

But if you see something like the following, YOU ARE STILL VULNERABLE:

bash: g: line 1: syntax error: unexpected end of file
bash: error importing function definition for `g'
1
2
f=1
f=() { echo 2

By the way, this vulnerable output is what I get when using bash 4.3.26
with no additional patches, which proves upstream bash IS still
vulnerable to CVE-2014-7186, _because_ it invokes the parser on
arbitrary untrusted input that can be assigned to a normal variable.

Red Hat's approach of using 'BASH_FUNC_foo()=() {' instead of 'foo=() {'
as the means for exporting functions is immune to all possible
ShellShock attacks, even if there are other parser bugs, because it is
no longer possible to mix normal variables with arbitrary content with
exported functions in the environment (ALL normal variable contents pass
through unmolested, with no attempt to run the parser on them).  In
conclusion, I hope Chet will be providing additional patches soon (both
a patch to fix the CVE-2014-7186 parser bug, and a patch to
once-and-for-all move exported functions into a distinct namespace).
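
The prefixed-name scheme described above can be observed directly on a
post-Shellshock bash (the exact name format, e.g. the '%%' suffix,
varies between vendor patches and upstream):

```shell
# On a patched bash, exported functions live under a prefixed name
# (e.g. BASH_FUNC_f%%) that cannot collide with ordinary variables.
f() { echo hi; }
export -f f
env | grep '^BASH_FUNC_f'
```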


I suppose you are talking about the variables-affix.patch and
parser-oob.patch posted at http://seclists.org/oss-sec/2014/q3/712.
They are now in the latest 4.2.37(1) in Debian and effectively show
immunity to your test. But bash-4.1.13(1), bash-3.1.19(1), and
2.05b.0(1) with all official patches fail the same test. I hope that
patches will soon be available for all bash versions.


Jean-Christian