Re: Bash-4.2-rc2 available for FTP
On Wednesday, February 02, 2011 08:56:24 Chet Ramey wrote:
> The second release candidate of bash-4.2 is now available with the URL
>
> ftp://ftp.cwru.edu/pub/bash/bash-4.2-rc2.tar.gz

- braces.c:mkseq() is using an intmax_t type for the length in the asprintf
  call when it needs to be an int.  a quick check of a 32bit system shows that
  sizeof(intmax_t) is 8 bytes, which means the output most likely will get
  screwed up (since it'll be interpreted in the C library as 2 arguments).

- lib/glob/smatch.c needs externs.h for mbsmbchar.  seems like externs.h could
  do with including bashtypes.h/command.h/general.h too since it needs basic
  types from all of those.

- lib/glob/smatch.c seems its STR defines could be unified with stuff in
  general.h

- seems like lib/sh/snprintf.c should be including some header for isnan and
  isinf (maybe math.h ?)

otherwise, some quick smoke tests show it seems to be working OK so far ...
-mike
Re: Bash-4.2-rc2 available for FTP
On Wednesday, February 02, 2011 21:49:38 Chet Ramey wrote: > On 2/2/11 6:27 PM, Mike Frysinger wrote: > > - lib/glob/smatch.c needs externs.h for mbsmbchar. seems like externs.h > > could do with including bashtypes.h/command.h/general.h too since it > > needs basic types from all of those. > > Or an extern declaration for mbsmbchar, to avoid having to include other > files. that defeats the whole point of having a single extern line. changing the header and matching func definition wouldnt automatically catch random externs sprinkled over the tree and could result in an arbitrarily crashing binary that showed no build errors or warnings. -mike signature.asc Description: This is a digitally signed message part.
Re: Do more testing before a release?
On Wednesday, February 16, 2011 23:51:16 Clark J. Wang wrote: > I know little about open source development process (and control?). I just > don't know where to get the bash code (like CVS, SVN respository) before > it's released. I think it's better to make it open to more people so > everyone can help review and test before a stable release. the 4.2 rc1 was announced on the list and garnered testing/feedback -mike signature.asc Description: This is a digitally signed message part.
empty quotes break pattern replacements in bash-4.2
this simple code no longer works in bash-4.2:
	$ f=abc; echo ${f##""a}
	abc
same goes for ${f//""a} and ${f%%""c}, and perhaps more operations

removing the quotes, or quoting the single char in question, makes it work:
	$ f=abc; echo ${f##a} ${f##"a"}
	bc bc

the original bug report uses variables in the pattern and quotes them to avoid
expansion of globs and such.  but if the variable happened to be empty, things
no longer worked correctly.
-mike
Re: empty quotes break pattern replacements in bash-4.2
On Friday, February 18, 2011 23:17:11 Chet Ramey wrote: > On 2/18/11 9:06 PM, Mike Frysinger wrote: > > this simple code no longer works in bash-4.2: > > $ f=abc; echo ${f##""a} > > abc > > same goes for ${f//""a} and ${f%%""c}, and perhaps more operations > > One more: everything that calls getpattern(). > > > removing the quotes, or quoting the single char in question, makes it > > work: $ f=abc; echo ${f##a} ${f##"a"} > > bc bc > > > > the original bug report uses variables in the pattern and quotes them to > > avoid expansion of globs and such. but if the variable happened to be > > empty, things no longer worked correctly. > > Try this patch. seems to do the trick. i'll have the original reporter (Ulrich Müller) verify on his end too. -mike signature.asc Description: This is a digitally signed message part.
Re: configure fails with gcc 4.6.0 LTO
On Saturday, March 19, 2011 17:33:36 Chet Ramey wrote: > On 3/18/11 9:42 PM, Zeev Tarantov wrote: > > x86_64-pc-linux-gnu-gcc: error: \: No such file or directory > > x86_64-pc-linux-gnu-gcc: error: \: No such file or directory > > x86_64-pc-linux-gnu-gcc: error: \: No such file or directory > > x86_64-pc-linux-gnu-gcc: error: \: No such file or directory > > lto-wrapper: > > /usr/x86_64-pc-linux-gnu/gcc-bin/4.6.0-pre/x86_64-pc-linux-gnu-gcc > > returned 1 exit status > > /usr/lib/gcc/x86_64-pc-linux-gnu/4.6.0-pre/../../../../x86_64-pc-linu > > x-gnu/bin/ld: lto-wrapper failed > > It would be helpful to see the exact command that caused this error, in > its unexpanded state. it's right above (this is a snippet from config.log). however, i'm not sure looking into this bug report is useful. we already told this guy to stop using gcc-4.6 and using ridiculous CFLAGS/LDFLAGS if he wasnt going to assist in figuring out the bugs. -mike signature.asc Description: This is a digitally signed message part.
Re: configure fails with gcc 4.6.0 LTO
On Mon, Mar 21, 2011 at 8:32 AM, Greg Wooledge wrote: > On Sat, Mar 19, 2011 at 09:52:05PM +, Zeev Tarantov wrote: >> configure:3122: checking for C compiler default output file name >> configure:3144: x86_64-pc-linux-gnu-gcc -g -O2 -flto >> -DDEFAULT_PATH_VALUE='"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"' >> -DSTANDARD_UTILS_PATH='"/bin:/usr/bin:/sbin:/usr/sbin"' >> -DSYS_BASHRC='"/etc/bash/bashrc"' >> -DSYS_BASH_LOGOUT='"/etc/bash/bash_logout"' >> -DNON_INTERACTIVE_LOGIN_SHELLS -DSSH_SOURCE_BASHRC -Wl,-flto >> conftest.c >&5 >> x86_64-pc-linux-gnu-gcc: error: \: No such file or directory >> x86_64-pc-linux-gnu-gcc: error: \: No such file or directory >> x86_64-pc-linux-gnu-gcc: error: \: No such file or directory >> x86_64-pc-linux-gnu-gcc: error: \: No such file or directory >> lto-wrapper: >> /usr/x86_64-pc-linux-gnu/gcc-bin/4.6.0-pre/x86_64-pc-linux-gnu-gcc >> returned 1 exit status > > I'm not sure how much of the distortion we're seeing here is being > caused by a mail user agent, versus how much is caused by the experimental > gcc he's using, etc. > > My guess, without knowing anything about this version of gcc, is that > the command that ./configure is supposed to execute is really something > like: > > x86_64-pc-linux-gnu-gcc -g -O2 -flto \ > -DDEFAULT_PATH_VALUE='"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"' > \ > -DSTANDARD_UTILS_PATH='"/bin:/usr/bin:/sbin:/usr/sbin"' \ > > and so on, where the backslashes are supposed to be at the ends of the > lines, to indicate continuation. And somewhere along the way, they're > being doubled so that they're becoming literal words. > > imadev:/tmp$ /net/appl/gcc-3.3/bin/gcc -g -O2 -DFOO=BAR \\ -DQWERTY=UIOP \\ > hello.c > gcc: \: No such file or directory > gcc: \: No such file or directory > > Kinda like that. > > Maybe it's gcc 4.6-prewhatever that's doing it. Maybe he's actually using > some sort of "build system wrapper" that's broken. I don't know. I just > recognize the symptom, not the cause. the Gentoo ebuild double quotes the flags in env CPPFLAGS so that gcc sets the strings correctly when compiling. the failure only happens when LDFLAGS contains -flto which says to me that gcc doesnt parse arguments the same as when executing other helpers. LDFLAGS=-flto CPPFLAGS=-DD=\'\"\"\' ./configure but still not a bash issue -mike
Re: [Bash-announce] german misspelling
On Wed, Mar 23, 2011 at 2:51 PM, Ralf Thielow wrote:
> i've found a german misspelling in command "cd".
> Where can i fix it?

please use bug-bash, not bash-announce

a bit odd that bash-announce isn't moderated so that only people like Chet can
post announcements ...
-mike
Re: RFE: make [[ compatible with [ 'Option'?
On Mon, Mar 28, 2011 at 2:25 PM, Greg Wooledge wrote: > In any case, I see no benefit to changing how [[ works. A change would > just cause more confusion. and probably break many existing scripts -mike
heredocs incorrectly printed in for ((...)) loop
seems to be just like the bug fixed in bash41-006, but with a diff for loop style. simple example: f() { for (( :; :; )) ; do cat < signature.asc Description: This is a digitally signed message part.
Re: heredocs incorrectly printed in for ((...)) loop
On Tuesday, April 19, 2011 16:38:57 Chet Ramey wrote: > > seems to be just like the bug fixed in bash41-006, but with a diff for > > loop style. simple example: > > Try the attached patch and let me know the results. It fixes this > case and a couple of others. sorry for the delay ... that patch does indeed seem to fix the issue. thanks! -mike signature.asc Description: This is a digitally signed message part.
Re: Bash source repository
On Sunday, May 29, 2011 22:18:33 Bradley M. Kuhn wrote: > It's been two years since this discussion began and there have been requests older than that. you just found the most "recent". -mike signature.asc Description: This is a digitally signed message part.
Re: Permission denied to execute script that is world executable
On Saturday, June 18, 2011 16:37:18 John Williams wrote: > Is this a bash bug, or intentional behavior? it's coming from the kernel, not bash post the output of `mount` and make sure that it doesnt have the "noexec" flag -mike signature.asc Description: This is a digitally signed message part.
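a rough illustration of that check (the mount point and device here are just
made-up examples): look for "noexec" in the mount flags of whatever filesystem
holds the script, and remount with exec enabled if needed.

	$ mount | grep ' on /home '
	/dev/sda3 on /home type ext4 (rw,noexec,nosuid,relatime)
	$ sudo mount -o remount,exec /home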
Re: bug: return doesn't accept negative numbers
On Monday, August 08, 2011 21:20:29 Chet Ramey wrote: > On 8/8/11 8:53 AM, Eric Blake wrote: > > However, you are on to something - since bash allows 'exit -1' as an > > extension, it should similarly allow 'return -1' as the same sort of > > extension. The fact that bash accepts 'exit -1' and 'exit -- -1', but > > only 'return -- -1', is the real point that you are complaining about. > > That's a reasonable extension to consider for the next release of bash. i posted a patch for this quite a while ago. not that it's hard to code. -mike signature.asc Description: This is a digitally signed message part.
Re: Purge History of rm commands
On Monday, September 19, 2011 01:18:02 Roger wrote: > I'm stumped on this as my history is in the format of: > > $ tail ~/.bash_history > #1316296633 > man bash > #1316296664 > bash -xv > #1316372056 > screen -rd > #1316375930 > exit > #1316392889 > exit > > Is there a method of purging the history off all rm commands with such a > file format? I've tried using history | find | grep | sed, but the > history doesn't accept more then one history command line number. so put it into a for loop ? > I'm guessing the next easiest method is to learn awk/gawk so I can edit the > above .bash_history file. gawk '{ c = $0; getline; if ($1 != "rm") { print c; print; } }' .bash_history > The easiest method, just open the history file with VI/VIM and start > deleting the 20 or so lines... which I'll likely start doing now. ;-) > > ... or did I miss something? i rarely use `history`, so i cant suggest any improvements there -mike signature.asc Description: This is a digitally signed message part.
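an untested sketch of applying that awk filter in place: write the filtered
history to a temporary file, swap it in, and reload it into the running shell.

	gawk '{ c = $0; getline; if ($1 != "rm") { print c; print; } }' \
		~/.bash_history > ~/.bash_history.new \
		&& mv ~/.bash_history.new ~/.bash_history
	history -c && history -r	# clear the in-memory list and re-read the file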
[patch] fix parallel build issues with parse.y
the current yacc rules allow multiple runs to generate the same files.  usually
this doesn't come up as the generated files are shipped in the tarball, but
when you modify parse.y (applying a patch or developing or whatever), you can
hit this problem.  simple way of showing this:
	make -j y.tab.{c,h}
a correct system would not show the yacc parser running twice :)

simple patch is to have the .h file depend on the .c file, and have the .h file
itself issue a dummy rule (to avoid make thinking things changed).
-mike

--- a/Makefile.in
+++ b/Makefile.in
@@ -579,16 +579,17 @@
 # old rules
 GRAM_H = parser-built
-y.tab.o: y.tab.c ${GRAM_H} command.h ${BASHINCDIR}/stdc.h input.h
+y.tab.o: y.tab.h y.tab.c ${GRAM_H} command.h ${BASHINCDIR}/stdc.h input.h
 ${GRAM_H}: y.tab.h
 	@-if test -f y.tab.h ; then \
 		cmp -s $@ y.tab.h 2>/dev/null || cp -p y.tab.h $@; \
 	fi
 
-y.tab.c y.tab.h: parse.y
+y.tab.c: parse.y
 #	-if test -f y.tab.h; then mv -f y.tab.h old-y.tab.h; fi
 	$(YACC) -d $(srcdir)/parse.y
 	touch parser-built
 #	-if cmp -s old-y.tab.h y.tab.h; then mv old-y.tab.h y.tab.h; else cp -p y.tab.h ${GRAM_H}; fi
+y.tab.h: y.tab.c ; @true
 
 # experimental new rules - work with GNU make but not BSD (or OSF) make
 #y.tab.o: y.tab.c y.tab.h
[patch] fix parallel build issues with syntax.c
the current code generates a bunch of local libraries in subdirs and then links
bash against those.  those subdirs sometimes need version.h, so they have a
rule to change back up to the parent dir and build version.h (which is fine).

the trouble is that the top level objects and the subdirs are allowed to build
in parallel, so it's possible for multiple children to see that version.h is
not available and that it needs to be created, so they all do.  there is even
more trouble: version.h depends on all the top level sources, some of which are
generated (like syntax.c).  so these parallel children all kick off a job to
generate syntax.c, which in turn requires the mksyntax helper executable.
obviously multiple processes rm-ing, compiling, and linking the same files
quickly falls apart.

so tweak the subdirs to all depend on the .build target, which in turn depends
on all of these top level files being generated.  now the subdirs won't try to
recursively enter the top level.

(noticed by David James)
-mike

--- a/Makefile.in
+++ b/Makefile.in
@@ -597,6 +598,11 @@
 #	$(YACC) -d $(srcdir)/parse.y
 #	-if cmp -s old-y.tab.h y.tab.h; then mv old-y.tab.h y.tab.h; fi
 
+# Subdirs will often times want version.h, so they'll change back up to
+# the top level and try to create it.  This causes parallel build issues
+# so just force top level sanity before we descend.
+$(LIBDEP): .build
+
 $(READLINE_LIBRARY): config.h $(READLINE_SOURCE)
 	@echo making $@ in ${RL_LIBDIR}
 	@( { test "${RL_LIBDIR}" = "${libdir}" && exit 0; } || \
[patch] builtins: fix parallel build between top level targets
the top level Makefile will recurse into the defdir for multiple targets
(libbuiltins.a, common.o, bashgetopt.o, builtext.h), and since these do not
have any declared interdependencies, parallel makes will recurse into the
subdir and build the respective targets.

nothing depends on common.o or bashgetopt.o, so those targets don't get used
normally.  this leaves libbuiltins.a and builtext.h.  at a glance, this
shouldn't be a big deal, but when we look closer, there's a subtle failure
lurking.  most of the objects in the defdir need to be generated, which means
they need to build+link the local mkbuiltins helper.  the builtext.h header
also needs to be generated by the mkbuiltins helper.  so when the top level
launches a child for libbuiltins.a and a child for builtext.h, we can hit a
race condition where the two try to generate mkbuiltins, and the build randomly
fails.

so update libbuiltins.a to depend on builtext.h.  this should be fairly simple
since it's only a single target.

--- a/Makefile.in
+++ b/Makefile.in
@@ -674,7 +674,7 @@
 	$(RM) $@
 	./mksyntax$(EXEEXT) -o $@
 
-$(BUILTINS_LIBRARY): $(BUILTIN_DEFS) $(BUILTIN_C_SRC) config.h ${BASHINCDIR}/memalloc.h version.h
+$(BUILTINS_LIBRARY): $(BUILTIN_DEFS) $(BUILTIN_C_SRC) config.h ${BASHINCDIR}/memalloc.h ${DEFDIR}/builtext.h version.h
 	@(cd $(DEFDIR) && $(MAKE) $(MFLAGS) DEBUG=${DEBUG} libbuiltins.a ) || exit 1
 
 # these require special rules to circumvent make builtin rules
Re: Encrypted bashrc?
On Friday 11 November 2011 00:48:59 Clark J. Wang wrote: > In my company all the people share a few of Solaris servers which use NIS > to manage user accounts. The bad thing is that some servers' root passwords > are well known so anybody can easily su to my account to access my files. > To protect some private info in my bashrc I want to encrypt it. Any one has > a good solution for that? if they have root, they have access to all memory and devices. including your terminal where you enter the passphrase/key, or the memory where the file is decrypted/read. encrypting the files will make things harder, but won't make it inaccessible to people who really want it. if you want to protect private information, don't put it on a remote server. -mike signature.asc Description: This is a digitally signed message part.
Re: How to directly modify $@?
On Sunday 20 November 2011 11:54:42 Pierre Gaston wrote:
> On Sun, Nov 20, 2011 at 6:43 PM, Peng Yu wrote:
> > Hi,
> >
> > I don't see if there is a way to directly modify $@. I know 'shift'.
> > But I'm wondering if there is any other way to modify $@.
> >
> > ~$ 1=x
> > -bash: 1=x: command not found
> > ~$ @=(a b c)
> > -bash: syntax error near unexpected token `a'
>
> you need to use the set builtin:
> set -- a b c

yep

to pop items off the front:  shift [n]
to add items to the end:     set -- "$@" a b c
to add items to the start:   set -- a b c "$@"
to extract slices:           set -- "${@:offset:length}"
	e.g.
	set -- a b c
	set -- "${@:2:1}"	# this sets $@ to (b)

with those basics, you should be able to fully manipulate $@
-mike
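a tiny (untested) demo function tying those operations together; the values are
arbitrary placeholders.

	args_demo() {
		set -- a b c d e
		shift			# drop the first arg    -> b c d e
		set -- "$@" f		# append one            -> b c d e f
		set -- "${@:2:3}"	# keep a 3-item slice    -> c d e
		echo "$#: $*"		# prints "3: c d e"
	}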
Re: Bash git repository on savannah
On Wednesday 23 November 2011 23:23:43 Chet Ramey wrote: > I spent a little while messing around with git over the past couple of > days, and ended up updating the bash git repository on savannah > (http://git.savannah.gnu.org/cgit/bash.git to browse the sources). > Bash-4.2 patch 20 is the head of the tree, and there's a branch > containing the `direxpand' patches that I've posted here. Each > bash-4.2 patch is in there as a separate commit. i thought you were maintaining a private CVS somewhere ? if so, would it be possible to create a git tree from that ? if you want someone else to take care of the details, i can help ... i've converted a bunch of projects in the past over to git from CVS/SVN. -mike signature.asc Description: This is a digitally signed message part.
Re: Bash git repository on savannah
On Friday 25 November 2011 22:28:49 Chet Ramey wrote: > On 11/24/11 12:36 PM, Mike Frysinger wrote: > > On Wednesday 23 November 2011 23:23:43 Chet Ramey wrote: > >> I spent a little while messing around with git over the past couple of > >> days, and ended up updating the bash git repository on savannah > >> (http://git.savannah.gnu.org/cgit/bash.git to browse the sources). > >> Bash-4.2 patch 20 is the head of the tree, and there's a branch > >> containing the `direxpand' patches that I've posted here. Each > >> bash-4.2 patch is in there as a separate commit. > > > > i thought you were maintaining a private CVS somewhere ? if so, would it > > be possible to create a git tree from that ? if you want someone else > > to take care of the details, i can help ... i've converted a bunch of > > projects in the past over to git from CVS/SVN. > > Thanks. I have bash sources going back a number of years, and I can get > them into git. It will just take me a while. > > The question is what to do with that tree once it's assembled. push it to the git repo on svannah you cited above ? do you plan on continuing development on the internal tree, or would you be switching to git fulltime ? -mike signature.asc Description: This is a digitally signed message part.
Re: Bash git repository on savannah
On Sunday 27 November 2011 21:31:16 Chet Ramey wrote: > On 11/26/11 2:56 AM, Mike Frysinger wrote: > >> Thanks. I have bash sources going back a number of years, and I can get > >> them into git. It will just take me a while. > >> > >> The question is what to do with that tree once it's assembled. > > > > push it to the git repo on svannah you cited above ? > > Probably. I just have to figure out how to get git to do what I want. if you have questions, feel free to post and i'll try to follow up. i know learning curve can be high at first. > > do you plan on continuing development on the internal tree, or would you > > be switching to git fulltime ? > > I don't think I'll push every change to git as soon as it happens, but > I'm thinking about fairly frequent commits to a `bash-devel' sort of > tree. The question is whether or not enough people would be interested > in that to make the frequency worth it. i would ;) -mike signature.asc Description: This is a digitally signed message part.
Re: popd always has return status 0
On Thursday 01 December 2011 19:01:50 james.cuze...@lyraphase.com wrote:
> Description:
> popd does not appear to return a nonzero exit status when the directory
> stack is empty anymore.

works for me:
	$ echo $BASH_VERSION ; popd ; echo $?
	4.2.20(1)-release
	bash: popd: directory stack empty
	1

as does your popall func
-mike
Re: return values of bash scripts
On Tuesday 20 December 2011 17:18:16 kc123 wrote: > For example, my script below called crond.sh: > ... > content=`ps auxw | grep [c]rond| awk '{print $11}'` > ... > and output is: > CONTENT: /bin/bash /bin/bash crond > > Why are there 2 extra arguments printed (/bin/bash) ? because you grepped your own script named "crond.sh" make the awk script smarter, or use pgrep -mike signature.asc Description: This is a digitally signed message part.
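a couple of alternatives (untested here) that avoid matching the grepping
script itself, assuming a Linux procps-style ps:

	content=$(pgrep -x crond)		# pids only, exact name match
	content=$(ps -C crond -o args=)		# full command line, no grep involved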
Re: bash, echo or openssl bug?
On Tuesday 03 January 2012 08:48:27 nick humphrey wrote:
> Description:
> i dont know if the bug is a bash bug or openssl or echo, but when i
> echo a string and pipe it to openssl, the output comes on the same line as
> the prompt instead of a new line. it makes the output hard to read because
> it is prepended to the prompt text, e.g. mySecretPasswordtcadmin@buildserver: ~$
>
> Repeat-By:
> 1. run the following code in bash terminal:
> echo OHBjcWNLNGlQaVF5 | openssl base64 -d
>
> 2. the output in the bash terminal looks like this:
> mySecretPasswordtcadmin@buildserver: ~$

there is no bug in any of these packages.  openssl doesn't include a trailing
new line.

> 3. the output SHOULD look like this:
> mySecretPassword
> tcadmin@buildserver: ~$

then add it yourself:
	$ echo OHBjcWNLNGlQaVF5 | openssl base64 -d; echo
	$ out=$(echo OHBjcWNLNGlQaVF5 | openssl base64 -d); echo "${out}"
	... many other ways ...
-mike
excess braces ignored: bug or feature ?
can't tell if this is a bug or a feature.

	FOO=
	BAR=bar
	: ${FOO:=${BAR}
	echo $FOO

i'd expect an error, or FOO to contain those excess braces.  instead, FOO is
just "bar".
-mike
Re: RFE: allow bash to have libraries (was bash 4.2 breaks source finding libs in lib/filename...)
On Wednesday 29 February 2012 17:53:21 Linda Walsh wrote: > Eric Blake wrote: > > On 02/29/2012 12:26 PM, Linda Walsh wrote: > >>> Any pathname that contains a / should not be subject to PATH searching. > > > > Agreed - as this behavior is _mandated_ by POSIX, for both sh(1) and for > > execlp(2) and friends. > > Is it that you don't read english as a first language, or are you just > trying to be argumentative?' i'm guessing irony is lost on you ad hominem attacks have no business on this list or any other project. if you can't handle that, then please go away. -mike signature.asc Description: This is a digitally signed message part.
Re: I think I may have found a possible dos attack vector within bash.
On Tuesday 20 March 2012 15:55:18 Chet Ramey wrote: > or the even simpler > > f() > { > f | f & > > } > f i like the variant that uses ":" instead of "f": :(){ :|:& };: -mike signature.asc Description: This is a digitally signed message part.
Re: UTF-8 regression in bash version 4.2
On Tuesday 27 March 2012 08:08:33 Pierre Gaston wrote: > On Tue, Mar 27, 2012 at 3:00 PM, Joachim Schmitz wrote: > > dennis.birkh...@rwth-aachen.de wrote: > > > > > > >> Bash Version: 4.2 > >> Patch Level: 24 > >> Release Status: release > > > > Interesting, seems the announcements dor patches 21-24 have gotten lost? > > they were posted on the mailing list, maybe the relay to the group failed i recall seeing them, as does the archive: http://lists.gnu.org/archive/html/bug-bash/2012-03/threads.html Bash-4.2 Official Patch 24, Chet Ramey, 2012/03/12 Bash-4.2 Official Patch 23, Chet Ramey, 2012/03/12 Bash-4.2 Official Patch 22, Chet Ramey, 2012/03/12 Bash-4.2 Official Patch 21, Chet Ramey, 2012/03/12 -mike signature.asc Description: This is a digitally signed message part.
Re: status on $[arith] for eval arith vsl $((arith))??
On Saturday 07 April 2012 16:45:55 Linda Walsh wrote: > Is it an accidental omission from the bash manpage? it's in the man page. read the "Arithmetic Expansion" section. -mike signature.asc Description: This is a digitally signed message part.
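a quick illustration: both spellings evaluate arithmetic, $[...] being the
older deprecated form of $((...)).

	$ echo $[1 + 2] $((1 + 2))
	3 3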
string replace with multibyte chars and extglob fails with bash-4.2
first set your locale to something unicode based:
	export LC_ALL=en_US.UTF-8

then try the simple script (from Ulrich Müller):
	$ cat test.sh
	shopt -s extglob
	text="aaaäöü"
	echo "${text} ${text//?aa} ${text//\aaa}"

with bash-4.1_p2, i get:
	aaaäöü äöü äöü

but with bash-4.2_p8 ... 4.2_p24 (just what i have locally):
	aaaäöü aaaäöü aaaäöü

seems like a bug to me
-mike
[patch] fix building when readline is disabled
if you disable readline, the complete.def code fails to build.  simple patch
below (not sure if it's correct, but at least gets the conversation going).
-mike

--- a/builtins/complete.def
+++ b/builtins/complete.def
@@ -49,6 +49,8 @@ $END
 
 #include 
 
+#ifdef READLINE
+
 #include 
 
 #include "../bashtypes.h"
@@ -867,3 +869,5 @@ compopt_builtin (list)
 
   return (ret);
 }
+
+#endif
Re: [patch] fix building when readline is disabled
On Monday 23 April 2012 18:57:05 Chet Ramey wrote: > On 4/23/12 12:22 AM, Mike Frysinger wrote: > > if you disable readline, the complete.def code fails to build. simple > > patch below (not sure if it's correct, but at least gets the > > conversation going). > > How did you disable readline? Running configure --disable-readline and > building as usual works for me. You might want to run `make clean' before > rebuilding. i suspect you have readline files still being included and it appears to work. let's look at vanilla bash-4.2: $ tar xf bash-4.2.tar.gz $ cd bash-4.2 $ ./configure --disable-readline $ make ... rm -f complete.o ./mkbuiltins -D . complete.def gcc -c -DHAVE_CONFIG_H -DSHELL -I. -I.. -I.. -I../include -I../lib -I.-g -O2 complete.c || ( rm -f complete.c ; exit 1 ) rm -f complete.c ... so, let's go into that dir and run it by hand: $ cd builtins $ ./mkbuiltins -D . complete.def $ strace -f -eopen gcc -c -DHAVE_CONFIG_H -DSHELL -I. -I.. -I.. -I../include -I../lib -I.-g -O2 complete.c |& grep readline.h ... [pid 12453] open("../lib/readline/readline.h", O_RDONLY|O_NOCTTY) = 4 ... if you were to clean out your readline code first, you'd see the build error i'm seeing instead of the local readline code getting implicitly used even though it was explicitly disabled. -mike signature.asc Description: This is a digitally signed message part.
Re: [patch] fix building when readline is disabled
On Monday 23 April 2012 20:08:26 Chet Ramey wrote: > On 4/23/12 7:40 PM, Mike Frysinger wrote: > > On Monday 23 April 2012 18:57:05 Chet Ramey wrote: > >> On 4/23/12 12:22 AM, Mike Frysinger wrote: > >>> if you disable readline, the complete.def code fails to build. simple > >>> patch below (not sure if it's correct, but at least gets the > >>> conversation going). > >> > >> How did you disable readline? Running configure --disable-readline and > >> building as usual works for me. You might want to run `make clean' > >> before rebuilding. > > > > i suspect you have readline files still being included and it appears to > > work. > > OK, so you've stripped the local readline copy out of the source tree? yes > Then configured it to build with a system readline library installation > that you remove? the system doesn't have readline at all > > let's look at vanilla bash-4.2: > > $ tar xf bash-4.2.tar.gz > > $ cd bash-4.2 > > $ ./configure --disable-readline > > $ make > > ... > > rm -f complete.o > > ./mkbuiltins -D . complete.def > > gcc -c -DHAVE_CONFIG_H -DSHELL -I. -I.. -I.. -I../include -I../lib -I. > >-g -O2 complete.c || ( rm -f complete.c ; exit 1 ) rm -f complete.c > > ... > > > > so, let's go into that dir and run it by hand: > > $ cd builtins > > $ ./mkbuiltins -D . complete.def > > $ strace -f -eopen gcc -c -DHAVE_CONFIG_H -DSHELL -I. -I.. -I.. > > -I../include -I../lib -I.-g -O2 complete.c |& grep readline.h ... > > [pid 12453] open("../lib/readline/readline.h", O_RDONLY|O_NOCTTY) = 4 > > ... > > > > if you were to clean out your readline code first, you'd see the build > > error i'm seeing instead of the local readline code getting implicitly > > used even though it was explicitly disabled. > > What does "clean out your readline code" mean? Disabled means that it > doesn't end up in the resulting binary, and the bash binary is not linked > against readline. There aren't any link errors because the builtins are > in a library, and no other bash code calls any function in complete.c, so > complete.o is not linked out of libbuiltins.a. What kind of "build error" > are you seeing? without a readline.h header available (system or local copy), the build fails. there's plenty of READLINE and HISTORY ifdefs in the other files, so this looks like one place that got missed. imo, there should be no attempts to include a readline header if support is disabled. -mike signature.asc Description: This is a digitally signed message part.
Re: [patch] fix building when readline is disabled
On Tuesday 24 April 2012 08:23:04 Chet Ramey wrote: > On 4/24/12 12:00 AM, Mike Frysinger wrote: > >> OK, so you've stripped the local readline copy out of the source tree? > > > > yes > > > >> Then configured it to build with a system readline library installation > >> that you remove? > > > > the system doesn't have readline at all > > Why? because it's a small system which has no need for things like readline. i don't think this is a terribly unusual use case. -mike signature.asc Description: This is a digitally signed message part.
Re: [patch] fix building when readline is disabled
On Tuesday 24 April 2012 15:49:57 Chet Ramey wrote: > On 4/24/12 10:46 AM, Mike Frysinger wrote: > > On Tuesday 24 April 2012 08:23:04 Chet Ramey wrote: > >> On 4/24/12 12:00 AM, Mike Frysinger wrote: > >>>> OK, so you've stripped the local readline copy out of the source tree? > >>> > >>> yes > >>> > >>>> Then configured it to build with a system readline library > >>>> installation that you remove? > >>> > >>> the system doesn't have readline at all > >> > >> Why? > > > > because it's a small system which has no need for things like readline. > > i don't think this is a terribly unusual use case. > > Sure, there are systems that don't want or need the readline library > installed or things linked to it. That's not unusual. It's not what > we're talking about here. > > What we're talking about is removing about 1 MB of source code > (lib/readline) from the bash source tree *wherever you're building it*. > This doesn't have anything to do with readline being built or installed. > I don't have any doubt that you encountered a build error when you removed > lib/readline from the bash source tree. What I'm wondering is what you > thought you would gain by doing it. the local copy is stripped in order to detect cases (which often happens in other packages) where headers/funcs are implicitly included and used even when a feature is turned off. like in this case where the local readline headers are used even though i disabled readline. -mike signature.asc Description: This is a digitally signed message part.
Re: Severe Bash Bug with Arrays
On Thursday 26 April 2012 23:47:39 Linda Walsh wrote: > Anything else you wanna tell me NEVER/ALWAYS to do? try ALWAYS being polite. but i guess that'll NEVER happen. oh well, thankfully kmail can auto-mute based on sender. -mike signature.asc Description: This is a digitally signed message part.
Re: Parallelism a la make -j / GNU parallel
On Thursday 03 May 2012 16:12:17 John Kearney wrote:
> I tend to do something more like this
>
> function runJobParrell {
>     local mjobCnt=${1} && shift
>     jcnt=0
>     function WrapJob {
>         "${@}"
>         kill -s USR2 $$
>     }

neat trick.  all my parallel loops tend to have a fifo of depth N where i push
on pids and when it gets full, wait for the first one.  it works moderately
well, except for when a slow job in the pipe chokes and the parent doesn't push
any more in until that clears.

> function JobFinised {
>     jcnt=$((${jcnt}-1))

: $(( --jcnt ))

or a portable version:

: $(( jcnt -= 1 ))

> while [ $# -gt 0 ] ; do
>     while [ ${jcnt} -lt ${mjobCnt} ]; do
>         jcnt=$((${jcnt}+1))

same math suggestion as above
-mike
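for reference, a rough untested sketch of that fifo-of-pids approach ("process"
and the file glob are placeholders); it has exactly the drawback mentioned
above: a slow job at the head of the queue holds up new submissions.

	pids=()
	queue_job() {
		local max=$1
		pids+=( "$!" )			# caller just backgrounded a job
		if [[ ${#pids[@]} -ge ${max} ]] ; then
			wait "${pids[0]}"	# block on the oldest job only
			pids=( "${pids[@]:1}" )
		fi
	}

	for f in *.dat ; do
		process "${f}" &
		queue_job 4
	done
	wait					# reap whatever is still running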
Re: Parallelism a la make -j / GNU parallel
On Friday 04 May 2012 08:55:42 Chet Ramey wrote: > On 5/3/12 2:49 PM, Colin McEwan wrote: > > What I would really *like* would be an extension to the shell which > > implements the same sort of parallelism-limiting / 'process pooling' > > found in make or 'parallel' via an operator in the shell language, > > similar to '&' which has semantics of *possibly* continuing > > asynchronously (like '&') if system resources allow, or waiting for the > > process to complete (';'). > > I think the combination of asynchronous jobs and `wait' provides most of > what you need. The already-posted alternatives look like a good start to a > general solution. > > If those aren't general enough, how would you specify the behavior of a > shell primitive -- operator or builtin -- that does what you want? i wish there was a way to use `wait` that didn't block until all the pids returned. maybe a dedicated option, or a shopt to enable this, or a new command. for example, if i launched 10 jobs in the background, i usually want to wait for the first one to exit so i can queue up another one, not wait for all of them. -mike signature.asc Description: This is a digitally signed message part.
Re: Parallelism a la make -j / GNU parallel
On Friday 04 May 2012 12:44:32 Greg Wooledge wrote: > On Fri, May 04, 2012 at 12:41:03PM -0400, Mike Frysinger wrote: > > i wish there was a way to use `wait` that didn't block until all the pids > > returned. maybe a dedicated option, or a shopt to enable this, or a new > > command. > > wait takes arguments. yes, and it'll wait for all the ones i specified before returning > > for example, if i launched 10 jobs in the background, i usually want to > > wait for the first one to exit so i can queue up another one, not wait > > for all of them. > > Do you mean "for *any* one of them", or literally "for the first one"? any. maybe `wait -1` will translate into waitpid(-1, ...). -mike signature.asc Description: This is a digitally signed message part.
Re: Parallelism a la make -j / GNU parallel
On Friday 04 May 2012 13:46:32 Andreas Schwab wrote:
> Mike Frysinger writes:
> > i wish there was a way to use `wait` that didn't block until all the pids
> > returned.  maybe a dedicated option, or a shopt to enable this, or a new
> > command.
> >
> > for example, if i launched 10 jobs in the background, i usually want to
> > wait for the first one to exit so i can queue up another one, not wait
> > for all of them.
>
> If you set -m you can trap on SIGCHLD while waiting.

awesome, that's a good mitigation

#!/bin/bash
set -m
cnt=0
trap ': $(( --cnt ))' SIGCHLD
for n in {0..20} ; do
	(
	d=$(( RANDOM % 10 ))
	echo $n sleeping $d
	sleep $d
	) &
	: $(( ++cnt ))
	if [[ ${cnt} -ge 10 ]] ; then
		echo going to wait
		wait
	fi
done
trap - SIGCHLD
wait

it might be a little racy (wrt checking cnt >= 10 and then doing a wait), but
this is good enough for some things.  it does lose visibility into which pids
are live vs reaped, and their exit status, but i more often don't care about
that ...
-mike
Re: Parallelism a la make -j / GNU parallel
On Friday 04 May 2012 15:02:27 John Kearney wrote: > Am 04.05.2012 20:53, schrieb Mike Frysinger: > > On Friday 04 May 2012 13:46:32 Andreas Schwab wrote: > >> Mike Frysinger writes: > >>> i wish there was a way to use `wait` that didn't block until all the > >>> pids returned. maybe a dedicated option, or a shopt to enable this, > >>> or a new command. > >>> > >>> for example, if i launched 10 jobs in the background, i usually want to > >>> wait for the first one to exit so i can queue up another one, not wait > >>> for all of them. > >> > >> If you set -m you can trap on SIGCHLD while waiting. > > > > awesome, that's a good mitigation > > > > #!/bin/bash > > set -m > > cnt=0 > > trap ': $(( --cnt ))' SIGCHLD > > for n in {0..20} ; do > > > > ( > > > > d=$(( RANDOM % 10 )) > > echo $n sleeping $d > > sleep $d > > > > ) & > > > > : $(( ++cnt )) > > > > if [[ ${cnt} -ge 10 ]] ; then > > > > echo going to wait > > wait > > > > fi > > > > done > > trap - SIGCHLD > > wait > > > > it might be a little racy (wrt checking cnt >= 10 and then doing a wait), > > but this is good enough for some things. it does lose visibility into > > which pids are live vs reaped, and their exit status, but i more often > > don't care about that ... > > That won't work I don't think. seemed to work fine for me > I think you meant something more like this? no. i want to sleep the parent indefinitely and fork a child asap (hence the `wait`), not busy wait with a one second delay. the `set -m` + SIGCHLD interrupted the `wait` and allowed it to return. -mike signature.asc Description: This is a digitally signed message part.
Re: Parallelism a la make -j / GNU parallel
On Friday 04 May 2012 16:17:02 Chet Ramey wrote: > On 5/4/12 2:53 PM, Mike Frysinger wrote: > > it might be a little racy (wrt checking cnt >= 10 and then doing a wait), > > but this is good enough for some things. it does lose visibility into > > which pids are live vs reaped, and their exit status, but i more often > > don't care about that ... > > What version of bash did you test this on? Bash-4.0 is a little different > in how it treats the SIGCHLD trap. bash-4.2_p28. wait returns 145 (which is SIGCHLD). > Would it be useful for bash to set a shell variable to the PID of the just- > reaped process that caused the SIGCHLD trap? That way you could keep an > array of PIDs and, if you wanted, use that variable to keep track of live > and dead children. we've got associative arrays now ... we could have one which contains all the relevant info: declare -A BASH_CHILD_STATUS=( ["pid"]=1234 ["status"]=1# WEXITSTATUS() ["signal"]=13 # WTERMSIG() ) makes it easy to add any other fields people might care about ... -mike signature.asc Description: This is a digitally signed message part.
Re: Parallelism a la make -j / GNU parallel
On Friday 04 May 2012 15:25:25 John Kearney wrote: > Am 04.05.2012 21:13, schrieb Mike Frysinger: > > On Friday 04 May 2012 15:02:27 John Kearney wrote: > >> Am 04.05.2012 20:53, schrieb Mike Frysinger: > >>> On Friday 04 May 2012 13:46:32 Andreas Schwab wrote: > >>>> Mike Frysinger writes: > >>>>> i wish there was a way to use `wait` that didn't block until all the > >>>>> pids returned. maybe a dedicated option, or a shopt to enable this, > >>>>> or a new command. > >>>>> > >>>>> for example, if i launched 10 jobs in the background, i usually want > >>>>> to wait for the first one to exit so i can queue up another one, not > >>>>> wait for all of them. > >>>> > >>>> If you set -m you can trap on SIGCHLD while waiting. > >>> > >>> awesome, that's a good mitigation > >>> > >>> #!/bin/bash > >>> set -m > >>> cnt=0 > >>> trap ': $(( --cnt ))' SIGCHLD > >>> for n in {0..20} ; do > >>> ( > >>> d=$(( RANDOM % 10 )) > >>> echo $n sleeping $d > >>> sleep $d > >>> ) & > >>> : $(( ++cnt )) > >>> if [[ ${cnt} -ge 10 ]] ; then > >>> echo going to wait > >>> wait > >>> fi > >>> done > >>> trap - SIGCHLD > >>> wait > >>> > >>> it might be a little racy (wrt checking cnt >= 10 and then doing a > >>> wait), but this is good enough for some things. it does lose > >>> visibility into which pids are live vs reaped, and their exit status, > >>> but i more often don't care about that ... > >> > >> That won't work I don't think. > > > > seemed to work fine for me > > > >> I think you meant something more like this? > > > > no. i want to sleep the parent indefinitely and fork a child asap (hence > > the `wait`), not busy wait with a one second delay. the `set -m` + > > SIGCHLD interrupted the `wait` and allowed it to return. > > The functionality of the code doesn't need SIGCHLD, it still waits till > all the 10 processes are finished before starting the next lot. not on my system it doesn't. maybe a difference in bash versions. as soon as one process quits, the `wait` is interrupted, a new one is forked, and the parent goes back to sleep until another child exits. if i don't `set -m`, then i see what you describe -- the wait doesn't return until all 10 children exit. -mike signature.asc Description: This is a digitally signed message part.
Re: Parallelism a la make -j / GNU parallel
On Saturday 05 May 2012 04:28:50 John Kearney wrote: > Am 05.05.2012 06:35, schrieb Mike Frysinger: > > On Friday 04 May 2012 15:25:25 John Kearney wrote: > >> Am 04.05.2012 21:13, schrieb Mike Frysinger: > >>> On Friday 04 May 2012 15:02:27 John Kearney wrote: > >>>> Am 04.05.2012 20:53, schrieb Mike Frysinger: > >>>>> On Friday 04 May 2012 13:46:32 Andreas Schwab wrote: > >>>>>> Mike Frysinger writes: > >>>>>>> i wish there was a way to use `wait` that didn't block until all > >>>>>>> the pids returned. maybe a dedicated option, or a shopt to enable > >>>>>>> this, or a new command. > >>>>>>> > >>>>>>> for example, if i launched 10 jobs in the background, i usually > >>>>>>> want to wait for the first one to exit so i can queue up another > >>>>>>> one, not wait for all of them. > >>>>>> > >>>>>> If you set -m you can trap on SIGCHLD while waiting. > >>>>> > >>>>> awesome, that's a good mitigation > >>>>> > >>>>> #!/bin/bash > >>>>> set -m > >>>>> cnt=0 > >>>>> trap ': $(( --cnt ))' SIGCHLD > >>>>> for n in {0..20} ; do > >>>>> > >>>>> ( > >>>>> > >>>>> d=$(( RANDOM % 10 )) > >>>>> echo $n sleeping $d > >>>>> sleep $d > >>>>> > >>>>> ) & > >>>>> > >>>>> : $(( ++cnt )) > >>>>> > >>>>> if [[ ${cnt} -ge 10 ]] ; then > >>>>> > >>>>> echo going to wait > >>>>> wait > >>>>> > >>>>> fi > >>>>> > >>>>> done > >>>>> trap - SIGCHLD > >>>>> wait > >>>>> > >>>>> it might be a little racy (wrt checking cnt >= 10 and then doing a > >>>>> wait), but this is good enough for some things. it does lose > >>>>> visibility into which pids are live vs reaped, and their exit status, > >>>>> but i more often don't care about that ... > >>>> > >>>> That won't work I don't think. > >>> > >>> seemed to work fine for me > >>> > >>>> I think you meant something more like this? > >>> > >>> no. i want to sleep the parent indefinitely and fork a child asap > >>> (hence the `wait`), not busy wait with a one second delay. the `set > >>> -m` + SIGCHLD interrupted the `wait` and allowed it to return. > >> > >> The functionality of the code doesn't need SIGCHLD, it still waits till > >> all the 10 processes are finished before starting the next lot. > > > > not on my system it doesn't. maybe a difference in bash versions. as > > soon as one process quits, the `wait` is interrupted, a new one is > > forked, and the parent goes back to sleep until another child exits. if > > i don't `set -m`, then i see what you describe -- the wait doesn't > > return until all 10 children exit. > > Just to clarify what I see with your code, with the extra echos from me > and less threads so its shorter. that's not what i was getting. as soon as i saw the echo of SIGCHLD, a new "sleeping" would get launched. -mike signature.asc Description: This is a digitally signed message part.
Re: Parallelism a la make -j / GNU parallel
On Saturday 05 May 2012 23:25:26 John Kearney wrote: > Am 05.05.2012 06:28, schrieb Mike Frysinger: > > On Friday 04 May 2012 16:17:02 Chet Ramey wrote: > >> On 5/4/12 2:53 PM, Mike Frysinger wrote: > >>> it might be a little racy (wrt checking cnt >= 10 and then doing a > >>> wait), but this is good enough for some things. it does lose > >>> visibility into which pids are live vs reaped, and their exit status, > >>> but i more often don't care about that ... > >> > >> What version of bash did you test this on? Bash-4.0 is a little > >> different in how it treats the SIGCHLD trap. > > > > bash-4.2_p28. wait returns 145 (which is SIGCHLD). > > > >> Would it be useful for bash to set a shell variable to the PID of the > >> just- reaped process that caused the SIGCHLD trap? That way you could > >> keep an array of PIDs and, if you wanted, use that variable to keep > >> track of live and dead children. > > > > we've got associative arrays now ... we could have one which contains all > > the relevant info: > > declare -A BASH_CHILD_STATUS=( > > ["pid"]=1234 > > ["status"]=1# WEXITSTATUS() > > ["signal"]=13 # WTERMSIG() > > ) > > > > makes it easy to add any other fields people might care about ... > > Is there actually a guarantee that there will be 1 SIGCHLD for every > exited process. > Isn't it actually a race condition? when SIGCHLD is delivered doesn't matter. the child stays in a zombie state until the parent calls wait() on it and gets its status. so you can have `wait` return one child's status at a time. -mike signature.asc Description: This is a digitally signed message part.
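in shell terms that means no status is lost as long as each pid is waited on
explicitly; a small untested illustration:

	pids=()
	for n in 1 2 3 ; do
		( sleep ${n}; exit ${n} ) &
		pids+=( $! )
	done
	for pid in "${pids[@]}" ; do
		wait "${pid}"
		echo "pid ${pid} exited with $?"
	done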
Re: Parallelism a la make -j / GNU parallel
On Sunday 06 May 2012 03:25:27 John Kearney wrote: > Am 06.05.2012 08:28, schrieb Mike Frysinger: > > On Saturday 05 May 2012 23:25:26 John Kearney wrote: > >> Am 05.05.2012 06:28, schrieb Mike Frysinger: > >>> On Friday 04 May 2012 16:17:02 Chet Ramey wrote: > >>>> On 5/4/12 2:53 PM, Mike Frysinger wrote: > >>>>> it might be a little racy (wrt checking cnt >= 10 and then doing a > >>>>> wait), but this is good enough for some things. it does lose > >>>>> visibility into which pids are live vs reaped, and their exit status, > >>>>> but i more often don't care about that ... > >>>> > >>>> What version of bash did you test this on? Bash-4.0 is a little > >>>> different in how it treats the SIGCHLD trap. > >>> > >>> bash-4.2_p28. wait returns 145 (which is SIGCHLD). > >>> > >>>> Would it be useful for bash to set a shell variable to the PID of the > >>>> just- reaped process that caused the SIGCHLD trap? That way you could > >>>> keep an array of PIDs and, if you wanted, use that variable to keep > >>>> track of live and dead children. > >>> > >>> we've got associative arrays now ... we could have one which contains > >>> all the relevant info: > >>> declare -A BASH_CHILD_STATUS=( > >>> ["pid"]=1234 > >>> ["status"]=1# WEXITSTATUS() > >>> ["signal"]=13 # WTERMSIG() > >>> ) > >>> > >>> makes it easy to add any other fields people might care about ... > >> > >> Is there actually a guarantee that there will be 1 SIGCHLD for every > >> exited process. > >> Isn't it actually a race condition? > > > > when SIGCHLD is delivered doesn't matter. the child stays in a zombie > > state until the parent calls wait() on it and gets its status. so you > > can have `wait` return one child's status at a time. > > but I think my point still stands > trap ': $(( cnt-- ))' SIGCHLD > is a bad idea, you actually need to verify how many jobs are running not > just arbitrarily decrement a counter, because your not guaranteed a trap > for each process. I mean sure it will normally work, but its not > guaranteed to work. if `wait` setup BASH_CHILD_STATUS, then you wouldn't need the SIGCHLD trap at all. you could just do `wait`, get the info from BASH_CHILD_STATUS as to what child exactly was just reaped, and then proceed. as to the underlying question, since it's possible for bash itself to receive stacked SIGCHLDs, there's no reason it wouldn't be able to execute the trap the right number of times. -mike signature.asc Description: This is a digitally signed message part.
Re: Parallelism a la make -j / GNU parallel
On Monday 07 May 2012 09:08:33 Chet Ramey wrote: > On 5/5/12 12:28 AM, Mike Frysinger wrote: > > On Friday 04 May 2012 16:17:02 Chet Ramey wrote: > >> On 5/4/12 2:53 PM, Mike Frysinger wrote: > >>> it might be a little racy (wrt checking cnt >= 10 and then doing a > >>> wait), but this is good enough for some things. it does lose > >>> visibility into which pids are live vs reaped, and their exit status, > >>> but i more often don't care about that ... > >> > >> What version of bash did you test this on? Bash-4.0 is a little > >> different in how it treats the SIGCHLD trap. > > > > bash-4.2_p28. wait returns 145 (which is SIGCHLD). > > I wonder if you were running in Posix mode. Posix says yes, i think that is what i was doing `sh ./test.sh` -mike signature.asc Description: This is a digitally signed message part.
bash crashes when forking jobs and dynamically switching posix mode
in light of the recent discussion, i thought i could switch posix mode on/off on the fly so that i restricted myself to this mode only when using `wait`. unfortunately, that randomly crashes bash :). simple test case: $ cat test.sh #!/bin/bash max=20 num=0 set -m #set -o posix trap ': $(( --num ))' CHLD while : ; do sleep 0.$(( $RANDOM % 10 ))s & : $(( ++num )) if [[ $num -ge $max ]] ; then set -o posix wait set +o posix fi done $ bash --version | head -1 GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu) $ ./test.sh malloc: ../bash/execute_cmd.c:3555: assertion botched free: called with already freed block argument Aborting...Aborted (core dumped) i've also seen various corruption like: *** glibc detected *** /bin/bash: malloc(): memory corruption (fast): 0x01a1ee90 *** or: *** glibc detected *** /bin/bash: double free or corruption (fasttop): 0x013ea130 *** === Backtrace: = /lib64/libc.so.6(+0x773e5)[0x7fe20e8a03e5] /bin/bash(pop_stream+0x5d)[0x419f8d] /bin/bash[0x44e479] /bin/bash(run_unwind_frame+0x22)[0x44e5e2] /bin/bash(parse_string+0x131)[0x4677b1] /bin/bash(xparse_dolparen+0x65)[0x424125] /bin/bash[0x447938] /bin/bash[0x448a26] /bin/bash[0x449f5c] /bin/bash(expand_string_assignment+0x6a)[0x44a0ca] /bin/bash[0x44429a] /bin/bash[0x44465f] /bin/bash[0x44b2ed] /bin/bash(execute_command_internal+0x1755)[0x42bff5] /bin/bash(execute_command+0x4e)[0x42ee2e] /bin/bash[0x42f4b4] /bin/bash(execute_command_internal+0xa06)[0x42b2a6] /bin/bash[0x42f593] /bin/bash(execute_command_internal+0xa06)[0x42b2a6] /bin/bash(execute_command+0x4e)[0x42ee2e] /bin/bash[0x42f55e] /bin/bash(execute_command_internal+0xa06)[0x42b2a6] /bin/bash(execute_command+0x4e)[0x42ee2e] /bin/bash[0x42f55e] /bin/bash(execute_command_internal+0xa06)[0x42b2a6] /bin/bash(execute_command+0x4e)[0x42ee2e] /bin/bash[0x42f55e] /bin/bash(execute_command_internal+0xa06)[0x42b2a6] /bin/bash(execute_command+0x4e)[0x42ee2e] /bin/bash(execute_command_internal+0x12bf)[0x42bb5f] /bin/bash(execute_command+0x4e)[0x42ee2e] /bin/bash(reader_loop+0x8c)[0x418fcc] /bin/bash(main+0xdb9)[0x417919] /lib64/libc.so.6(__libc_start_main+0xed)[0x7fe20e84a3ed] /bin/bash[0x4181dd] -mike signature.asc Description: This is a digitally signed message part.
Re: bash crashes when forking jobs and dynamically switching posix mode
On Friday 11 May 2012 13:37:58 Chet Ramey wrote: > On 5/11/12 11:43 AM, Mike Frysinger wrote: > > in light of the recent discussion, i thought i could switch posix mode > > on/off on the fly so that i restricted myself to this mode only when > > using `wait`. > > unfortunately, that randomly crashes bash :). simple test case: > > Did you try this with bash-4.2? bash-4.2_p28 was where i first noticed the crashes (those tracebacks from glibc). the first crash was on an ubuntu bash-4.1. -mike signature.asc Description: This is a digitally signed message part.
Re: bash crashes when forking jobs and dynamically switching posix mode
On Friday 11 May 2012 14:25:08 Chet Ramey wrote: > On 5/11/12 11:43 AM, Mike Frysinger wrote: > > in light of the recent discussion, i thought i could switch posix mode > > on/off on the fly so that i restricted myself to this mode only when > > using `wait`. > > > unfortunately, that randomly crashes bash :). simple test case: > I ran through around 20,000 children on RHEL5 before I quit, and it didn't > crash using bash-4.2.28. However, there is a change in the development > branch that does a better job of not running the SIGCHLD trap in a signal > handling context. That's the only change I can see, though it should not > matter because the posix mode toggling means that code isn't executed. hrm, i'll see if i can't get something that crashes better for me -mike signature.asc Description: This is a digitally signed message part.
Re: Shell comment ignored
On Thursday 16 August 2012 20:36:45 Keith Clifford wrote: > some_var=# What I thought was a comment. > > The '#' is eaten by the variable assignment so that some_var gets a null > value and the rest of the line is not treated like a command. this is correct (if sometimes confusing to people) behavior. you need to have some byte in there that breaks up the tokens -- be it a tab, space, or newline. -mike signature.asc Description: This is a digitally signed message part.
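a minimal illustration: '#' only starts a comment at the beginning of a word,
so without the separating whitespace it simply becomes part of the assigned
value.

	$ some_var=#comment
	$ echo "<$some_var>"
	<#comment>
	$ some_var= #comment
	$ echo "<$some_var>"
	<>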
Re: bash 4.x filters out environmental variables containing a dot in the name
On Tuesday 21 August 2012 07:45:49 Ondrej Oprala wrote: > unless this bug is already fixed in some way yes, please retest with very latest bash-4.2 and the released patchsets -mike signature.asc Description: This is a digitally signed message part.
BASH_COMMAND is incorrect when working with subshells and error traps
consider this simple code:

$ cat test.sh
#!/bin/bash
trap 'echo $BASH_COMMAND; exit 1' ERR
set -e
true
(false)
true

when run, we see:
$ ./test.sh
true 1

this can be confusing when utilized with automatic backtraces :(

even when using errtrace and debugtrace, BASH_COMMAND is incorrect
-mike
Re: BASH_COMMAND is incorrect when working with subshells and error traps
On Wednesday 22 August 2012 12:30:11 Mike Frysinger wrote:
> consider this simple code:
>
> $ cat test.sh
> #!/bin/bash
> trap 'echo $BASH_COMMAND; exit 1' ERR
> set -e
> true
> (false)
> true
>
> when run, we see:
> $ ./test.sh
> true 1

err, i tweaked my shell script slightly so this output would not be ambiguous,
but forgot to post the updated shell script.

#!/bin/bash
trap 'echo $BASH_COMMAND; exit 1' ERR
set -e
true 1
(false)
true 2
-mike
[PATCH] bash: speed up `read -N`
Rather than using 1 byte reads, use the existing cache read logic.  This could
be sped up more, but this change is not as invasive and should (hopefully) be
fairly safe.

Signed-off-by: Mike Frysinger
---
 builtins/read.def | 21 -
 externs.h         |  1 +
 lib/sh/zread.c    | 15 +--
 3 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/builtins/read.def b/builtins/read.def
index e32dec7..81a1b3f 100644
--- a/builtins/read.def
+++ b/builtins/read.def
@@ -457,7 +457,10 @@ read_builtin (list)
   interrupt_immediately++;
   terminate_immediately++;
 
-  unbuffered_read = (nchars > 0) || (delim != '\n') || input_is_pipe;
+  if ((nchars > 0) && !input_is_tty && ignore_delim)
+    unbuffered_read = 2;
+  else if ((nchars > 0) || (delim != '\n') || input_is_pipe)
+    unbuffered_read = 1;
 
   if (prompt && edit == 0)
     {
@@ -505,10 +508,18 @@ read_builtin (list)
 	  print_ps2 = 0;
 	}
 
-      if (unbuffered_read)
-	retval = zread (fd, &c, 1);
-      else
-	retval = zreadc (fd, &c);
+      switch (unbuffered_read)
+	{
+	case 2:
+	  retval = zreadcn (fd, &c, nchars - nr);
+	  break;
+	case 1:
+	  retval = zread (fd, &c, 1);
+	  break;
+	default:
+	  retval = zreadc (fd, &c);
+	  break;
+	}
 
       if (retval <= 0)
 	{
diff --git a/externs.h b/externs.h
index 09244fa..a5ad645 100644
--- a/externs.h
+++ b/externs.h
@@ -479,6 +479,7 @@ extern ssize_t zread __P((int, char *, size_t));
 extern ssize_t zreadretry __P((int, char *, size_t));
 extern ssize_t zreadintr __P((int, char *, size_t));
 extern ssize_t zreadc __P((int, char *));
+extern ssize_t zreadcn __P((int, char *, int));
 extern ssize_t zreadcintr __P((int, char *));
 extern void zreset __P((void));
 extern void zsyncfd __P((int));
diff --git a/lib/sh/zread.c b/lib/sh/zread.c
index 5db21a9..af7d02b 100644
--- a/lib/sh/zread.c
+++ b/lib/sh/zread.c
@@ -101,15 +101,18 @@ static char lbuf[128];
 static size_t lind, lused;
 
 ssize_t
-zreadc (fd, cp)
+zreadcn (fd, cp, len)
      int fd;
      char *cp;
+     int len;
 {
   ssize_t nr;
 
   if (lind == lused || lused == 0)
     {
-      nr = zread (fd, lbuf, sizeof (lbuf));
+      if (len > sizeof (lbuf))
+	len = sizeof (lbuf);
+      nr = zread (fd, lbuf, len);
       lind = 0;
       if (nr <= 0)
 	{
@@ -123,6 +126,14 @@ zreadc (fd, cp)
   return 1;
 }
 
+ssize_t
+zreadc (fd, cp)
+     int fd;
+     char *cp;
+{
+  return zreadcn (fd, cp, sizeof (lbuf));
+}
+
 /* Don't mix calls to zreadc and zreadcintr in the same function, since
    they use the same local buffer. */
 ssize_t
-- 
1.7.12.4
Re: No such file or directory
On Tuesday 01 January 2013 15:10:00 Chet Ramey wrote: > On 1/1/13 2:49 PM, Aharon Robbins wrote: > > Michael Williamson wrote: > >> I have a complaint. Apparently, when unknowingly attempting to run a > >> 32-bit executable file on a 64-bit computer, bash gives the error > >> message "No such file or directory". That error message is baffling and > >> frustratingly unhelpful. Is it possible for bash to provide a better > >> error message in this case? > > > > It's not Bash. That is the error returned from the OS in errno when > > it tries to do an exec(2) of the file. Bash merely translates the > > error into words. > > FWIW, the file in question that's not found is either the 32-bit version > of the loader or one of the required 32-bit libraries, not the binary > itself. it's the ldso missing. if a lib was missing, the ldso would spit out a useful message telling you exactly which lib could not be found. at least, that's the standard Linux (glibc/uclibc/etc...) behavior. $ ./a.out: error while loading shared libraries: libfoo.so: cannot open shared object file: No such file or directory -mike signature.asc Description: This is a digitally signed message part.
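a quick way to see which loader a binary wants and whether it is actually
installed (the output below is just illustrative):

$ file ./a.out
./a.out: ELF 32-bit LSB executable, Intel 80386, dynamically linked ...
$ readelf -l ./a.out | grep interpreter
      [Requesting program interpreter: /lib/ld-linux.so.2]
$ ls -l /lib/ld-linux.so.2
ls: cannot access /lib/ld-linux.so.2: No such file or directory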
Re: No such file or directory
On Wednesday 02 January 2013 09:47:10 Roman Rakus wrote: > On 01/02/2013 03:31 PM, Eric Blake wrote: > > On 01/02/2013 07:28 AM, Michael Williamson wrote: > >> Thanks for your explanation. Now I have another question. > >> Why is the error message for ENOENT simply > >> "No such file or directory", when the man page for execve > >> has this complete description: > >> "The file filename or a script or ELF interpreter does not exist, > >> or a shared library needed for file or interpreter cannot be > >> found."? > > > > Because that's what strerror() in your libc reports. It's not bash's > > fault if libc produces a shorter message (correct in the common case) > > than what the man pages says is possible in the more comprehensive case. > > I think the best would be if kernel provides more informations. that ship has already sailed -mike signature.asc Description: This is a digitally signed message part.
Re: No such file or directory
On Wednesday 02 January 2013 07:07:49 Roman Rakus wrote: > On 01/02/2013 02:25 AM, Mike Frysinger wrote: > > On Tuesday 01 January 2013 15:10:00 Chet Ramey wrote: > >> On 1/1/13 2:49 PM, Aharon Robbins wrote: > >>> Michael Williamson wrote: > >>>> I have a complaint. Apparently, when unknowingly attempting to run a > >>>> 32-bit executable file on a 64-bit computer, bash gives the error > >>>> message "No such file or directory". That error message is baffling > >>>> and frustratingly unhelpful. Is it possible for bash to provide a > >>>> better error message in this case? > >>> > >>> It's not Bash. That is the error returned from the OS in errno when > >>> it tries to do an exec(2) of the file. Bash merely translates the > >>> error into words. > >> > >> FWIW, the file in question that's not found is either the 32-bit version > >> of the loader or one of the required 32-bit libraries, not the binary > >> itself. > > > > it's the ldso missing. if a lib was missing, the ldso would spit out a > > useful message telling you exactly which lib could not be found. at > > least, that's the standard Linux (glibc/uclibc/etc...) behavior. > > > > $ ./a.out: error while loading shared libraries: libfoo.so: cannot open > > shared object file: No such file or directory > > The patch stated in > http://lists.gnu.org/archive/html/bug-bash/2002-03/msg00052.html is > applied in Fedora. > Chet, is it possible to apply it to source? seems like over kill when a slightly tweaked message would be sufficient. if you can stat(command), then include a fragment like "(bad interpreter?)". if we start supporting the ELF format, then people will want to add their own file formats (OS X's dylib also comes to mind). -mike signature.asc Description: This is a digitally signed message part.
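a shell-level sketch of that idea (the wrapper below is purely hypothetical,
just to show the suggested wording):

run() {
	"$@"
	local ret=$?
	# 126/127 are bash's "could not execute"/"not found" statuses; if the
	# path itself exists, the missing piece is most likely the #! or ELF
	# interpreter, so tack on a hint
	if [ $ret -ge 126 ] && [ -e "$1" ]; then
		echo "$1: No such file or directory (bad interpreter?)" >&2
	fi
	return $ret
}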
interrupted system call when using named pipes on FreeBSD
this is somewhat a continuation of this thread:
http://lists.gnu.org/archive/html/bug-bash/2008-10/msg00091.html

i've gotten more or less the same report in Gentoo:
http://bugs.gentoo.org/447810

the simple test case is:
$ cat test.sh
#!/bin/bash
while :; do
	(:)& (:)& (:)& (:)& (:)& (:)& (:)& (:)& (:)& (:)&
	while read x ; do : ; done < <(echo foo)
done

execute `./test.sh` and we see failures pretty much all the time.

a simple patch to workaround/fix the issue by Yuta SATOH:
--- bash-4.2/redir.c
+++ bash-4.2/redir.c
@@ -632,7 +632,9 @@
     }
   else
     {
-      fd = open (filename, flags, mode);
+      do {
+	fd = open (filename, flags, mode);
+      } while ((fd < 0) && (errno == EINTR));
 #if defined (AFS)
       if ((fd < 0) && (errno == EACCES))
	{

but we're not sure if this is the route to take ?  seems like if bash is
handling SIGCHLD, there's no avoiding this sort of check.
-mike

signature.asc
Description: This is a digitally signed message part.
Re: [PATCH] bash: speed up `read -N`
On Monday 19 November 2012 19:46:17 Mike Frysinger wrote: > Rather than using 1 byte reads, use the existing cache read logic. > This could be sped up more, but this change is not as invasive and > should (hopefully) be fairly safe. ping ... -mike signature.asc Description: This is a digitally signed message part.
Re: [PATCH] bash: speed up `read -N`
On Friday 18 January 2013 07:49:22 Chet Ramey wrote: > On 1/18/13 1:31 AM, Mike Frysinger wrote: > > On Monday 19 November 2012 19:46:17 Mike Frysinger wrote: > >> Rather than using 1 byte reads, use the existing cache read logic. > >> This could be sped up more, but this change is not as invasive and > >> should (hopefully) be fairly safe. > > > > ping ... > > It's been in the devel branch for a month. ok. i don't watch the repo (didn't realize it was active even), and didn't see a mention on the list that it was picked up. -mike signature.asc Description: This is a digitally signed message part.
Re: interrupted system call when using named pipes on FreeBSD
On Friday 18 January 2013 07:55:00 Chet Ramey wrote: > On 1/18/13 1:30 AM, Mike Frysinger wrote: > > this is somewhat a continuation of this thread: > > http://lists.gnu.org/archive/html/bug-bash/2008-10/msg00091.html > > > > i've gotten more or less the same report in Gentoo: > > http://bugs.gentoo.org/447810 > > > > the simple test case is: > > $ cat test.sh > > #!/bin/bash > > while :; do > > (:)& (:)& (:)& (:)& (:)& (:)& (:)& (:)& (:)& (:)& > > while read x ; do : ; done < <(echo foo) > > done > > > > execute `./test.sh` and we see failures pretty much all the time. > > > > a simple patch to workaround/fix the issue by Yuta SATOH: > > --- bash-4.2/redir.c > > +++ bash-4.2/redir.c > > @@ -632,7 +632,9 @@ > > } > >else > > { > > - fd = open (filename, flags, mode); > > + do { > > + fd = open (filename, flags, mode); > > + } while ((fd < 0) && (errno == EINTR)); > > #if defined (AFS) > >if ((fd < 0) && (errno == EACCES)) > > { > > > > but we're not sure if this is the route to take ? seems like if bash is > > handling SIGCHLD, there's no avoiding this sort of check. > > Why is open returning -1/EINTR when the SIGCHLD handler is installed with > SA_RESTART? The intent is that opens get restarted even when bash handles > SIGCHLD. i saw some signal() usage, but if it should all be using sigaction/SA_RESTART, i'll have them look along those lines to see if there are signals not being registered correctly (or if the kernel sucks and isn't handling SA_RESTART correctly). -mike signature.asc Description: This is a digitally signed message part.
Re: Q on Bash's self-documented POSIX compliance...
On Sunday 27 January 2013 03:22:35 Pierre Gaston wrote: > On Sun, Jan 27, 2013 at 5:52 AM, John Kearney wrote: > > Am 27.01.2013 01:37, schrieb Clark WANG: > >> On Sat, Jan 26, 2013 at 1:27 PM, Linda Walsh wrote: > >>> I noted on the bash man page that it says it will start in posix > >>> compliance mode when started as 'sh' (/bin/sh). > >>> > >>> What does that mean about bash extensions like arrays and > >>> use of [[]]? > >>> > >>> Those are currently not-POSIX (but due to both Bash and Ksh having > >>> them, some think that such features are part of POSIX now)... > >>> > >>> If you operate in POSIX compliance mode, what guarantee is there that > >>> you can take a script developed with bash, in POSIX compliance mode, > >>> and run it under another POSIX compliant shell? > >>> > >>> Is it such that Bash can run POSIX compliant scripts, BUT, cannot be > >>> (easily) used to develop such, as there is no way to tell it to > >>> only use POSIX? > >>> > >>> If someone runs in POSIX mode, should bash keep arbitrary bash-specific > >>> extensions enabled? > >>> > >>> I am wondering about the rational, but also note that some people > >>> believe they are running a POSIX compatible shell when they use > >>> /bin/sh, but would get rudely surprised is another less feature-full > >>> shell were dropped in as a replacement. > >> > >> I think every POSIX compatible shell has its own extensions so there's > >> no guarantee that a script which works fine in shell A would still work > >> in shell B even if both A and B are POSIX compatible unless the script > >> writer only uses POSIX compatible features. Is there a pure POSIX shell > >> without adding any extensions? > > > > dash is normally a better gauge of how portable your script is, than > > bash in posix mode. > > It is, but it still has a couple of extensions over the standard right. and there's optional parts of the standard that some shells implement and others omit. > As for the rationale, making it strictly compatible in order to test > scripts probably requires quite some more work and I bet Chet would > not be against a --lint option or something like that but it may not > be his primary objective. some tool to analyze would be nice -mike signature.asc Description: This is a digitally signed message part.
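in the meantime, a couple of external things already get part of the way
there (both are separate projects, nothing built into bash):

$ dash -n script.sh          # parse-check the script with a leaner POSIX-ish shell
$ checkbashisms script.sh    # from Debian's devscripts; flags common bashisms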
[PATCH 2/2] use AC_CHECK_TOOL with `ar` rather than AC_CHECK_PROG
The TOOL variant will automatically search for a $host prefixed program
(e.g. x86_64-pc-linux-gnu) rather than looking for `ar` only.  This is
useful when cross-compiling and it matches the behavior of the other
tools that configure relies on (e.g. cc & ranlib).

Signed-off-by: Mike Frysinger
---
 configure.ac | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/configure.ac b/configure.ac
index ef49e0b..cbe84cf 100644
--- a/configure.ac
+++ b/configure.ac
@@ -625,7 +625,7 @@ dnl END READLINE and HISTORY LIBRARY SECTION
 dnl programs needed by the build and install process
 AC_PROG_INSTALL
-AC_CHECK_PROG(AR, ar, , ar)
+AC_CHECK_TOOL(AR, ar)
 dnl Set default for ARFLAGS, since autoconf does not have a macro for it.
 dnl This allows people to set it when running configure or make
 test -n "$ARFLAGS" || ARFLAGS="cr"
-- 
1.8.0.2
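to make the effect concrete: with this change a cross build like the
following (the target tuple is purely illustrative, and the configure output
is only approximate) picks up the prefixed ar the same way it already does
for cc/ranlib:

$ ./configure --host=aarch64-unknown-linux-gnu
...
checking for aarch64-unknown-linux-gnu-ar... aarch64-unknown-linux-gnu-ar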
[PATCH 1/2] drop -i file
Guessing it was added by accident while testing.

Signed-off-by: Mike Frysinger
---
 -i | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 delete mode 100644 -i

diff --git a/-i b/-i
deleted file mode 100644
index e69de29..000
-- 
1.8.0.2
random crashes when read interrupted by signal & clean up modifies terminal
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: x86_64-pc-linux-gnu-gcc
Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL
-DHAVE_CONFIG_H -I. -I./include -I. -I./include -I./lib -DCPPFLAGS_TEST
-DDEFAULT_PATH_VALUE='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
-DSTANDARD_UTILS_PATH='/bin:/usr/bin:/sbin:/usr/sbin'
-DSYS_BASHRC='/etc/bash/bashrc' -DSYS_BASH_LOGOUT='/etc/bash/bash_logout'
-DNON_INTERACTIVE_LOGIN_SHELLS -DSSH_SOURCE_BASHRC -O2 -march=amdfam10 -pipe
-g -Wimplicit-function-declaration
uname output: Linux vapier 3.7.1 #1 SMP PREEMPT Mon Dec 17 23:01:27 EST 2012
x86_64 AMD Phenom(tm) II X4 980 Processor AuthenticAMD GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.2
Patch Level: 42
Release Status: release

Description:

this simple testcase (distilled from [1]):
$ cat test.sh
#!/bin/bash
cleanup() {
	tput rmcup
	exit 0
}
trap cleanup SIGINT
while ! read -t0.01 -n1; do : ; done

run it in a loop like so:
$ while ./test.sh ; do :; done

then hold down ctrl+c.  eventually it'll crash.

[1] https://455990.bugs.gentoo.org/attachment.cgi?id=338214
-mike

signature.asc
Description: This is a digitally signed message part.
Re: random crashes when read interrupted by signal & clean up modifies terminal
On Friday 08 February 2013 10:04:29 Chet Ramey wrote: > > Machine Type: x86_64-pc-linux-gnu > > > > Bash Version: 4.2 > > Patch Level: 42 > > Release Status: release > > > > Description: > > > > this simple testcase (distilled from [1]): > > Can you reproduce this using the latest devel branch code? it does not crash, however the devel branch appears to be misbehaving, so i'm not sure the test is valid with bash-4.2_p42, if i run: read -t0.013 -n1 it returns pretty quickly with the devel branch though, it often times does not return automatically. it's as if the -t option were not specified. -mike signature.asc Description: This is a digitally signed message part.
Re: autoconf 2.69?
On Wednesday 27 March 2013 11:44:32 Roman Rakus wrote: > Support for the ARM 64 bit CPU architecture (aarch64) was introduced in > autoconf 2.69. bash uses an earlier version of > autoconf, preventing its being built. are you talking about config.{sub,guess}, or something else ? -mike signature.asc Description: This is a digitally signed message part.
Re: autoconf 2.69?
On Wednesday 27 March 2013 14:02:57 Chet Ramey wrote:
> On 3/27/13 1:07 PM, Mike Frysinger wrote:
> > On Wednesday 27 March 2013 11:44:32 Roman Rakus wrote:
> >> Support for the ARM 64 bit CPU architecture (aarch64) was introduced in
> >> autoconf 2.69. bash uses an earlier version of
> >> autoconf, preventing its being built.
> >
> > are you talking about config.{sub,guess}, or something else ?
>
> He's talking about config.{sub,guess}.  I updated them from the autoconf
> 2.69 versions a couple of days ago and they will be in the next snapshot.

that's what i figured.  i don't see why any distro would care what
config.{sub,guess} file gets shipped with package tarballs.  any sane
package manager should be transparently replacing them with the latest CVS
snapshot from the GNU config project.  leaving it to each package maintainer
to manually update them is ridiculous.

not that i'm against upstream people updating their autotool versions.
-mike

signature.asc
Description: This is a digitally signed message part.
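for reference, refreshing them is basically one fetch per file.  the commands
below assume bash keeps its copies under support/ and use the GNU config
project's current download locations, so treat the exact paths/URLs as
illustrative:

$ wget -O support/config.guess \
    'https://git.savannah.gnu.org/cgit/config.git/plain/config.guess'
$ wget -O support/config.sub \
    'https://git.savannah.gnu.org/cgit/config.git/plain/config.sub'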
Re: Local variables overriding global constants
On Wednesday 03 April 2013 09:34:18 Chet Ramey wrote:
> A variable is declared readonly for a reason, and, since readonly variables
> may not be assigned to, I don't believe you should be able to override a
> readonly variable by declaring it local to a function.  I did, however
> reluctantly, allow a way to do this by permitting a function writer to
> override local readonly variables (given bash's dynamic scoping) or to
> override global readonly variables when several levels down in the call
> stack.

sounds like the fundamental limitation is that the person writing the code
can't declare their intentions.  after your compromise, they now can.  if you
follow the convention of putting all code into a main() func (rather than
executing code in global scope), you can do:

#!/bin/bash
... declare all your funcs ...
main() {
	declare -gr VAR=value  # i really want this to stay read-only
	declare -r FOO=value   # i'm ok with people overriding this in funcs
	...
}
main "$@"; exit

having this flexibility just came up in the Gentoo world and i'm glad i saw
this thread as now we have a technical solution :).  we have some vars we
want to be read-only and not overridable, but we have a few we want to
default to read-only but allow people to localize.

> The current behavior is a compromise.  Compromises always come back to
> bite you because the inconsistencies that result are more trouble than
> the flexibility they offer is worth.

would it be possible to enable a mode where you had to explicitly
`declare +r` the var ?  being able to "simply" do `local FOO` allows
accidental overriding in sub funcs where the writer might not have even
realized they were clobbering something possibly important.

i'm not interested in debating the "you should be familiar with the code
base" as that's not being realistic.  large code bases in the open source
world, by their nature, accept drive by commits and accidents happen.
-mike

signature.asc
Description: This is a digitally signed message part.
Re: Local variables overriding global constants
On Wednesday 03 April 2013 21:38:19 Chet Ramey wrote: > On 4/3/13 12:31 PM, Mike Frysinger wrote: > > sounds like the fundamental limitation is that the person writing the > > code can't declare their intentions. after your compromise, they now > > can. if you follow the convention of putting all code into a main() > > func (rather than executing code in global scope), you can do: > > #!/bin/bash > > ... declare all your funcs ... > > main() { > > declare -gr VAR=value # i really want this to stay read-only > > declare -r FOO=value # i'm ok with people overriding this in funcs > > ... > > } > > main "$@"; exit > > > > having this flexibility just came up in the Gentoo world and i'm glad i > > saw this thread as now we have a technical solution :). we have some > > vars we want to be read-only and not overridable, but we have a few we > > want to default to read-only but allow people to localize. > > Yes, this has come up before. It's one reason to keep the compromise in > place. But is FOO being readonly in the function where it's declared and > not being able to unset it enough rationale to continue to support these > semantics? One problem is the one you point out below. if we had an explicit `declare +r`, then it would be OK :) i like languages which allow you to sprinkle sugar around to proactively protect you, but still allow you to override things in that 0.1% edge case. like C's const marker on a pointer -- the vast majority of the time, you want it to be const and to keep people from clobbering things, but every once in a blue moon, you want to ignore it and allow the store. > >> The current behavior is a compromise. Compromises always come back to > >> bite you because the inconsistencies that result are more trouble than > >> the flexibility they offer is worth. > > > > would it be possible to enable a mode where you had to explicitly > > `declare +r` the var ? being able to "simply" do `local FOO` allows > > accidental overriding in sub funcs where the writer might not have even > > realized they were clobbering something possibly important. > > It's an idea, but I don't really like the idea of making declare +r, which > is disallowed everywhere else, do something in just this one special > context. Maybe another flag. I'll have to think on it. is there a reason for not just allowing `declare +r` everywhere ? seems like the proposal fits nicely into the existing system (although you've said you're not terribly happy with said system): you can do `declare -gr` to get perm- read only before, or you can do `declare -r` to get read only by default while still allowing sub functions to override if they really want. > > i'm not interested in debating the "you should be familiar with the code > > base" as that's not being realistic. large code bases in the open > > source world, by their nature, accept drive by commits and accidents > > happen. > > Are you sure you meant to include this in your reply? Were you replying to > some other message at the same time? i meant it here to head off the naysayers who would propose my request was unnecessary based on the logic "if you don't know the code base, then you shouldn't be changing it". i've seen this argument line before, but maybe i'm being unnecessarily pessimistic :). -mike signature.asc Description: This is a digitally signed message part.
Re: Local variables overriding global constants
On Thursday 04 April 2013 10:20:50 Chet Ramey wrote:
> On 4/4/13 12:34 AM, Mike Frysinger wrote:
> >>> would it be possible to enable a mode where you had to explicitly
> >>> `declare +r` the var ?  being able to "simply" do `local FOO` allows
> >>> accidental overriding in sub funcs where the writer might not have even
> >>> realized they were clobbering something possibly important.
> >>
> >> It's an idea, but I don't really like the idea of making declare +r,
> >> which is disallowed everywhere else, do something in just this one
> >> special context.  Maybe another flag.  I'll have to think on it.
> >
> > is there a reason for not just allowing `declare +r` everywhere ?  seems
> > like the proposal fits nicely into the existing system (although you've
> > said you're not terribly happy with said system): you can do `declare
> > -gr` to get perm-read only before, or you can do `declare -r` to get
> > read only by default while still allowing sub functions to override if
> > they really want.
>
> The idea, I believe, behind `readonly' is that you want constants.  There's
> no reason to even offer the functionality if you can easily make things
> non-constant.

so would the more palatable thing be to introduce a new flag then like -R ?
it'd be like -r, but you could override it in local scope by doing
`declare +R`.  e.g. something like:

#!/bin/sh
declare -R VAR=val
foo() {
	declare +R VAR=myval
	echo $VAR
}
echo $VAR
foo
echo $VAR

this would show:
val
myval
val

but doing something like `local VAR` or `declare VAR` in foo() would result
in an error -- you need the explicit +R.

the issue i'm trying to solve is to provide an environment that protects the
dev from accidental clobbers without preventing the small edge cases when the
dev really truly wants to override something.  my proposal above satisfies
that, but i'm not tied to specific details as long as i can support the
stated use case.
-mike

signature.asc
Description: This is a digitally signed message part.
`declare -fp` mishandles heredoc with || statement
simple code snippet:
$ cat test.sh
func() {
cat > / < / <

signature.asc
Description: This is a digitally signed message part.
Re: `declare -fp` mishandles heredoc with || statement
On Saturday 01 June 2013 17:07:33 Chet Ramey wrote:
> On 5/31/13 10:37 PM, Mike Frysinger wrote:
> > simple code snippet:
> > $ cat test.sh
> > func() {
> > cat > / <
> > 11
> > EOF
> > }
> > declare -fp
> >
> > when run, we see the || statement is incorrectly moved to after the
> > heredoc:
> > $ bash ./test.sh
> > func ()
> > {
> > cat > / <
> > 11
> > EOF
> > || echo FAIL
> > }
> >
> > every version of bash i tried fails this way (2.05b through 4.2.45)
>
> I don't get this.  I see, when using bash-4.2.45:
>
> $ ./bash-4.2-patched/bash ./x1
> func ()
> {
> cat > /tmp/xxx < 11
> EOF
> echo FAIL
> }
>
> I get the same thing going all the way back to bash-4.0.  I see the same
> results you do on bash-3.2.51, but that's old enough that it's not going
> to change.

err, yeah, sorry.  running too many versions of bash (like 10) made me miss
the subtle nuance of the || being up top vs down below.  bash-4.0+ work.
-mike

signature.asc
Description: This is a digitally signed message part.
Re: don't just seek to the next line if the script has been edited
On Sunday 09 June 2013 16:59:15 Linda Walsh wrote: > jida...@jidanni.org wrote: > > All I know is there I am in emacs seeing things in the output of a > > running bash script that I want to tweak and get busy tweaking and saving > > changes before the script finishes, thinking that all this stuff will be > > effective on the next run of it, when lo and behold now it begins > > executing random bytes... Yes one can say that these are not C programs > > > If they were C programs, and you were to edit binary in place, while it was > executing, using memory-mapped I/O, the machine might very well start > executing garbage and your C program would dump core or similar. pretty sure the linux kernel (and others?) would return ETXTBSY and not even allow the write -mike signature.asc Description: This is a digitally signed message part.
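easy enough to see on Linux (file names here are arbitrary):

$ cp /bin/sleep ./mysleep
$ ./mysleep 60 &
$ echo clobber > ./mysleep
bash: ./mysleep: Text file busy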
Re: currently doable? Indirect notation used w/a hash
On Monday 10 June 2013 18:20:44 Chris F.A. Johnson wrote: > On Mon, 10 Jun 2013, Linda Walsh wrote: > >> Point taken, but the only way such a string would be passed as a > >> variable name is if it was given as user input -- which would, > >> presumably, be sanitized before being used. Programming it literally > >> makes as much sense as 'rm -rf /'. > > > > --- > > > > That still didn't POSIX-Gnu rm from disabling that ability. > > Did they? I'm not going to test it :( do it as non-root: $ rm -rf / rm: it is dangerous to operate recursively on `/' rm: use --no-preserve-root to override this failsafe -mike signature.asc Description: This is a digitally signed message part.
Re: currently doable? Indirect notation used w/a hash
On Tuesday 11 June 2013 03:23:29 Chris Down wrote: > On 11 Jun 2013 02:19, "Mike Frysinger" wrote: > > On Monday 10 June 2013 18:20:44 Chris F.A. Johnson wrote: > > > On Mon, 10 Jun 2013, Linda Walsh wrote: > > > >> Point taken, but the only way such a string would be passed as a > > > >> variable name is if it was given as user input -- which would, > > > >> presumably, be sanitized before being used. Programming it > > > >> literally makes as much sense as 'rm -rf /'. > > > > > > > > --- > > > > > > > > That still didn't POSIX-Gnu rm from disabling that ability. > > > > > > Did they? I'm not going to test it :( > > > > do it as non-root: > > $ rm -rf / > > rm: it is dangerous to operate recursively on `/' > > rm: use --no-preserve-root to override this failsafe > > -mike > > If that check didn't exist, rm -rf / would still be dangerous; it would > just give a lot of errors for the files it couldn't delete, and delete the > ones it can. Running it as a normal user doesn't make it safer. sure it does. you just have to be fast :P. -mike signature.asc Description: This is a digitally signed message part.
`printf -v foo ""` does not set foo=
simple test code:
unset foo
printf -v foo ""
echo ${foo+set}

that does not display "set".  seems to have been this way since the feature
was added in bash-3.1.
-mike

signature.asc
Description: This is a digitally signed message part.
Re: `printf -v foo ""` does not set foo=
On Monday 24 June 2013 16:13:01 Chet Ramey wrote: > On 6/17/13 1:27 AM, Mike Frysinger wrote: > > simple test code: > > unset foo > > printf -v foo "" > > echo ${foo+set} > > > > that does not display "set". seems to have been this way since the > > feature was added in bash-3.1. > > printf returns immediately if the format string is null. It has always > been implemented this way. seems like when the -v arg is in use, that [otherwise reasonable] shortcut should be not taken ? -mike signature.asc Description: This is a digitally signed message part.
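for anyone who needs this to work on released versions in the meantime, a
non-empty format string sidesteps the early return:

$ unset foo; printf -v foo '%s' ''; echo ${foo+set}
set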
Re: bash built-ins `true' and `false' undocumented
On Friday 27 September 2013 16:20:57 Chris Down wrote: > On 2013-09-27 20:19, Roland Winkler wrote: > > Yet I think that the info pages are supposed to provide the definitive > > information about GNU software. So I still believe that it would be > > useful to list these builtins in the info pages, too. Certainly, > > the info pages are more useful for getting an overview. `help foo' > > only helps if you already have some idea of what you are looking for. > > Well, they're directly part of the POSIX spec. I'm not sure there's a > need to reiterate absolutely everything that is already required by > POSIX. on a GNU system, coreutils provides `true` and `false` as well as man & info pages. i don't think having bash duplicate things would be useful in any way. -mike signature.asc Description: This is a digitally signed message part.
Re: [PATCH] bash: add socket server support
On Thursday 14 November 2013 11:32:18 Cedric Blancher wrote: > On 13 November 2013 15:46, Joel Martin wrote: > > On Wed, Nov 13, 2013 at 6:39 AM, Irek Szczesniak wrote: > >> The other problems I see is: > >> How can the script get access to the data returned by accept()? Unlike > >> ksh93 bash4 has no compound variables yet. > >> How can the accept() socket be passed to a child process so that > >> request processing can be passed to a fork()ed child process? > > > > The accept socket is not exposed to the script by this change. > > Why? > > What happens if someone wishes to set more flags on that socket? does bash even support setting flags on sockets today ? if not, then demanding more than the status quo here is a bit unreasonable. > Or > wishes to have the per-accept() data, like the requester address? Not > all protocols (say, NFS or non-krb5 rsh protocols) embed the request > address in the request itself, and if you look at the Apache archives > it can even be considered an attack of the address from accept() and > in the request do not match (requester spoofing attack). > > So far I can see the server socket you implemented is only barely > enough to do a HTTP server, and not even an good one. it's fsckin bash for fsck sake. it's never going to be able to be a "good" server. it's meant for quick hacks. if you have an important service utilizing this stuff, then your infrastructure deserves to burn to the ground when it fails :P. > Maybe a better way would be to implement something which creates a > server socket for accept, and then have a poll builtin (coincidentally > ksh93v- has a very nice poll(1) implementation) which can be used to > wait both on the accept socket and the sockets returned by accept. If > the accept sockets returns an event you could use something like > /dev/tcpserver/nextaccept to open to obtain the next socket returned > by accept(), and store the accept() data in an env variable. adding other features like this doesn't sound like a bad idea at all, but it also doesn't sound like it should preclude a simple & useful extension to existing bash code. -mike signature.asc Description: This is a digitally signed message part.
Re: [PATCH] bash: add socket server support
On Wednesday 13 November 2013 06:39:45 Irek Szczesniak wrote: > On Wed, Nov 13, 2013 at 7:35 AM, Piotr Grzybowski wrote: > > Hi Everyone, hi Joel, > > > > the idea is nice, and I can really see that it is useful, but I would > > > > be extremely careful with introducing those kind of changes, it can be > > easily interpreted as "backdoor feature", that is: from security point > > of view it could be a disaster. > > ':' in *any* Unix paths is not wise because its already used by $PATH. i don't think that matters here. we aren't talking about any path that is searched via $PATH, we're talking about a bash-internal pseudo path that is checked explicitly. there is no such /dev/tcp/ path on *nix systems; bash keys off that to do internal network parsing. -mike signature.asc Description: This is a digitally signed message part.
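for reference, this is the existing (client-side) form being discussed --
bash intercepts the path internally and nothing is ever looked up on the
filesystem (the host below is just a placeholder):

exec 3<>/dev/tcp/example.org/80
printf 'GET / HTTP/1.0\r\nHost: example.org\r\n\r\n' >&3
head -n 1 <&3
exec 3>&-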
Re: [PATCH] bash: add socket server support
On Thursday 14 November 2013 00:50:33 Piotr Grzybowski wrote: > I can think of an attack, just provide me with ip address of the host > :) and a root account password and login :) > > I agree that most systems have other abilities to do the (almost) > same, but yet, all systems (that is to say many more than have nc) > have bash, and while roots on those will expect netcat to be able to > open listen sockets they do not necessarily expect bash to do the > same. > My main point is: this patch means that every user that has access to > who-knows-how restricted shell can open listen sockets, and unless > someone thought of using grsecurity to deny access to bind(2) it is > unrestricted. as Joel said, the functionality he is adding does not impact the attack vector at all. bash already has networking functionality built into it. > This feature should at least be switchable, or otherwise restricted. it already is via a configure flag: --disable-net-redirections -mike signature.asc Description: This is a digitally signed message part.
Re: Why bash doesn't have bug reporting site?
On Tuesday 14 January 2014 01:31:01 Yuri wrote: > On 01/13/2014 12:32, Eric Blake wrote: > > A mailing list IS a bug reporting system. When something receives as > > low a volume of bug reports as bash, the mailing list archives are > > sufficient for tracking the status of reported bugs. It's not worth the > > hassle of integrating into a larger system if said system won't be used > > often enough to provide more gains than the cost of learning it. In > > particular, I will refuse to use any system that requires a web browser > > in order to submit or modify status of a bug (ie. any GOOD bug tracker > > system needs to still interact with an email front-end). > > e-mail has quite a few vulnerabilities. Spam, impersonation, etc. In the > system relying on e-mail, spam filter has to be present. And due to this > you will get false positives and false negatives, resulting in lost > information. yeah, none of those are real issues, nor are they specific to e-mail. > Among other benefits: > * Ability to search by various criteria. For ex. database-based tracking > system can show all open tickets or all your tickets. How can you do > this in ML? use one of the many archives and do free form text search. or download the files and run `grep` yourself :p. > * Ability to link with patches. In fact, github allows submitters to > attach a patch, and admin can just merge it in with one click, provided > there are no conflicts. git has dirt simple integration with e-mail too. -mike signature.asc Description: This is a digitally signed message part.
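e.g. on the receiving side a patch saved straight off the list applies
directly, and contributions can go out the same way (the file names below are
made up):

$ git am ./0001-bash-speed-up-read-N.mbox
$ git send-email --to=bug-bash@gnu.org ./0001-my-fix.patch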
Re: Bash-4.3-rc2 available for FTP
general.c is missing traps.h include:
general.c: In function ‘bash_tilde_expand’:
general.c:991:3: warning: implicit declaration of function ‘any_signals_trapped’ [-Wimplicit-function-declaration]
   if (any_signals_trapped () < 0)

unicode.c is missing stdio.h include:
unicode.c: In function ‘u32tocesc’:
unicode.c:141:5: warning: implicit declaration of function ‘sprintf’ [-Wimplicit-function-declaration]
     l = sprintf (s, "\\u%04X", wc);
unicode.c:141:9: warning: incompatible implicit declaration of built-in function ‘sprintf’ [enabled by default]
     l = sprintf (s, "\\u%04X", wc);

expr.c uses a func name that conflicts with glibc's math lib:
expr.c:210:17: warning: conflicting types for built-in function ‘exp2’ [enabled by default]
 static intmax_t exp2 __P((void));
guess it should be renamed to something else.
-mike

signature.asc
Description: This is a digitally signed message part.
Re: Bash-4.3-rc2 available for FTP
On Thursday, January 30, 2014 10:48:34 Chet Ramey wrote:
> o.  The shell now handles backslashes in regular expression arguments to the
>     [[ command's =~ operator slightly differently, resulting in more
>     consistent behavior.

hmm, i seem to be running into a bug here.  the bash man page suggests that
the behavior should match regex(3), but it doesn't seem to.  consider:

$ cat doit.sh
v="a\-b"
[[ ! a-b =~ ${v} ]]
: $?

# This is expected behavior.
$ bash-4.2_p45 -x ./doit.sh
+ v='a\-b'
+ [[ ! a-b =~ a\-b ]]
+ : 1

# This is unexpected behavior.
$ bash-4.3_rc2 -x ./doit.sh
+ v='a\-b'
+ [[ ! a-b =~ a\\-b ]]
+ : 0

compare that to regex(3) behavior:

$ cat test.c
#include <regex.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
	regex_t preg;
	int i1 = regcomp(&preg, argv[1], 0);
	int i2 = regexec(&preg, argv[2], 0, NULL, 0);
	printf("regcomp(%s) = %i\n"
	       "regexec(%s) = %i\n",
	       argv[1], i1, argv[2], i2);
	return 0;
}

$ gcc test.c
$ ./a.out 'a\-b' a-b
regcomp(a\-b) = 0
regexec(a-b) = 0

it had no problem matching ...
-mike

signature.asc
Description: This is a digitally signed message part.
Re: Bash-4.3-rc2 available for FTP
On Thursday, January 30, 2014 23:12:18 Andreas Schwab wrote: > Mike Frysinger writes: > > $ ./a.out 'a\-b' a-b > > regcomp(a\-b) = 0 > > The effect of \- in a BRE is undefined. yes, but that's kind of irrelevant for the point raised here. bash's =~ uses ERE, and passing in REG_EXTENDED to get ERE semantics with regcomp() yields the same result (at least with glibc) as above. -mike signature.asc Description: This is a digitally signed message part.
Re: Bash-4.3-rc2 available for FTP
On Thursday, January 30, 2014 23:53:55 Andreas Schwab wrote:
> Mike Frysinger writes:
> > yes, but that's kind of irrelevant for the point raised here.  bash's =~
> > uses ERE, and passing in REG_EXTENDED to get ERE semantics with regcomp()
> > yields the same result (at least with glibc) as above.
>
> The effect of \- in an ERE is still undefined.

so it is.  yet still irrelevant.  the GNU library defines the behavior
explicitly:
https://www.gnu.org/software/gnulib/manual/html_node/The-Backslash-Character.html#The-Backslash-Character

and it still doesn't explain why every version from bash-3.0 through
bash-4.2 behaves the same but bash-4.3 does not.  the release notes
specifically call out backslash changes to make things "more consistent
behavior".  i don't think changing the behavior of the example i posted
makes sense.
-mike

signature.asc
Description: This is a digitally signed message part.
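fwiw, sticking to constructs whose behavior actually is defined sidesteps the
4.2-vs-4.3 difference entirely; a bracket expression matches the literal '-'
everywhere:

$ v='a[-]b'
$ [[ a-b =~ ${v} ]]; echo $?
0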
Re: pattern substitution expands "~" even in quoted variables
On Fri 07 Mar 2014 16:15:05 Eduardo A. Bustamante López wrote:
> dualbus@debian:~$ for shell in /bin/bash ~/local/bin/bash; do "$shell" -c
> 'p=foo_bar; echo "${p/_/\~} $BASH_VERSION"'; done
> foo\~bar 4.2.37(1)-release
> foo~bar 4.3.0(2)-release

you can get same behavior in <=bash-4.2 and >=bash-4.3 by using single
quotes:

P="foo_bar"
MY_P=${P/_/'~'}
echo "${MY_P}"
-mike

signature.asc
Description: This is a digitally signed message part.
Re: RFE: a way to echo the arguments with quoting
On Sun 02 Mar 2014 10:12:04 Andreas Schwab wrote: > Dave Yost writes: > > I have an ugly function I wrote for zsh that does this: > > > > Sat 14:17:25 ip2 yost /Users/yost > > 1 634 Z% echo-quoted xyz \$foo 'a b c ' '\n' > > xyz '$foo' 'a b c ' '\n' > > Sat 14:17:53 ip2 yost /Users/yost > > 0 635 Z% > > > > It would be nice if there were an easy way to do this in bash. > > printf "%q" does that. indeed -- also remember that you need "$@" and not $@ (as OP's first e-mail used). e.g.: set -- a 'b c d' 1 2 printf '%q ' "$@" -mike signature.asc Description: This is a digitally signed message part.
Re: When a hashed pathname is deleted, search PATH
On Tue 18 Mar 2014 01:04:03 Linda Walsh wrote: > Chet Ramey wrote: > > Because the execution fails in a child process. You'd be able to fix it > > for that process, but would do nothing about the contents of the parent > > shell's hash table. > > > > The way the option works now is to check the hash lookups and delete > > anything that is no longer an executable file, then redo the lookup and > > hash the new value. > > > Wouldn't bash notice that the child exited in <.1 seconds ( > or is it less? as soon as you talk about trying to time something, you're obviously looking at it wrong. having a system that only works when the cpu/disk is fast and idle is a waste of time and bad for everyone. if bash could rely on real time signals, sigqueue could be used to send a signal with attached info (like ENOEXEC) and the parent could look for that. alternatively, accept that it's not a real problem in practice for the majority of people as Chet has pointed out ;). -mike signature.asc Description: This is a digitally signed message part.
Re: bash cross with installed readline
On Sun 16 Mar 2014 13:30:55 Andrew Kosteltsev wrote: > When we build bash for some targets the INCLUDES variable for BUILD_CC > contains the path to target readline headers. This path points to the > target headers which not preferred for utilities which prepared for build > machine. > > Also when we have installed readline on the target the configure script > avoids cross_compilation problems with AC_TRY_RUN and substitutes wrong > (very old) version of libreadline. If we sure that we installed correct > readline version we can change configure script for cross compilation > process. > > Please look at attached patches. If this solution can be used for common > case then please apply these patches for the future versions of bash. i haven't seen the issues you describe for the first patch. maybe it's because i don't pass full paths to the target readline but instead let the toolchain find it for me. so there is never any -I flag mixing. the 2nd patch is the wrong way to approach the problem. change the AC_TRY_RUN into an AC_TRY_COMPILE test by relying on RL_VERSION_{MAJOR,MINOR} being defined and doing an incremental search for its value. see how autoconf implements its AC_CHECK_SIZEOF macro using only compile tests for the algorithm. -mike signature.asc Description: This is a digitally signed message part.
Re: When a hashed pathname is deleted, search PATH
On Tue 18 Mar 2014 21:11:07 Linda Walsh wrote: > Mike Frysinger wrote: > > On Tue 18 Mar 2014 01:04:03 Linda Walsh wrote: > >> Chet Ramey wrote: > >>> Because the execution fails in a child process. You'd be able to fix it > >>> for that process, but would do nothing about the contents of the parent > >>> shell's hash table. > >>> > >>> The way the option works now is to check the hash lookups and delete > >>> anything that is no longer an executable file, then redo the lookup and > >>> hash the new value. > >> > >> > >> Wouldn't bash notice that the child exited in <.1 seconds ( > >> or is it less? > > > > as soon as you talk about trying to time something, you're obviously > > looking at it wrong. having a system that only works when the cpu/disk > > is fast and idle is a waste of time and bad for everyone. > > --- > > Um... this is a User Interface involving humans, and you are looking > for something that needs to be 100%? If this was a reactor control > program, that's one thing, but in deciding what solution to implement to > save some small lookup time or throw it away, an 90% solution is > probably fine. It's called a heuristic. AI machines use them. > Thinking people use them. Why should bash be different? except now you have useless knobs users don't want to deal with, and now your solution is "sometimes it works, sometimes it doesn't, so really you can't rely on it and you have to go back to the same system you've been using all along". trotting out other systems (like defense in depth) doesn't change the fact that your idea is flaky at best and is entirely user visible (unlike defense in depth strategies). i already highlighted a technical way of solving it 100% of the time. -mike signature.asc Description: This is a digitally signed message part.
Re: Special built-ins not persisting assignments
On Tue 25 Mar 2014 00:39:18 Pollock, Wayne wrote:
> $ echo $BASH_VERSION
> 4.2.45(1)-release
>
> $ unset foo
> $ foo=bar :
> $ echo $foo
>
> $
> ===
> According to POSIX/SUS issue 7, assignments for special builtins
> should persist.  So the output should be ``bar''.
>
> Is there a setting I should turn off (or need to enable), to
> make this work correctly?
>
> I was able to confirm this bug for version 4.2.37(1)-release as
> well.  (zsh 4.3.17 (i386-redhat-linux-gnu) has the same bug.)

as noted, this is a feature of bash :)

POSIX also imposes annoying behavior that bash fixes:
unset foo
f() { :; }
foo=bar f
echo $foo

POSIX will show bar (ugh) while bash will not (yeah!)
-mike

signature.asc
Description: This is a digitally signed message part.
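fwiw, when the POSIX behavior really is wanted, asking for it explicitly does
make the original example persist:

$ bash --posix -c 'unset foo; foo=bar :; echo "${foo-unset}"'
bar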
Re: ls doesn't work in if statements in bash 4.3
On Wed 26 Mar 2014 17:45:33 billyco...@gmail.com wrote:
> I thought about the changes I have made recently and I had added the
> following into my .bashrc:
>
> eval $(dircolors -b ~/.dir_colors)
>
> I commented it out, and now everything works.  I think it's still a bug,
> though I know how to fix it.

doubtful the problem is bash.  if ls is writing control codes, then bash
will treat them as part of the filename.  you can verify by piping the
output through hexdump and seeing what shows up -- see below.

this is a good example though of why using `ls` is almost always the wrong
answer.  use unadorned globs:
for f in dog*; do ...

i'd point out that if any of the files in your dir have whitespace, your
code would also break:
touch 'dog a b c'
for f in `ls dog*`; do ...
-mike

signature.asc
Description: This is a digitally signed message part.
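something along these lines makes any stray escape sequences obvious (the
--color flag is forced here just to mimic what a colorizing alias would emit
into a command substitution):

$ ls --color=always dog* | hexdump -C | head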
easier construction of arrays
On Thu 27 Mar 2014 08:01:45 Greg Wooledge wrote:
> files=()
> while IFS= read -r -d '' file; do
>   files+=("$file")
> done < <(find . -iname '*.mp3' ! -iname '*abba*' -print0)

i've seen this construct duplicated so many times :(.  i wish we had a
native option for it.  maybe something like:
read -A files -r -d '' < <(find . -iname '*.mp3' -print0)

perhaps there is another shell out there that implements something that can
replace that loop that bash can crib ?
-mike

signature.asc
Description: This is a digitally signed message part.
Re: easier construction of arrays
On Thu 27 Mar 2014 19:15:13 Pierre Gaston wrote: > On Thu, Mar 27, 2014 at 5:53 PM, Mike Frysinger wrote: > > On Thu 27 Mar 2014 08:01:45 Greg Wooledge wrote: > > > files=() > > > while IFS= read -r -d '' file; do > > > > > > files+=("$file") > > > > > > done < <(find . -iname '*.mp3' ! -iname '*abba*' -print0) > > > > i've seen this construct duplicated so many times :(. i wish we had a > > native > > > > option for it. maybe something like: > > read -A files -r -d '' < <(find . -iname '*.mp3' -print0) > > > > perhaps there is another shell out there that implements something that > > can replace that loop that bash can crib ? > > An option to change the delimiter for readarray/mapfile? thanks, i wasn't aware of that func. that seems like the easiest solution. -mike signature.asc Description: This is a digitally signed message part.
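for the record, that is exactly what eventually landed: readarray/mapfile
grew a -d option in bash-4.4, so the whole loop collapses to a single call:

mapfile -t -d '' files < <(find . -iname '*.mp3' ! -iname '*abba*' -print0)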
Re: ls doesn't work in if statements in bash 4.3
On Fri 28 Mar 2014 10:23:17 Chris Down wrote: there's really nothing to add to Chris's wonderful post :) -mike signature.asc Description: This is a digitally signed message part.