Error when configuring bash 3.2
[...]
checking for bits64_t... no
checking for ptrdiff_t... yes
checking whether stat file-mode macros are broken... no
checking whether #! works in shell scripts... ./configure: ./conftest: /bin/cat: bad interpreter: No such file or directory
yes
checking whether the ctype macros accept non-ascii characters... no
checking if dup2 fails to clear the close-on-exec flag... no
checking whether pgrps need synchronization... no
[...]
Re: Commands executed with $($prog) do not run properly
On Sat, Nov 6, 2010 at 6:12 AM, wrote:
> Configuration Information [Automatically generated, do not change]:
> Machine: i486
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='i486' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i486-pc-linux-gnu' -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -I. -I../bash -I../bash/include -I../bash/lib -g -O2 -Wall
> uname output: Linux main.hgbhome.net 2.6.18-4-686 #1 SMP Wed May 9 23:03:12 UTC 2007 i686 GNU/Linux
> Machine Type: i486-pc-linux-gnu
>
> Bash Version: 3.2
> Patch Level: 39
> Release Status: release
>
> Description:
>
> Hi,
>
> if a program is started with $($prog) the output is different from or even $(). Example with find (excluding a directory from search):
>
> $ ls -l /test
> total 16
> drwxrwxr-x 2 hgb hgb 4096 Nov 6 02:14 a
> drwxrwxr-x 2 hgb hgb 4096 Nov 6 02:14 b
> drwxrwxr-x 2 hgb hgb 4096 Nov 6 02:14 c
> drwxrwxr-x 2 hgb hgb 4096 Nov 6 02:14 d
> $ find /test -type d ! -wholename "/test"
> /test/c
> /test/b
> /test/d
> /test/a
> $ echo "$(find /test -type d ! -wholename "/test")"
> /test/c
> /test/b
> /test/d
> /test/a
> $ prog='find /test -type d ! -wholename "/test"'
> $ echo $prog
> find /test -type d ! -wholename "/test"
> $ echo "$($prog)"
> /test
> /test/c
> /test/b
> /test/d
> /test/a
> $
>
> As seen above, /test is shown when $prog is executed.
>
> I see the same behavior with
> GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu) (CentOS)
> GNU bash, version 3.2.39(1)-release (i486-pc-linux-gnu) (debian)
>
> I understand that the above looks crazy and not like a problem out of real life. I had to use find to crawl through a huge tree and remove only a few files in it. I came across the above problem when feeding an array with the output of find; I fixed it for now by removing the unwanted entries from the output array.
>
> Thanks in advance for looking into this.
>
> Regards
> -- hgb

Your problem is there: $ prog='find /test -type d ! -wholename "/test"'
The double quotes around "/test" are themselves quoted by the single quotes, so they are not special to the shell anymore; only literal, unquoted quotes are special to the shell. When you later execute $prog, the " " are not removed, they are no more special than the other literal chars, so find is told to exclude the seven-character name "/test" (quotes included) and nothing matches. Check this: http://mywiki.wooledge.org/BashFAQ/050
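If you need to store a command, store it in an array instead of a string; the array keeps the words and the quoting intact. For instance:

$ prog=(find /test -type d ! -wholename "/test")
$ echo "$("${prog[@]}")"   # runs find with /test as a single argument, quotes removed by the shell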
Re: BUG: echo builtin does not handle -- arguments
On Mon, Nov 22, 2010 at 10:33 AM, Марк Коренберг wrote:
> in latest bash:
>
> suppose script:
>
> for i in "${filenames[@]}"; do
>     echo "$i"
> done
>
> if a malicious user gives a file the name "-e", an empty string will be emitted to stdout, but the string "-e" should be.
>
> It would be nice if I could write
> echo -- "$i"
> as many tools, such as grep, do.
>
> Now, I replace echo "$i" with printf "%s\n" "$i", but it is a workaround, as I think.
>
> --
> Segmentation fault

It's one of the known problems with echo. SUS specifically says that echo shall not recognize --, that echo is not portable, and that "New applications are encouraged to use printf instead of echo". So printf is more than a workaround, it's the way to go.
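A quick illustration of the difference:

$ i=-e
$ echo "$i"   # -e is taken as an option; only a newline is printed
$ printf '%s\n' "$i"
-e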
Re: comparison inside [[]] is not numeric comparison?
On Sat, Nov 20, 2010 at 2:45 AM, john.ruckstuhl wrote:
> In bash, a comparison inside "[["/"]]" is lexicographic not numeric?
> This isn't what I expected.
> Which part of the documentation would set me straight? If someone could quote the fine manual, that would be great.
>
> $ if [[ 2000 > 200 ]]; then echo pass; else echo wierd; fi
> pass
> $ if [[ 1000 > 200 ]]; then echo pass; else echo wierd; fi
> wierd
>
> $ set | grep BASH_VERSION
> BASH_VERSION='3.2.51(24)-release'
>
> Thanks,
> John R.

http://www.gnu.org/software/bash/manual/bash.html#Bash-Conditional-Expressions

string1 < string2
    True if string1 sorts before string2 lexicographically.
string1 > string2
    True if string1 sorts after string2 lexicographically.
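For a numeric comparison, use -gt or an arithmetic command, for instance:

$ [[ 1000 -gt 200 ]] && echo pass
pass
$ (( 1000 > 200 )) && echo pass
pass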
Re: Recursively calling a bash script goes undetected and eats all system memory
On Wed, Dec 8, 2010 at 11:15 AM, Diggory Hardy wrote: > Hello, > > With a simple script such as that below, bash can enter an infinite loop of > eating memory until the system is rendered unusable: > > #!/bin/bash > PATH=~ > infinitely-recurse > > Save this as infinitely-recurse in your home directory and run - and make > sure you kill it pretty quick. OK, so an obvious bug when put like this, > though it bit me recently (mistakenly using PATH as an ordinary variable and > having a script with the same name as a system program). Would it not be > simple to add some kind of protection against this — say don't let a script > call itself more than 100 times? > > Thanks, > Diggory > Well, I'm not a big fan of the technique, but out there I see a lot of wrapper scripts calling themselves to automatically restart an application.
Re: Recursively calling a bash script goes undetected and eats all system memory
On Fri, Dec 10, 2010 at 11:25 AM, Diggory Hardy wrote: > On Thursday 09 December 2010 Pierre Gaston wrote: >> On Wed, Dec 8, 2010 at 11:15 AM, Diggory Hardy >> wrote: >> > Hello, >> > >> > With a simple script such as that below, bash can enter an infinite loop >> > of eating memory until the system is rendered unusable: >> > >> > #!/bin/bash >> > PATH=~ >> > infinitely-recurse >> > >> > Save this as infinitely-recurse in your home directory and run - and make >> > sure you kill it pretty quick. OK, so an obvious bug when put like this, >> > though it bit me recently (mistakenly using PATH as an ordinary variable >> > and having a script with the same name as a system program). Would it not >> > be simple to add some kind of protection against this — say don't let a >> > script call itself more than 100 times? >> > >> > Thanks, >> > Diggory >> > >> Well, I'm not a big fan of the technique, but out there I see a lot of >> wrapper scripts calling themselves to automatically restart an >> application. >> > Uh. Then over time it is legitimate to have a script recursively call itself > a few thousand times with each instance still in memory? Well they use exec to avoid that. > The potential to grind the system to a complete halt is pretty serious > though. Perhaps the ideal solution would be to have the kernel intervene > before it starts thrashing memory, but that doesn't seem to happen. Sure, but you can do that with pretty much any tools available.
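To be concrete about the exec pattern mentioned above, here is a minimal sketch of such a restart wrapper (myapp is a placeholder name):

#!/bin/bash
# run the application, then restart by re-running this wrapper;
# exec replaces the current process, so instances don't pile up in memory
myapp "$@"
exec "$0" "$@"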
Re: how to escape single quote?
On Wed, Dec 29, 2010 at 12:44 PM, ali hagigat wrote:
> Dennis,
>
> Nice. Much appreciated.
> What logic is it using, you think, when we use echo 'ppp'\''qqq'?

The logic is:
1) you close the quotes with '
2) you concatenate a single quote using another form of quoting, for instance \' (but "'" also works)
3) then you reopen the single quotes for the rest of your string

It's not a single form of quoting, it's 3 parts put together: 'A' and \' and 'B'

> Why is echo 'ppp\'qqq' not OK? Can it not escape the single quote with \ ?

No, because \ is not special inside single quotes.
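For instance:

$ echo 'ppp'\''qqq'
ppp'qqq
$ echo "ppp'qqq"   # double quotes also work when the string contains no other special chars
ppp'qqq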
Re: List of keys of an associative array
On Fri, Dec 31, 2010 at 11:38 AM, wrote:
> Hello
>
> For regular arrays, we can get the list of keys by using the form ${!some_array[@]}.
> But this just doesn't work for associative arrays.
>
> ${!some_associative_array[@]} is actually 0. Is that a bug?
> Is there another way to get the list of keys available in an associative array?

$ declare -A a; a[foo]=bar; a[baz]=bux; echo ${!a[@]}
baz foo
Re: List of keys of an associative array
On Fri, Dec 31, 2010 at 11:56 AM, wrote:
> Ha. Indeed, if i use declare -A, it works.
>
> But why is bash letting me use foo[bar]=something in the first place, if I don't declare foo as an associative array?
>
> Maybe the bug's here.
>
> D

It's because [ ] is an arithmetic context. In an arithmetic context the content of the variable is evaluated, and a null string is evaluated to 0, just like when you do echo $((var))
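For instance, with bar unset, the assignment silently goes to the indexed array element foo[0]:

$ unset foo bar
$ foo[bar]=something   # the subscript bar arithmetically evaluates to 0
$ echo "${foo[0]}"
something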
Setting TMOUT for select is broken?
Configuration Information [Automatically generated, do not change]:
Machine: i686
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='i686' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i686-pc-linux-gnu' -DCONF_VENDOR='pc' -DLOCALEDIR='$
uname output: Linux pgas-laptop 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 i686 GNU/Linux
Machine Type: i686-pc-linux-gnu

Bash Version: 4.1
Patch Level: 5
Release Status: release

Description:
When TMOUT is set, select just redisplays the menu; nothing is executed, and the select never exits.

Repeat-By:
bash -c 'TMOUT=1;select f in f b;do echo foo;done'
Re: How to get control from bash script of a batch job
On Tue, Feb 8, 2011 at 7:38 AM, mauede wrote:
> If I type "qstat" on a terminal line I get the batch running process ID, which is not the same as $!.

You are confusing bash's jobs with the jobs of the batch utilities; they are 2 unrelated things.
Re: Inconsistence when checking if a pattern is quoted or not for `==' and `=~' in [[ ]]
On Thu, Feb 17, 2011 at 4:35 PM, Clark J. Wang wrote:
> On Thu, Feb 17, 2011 at 9:20 PM, Andreas Schwab wrote:
> > "Clark J. Wang" writes:
> >
> > > See following script output:
> > >
> > > bash-4.2# cat quoted-pattern.sh
> > > [[ .a == \.a* ]] && echo 1 # not quoted
> > > [[ aa =~ \.a* ]] && echo 2 # quoted
> > >
> > > [[ aa =~ \a. ]] && echo 3 # not quoted
> > > [[ aa =~ \a\. ]] && echo 4 # quoted
> > > bash-4.2# bash42 quoted-pattern.sh
> > > 1
> > > 3
> > > bash-4.2#
> > >
> > > From my understanding 1 2 3 4 should all be printed out.
> >
> > "aa" contains no period, so why should it be matched?
>
> If it should not be matched why I got 3 printed out?

=~ \a. matches an "a" followed by any char, so "aa" matches.

In detail:
== \.a* matches a dot followed by an "a" followed by 0 or more chars
=~ \.a* matches a dot followed by 0 or more "a" anywhere in the string
=~ \a. matches an "a" followed by one char, whatever that char is, anywhere in the string
=~ \a\. matches an "a" followed by a dot anywhere in the string
Re: Inconsistence when checking if a pattern is quoted or not for `==' and `=~' in [[ ]]
On Thu, Feb 17, 2011 at 4:56 PM, Clark J. Wang wrote:
> On Thu, Feb 17, 2011 at 7:09 PM, Clark J. Wang wrote:
>
> > See following script output:
> >
> > bash-4.2# cat quoted-pattern.sh
> > [[ .a == \.a* ]] && echo 1 # not quoted
> > [[ aa =~ \.a* ]] && echo 2 # quoted
> >
> > [[ aa =~ \a. ]] && echo 3 # not quoted
> > [[ aa =~ \a\. ]] && echo 4 # quoted
> > bash-4.2# bash42 quoted-pattern.sh
> > 1
> > 3
> > bash-4.2#
> >
> > From my understanding 1 2 3 4 should all be printed out.
>
> The point is: ``Any part of the pattern may be quoted to force it to be matched as a string.'' And backslash is one of bash's quoting chars. But in my examples, a pattern with `\' in it sometimes is considered to be quoted and sometimes unquoted. It's not clear to me what's the exact rule to tell if a pattern is quoted or not.

Aaah well, the "it" in "force it" is the part, not the whole pattern. So if you do \.. the first . is a literal dot, the second one matches any char.
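To illustrate:

$ [[ x.y =~ \.. ]] && echo match   # a literal dot followed by any char
match
$ [[ xzy =~ \.. ]] || echo no match
no match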
Re: Why escape char `:' with `\' when auto completing filenames?
On Fri, Feb 18, 2011 at 12:17 PM, Clark J. Wang wrote:
> On Fri, Feb 18, 2011 at 5:38 PM, Andreas Schwab wrote:
>
> > Maarten Billemont writes:
> >
> > > Why are we escaping all word break characters? rm file:name and rm file\:name are effectively identical, I'm not sure I see the need for escaping it.
> >
> > How do you differentiate between completing file:name and completing file:name?
>
> I don't understand what you meant. :(

If you complete PATH, you want to be able to press tab after:
PATH=/bin:/home/clark/
so that it completes "/home/clark"

If you complete a file named 'foo:bar', you want to be able to press tab after:
ls foo:b
so that it completes just "foo:b"

So how can bash decide whether it should complete just "b", as in the PATH example, or "foo:b"?
Re: ``complete -b'' does not include ``coproc''
On Mon, Feb 21, 2011 at 10:12 AM, Clark J. Wang wrote:
> Tested with 4.2:
>
> bash-4.2# complete -b help
> bash-4.2# help co
> command   compgen   complete   compopt   continue
> bash-4.2#
>
> --
> Clark J. Wang

It's probably because:

$ type coproc
coproc is a shell keyword

Likewise "help tim" doesn't complete "time".
Re: set -e, but test return not honoured _conditionally_
On Fri, Feb 18, 2011 at 5:36 PM, Steffen Daode Nurpmeso <sdao...@googlemail.com> wrote:
> Configuration Information [Automatically generated, do not change]:
> Machine: i386
> OS: darwin10.0
> Compiler: gcc
> Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='i386' -DCONF_OSTYPE='darwin10.0' -DCONF_MACHTYPE='i386-apple-darwin10.0' -DCONF_VENDOR='apple' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -DMACOSX -I. -I/SourceCache/bash/bash-80/bash -I/SourceCache/bash/bash-80/bash/include -I/SourceCache/bash/bash-80/bash/lib -I/SourceCache/bash/bash-80/bash/lib/intl -I/var/tmp/bash/bash-80~440/lib/intl -arch x86_64 -arch i386 -g -Os -pipe -no-cpp-precomp -mdynamic-no-pic -DM_UNIX -DSSH_SOURCE_BASHRC
> uname output: Darwin sherwood.local 10.6.0 Darwin Kernel Version 10.6.0: Wed Nov 10 18:13:17 PST 2010; root:xnu-1504.9.26~3/RELEASE_I386 i386
> Machine Type: i386-apple-darwin10.0
>
> Bash Version: 3.2
> Patch Level: 48
> Release Status: release
>
> Description:
> I am not a sophisticated shell programmer, but i really think this time it's a shell fault.
> You may invoke the code snippet via 'script test1 test3' or so.
>
>    #!/bin/sh
>    set -e
>
>    _t() {
>        echo "Entry _t for $CURR"
>        test "$PARAMS" != "${PARAMS/$CURR/}" && { return; }
>        # Uncomment the next line and the script won't fail!
>        #echo "Exit _t for $CURR"
>    }
>
>    PARAMS="$@"
>
>    CURR='test1' _t
>    CURR='test2' _t
>    CURR='test3' _t

set -e only exits the shell when simple commands exit with errors, and not for commands with && for instance.
Re: Strange bug in arithmetic function
On Mon, Feb 21, 2011 at 11:13 AM, Marcel de Reuver wrote:
> In a bash script I use: $[`date --date='this week' +'%V'`%2] to see if the week number is even.
> Only in week 08 the error is: bash: 08: value too great for base (error token is "08"); the same in week 09, all others are Ok...
>
> GNU bash, version 3.2.39(1)-release (i486-pc-linux-gnu)
> Copyright (C) 2007 Free Software Foundation, Inc.
>
> Running Ubuntu 2.6.24-28 server

This is a feature, numbers with a leading 0 are interpreted as octal. Also $[ ] is deprecated, prefer $(( ))
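For the week-number test you can force base 10 with the base# prefix, for instance (assuming GNU date):

$ echo $(( 10#$(date +%V) % 2 ))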
Re: set -e, but test return not honoured _conditionally_
On Tue, Feb 22, 2011 at 3:46 PM, Greg Wooledge wrote:
> On Fri, Feb 18, 2011 at 04:36:06PM +0100, Steffen Daode Nurpmeso wrote:
> > I am not a sophisticated shell programmer, but i really think this time it's a shell fault.
>
> You think *what* is the shell's fault?
>
> > You may invoke the code snippet via 'script test1 test3' or so.
> >
> > #!/bin/sh
> > set -e
> >
> > _t() {
> >     echo "Entry _t for $CURR"
> >     test "$PARAMS" != "${PARAMS/$CURR/}" && { return; }
> >     # Uncomment the next line and the script won't fail!
> >     #echo "Exit _t for $CURR"
> > }
> >
> > PARAMS="$@"
> >
> > CURR='test1' _t
> > CURR='test2' _t
> > CURR='test3' _t
>
> Setting aside for the moment what you are attempting to do here (which looks rather unorthodox), I don't see what your complaint is. You asked for "set -e", meaning for the shell to exit any time a simple command fails. Then you called three simple commands in a row, each one named "_t". If any of them fails, the shell is supposed to exit. And it does so, yes? Is that what you are complaining about?
>
> Are you confused about what your function is doing? It is returning success or failure based on what's in the variables PARAMS and CURR. When it fails, the exit status tells bash to abort, because you asked bash to do so.
>
> http://mywiki.wooledge.org/BashFAQ/035 - How can I handle command-line arguments (options) to my script easily?
>
> http://mywiki.wooledge.org/BashFAQ/105 -- Why doesn't set -e (set -o errexit) do what I expected?

More explanation of your example. This line:

test "$PARAMS" != "${PARAMS/$CURR/}" && { return; }

never causes the shell to exit, because the failing test is part of a && list. However, it causes the shell function to exit with a non-zero status if the test fails. So your call to _t returns 1, and the shell exits because _t is a simple command.

When you add the echo, it is executed because the && list doesn't cause the shell to exit, and since echo succeeds, it causes the function to return 0, so your call to _t doesn't exit the shell anymore.

PS: maybe you wanted "|| return" so that it always returns 0?
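A minimal illustration: in the first command f returns 1 (the status of the && list) and the shell exits; in the second, the trailing echo makes f return 0 and the shell survives:

$ bash -c 'set -e; f() { false && return; }; f; echo survived'
$ bash -c 'set -e; f() { false && return; echo in f; }; f; echo survived'
in f
survived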
Re: BUG? RFE? printf lacking unicode support in multiple areas
On Fri, May 20, 2011 at 10:31 AM, Linda Walsh wrote:
>
> It appears printf in bash doesn't support unicode characters in a couple of ways:
>
> 1) use of the \u and \U escape sequences in the format string (16 and 32 bit Unicode values).

$ printf '%s: \u6444\n' $BASH_VERSION
4.2.8(1)-release: 摄
Re: integer addition crash
On Wed, Jul 20, 2011 at 1:35 PM, Cédric Martínez Campos wrote:
> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -I. -I../bash -I../bash/include -I../bash/lib -g -O2 -Wall
> uname output: Linux Asus-M50Vc 2.6.35-28-generic #50-Ubuntu SMP Fri Mar 18 18:42:20 UTC 2011 x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 4.1
> Patch Level: 5
> Release Status: release
>
> Description:
> The integer addition $(($x+$y)) crashes if $x==08 or $x==09 (with a leading 0).
>
> Repeat-By:
> $ echo $((8+1))
> 9
> $ echo $((9+1))
> 10
> $ echo $((08+1))
> bash: 08: too big element for the base (the error element is "08")

Integers with a leading 0 are considered octal by bash, and 08 is not valid in base 8. You can force the base by doing: echo $((10#08+1))
Re: How to do? Possible?
On Mon, Jul 25, 2011 at 12:20 PM, Linda Walsh wrote: > > > > I got great help in learning about how to do the perl equiv of (var1, > var2, var3)= (list) using read var1 var2 var3 <<<(list). > > I use it often to get back lists of values from subroutine calls, but with > sometimes useful, and sometimes hindering fact that when I do > read var1 var2 var3 <<<$(function x y z) > > function's ability to affect the global env, is 'nixed', Since you are already using global variables, why not simply use a couple more for the return values?
Re: How to do? Possible?
On Mon, Jul 25, 2011 at 8:33 PM, Linda Walsh wrote:
> Pierre Gaston wrote:
>> Since you are already using global variables, why not simply use a couple more for the return values?
>
> ---
> Because a subshell cannot access the global variables of the parent.

Uh? You don't make sense.
1) a subshell can access the parent's variables just fine
2) your initial problem was that the subshell in your solution keeps the function from affecting the global environment

I'm just saying:

foo () { ret1=blah; ret2=bleh; }
foo; echo $ret1 $ret2
Re: Syntax Question...
On Sun, Aug 14, 2011 at 7:51 AM, Linda Walsh wrote:
>
> Dennis Williamson wrote:
>>
>> On Sat, Aug 13, 2011 at 6:41 PM, Linda Walsh wrote:
>>
>>> I have to use 'indirect' syntax: ${!name}
>>> But that doesn't seem to play well with array syntax:
>>> ${#${!name}[*]}   # bash: ${#${!name}[*]}: bad substitution
>>>
>>> What syntax would I use for this?
>>
>> Please read BashFAQ/006: http://mywiki.wooledge.org/BashFAQ/006
>
> I looked at the page, read every section that was >basic & some basic...
>
> nothing about using indirect variables with arrays... (let alone assigning to them)..

lies
Re: Syntax Question...
On Sun, Aug 14, 2011 at 10:39 PM, Linda Walsh wrote:
>
> Pierre Gaston wrote:
>>
>> On Sun, Aug 14, 2011 at 7:51 AM, Linda Walsh wrote:
>>
>>> Dennis Williamson wrote:
>>>>
>>>> On Sat, Aug 13, 2011 at 6:41 PM, Linda Walsh wrote:
>>>>
>>>>> I have to use 'indirect' syntax: ${!name}
>>>>> But that doesn't seem to play well with array syntax:
>>>>> ${#${!name}[*]}   # bash: ${#${!name}[*]}: bad substitution
>>>>>
>>>>> What syntax would I use for this?
>>>>
>>>> Please read BashFAQ/006: http://mywiki.wooledge.org/BashFAQ/006
>>>
>>> I looked at the page, read every section that was >basic & some basic...
>>>
>>> nothing about using indirect variables with arrays... (let alone assigning to them)..
>>
>> lies
>
> Is this an anime? 'Lies! All lies!..' Such lines are usually said by the bad guy, and it is made clearly evident to others that such is said, not because it is true, but because the person is in denial of the truth or wishes to convince others that it is not true.
>
> But, usually (depends on how dark the anime is (or, how much it is like real life ;~/ )) the truth comes out.
>
> But in any event, should I be gullible and ask for proof, or should I simply take it as humor? If the former, please quote the section that shows using a variable that holds the name of an array to be used (and assigned to); else, ... I love the French... though I admit to an often low grasp of French colloquialisms and generally (in English) less than 100% detection of subtle humor -- that percentage falls as language and cultural differences rise.
>
> But (I'm so damn unfun/lame sometimes), if you don't respond, I'll assume you believed it not worth it as was supposed to be humorous, and having to explain such crimps the humorist's style... (I do have a sense of humor -- just not always the same as some others'..).
> Linda

The proof is in the faq, you could have found it if you were not busy trolling the list.
Re: Syntax Question...
On Mon, Aug 15, 2011 at 2:31 AM, Linda Walsh wrote:
>
> Re: BashFAQ/006: http://mywiki.wooledge.org/BashFAQ/006
> Pierre Gaston wrote:
>>
>> Linda:
>>> please quote the section that shows using a variable that holds the name of an array to be used (and assigned to); else ...
>>
>> The proof is in the faq, you could have found it if you were not busy trolling the list.
>
> Guess this was not possible.
> The FAQ covers indirect,
> it covers arrays,
> but I see no place where it covers the combination.
> If you see such, then quote it. Don't just wave your arms around making unsubstantiated claims or accusations.
>
> I didn't ask for the impossible -- just a quote.
>
> Apparently that was too much to ask for, so you call me a troll.
> Ya, right. Who's the troll?

# Bash -- trick #1. Seems to work in bash 2 and up.
realarray=(...) ref=realarray; index=2
tmp="$ref[$index]"
echo "${!tmp}"   # gives array element [2]
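The same trick also answers the original ${#...} question: expand all the elements through the indirection into a temporary array and count that, for instance:

tmp="$ref[@]"
copy=( "${!tmp}" )
echo "${#copy[@]}"   # number of elements of the array named by $ref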
Too many expansions when assigning to a "dynamic" array element?
I don't use this, but we often have questions on irc (and from time to time here) about indirect array references, so I thought it might be worth mentioning.

From the example of http://mywiki.wooledge.org/BashFAQ/006

ref='x[$(touch evilfile; echo 0)]'
ls -l evilfile   # No such file or directory
declare "$ref=value"
ls -l evilfile   # It exists now!

The same thing happens if you use read and printf -v: read "$ref"
Re: conditional aliases are broken
On Thu, Aug 18, 2011 at 6:46 AM, Linda Walsh wrote:
>
> Eric Blake wrote:
>>
>> On 08/15/2011 04:40 PM, Sam Steingold wrote:
>>> * Andreas Schwab [2011-08-15 22:04:04 +0200]:
>>>> Sam Steingold writes:
>>>>> Cool. Now, what does this imply?
>>>>
>>>> "For almost every purpose, shell functions are preferred over aliases."
>>>
>>> so, how do I write
>>> alias a=b
>>> as a function?
>>> (remember that arguments may contain spaces &c)
>>
>> a() { b "$@"; }
>
> ---
> Way too easy.
> How do you declare a variable for storage in the context of the caller (using a function)?
>
> The DebugPush & DebugPop routines I used needed to store the current func's flags in its context -- I found it very troublesome, inside a function, to store a value into a local variable in the caller.

Is this a question? Or are you trying to make a point?

For the question (if I understand correctly):
1) Most variables don't need to be declared in bash.
2) bash 4.2 introduces a new -g option to declare a global variable inside a function.

For the point: yes, the manual says "most" not "all". One interesting hack with aliases is: http://www.chiark.greenend.org.uk/~sgtatham/aliases.html
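For instance (bash 4.2+):

$ f() { declare -g set_by_f=hello; }
$ g() { f; echo "$set_by_f"; }
$ g
hello

Note that declare -g assigns in the global scope, not the caller's scope; it only reaches the caller's variable when no local of the same name shadows it.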
Re: Syntax Question...
On Thu, Aug 18, 2011 at 5:13 PM, Linda Walsh wrote:
>
> Pierre Gaston wrote:
>>
>> On Mon, Aug 15, 2011 at 2:31 AM, Linda Walsh wrote:
>>
>>> Re: BashFAQ/006: http://mywiki.wooledge.org/BashFAQ/006
>>> Pierre Gaston wrote:
>>>>
>>>> Linda:
>>>>> please quote the section that shows using a variable that holds the name of an array to be used (and assigned to); else ..
>>>>
>>>> The proof is in the faq, you could have found it if you were not busy trolling the list.
>>>
>>> Guess this was not possible. The FAQ covers indirect, it covers arrays, but I see no place where it covers the combination. If you see such, then quote it. Don't just wave your arms around making unsubstantiated claims or accusations.
>>>
>>> I didn't ask for the impossible -- just a quote.
>>>
>>> Apparently that was too much to ask for, so you call me a troll. Ya, right. Who's the troll?
>>
>> # Bash -- trick #1. Seems to work in bash 2 and up.
>> realarray=(...) ref=realarray; index=2
>> tmp="$ref[$index]"
>> echo "${!tmp}"   # gives array element [2]
>
> Ok, I'll give you credit for being serious in believing the page answered my question.
>
> But my question was: what is the syntax to use an indirect array reference directly to reference members of the array, as indicated in my failed example:
>
>> echo ${#${!name}[*]}
> bash: ${#${!name}[*]}: bad substitution
>
> # note that ${#arrname[*]} (or [@]) will give you the num elements in the array. I wanted to sub in ${!var} for 'arrname' in order to
> ---
> Michael Witten immediately got the issues I was trying to avoid and responded:
>
> It's probably what you're trying to avoid, but you'll probably have to construct and then eval the right code by hand:
>
> $(eval "echo \${#$name[*]}")
>
> I was trying to avoid any workaround that used one or more common workarounds like evals and/or use of tmp vars... I.e. I saw no such syntax.
>
> As others confirmed: such syntax is NOT possible in the current bash. It was in the context of that, when you indicated there was an answer to my 'how to do syntax for xxyz', on the page in question, and thus my need to have you explain what you meant (via a quote showing the use of such).
>
> I now understand that you thought such a response would suffice. Perhaps you also understand why that's not what I was looking for.
>
> *peace*
> linda

I understood a while ago, now I'll just stop feeding the troll.
Re: YAQAGV (Yet Another Question About Global Variables)
On Tue, Aug 23, 2011 at 4:42 PM, Steven W. Orr wrote:
> I made a decision to implement the require module written by Noah Friedman that comes in the examples part of the bash distro. This is the trick for implementing the provide / require functionality of features.
>
> I love it. It works great. I just ran into one little snag. Not a show stopper, but I'm wondering if there's a better way to deal with it.
>
> Some of the modules I have are designated as library modules, and so are used as args to require. Since require is a function, any variables that are declared in a required module which declare global variables using typeset then become local variables to the require function. Then after the modules are loaded, the variables that used to be global are gone.
>
> I went through the library modules and removed the typeset commands from all of the global variables and that seems to fix it. What got lost however was the functional part of the typeset commands. For example, a variable was declared as
>
> typeset -a foo=( abc def ghi )
> and now it has to be changed to
> foo=( abc def ghi )
>
> No big loss. But I also had some things declared as constants or integers (or both).
>
> typeset -i x1=44
> typeset -ir x2=55
>
> I'm not 100% sure that that won't break something, but for now I'm just glad that they didn't go out of scope.

Take care that the difference between integer and string can be subtle:

$ typeset -i i=0; while ((i<=10)); do i=$i+1; done; echo $i   # works
11
$ unset i; i=0; while ((i<=10)); do i=$i+1; done; echo $i   # "works" too, but:
0+1+1+1+1+1+1+1+1+1+1+1

i is evaluated inside (( )), so 0+1+1+1+1+1+1+1+1+1+1+1 is evaluated at each iteration and it "works".
Re: how to extract an array and sorted by the array
On Wed, Sep 7, 2011 at 7:43 AM, lina wrote:
> (...)
> I wish field 2 of file 2 to be arranged in the same sequence as field 2 of file 1.
>
> Thanks
>
> (...)

For very general scripting questions like these, prefer the comp.unix.shell group; this list is primarily about bugs in bash.
Re: Time delay on command not found
On Tue, Oct 11, 2011 at 3:38 AM, Bill Gradwohl wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> when I typo a command, bash comes back with a command not found but hangs the terminal for way too long.
>
> How do I get rid of the delay? I want it to release the terminal immediately.

Check if "command_not_found_handle" is declared:

declare -f command_not_found_handle

These days some Linux distributions run a Python script in it; find where it's defined, or unset it in your .bashrc.
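For instance, assuming the handler function is what causes the delay:

unset -f command_not_found_handle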
crash when using read -t
On Linux, I get an assertion error, using the following example:

$ for i in {1..1000};do read -t .02

[...]

if (i + 4 >= size)	/* XXX was i + 2; use i + 4 for multibyte/read_mbchar */
  {
    input_string = (char *)xrealloc (input_string, size += 128);
    remove_unwind_protect ();
    add_unwind_protect (xfree, input_string);
  }

at least blocking and unblocking ALRM around this seems to fix the crash.

Pierre
Re: How to get filename completion without variable expansion?
On Thu, Nov 17, 2011 at 6:22 PM, Chet Ramey wrote:
> On 11/16/11 7:13 AM, jens.schmid...@arcor.de wrote:
>> Hi,
>>
>> I have the following problem:
>>
>> (Environment or regular) variable FOO contains the path of existing directory "/foo". When I have a file "/foo/bar" in that directory and when I press TAB in the following commandline ('|' denoting the cursor position)
>>
>> $ cat $FOO/b|
>>
>> bash expands the commandline to
>>
>> $ cat /foo/bar |
>>
>> However, I would like to expand it to
>>
>> $ cat $FOO/bar |
>>
>> that is, keep the variable unexpanded, exactly as bash does not expand tilde characters during filename completion.
>
> This is the default bash-4.2 behavior.

I think he wants something different from the current behavior: he wants the variable to stay a variable, so that it is expanded when the command is executed, while the current behavior is to escape the $ so that no expansion happens when the command is executed.
Re: How to directly modify $@?
On Sun, Nov 20, 2011 at 6:43 PM, Peng Yu wrote: > Hi, > > I don't see if there is a way to directly modify $@. I know 'shift'. > But I'm wondering if there is any other way to modify $@. > > ~$ 1=x > -bash: 1=x: command not found > ~$ @=(a b c) > -bash: syntax error near unexpected token `a' > you need to use the set builtin: set -- a b c
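set -- can also edit the current list in place, for instance to replace $2:

$ set -- a b c
$ set -- "$1" X "${@:3}"
$ echo "$@"
a X c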
Re: How to protect > and interpret it later on? (w/o using eval)
On Fri, Dec 2, 2011 at 8:24 AM, Peng Yu wrote:
> Hi,
>
> ~$ cat ../execute.sh
> #!/usr/bin/env bash
>
> echo "$@"
> "$@"
>
> $ ../execute.sh ls >/tmp/tmp.txt
> $ cat /tmp/tmp.txt #I don't want "ls" be in the file
> ls
> main.sh
>
> '>' will not work unless eval is used in execute.sh.
>
> $ ../execute.sh ls '>' /tmp/tmp.txt
> ls > /tmp/tmp.txt
> ls: cannot access >: No such file or directory
> /tmp/tmp.txt
>
> How to make execute protect > and interpret it later on w/o using eval?

This really belongs to the new help-bash@gnu.org mailing list:
https://lists.gnu.org/mailman/listinfo/help-bash

The simplest is to redirect the output to standard error or the terminal:

echo "$@" >&2   # note that using set -x will give you this for free
echo "$@" > /dev/tty

Another possibility is to pass the file name as an argument instead:

file=$1
shift
echo "$@"
exec > "$file"
"$@"
Re: popd always has return status 0
On Fri, Dec 2, 2011 at 2:01 AM, wrote:
> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -I. -I../bash -I../bash/include -I../bash/lib -g -O2 -Wall
> uname output: Linux cirrus 3.0.0-13-generic #22-Ubuntu SMP Wed Nov 2 13:27:26 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 4.2
> Patch Level: 10
> Release Status: release
>
> Description:
> popd does not appear to return a nonzero exit status when the directory stack is empty anymore. The bash manual says the following for popd:
>
> "If the popd command is successful, a dirs is performed as well, and the return status is 0. popd returns false if an invalid option is encountered, the directory stack is empty, a non-existent directory stack entry is specified, or the directory change fails."
>
> I am seeing this problem on: Ubuntu 11.10 oneiric
> bash package version: 4.2-0ubuntu4
>
> Repeat-By:
> Put some directories on the stack with pushd, then call popd and check the return status. It is expected to be nonzero when the stack is empty or an error occurs.
> The following function should pop all dirs off the stack and then stop, however the while loop continues infinitely:
>
> popall ()
> {
>     local not_done=0;
>     while [ $not_done -eq 0 ]; do
>         popd;
>         not_done=$?;
>     done
> }

I don't seem to be able to reproduce this:

$ dpkg-query --show bash
bash	4.2-0ubuntu4
$ echo $BASH_VERSION
4.2.10(1)-release
$ declare -f popall
popall ()
{
    local not_done=0;
    while [ $not_done -eq 0 ]; do
        popd;
        not_done=$?;
    done
}
$ pushd test
~/test ~
$ popall
~
bash: popd: directory stack empty
$
Re: '>;' redirection operator
On Sat, Dec 24, 2011 at 5:08 PM, Bill Gradwohl wrote: > On Thu, Dec 22, 2011 at 5:34 PM, Thorsten Glaser wrote: > >> People complain about the readability of code enough already, and as >> practice shows, things like [[ have been around and nobody uses them >> anyway (often using just POSIX, but not even knowing – myself included >> – that POSIX sh has $((…))⁺; or even using less-than-POSIX, e.g. in >> autoconf, which means that anything we were to introduce now would not >> be used in the places where it counts anyway, for compatibility). >> > > > I'm a professional software developer (operating system internals mostly), > but I have no standing in your group. However, I'd like to provide a hint > as to why features aren't known about or used. I agree that adding new > capabilities would largely be a wasted effort unless the most serious BASH > deficiency is addressed first. It's the documentation - or lack of it > PROPERLY done. Adding features that only your core group knows about might > be "scratching your own itch", but does little to help the average end user > unless its PROPERLY documented. > > The man page is written the way Robbie the Robot used to speak in the old > black and white TV days. Short, cryptic and in many cases unintelligible IN > THE DETAILS. Alternatively, one might snicker that some lawyer wrote it to > purposely make it difficult to understand. As with most of the > documentation I've seen in the Linux community, it's awful. > > What's documented may indeed be the truth, but its not the whole truth, and > lacks so many of the details, the finer points, as to make what's written > of little value in and of itself. I find myself experimenting > (experimenting - euphemism for wasting lots of valuable time) with test > scripts precisely because the documentation largely just hints at what's > possible. > > The only people with the expertise to write proper documentation are the > authors / maintainers of the actual code base. Anyone else trying to do > that job without a thorough understanding of what the code actually says, > would be guessing in many cases, and would produce a sub optimum product. > Better perhaps than what is available now, but still not what it could be. > > The single largest failing in BASH, and in most of what's available open > source, is the documentation. > > > -- > Bill Gradwohl nice troll.
Re: let's set the positional parameters with a simple 'read'
On Tue, Jan 3, 2012 at 7:16 PM, wrote:
...
> So I propose we 'wreck the language' to allow me to do
> $ read @
> to set $@, same with 1, 2, .. * (almost the same as @).

Since you can use "read -a arr" to set arr[0] arr[1] ... etc, it's not that interesting. Setting the positional parameters is really only useful as a trick in shells that don't have function-local variables or arrays.
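For instance:

$ read -a arr <<< "a b c"
$ echo "${arr[1]}"
b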
Re: Restricted Bash - Not so restrictive (in 4.2 as well)
On Thu, Jan 12, 2012 at 12:26 PM, Sarnath K - ERS, HCLTech wrote:
> Hello Jonathan,
>
> Thanks for your inputs. I was able to create a super-restricted login. Here are a few things that I learnt during this process:
>
> 1. "vim" has a restricted mode called "rvim (or) vim -Z". This way, I can restrict the user from running shell commands from vim and peeping into the filesystem
>    a) CAVEAT: "vim" allows the user to "read" and "write" files in the file-system provided the user _knows_ the path (or guesses some file path)
>    b) So, to make it foolproof, I had to go with the "nano" editor
>       - which supports a restricted mode that does not allow the user to edit any other file than the one specified in the command line

Can't you read a file with: echo "$(< pathtofile)"?
I never really tried, but I'd probably look into things like chroot (or even a vm) to provide something really restricted.
Re: Restricted Bash - Not so restrictive (in 4.2 as well)
On Thu, Jan 12, 2012 at 12:51 PM, Sarnath K - ERS, HCLTech wrote:
> Oops.. It actually works! That's a great catch!
>
> I thought "redirection" is not supported in restricted mode though..!
> I just checked... It is mostly related to "output" re-direction.
>
> Hmm.. I think I am going to tinker with the "bash" source code to disable the "echo" builtin. :-)
>
> Any ideas?

I don't think it's a good idea, there are many many many tricks like this (printf, read, mapfile), or for instance just run: "$(
Re: Edit vs delete a running script. Why difference?
On Wed, Jan 18, 2012 at 6:19 AM, Teika Kazura wrote:
> Hi. When you edit a running bash script, it's affected, so you shouldn't do that [1][2]. However, I heard[3] that if you delete the script itself from the filesystem, the original is remembered by bash, and it continues to run as-is (as-was?). It seems correct as far as I tried several times.
>
> What's the, or are there any, rationale for this difference? If the entire script is read at invocation, then why should / does modification affect? Is it a bug?

It's due to the way unix works. If you look at the documentation of unlink, for instance in the posix man page:

"When the file's link count becomes 0 and no process has the file open, the space occupied by the file shall be freed and the file shall no longer be accessible. If one or more processes have the file open when the last link is removed, the link shall be removed before unlink() returns, but the removal of the file contents shall be postponed until all references to the file are closed."

You'll notice that you are not really deleting a file, you are decrementing a counter by one. So what you do when you delete the file is remove one link; since bash has the file open to execute the script, there is still a reference to the old script and bash continues normally.

If you edit the file bash is reading, then anything can happen; you might edit a portion of the file that bash has read but not yet executed, for instance (since io is buffered).
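A quick way to see this (a harmless sketch: the script removes its own file and keeps running):

$ cat > self-delete.sh <<'EOF'
#!/bin/bash
rm -- "$0"     # removes the directory entry; the open file stays around
echo "still running after rm"
EOF
$ bash self-delete.sh
still running after rm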
Re: echo '-e' doesn't work as expected, or does it?
On Mon, Jan 23, 2012 at 11:47 AM, Ralf Goertz wrote:
> Philip wrote:
>
>> Hi! Short question this time..
>>
>> $ echo '-e'
>> does not print -e
>>
>> $ echo '-e '
>> does print -e .
>
> By the way, neither -e nor -E are explained in the option section of „help echo“. Only -n is mentioned there.
>
> GNU bash, Version 4.2.10(1)-release (x86_64-suse-linux-gnu)

You might want to report a bug to suse; here I get:

$ echo $BASH_VERSION; help echo
4.2.10(1)-release
echo: echo [-neE] [arg ...]
    Write arguments to the standard output.

    Display the ARGs on the standard output followed by a newline.

    Options:
      -n	do not append a newline
      -e	enable interpretation of the following backslash escapes
      -E	explicitly suppress interpretation of backslash escapes
...
Re: How to enable infinite command history
On Mon, Jan 30, 2012 at 8:01 PM, Ivan Yosifov wrote:
> Hi everyone,
>
> I got an admittedly basic question but I'm really at my wits' end with this.
>
> How do I enable infinite command history?
>
> One simple suggestion I've seen online is to set HISTSIZE and HISTFILESIZE to a large number. This is not what I need, I want genuinely unconstrained history file growth.
>
> Another idea I've seen is to unset HISTSIZE and HISTFILESIZE. This doesn't seem to work, the history file is being cropped to the default of 500 lines.
>
> I'm probably missing something obvious but any help is appreciated. I'm running Bash 4.1.5 (Debian Squeeze).

I don't think there is a way. But do you plan to use bash normally? Setting HISTFILESIZE to 2147483647 gives you 68 years of history at one command per second (I hope I got my math right); with say 5 chars per command, it's something like 5GB of history.
Re: bash man page needs more examples...(i.e. >0)
On Mon, Jan 30, 2012 at 9:02 PM, Linda Walsh wrote:
>
> DJ Mills wrote:
>
>>> OK. How about if that sentence began with `When specifying n, the digits greater ...'?
>>
>> declare -i foo; foo=20#a2; echo "$foo"
>> 202
>>
>> [base#]n, 'base' is an INTEGER 2-64, then '#', followed by the number.
>
> ^^^ That's much more clear!

But then one could think that the integer 0xa is a valid base, which is not the case since it's not a decimal number.
Re: Pathname expansion not performed in Here Documents
On Mon, Feb 27, 2012 at 6:44 AM, Davide Baldini wrote:
> On 02/27/12 05:04, DJ Mills wrote:
>> Think of a regular here-doc (with an unquoted word) as being treated the same way as a double-quoted string
>
> Thank you Mills, of course I can understand it _now_, after having hit the problem, but my point is different: the description of a program's details should be first of all in its main point of reference, its manual. I'm a bit surprised that while the developers elite perfectly know the correct details, nobody is going to review a misleading manual being a reference for the most of us.

The manual seems quite clear:

"If word is unquoted, all lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion. In the latter case, the character sequence \<newline> is ignored, and \ must be used to quote the characters \, $, and `."

Maybe you could point out the part of the manual that misled you into thinking that "here docs are supposed to expand with no special exceptions", so that it can be corrected?
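The two behaviors side by side, for instance (/home/user stands in for whatever $HOME is):

$ cat <<EOF
$HOME
EOF
/home/user
$ cat <<'EOF'
$HOME
EOF
$HOME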
Re: Pathname expansion not performed in Here Documents
On Mon, Feb 27, 2012 at 2:50 PM, Steven W. Orr wrote: > On 2/27/2012 1:26 AM, Pierre Gaston wrote: >> >> On Mon, Feb 27, 2012 at 6:44 AM, Davide Baldini >> wrote: >>> >>> On 02/27/12 05:04, DJ Mills wrote: >>>> >>>> Think of regular here-doc (with an unquoted word) as being treated the >>>> same way as a double-quoted string >>> >>> >>> Thank you Mills, of course I can understand it _now_, after having hit >>> the problem, but my point is different: the description of a program's >>> details should be first of all in its main point of reference, its >>> manual. I'm a bit surprised that while the developers elite perfectly >>> know the correct details, nobody is going to review a misleading manual >>> being a reference for the most of us. >> >> >> The manual seems quite clear: >> "If word is unquoted, all lines of the here-document are subjected to >> parameter expansion, command substitution, and arithmetic >> expansion. In the latter case, the character sequence \ is >> ignored, and \ must be used to quote the characters \, $, and `." >> >> Maybe you could point the part of the manual that mislead you into >> thinking that " here doc are supposed to expand with no special >> exceptions" so that it can be corrected? > > > I don't mean this in a snarky way, but shell man pages are historically in > the class of docs that you really need to read over and over again. There > are a few books on shell programming, most of them not very good, but I > personally have read the bash man pages literally thousands of times and > before I'm dead, I expect to multiply that many times over. There are really > good web pages that people have put a lot of of time and energy into, and > those are not to be dismissed. The idea is to assemble your resources enough > that you can know where to go to answer a specific question. In between > those questions, you really need to re-read your reference material on a > regular basis. > > It never ends. :-) Sure, reference material is always a bit rough, it's a different thing to call it misleading.
Re: bash 4.2 breaks source finding libs in lib/filename...
On Wed, Feb 29, 2012 at 3:30 PM, Greg Wooledge wrote:
> On Tue, Feb 28, 2012 at 05:34:21PM -0800, Linda Walsh wrote:
>> How can one get the same behavior as before and look up files relative to PATH regardless of them having a '/' in them?
>
> What? That sounds like it WAS a bug before, and you had somehow interpreted it as a feature. And now you're asking to have the bug back.
>
> Any pathname that contains a / should not be subject to PATH searching.

Not sure which version supported that:

$ echo $BASH_VERSION; mkdir -p foo/bar; echo echo foo > foo/bar/file; PATH=$PWD/foo:$PATH; source bar/file; source foo/bar/file
2.05b.0(1)-release
bash2: bar/file: No such file or directory
foo

$ echo $BASH_VERSION; mkdir -p foo/bar; echo echo foo > foo/bar/file; PATH=$PWD/foo:$PATH; source bar/file; source foo/bar/file
3.2.25(1)-release
-bash: bar/file: No such file or directory
foo

$ echo $BASH_VERSION; mkdir -p foo/bar; echo echo foo > foo/bar/file; PATH=$PWD/foo:$PATH; source bar/file; source foo/bar/file
4.0.33(1)-release
bash4: bar/file: No such file or directory
foo
Re: bash 4.2 breaks source finding libs in lib/filename...
On Fri, Mar 2, 2012 at 9:54 AM, Stefano Lattarini <stefano.lattar...@gmail.com> wrote:
> On 03/02/2012 02:50 AM, Chet Ramey wrote:
>> On 2/29/12 2:42 PM, Eric Blake wrote:
>>
>> In the middle of the histrionics and gibberish, we have the nugget of an actual proposal (thanks, Eric):
>>
>> [to allow `.' to look anchored relative pathnames up in $PATH]
>>
>>> About the best we can do is accept a patch (are you willing to write it? if not, quit complaining) that would add a new shopt, off by default, to allow your desired alternate behavior.
>>
>> Maybe we can have a rational discussion about that.
>
> Or here is what sounds like a marginally better idea to me: Bash could start supporting a new environment variable like "BASHLIB" (a' la' PERL5LIB) or "BASHPATH" (a' la' PYTHONPATH) holding a colon separated (or semicolon separated on Windows) list of directories where bash will look for sourced non-absolute files (even if they contain a pathname separator) before (possibly) performing a lookup in $PATH and then in the current directory. Does this sound sensible, or would it add too much complexity and/or confusion?

It could be even further separated from the traditional "source", and a new keyword introduced like "require" a la lisp, which would be able to do things like:

1) load the file, searching in the BASH_LIB_PATH (or other variable) for a file with optionally the extension .sh or .bash
2) only load the file if the "feature" has not been provided, eg only load the file once
3) maybe optionally only load the definitions and not execute commands (something I've seen people asking for on several occasions on IRC); for instance that would allow having test code inside the lib file, or maybe printing a warning that it's a library not to be executed. (Not so important imo)

I think this would benefit the bash_completion project and help them to split the script so that the completions are only loaded on demand. (One of the goals mentioned at http://bash-completion.alioth.debian.org/ is "make bash-completion dynamically load completions")

My understanding is that the http://code.google.com/p/bash-completion-lib/ project did something like this but that it was not working entirely as they wanted. (I hope some of the devs read this list)

On the other hand, there is the possibility to add FPATH and autoload like in ksh93... I haven't thought too much about it, but my guess is that it would be really easy to implement a module system with that.

my 2 cents, as I don't have piles of bash libs.
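A rough sketch of what such a require could look like in plain bash 4 (BASH_LIB_PATH and the _provided bookkeeping are hypothetical names, not anything bash provides today):

declare -A _provided

require() {
  local feature=$1 dir file
  local -a dirs
  [[ ${_provided[$feature]} ]] && return 0   # already loaded once
  IFS=: read -ra dirs <<< "${BASH_LIB_PATH:-.}"
  for dir in "${dirs[@]}"; do
    for file in "$dir/$feature" "$dir/$feature.sh" "$dir/$feature.bash"; do
      if [[ -r $file ]]; then
        source "$file" && _provided[$feature]=1
        return
      fi
    done
  done
  printf 'require: %s not found\n' "$feature" >&2
  return 1
}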
Re: interactive test faulty
On Thu, Mar 22, 2012 at 2:06 PM, Tim Dickson wrote:
> eg a script called test2 as follows
> #!/bin/bash
> echo "type in your name"
> read USERNAME
> echo "hello $USERNAME"
>
> called via the shell by typing
> ./test2
> is interactive, but the $- special variable does not indicate it is.

It's not interactive. The kernel calls the script like:

/bin/bash test2

that is, it is called with an argument, and the manual says: "An interactive shell is one started without non-option arguments"
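The usual way to test for interactivity from a script is to check $-, for instance:

case $- in
  *i*) echo "interactive" ;;
  *)   echo "non-interactive" ;;
esac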
Re: UTF-8 regression in bash version 4.2
On Tue, Mar 27, 2012 at 3:00 PM, Joachim Schmitz wrote:
> dennis.birkh...@rwth-aachen.de wrote:
>
>> Bash Version: 4.2
>> Patch Level: 24
>> Release Status: release
>
> Interesting, seems the announcements for patches 21-24 have gotten lost?
>
> bye, Jojo

They were posted on the mailing list; maybe the relay to the group failed.
Re: status on $[arith] for eval arith vsl $((arith))??
On Sun, Apr 8, 2012 at 12:50 AM, Linda Walsh wrote:
>
> Mike Frysinger wrote:
>
>> On Saturday 07 April 2012 16:45:55 Linda Walsh wrote:
>>> Is it an accidental omission from the bash manpage?
>>
>> it's in the man page. read the "Arithmetic Expansion" section.
>> -mike
>
> My 4.2 manpage says:
>
> Arithmetic Expansion
>     Arithmetic expansion allows the evaluation of an arithmetic expression and the substitution of the result. The format for arithmetic expansion is:
>
>         $((expression))
>
>     The expression is treated as if it were within double quotes, but a double quote inside the parentheses is not treated specially. All tokens in the expression undergo parameter expansion, string expansion, command substitution, and quote removal. Arithmetic expansions may be nested.
>
>     The evaluation is performed according to the rules listed below under ARITHMETIC EVALUATION. If expression is invalid, bash prints a message indicating failure and no substitution occurs.
>
> --
> No mention of square brackets.
>
> What's yours say?

Some linux distributions patch the man page and document $[ ] as deprecated. The SUS rationale says:

In early proposals, a form $[expression] was used. It was functionally equivalent to the "$(())" of the current text, but objections were lodged that the 1988 KornShell had already implemented "$(())" and there was no compelling reason to invent yet another syntax. Furthermore, the "$[]" syntax had a minor incompatibility involving the patterns in case statements.
Re: Exit status of "if" statement?
On Mon, Apr 9, 2012 at 8:31 PM, Dan Stromberg wrote:
>
> What should be the behavior of the following?
>
> if cmd1
> then
>     cmd2
> fi && if cmd3
> then
>     cmd4
> fi
>
> I've not joined two if's with a short-circuit boolean before, but I'm suddenly working on a script where someone else has.
>
> Playing around, it appears that cmd1 and cmd3 have no direct impact on the exit codes of the two if's, while cmd2 and cmd4 do (if cmd1 or cmd3 evaluate true). Is this the defined behavior in POSIX shell? In bash? In bash symlinked to /bin/sh? In dash?
>
> TIA!

POSIX says: "The exit status of the if command shall be the exit status of the then or else compound-list that was executed, or zero, if none was executed."

The bash documentation says essentially the same.
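For instance:

$ if false; then echo then-branch; fi; echo $?
0
$ if true; then false; fi; echo $?
1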
Re: how are aliases exported?
On Sat, Apr 14, 2012 at 3:44 AM, Linda Walsh wrote: > > > Dennis Williamson wrote: > > Aliases are intended for command line convenience. You should use >> functions, which can be exported and are the correct thing to use in >> scripts (and even from the command line). >> >> "For almost every purpose, shell functions are preferred over aliases." >> >> But, of course, you know that already. >> > > --- >Yeah... and I've already demonstrated the 'almost' part. > > It's one of those: > > function _include_h { return "source $1" ;} > > > alias include='eval $( _include_h "$1")' > > Near as I can tell, you can't do that in a function. > If you source a file in a function, the local vars in the file > would be local to the function -- not to the prog using the alias > > > I thought I did suggest that line be corrected -- so people wouldn't > think functions replace aliases... > Since for some purposes, functions cannot replace aliases. > > Now that I've justified another usage (similar to previous that eval'ed a > result), -- how about how do I export aliases?? > you can't.
Re: how are aliases exported?
On Sat, Apr 14, 2012 at 3:44 AM, Linda Walsh wrote: > > > Dennis Williamson wrote: > > Aliases are intended for command line convenience. You should use >> functions, which can be exported and are the correct thing to use in >> scripts (and even from the command line). >> >> "For almost every purpose, shell functions are preferred over aliases." >> >> But, of course, you know that already. >> > > --- >Yeah... and I've already demonstrated the 'almost' part. > > It's one of those: > > function _include_h { return "source $1" ;} > > > alias include='eval $( _include_h "$1")' > > Near as I can tell, you can't do that in a function. > If you source a file in a function, the local vars in the file > would be local to the function -- not to the prog using the alias > > > local vars in the file? what is this?
Re: how are aliases exported?
On Sat, Apr 14, 2012 at 8:31 AM, Pierre Gaston wrote:
> On Sat, Apr 14, 2012 at 3:44 AM, Linda Walsh wrote:
>>
>> Dennis Williamson wrote:
>>
>>> Aliases are intended for command line convenience. You should use functions, which can be exported and are the correct thing to use in scripts (and even from the command line).
>>>
>>> "For almost every purpose, shell functions are preferred over aliases."
>>>
>>> But, of course, you know that already.
>>
>> ---
>> Yeah... and I've already demonstrated the 'almost' part.
>>
>> It's one of those:
>>
>> function _include_h { return "source $1" ;}
>>
>> alias include='eval $( _include_h "$1")'
>>
>> Near as I can tell, you can't do that in a function.
>> If you source a file in a function, the local vars in the file would be local to the function -- not to the prog using the alias
>
> local vars in the file? what is this?

Oh, I get it, the non-working code put me off (returning a string, really?). You mean that if you have "declare var=foo" in a file and then source it from a function, the variable will be local to the function; newer bash versions have a -g option to work around this.

Anyway, as to exporting an alias, there are 2 cases:

1) interactive bash
These source .bashrc, so you can put your aliases there.

2) non-interactive bash
Aliases are off by default. So given that you need to run something at the beginning of your new bash instance anyway, you could define your aliases in a function together with the mandatory shopt, eg:

function start_aliases {
  shopt -s expand_aliases
  alias foo=ls
}
export -f start_aliases

Then you can do:

bash <<< $'start_aliases\nfoo'

Note that the function needs to be on a line before the first use of the alias, eg bash -c 'start_aliases;foo' doesn't work.

You can even make a kind of "export alias" function with a hack like:

function start_aliases {
  shopt -s expand_aliases
  eval "$my_aliases"
}
export -f start_aliases

function exportalias {
  export my_aliases+=$'\n'"$(alias "$1")"
}

alias bar=ls
exportalias bar
Re: ((i++)) no longer supported?
On Thu, May 3, 2012 at 9:34 AM, Pan ruochen wrote: > Hi All, > > Suddenly I found that ((i++)) is not supported on bash. > Just try the following simple case: > $i=0; ((i++)); echo $? > And the result is > 1 > which means an error. > I got the same result on GNU bash, version 4.1.2(1)-release > (x86_64-redhat-linux-gnu) and GNU bash, version 4.1.10(4)-release > (i686-pc-cygwin). > > - BR, Ruochen > It has always been the case, and it fits the documentation, since the value of i++ is 0 and that is false in the arithmetic context. What changed is that bash now exits in this case if you use set -e. Some possible workarounds:

((i++)) || :
((i+=1))
i=$((i+1))

and a gazillion others.
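A short demonstration of the failing case and the first workaround under set -e (a sketch; the other alternatives behave the same way):

set -e
i=0
((i++)) || :   # the expression evaluates to 0 (false), so ((i++)) returns status 1;
               # the "|| :" stops set -e from exiting the shell
echo "i=$i"    # prints i=1 -- the increment happened anyway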
Re: Space or multiple spaces before command causes it to not get logged
On Fri, May 25, 2012 at 6:05 AM, wrote: > Configuration Information [Automatically generated, do not change]: > Machine: x86_64 > OS: linux-gnu > Compiler: gcc > Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' > -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' > -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL > -DHAVE_CONFIG_H -I. -I../bash -I../bash/include -I../bash/lib > -D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector --param=ssp-buffer-size=4 > -Wformat -Wformat-security -Werror=format-security -Wall > uname output: Linux gouch.ihate.ms 3.2.0-24-virtual #38-Ubuntu SMP Tue May 1 > 16:38:34 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux > Machine Type: x86_64-pc-linux-gnu > > Bash Version: 4.2 > Patch Level: 24 > Release Status: release > > Description: > > Putting a space or multiple spaces before a command and executing it causes > the command to not get logged to bash history file. > > > Repeat-By: > > user@server:~$ pwd > /root > user@server:~$ history > 1 pwd > 2 history > user@server:~$ ls <-- Note the space before the command > user@server:~$ date > Fri May 25 02:59:43 UTC 2012 > user@server:~$ history > 1 pwd > 2 history > 3 date > 4 history Probably a feature, check if the variable HISTCONTROL contains "ignorespace" and the value of HISTIGNORE (printf %q\\n "$HISTIGNORE")
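A quick way to confirm (a sketch; ignorespace and ignoreboth are the stock HISTCONTROL values that cause this behaviour):

echo "$HISTCONTROL"      # "ignoreboth" means ignorespace + ignoredups
HISTCONTROL=ignoredups   # keeps duplicate filtering but logs space-prefixed commands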
Re: SH bahaviour to not fork a subshell after " | while read "
On Fri, Jun 1, 2012 at 11:53 AM, freD wrote: > Configuration Information [Automatically generated, do not change]: > Machine: powerpc > OS: aix5.1 > Compiler: xlc > Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='powerpc' > -DCONF_OSTYPE='aix5.1' -DCONF_MACHTYPE='powerpc-ibm-aix > 5.1' -DCONF_VENDOR='ibm' -DLOCALEDIR='/opt/freeware/share/locale' > -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -I. -I. -I > ./include -I./lib -I/opt/freeware/include -O2 > uname output: AIX tsm 1 6 00C530EB4C00 > Machine Type: powerpc-ibm-aix5.1 > > Bash Version: 3.0 > Patch Level: 16 > Release Status: release > > Description: > > In bash mode, variable are lost after a while loop: > > bash-3.00# T=toto ; du | while read a ; do T=$a ; done ; echo $T > toto > > I can keep then a little using parenthesis: > > bash-3.00# T=toto ; du | (while read a ; do T=$a ; done ; echo $T) ; echo $T > 1489648 . > toto > > Repeat-By: > > bash-3.00# T=toto ; du | while read a ; do T=$a ; done ; echo $T > toto > > Fix: > May be starting in "sh" mode and/or posix mode should behave like a > real bourne shell > > bash-3.00# /usr/bin/sh > # T=toto ; du | while read a ; do T=$a ; done ; echo $T > 1489632 . > # exit > bash-3.00# > > You are using a relatively old bash. In newer bash you can get this behaviour using "shopt -s lastpipe". Note that this behaviour is not mandated by posix and that at least one "real bourne shell", which is not posix, acts like bash. Eg with the heirloom bourne shell: $ ./sh $ T=toto ; du | while read a ; do T=$a ; done ; echo $T toto I think only ksh93 and zsh behave like that by default. dash, pdksh, mksh all put the last part of the pipe in a subshell.
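A sketch of the lastpipe workaround (bash 4.2+; it only takes effect when job control is off, i.e. in scripts or after set +m):

shopt -s lastpipe
T=toto
du | while read a; do T=$a; done   # the loop now runs in the current shell
echo "$T"                          # prints the last line of du's output, not "toto"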
Re: .bashrc is sourced even for non-interactive shells (when run from sshd)
On Sat, Jun 2, 2012 at 8:15 PM, Mikel Ward wrote: > bash sources .bashrc even for some non-interactive shells. > > For example with > > echo \$- is $- > > in ~/.bashrc, and shell set to /bin/bash (bash 4.2.28) > > ssh -n -T localhost true > > produces the output > > $- is hBc > > I assume this is caused by this code in shell.c > > if (run_by_ssh || isnetconn (fileno (stdin))) > > The man page says > > When an interactive shell that is not a login shell is started, > bash reads and executes commands from ~/.bashrc, > > but makes no mention of the special handling for ssh and rsh. > > This seems to have been the case since at least bash 2.02. > > I'd argue this is a misfeature, but I guess that ship has sailed. Can > the man page at least be updated? > > Thanks > From http://mywiki.wooledge.org/DotFiles: "Remote non login non interactive shells" Bash has a special compile-time option that will cause it to source the .bashrc file on non-login, non-interactive ssh sessions. This feature is only enabled by certain OS vendors (mostly Linux distributions). It is not enabled in a default upstream Bash build, and (empirically) not on OpenBSD either. If this feature is enabled on your system, Bash detects that SSH_CLIENT or SSH2_CLIENT is in the environment and in this case sources .bashrc. Eg suppose you have var=foo in your remote .bashrc and you do: ssh remotehost echo \$var it will print foo. This shell is non-interactive, so you can test $- or $PS1 in your .bashrc if you don't want things to be executed this way. Without this option bash will test if stdin is connected to a socket and will also source .bashrc in this case, BUT this test fails if you use a recent openssh server (>5.0), which means that you will probably only see this on older systems. Note that a test on SHLVL is also done, so if you do: ssh remotehost bash -c echo then the first bash will source .bashrc but not the second one (the explicit bash on the command line that runs echo). The behaviour of the bash patched by some vendors to also source a system-level bashrc is left as an exercise.
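The usual guard at the top of ~/.bashrc looks like this (a sketch; same idea as the Debian PS1 test mentioned later in this thread):

case $- in
  *i*) ;;          # interactive: carry on
  *)   return ;;   # non-interactive (ssh command, scp, ...): stop here
esac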
Re: .bashrc is sourced even for non-interactive shells (when run from sshd)
On Sat, Jun 2, 2012 at 8:24 PM, Mikel Ward wrote: > On Sat, Jun 2, 2012 at 10:19 AM, Pierre Gaston > wrote: >> On Sat, Jun 2, 2012 at 8:15 PM, Mikel Ward wrote: >>> bash sources .bashrc even for some non-interactive shells. > ... >> "Remote non login non interactive shells" >> Bash has a special compile time option that will cause it to source >> the .bashrc file on non-login, non-interactive ssh sessions. > > IIUC, it was once a compile time option, but it's now hard-coded. The > isnetconn test doesn't seem to be toggled by any macro. > > if ((run_by_ssh || isnetconn (fileno (stdin))) && shell_level < 2) but run_by_ssh is: #ifdef SSH_SOURCE_BASHRC run_by_ssh = (find_variable ("SSH_CLIENT") != (SHELL_VAR *)0) || (find_variable ("SSH2_CLIENT") != (SHELL_VAR *)0); #else run_by_ssh = 0; #endif
Re: link problem undefined reference tgoto BC & UP
On Sat, Jun 2, 2012 at 8:55 PM, rac8006 wrote: > > Why can't I get a clean compile of bash4.1? I was building until I did a > configure --enable_progcomp > Now it fails with the three missing symbols tgoto , BC and UP. I've > searched this site with no answers. > Searched the web found references to -ltinfo. But I don't have that > library. > What do I need to do to get a clean build? Hard to tell with so little and such imprecise information.
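For what it's worth, tgoto, BC and UP are termcap symbols, so this usually means the link line is missing a termcap/curses library. A hedged guess at a fix, given the sparse report (bash's configure does have a --with-curses switch to link against curses instead of termcap):

./configure --with-curses
make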
Re: .bashrc is sourced even for non-interactive shells (when run from sshd)
On Sun, Jun 3, 2012 at 3:05 AM, Linda Walsh wrote: > > > Pierre Gaston wrote: > >> On Sat, Jun 2, 2012 at 8:24 PM, Mikel Ward wrote: >>> >>> On Sat, Jun 2, 2012 at 10:19 AM, Pierre Gaston >>> wrote: >>>> >>>> On Sat, Jun 2, 2012 at 8:15 PM, Mikel Ward wrote: >>>>> >>>>> bash sources .bashrc even for some non-interactive shells. >>> >>> ... >>>> >>>> "Remote non login non interactive shells" >>>> Bash has a special compile time option that will cause it to source >>>> the .bashrc file on non-login, non-interactive ssh sessions. >>> >>> IIUC, it was once a compile time option, but it's now hard-coded. The >>> isnetconn test doesn't seem to be toggled by any macro. >>> >>> if ((run_by_ssh || isnetconn (fileno (stdin))) && shell_level < 2) >> >> but run_by_ssh is: >> >> #ifdef SSH_SOURCE_BASHRC >> run_by_ssh = (find_variable ("SSH_CLIENT") != (SHELL_VAR *)0) || >> (find_variable ("SSH2_CLIENT") != (SHELL_VAR *)0); >> #else >> run_by_ssh = 0; >> #endif >> > > > I would say that's broken -- bash can detect if it is > hooked up to a terminal for input, or not, but chooses not to. > > prelude: > > ans=("is "{not,}" a tty") > alias sub=function > sub echoAns { echo ${ans[$?==0]}; } > alias }{=else {=then }=fi ?=if > > 4 basic cases... > > 1) > Ishtar:...> if ssh ishtar isatty 0 2>/dev/null; { echoAns; }{ echoAns; } > .bashrc STDIN: is not a tty ( $-=hBc ) > is not a tty > > > 2) > Ishtar:...> if ssh -T ishtar isatty 0 2>/dev/null; { echoAns; }{ echoAns; } > .bashrc STDIN: is not a tty ( $-=hBc ) > is not a tty > > > 3) > Ishtar:...> if ssh -tn ishtar isatty 0 2>/dev/null; { echoAns; }{ echoAns; } > .bashrc STDIN: is not a tty ( $-=hBc ) > is not a tty > > > 4) > Ishtar:...> if ssh -t ishtar isatty 0 2>/dev/null; { echoAns; }{ echoAns; } > .bashrc STDIN: is a tty ( $-=hBc ) > is a tty > > While it is arguable whether or not 1 & 2 are 'interactive' (they are but > not in a character oriented way), #4, by: > --rcfile file > Execute commands from file instead of the standard personal > initialization file ~/.bashrc if the shell is interactive > (see > INVOCATION below). > ---Under invocation: > An interactive shell is one started without non-option arguments > and > without the -c option whose standard input and error are both > connected > to terminals. > > ***(as determined by isatty(3)),*** > > or one started with the -i > option. PS1 is set and $- includes i if bash is interactive, > allowing > a shell script or a startup file to test this state. > > > By using the isatty test, none of 1-3 should be calling bashrc. > You can note that the "-i" switch isn't specified at any point. > > Minimally I would claim #4 to be a bug, and from the manual, #1 and #2 are > as > well. (-n redirects STDIN from /dev/null -- a definite "non-winner for > interactivity). In all your examples the shell will be called like: bash -c 'isatty 0 2>/dev/null'. If you use a bash compiled with the above option you can add 'ps -p$$ -ocmd' at the top of your .bashrc to verify it. They are all non-interactive because they are called with -c, regardless of whether they are connected to a terminal or not.
Re: .bashrc is sourced even for non-interactive shells (when run from sshd)
On Sun, Jun 3, 2012 at 11:02 AM, Linda Walsh wrote: > > > Pierre Gaston wrote: >> >> In all your examples the shell will be called like: bash -c 'isatty 0 >> 2>/dev/null'. If you use a bash compiled with the above option you can add 'ps >> -p$$ -ocmd' at the top of your .bashrc to verify it. >> >> They are all non-interactive because they are called with -c, >> regardless of whether they are connected to a terminal or not. > > === > I see what you mean... > > Wouldn't that '*doubly*' mean the .bashrc shouldn't be called? > That's precisely the subject of this thread. I thought it was not documented (before 4 it was a bit less obvious to find the relevant bit); that's why I gave the link, but it is in fact documented. Eg in the bash 4 manual: Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the remote shell daemon, usually rshd, or the secure shell daemon sshd. If bash determines it is being run in this fashion, it reads and executes commands from ~/.bashrc, if that file exists and is readable. If I'm not mistaken this feature is inherited from csh. I can guess some people are using it since there is this workaround testing for the SSH variables to make it work with openssh>5. You can see some people having trouble with that because .bashrc is also sourced if you use less obvious non-interactive shells, like when you use "scp". That's why you can find things like: [ -z "$PS1" ] && return in the default .bashrc of debian.
Re: Indirect access to variables, including arrays (was Re: Compare 2 arrays.)
On Thu, Jun 7, 2012 at 6:07 AM, Linda Walsh wrote: > > > Greg Wooledge wrote: >> >> The only Bourne-family shell that can manipulate arrays whose names are >> passed to a function is ksh93, with its "nameref" command. Bash has >> nothing analogous to that yet. > > = > > I don't understand. > > Are you saying that ${!nameofvar} isnt' supported? in ksh93 you can do: $ function foo { nameref locarr=$1;echo ${locarr[0]};locarr[0]=newfoo; } $ array=(foo bar baz) $ foo array foo $ echo ${array[0]} newfoo There is nothing that easy in bash.
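For comparison, the closest bash gets without nameref is read-only access through ${!var} indirection (a sketch; assigning back through the reference is where bash falls short):

foo() {
  local t="$1[@]"            # build the string "array[@]" for indirect expansion
  local -a copy=("${!t}")    # expands to the caller's array elements
  echo "${copy[0]}"
}
array=(foo bar baz)
foo array                    # prints: foo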
Re: Indirect access to variables, including arrays (was Re: Compare 2 arrays.)
On Thu, Jun 7, 2012 at 6:07 AM, Linda Walsh wrote: >(no I haven't made it space/bracket...whatever proof...just a bit > more work) It's not just "a bit more work": there are many workarounds, but it's not really possible to make a really robust generic solution for assignment, and in the end it's just not as simple and pretty as nameref. Fwiw here is a robust and simple solution for _in:

_in () {
  local e t
  t="${2:?}[@]"
  for e in "${!t}"; do [[ $1 = "$e" ]] && return 0; done
  return 1
}
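Usage looks like this (a sketch with made-up values):

fruits=(apple pear plum)
_in pear fruits && echo found          # prints: found
_in mango fruits || echo "not found"   # prints: not found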
Re: Indirect access to variables, including arrays (was Re: Compare 2 arrays.)
On Thu, Jun 7, 2012 at 12:27 PM, Dan Douglas wrote: > On Thursday, June 07, 2012 10:01:51 AM Pierre Gaston wrote: >> On Thu, Jun 7, 2012 at 6:07 AM, Linda Walsh wrote: >> >(no I haven't made it space/bracket...whatever proof...just a bit >> > more work) >> >> It's not just "a bit more work", there are many workarounds but it's not >> really possible to make a really robust generic solution for assignment, >> and in the end it's just not as simple and pretty as nameref. >> >> Fwiw here is a robust and simple solution for _in: >> >> _in () { >> local e t >> t="${2:?}[@]"; >> for e in "${!t}"; do [[ $1 = "$e" ]] && return 0;done >> return 1; >> } >> > > Not robust due to the name conflicts with "e" or "t". There's also no good way > to generate a list of locals without parsing "local" output (i'd rather live > with the conflicts). I usually store indirect references in the positional > parameters for that reason. Proper encapsulation is impossible. Ah true, thanks for the reminder; that also kinda proves my first point.
Re: Unhelpful behaviors in 4.2.10(1)
On Sat, Jun 9, 2012 at 4:01 AM, Linda Walsh wrote: > File1: > sdf: > Ishtar:/tmp> more sdf > #!/bin/bash > > _prgpth="${0:?}"; _prg="${_prgpth##*}"; _prgdr="${_prgpth%/$_prg}" > [[ -z $_prgdr || $_prg == $_prgdr ]] && $_prgdr="$PWD" > export PATH="$_prgdr/lib:$_prgdr:$PATH" > shopt -s expand_aliases extglob sourcepath ; set -o pipefail > > . backtrace.shh > > . sdf2 > > file2: sdf2 > #!/bin/bash > > > [[ $# -ge 2 ]] && echo weird > > > running: > Ishtar:/tmp> sdf > Error executing "[[ $# -ge 2 ]]" in "main" > at "./sdf", line "4" (level=0) > --- > > > So why am I getting a backtrace (and how do I make it put out the right > file)? > I thought rather than something just dying -- it would be nice to know how > it got there... so...backtrace... but the line it is complaining about > is in "sdf2" --- and WTF?... how is testing a number an error that would > cause a > exception? > > (if this was running under -e, I presume that if statements that are false > now fail?... )...I thought complex statements were not supposed to fail > > POSIX really F-U'ed on this one... go back to bash 2's "simple > statements"... > having conditional statements fail is going overboard... > I mean it's an explicit check to avoid an error condition...so, of course, > such checks are now errors?!?!?!ARG *hitting head against wall*** > > file: > backtrace.shh: > >> more /home/law/bin/lib/backtrace.shh > > #!/bin/bash > function backtrace { > > declare -i level=0 > while { > local cmd=$BASH_COMMAND > local fn=${FUNCNAME[level+1]:-} > local src=${BASH_SOURCE[level+1]:-} > local ln=${BASH_LINENO[level]:-} > }; do > [[ -n $fn && -n $src && -n $ln ]] && { > echo " Error executing \"$cmd\" in \"$fn\"" > echo " at \"$src\", line \"$ln\" (level=$level)" > level+=1 > continue; > } > exit 1 > done > } > > trap backtrace ERR > set -T > To sum up, ". sdf2" is returning 1. Bash considers . to be a simple command even though what's really executed is [[ $# -ge 2 ]] && echo weird. The same thing happens with a function, eg you'll probably get the same result with: foo () { false && echo foo; } It kinda makes sense to me. For what it's worth, in bash 2.05b.0(1)-release: echo '[[ $# -ge 2 ]] && echo foo' > sdf2; ( set -e; . ./sdf2;echo bar) prints nothing...so I guess it's not that new
Re: Unhelpful behaviors in 4.2.10(1)
On Sat, Jun 9, 2012 at 10:05 AM, Linda Walsh wrote: > > > Pierre Gaston wrote: >> >> >>> trap backtrace ERR >>> set -T >>> >> >> To sum up ". sdf2" is returning 1 >> Bash considers . to be a simple command even though what's really >> executed is [[ $# -ge 2 ]] && echo weird. > > --- > Right It's NOT a simple command. > > I am trapping on ERR, not 'anything' that is not zero. > > Of all the stupid definitions... you have 256 useful values to return, and > some > idiots decide 255 of them should be reserved for fatal errors (even when > they are > not errors)... > > Is this even fixable? Sorry if I was unclear, but it is the "." command that causes the error. If you add "return 0" at the end of sdf2, you will see no trace. I would probably consider it a bug if "." was returning 1 without triggering an error. Now you could consider it a bug to get the "[[ $# -ge 2 ]]" instead of "." in BASH_COMMAND, though I guess some people might, on the contrary, find it useful.
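A stripped-down illustration of the effect (a sketch; /tmp/frag is a throwaway file):

trap 'echo "ERR: $BASH_COMMAND"' ERR
echo 'false && echo never' > /tmp/frag
. /tmp/frag     # the sourced file returns 1, so the trap fires
                # (BASH_COMMAND may show the inner command rather than ".")
echo 'false && echo never; return 0' > /tmp/frag
. /tmp/frag     # now returns 0: no trap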
Re: bash tab variable expansion question?
On Mon, Jun 11, 2012 at 10:59 AM, John Embretsen wrote: > On 27 Feb 2011 18:18:24 -0500, Chet Ramey wrote: >>> On Sat, Feb 26, 2011 at 10:49 PM, gnu.bash.bug wrote: >>> A workaround is fine but is the 4.2 behavior bug or not? >> >>It's a more-or-less unintended consequence of the requested change Eric >>Blake referred to earlier in the thread. > > http://lists.gnu.org/archive/html/bug-bash/2011-02/msg00275.html > > (...) > >> The question is how to tell readline that the `$' should be quoted >> under some circumstances but not others. There's no hard-and-fast >> rule that works all the time, though I suppose a call to stat(2) >> before quoting would solve part of the problem. I will have to give >> it more thought. >> >> Chet > > Any updates on this issue? > > The workarounds for $PWD and $OLDPWD is not enough, and the "workaround" of > going back on the command line and removing escape characters is not > acceptable in my humble opinion. > > I often use environment variable-based tab completion to navigate to the > correct directory on my system, but with bash 4.2 this is no longer an > option. For example, > > with CODE=/path/to/dir and /path/to/dir/ containing test1/ and test2/, > > cd $CODE/test should give a list of $CODE/test1 $CODE/test2 when > those directories exist, not "cd \$CODE/test". > > If there is no fix in sight for this issue, can someone point me to a guide > for downgrading bash in recent popular Linux distros? > > > thanks, > > -- > John > There have been many updates on this. A "fix" has been available for some time now on this list. It is now available as an official patch: ftp://ftp.gnu.org/gnu/bash/bash-4.2-patches/bash42-029
Re: Arrays declared global in function and initialised in same line are not visible outside of function
On Mon, Jun 18, 2012 at 9:35 PM, Greg Wooledge wrote: > On Mon, Jun 18, 2012 at 02:19:41PM +0100, Jacobo de Vera wrote: > >> Subject: Re: Arrays declared global in function and initialised in same line >> are not visible outside of function > > I can't reproduce that problem: Probably because you have this ftp://ftp.gnu.org/gnu/bash/bash-4.2-patches/bash42-025 applied
Re: Overflow Bug
On Thu, Jul 12, 2012 at 8:09 PM, Ernesto Messina wrote: > Hello, I think I found an overflow bug. I got the follow C program: > > #include <stdio.h> > #include <string.h> > > int main(int argc, char *argv[]) > { > char a[10]; > int i; > > strcpy(a, argv[1]); > > return 0; > } > > Compiling with: gcc program.c -o program > And running: program `perl -e 'print "a" x 24'` > > The terminal loses the control, entering into a infinite buckle, and bash is not the terminal and is not involved once the program runs, and yes, writing buggy programs can cause buggy behaviour. On this system man strcpy says under BUGS: If the destination string of a strcpy() is not large enough, then any‐ thing might happen. Overflowing fixed-length string buffers is a favorite cracker technique for taking complete control of the machine. PS: "infinite loop" not "infinite buckle"
Re: Case modification fails for Unicode characters
On Fri, Jul 13, 2012 at 3:46 AM, Dennis Williamson wrote: > On Thu, Jul 12, 2012 at 1:57 PM, DJ Mills wrote: >> On Thu, Jul 12, 2012 at 2:19 PM, Dennis Williamson >> wrote: >>> s=łódź; echo "${s^^} ${s~~}"' >>> łóDź ŁÓDŹ >>> >>> The to-upper and the undocumented toggle operators should produce >>> identical output in this situation, but only the toggle works >>> correctly. >>> >>> This is in en_US.UTF-8, but also reported in pl_PL.utf-8. In Bash >>> 4.2.24 and Bash 4.0.33. >>> >>> -- >>> Visit serverfault.com to get your system administration questions answered. >>> >> >> I get the same result with: >> » echo "$s" | tr '[:lower:]' '[:upper:]' >> łóDź >> >> » locale >> LANG=en_US.UTF-8 >> LC_CTYPE="en_US.UTF-8" >> LC_NUMERIC="en_US.UTF-8" >> LC_TIME="en_US.UTF-8" >> LC_COLLATE="en_US.UTF-8" >> LC_MONETARY="en_US.UTF-8" >> LC_MESSAGES="en_US.UTF-8" >> LC_PAPER="en_US.UTF-8" >> LC_NAME="en_US.UTF-8" >> LC_ADDRESS="en_US.UTF-8" >> LC_TELEPHONE="en_US.UTF-8" >> LC_MEASUREMENT="en_US.UTF-8" >> LC_IDENTIFICATION="en_US.UTF-8" >> LC_ALL= >> >> >> This is a locale issue, and has nothing to do with bash itself... > > > That's partly true except that ~~ works. > Also many (all?) versions of tr don't know about locale, eg here: $ echo ź | tr ź a aa
Re: "cd //" isn't the same as "cd /" or "cd ///"
On Fri, Aug 3, 2012 at 6:32 AM, Noah Spurrier wrote: > Configuration Information [Automatically generated, do not change]: > Machine: x86_64 > OS: linux-gnu > Compiler: gcc > Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' > -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' > -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' > -DSHELL -DHAVE_CONFIG_H -I. -I../bash -I../bash/include > -I../bash/lib -D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector > --param=ssp-buffer-size=4 -Wformat -Wformat-security > -Werror=format-security -Wall > uname output: Linux se 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 > 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux > Machine Type: x86_64-pc-linux-gnu > > Bash Version: 4.2 > Patch Level: 24 > Release Status: release > > Description: > > Executing `cd //` will put you in the root directory as expected; > however executing `pwd` will report "//" as your working directory. > see E10 http://tiswww.case.edu/php/chet/bash/FAQ
Re: Bash 4.1 doesn't behave as I think it should: arrays and the environment
On Fri, Aug 17, 2012 at 10:19 AM, John Summerfield wrote: (...) > The man page for bash contains a para entitled ENVIRONMENT which doesn't > mention arrays, leaving the reader to assume they are not different from > other shell variables. the BUGS section contains: Array variables may not (yet) be exported.
Automatically assigning BASH_XTRACEFD while redirecting doesn't make it a special variable.
It seems BASH_XTRACEFD becomes special only if you assign it normally, but not if you do: exec {BASH_XTRACEFD}>file Not a major problem and I don't use it every day, but the statement looks so nice :D
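The two-step version does work (a sketch; trace.log is a made-up name):

exec {fd}>trace.log    # let bash pick a free file descriptor
BASH_XTRACEFD=$fd      # a normal assignment does make it special
set -x
echo hello             # the xtrace output now goes to trace.log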
Re: fd leak with {fd}>
On Thu, Nov 22, 2012 at 9:15 PM, Chet Ramey wrote: > On 11/16/12 10:47 AM, Sam Liddicott wrote: > > Repeated executions of: { echo $fd ; } {fd}> /dev/null > > will emit different numbers, indicating that fd is not closed when the > > block completes. > > This is intentional. Having been given a handle to the file descriptor, > the shell programmer is assumed to be able to manage it himself. > > It seems rather counterintuitive that the fd is not closed after leaving the block. With the normal redirection the fd is only available inside the block: $ { : ;} 3>&1;echo bar >&3 -bash: 3: Bad file descriptor If 3 is closed, why should I expect {fd} to be still open?
Re: fd leak with {fd}>
On Mon, Nov 26, 2012 at 3:37 PM, Chet Ramey wrote: > On 11/23/12 2:04 AM, Pierre Gaston wrote: > > > It seems rather counter intuitive that the fd is not closed after leaving > > the block. > > With the normal redirection the fd is only available inside the block > > > > $ { : ;} 3>&1;echo bar >&3 > > -bash: 3: Bad file descriptor > > > > if 3 is closed why should I expect {fd} to be still open? > > Because that's part of the reason to have {x}: so the user can handle the > disposition of the file descriptor himself. I don't see any difference between 3> and {x}> except that the latter frees me from the hassle of avoiding conflicting fds.
Re: fd leak with {fd}>
On Mon, Nov 26, 2012 at 3:41 PM, Pierre Gaston wrote: > > > On Mon, Nov 26, 2012 at 3:37 PM, Chet Ramey wrote: > >> On 11/23/12 2:04 AM, Pierre Gaston wrote: >> >> > It seems rather counter intuitive that the fd is not closed after >> leaving >> > the block. >> > With the normal redirection the fd is only available inside the block >> > >> > $ { : ;} 3>&1;echo bar >&3 >> > -bash: 3: Bad file descriptor >> > >> > if 3 is closed why should I expect {fd} to be still open? >> >> Because that's part of the reason to have {x}: so the user can handle the >> disposition of the file descriptor himself. > > . > I don't see any difference between 3> and {x}> except that the latter frees > me from the hassle of avoiding conflicting fds. > It seems that ksh93 behaves just like bash in this regard. Well, as I don't use it I don't really care, but I vote for this as a bug: I fail to see the benefit of this behavior, and I find it useless and inconsistent with the normal redirection.
Re: fd leak with {fd}>
On Mon, Nov 26, 2012 at 10:48 PM, Chet Ramey wrote: > On 11/26/12 12:11 PM, Sam Liddicott wrote: > > 3. there already exists simple and explicit way to get the supposed > benefit > > using the existing mechanism "exec" > > Not quite. You still have to pick the file descriptor you want to use with > `exec'. But you are not being forced to use it -- by all means, if you > think it's not what you need or want, feel free to avoid it and encourage > your friends to do the same. There have been unsuccessful new features -- > the case-modifying expansions are one example of a swing and miss. You seem to say that there are 2 aspects of this new feature: giving control of the fd to the user, and letting the system choose the number. Let's look at the first aspect: you are saying that leaving the fd open when doing "{ : ; } {fd}>file" is a feature to let the user control the file descriptor. But why would one use this when you can do: exec {fd}>file? That is, there is already exec to answer the problem of letting the user manage the fd, so why would you use another new, non-intuitive, inconsistent feature? For me, having bash assign the fd number solves another problem.
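To make the disputed behavior concrete (a sketch):

{ echo "inside the block, fd=$fd"; } {fd}>/dev/null
echo "after the block, fd $fd is still open"
exec {fd}>&-    # with {fd}> redirections you have to close it yourself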
Re: Incorrect exit status of the Build-IN (( ))
On Fri, Dec 7, 2012 at 12:52 PM, Orlob Martin (EXT) < extern.martin.or...@esolutions.de> wrote: > Configuration Information [Automatically generated, do not change]: > Machine: x86_64 > OS: linux-gnu > Compiler: gcc > Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' > -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' > -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL > -DHAVE_CONFIG_H -I. -I../bash -I../bash/include -I../bash/lib > -D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector --param=ssp-buffer-size=4 > -Wformat -Wformat-security -Werror=format-security -Wall > uname output: Linux ESO0560-ubuntu 3.2.0-25-generic #40-Ubuntu SMP Wed May > 23 20:30:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux > Machine Type: x86_64-pc-linux-gnu > > Bash Version: 4.2 > Patch Level: 24 > Release Status: release > > Description: > PROBLEM: Exit status not correct, when count up from zero using > Build-in (( )) > > EXAMPLE (enter each following line in Bash): > a=0 > ((a++)) > echo $? > echo $a > ((a++)) > echo $? > echo $a > COMMENTS TO EXMAPLE: > The first ((a++)) should perform 'a+1' --> '0+1' (correct > operation) > The first 'echo $?' returns '1' which is not correct, since > following 'echo $a' returns '1' (result of adding 0+1) which is > correct > Not a bug: a++ adds one to "a", but the result of the expression is the value of a before the increment, so it's 0. You can see it using echo $((a++)) instead of ((a++)). ((++a)) does what you expect.
Re: Questions to bash "read" builtin functionality (and supposed problem with while).
On Thu, Jan 17, 2013 at 4:24 PM, Linus Swälas wrote: > I have a similar problem to report as Fiedler Roman regarding read and also > another problem regarding while. Maybe the while case is intended behavior > though. =) > # It the below also a bug? > # while can't handle nulls, this doesn't work: > # while read -d \x00 cfg > # while this does work: > # read -d \x00 test < <(find . -name some_file -print0) ; echo $test > > \x00 doesn't mean anything special for bash, it's just an "x" followed by 2 zeros (echo and printf can interpret it, and it has a special meaning inside $''). Even if it did, you cannot really pass the null byte as an argument: bash uses null-delimited strings, so the best you can do is to pass the empty string: read -d '' The good thing is that it works to read null-delimited input! You need a bit more work to be fully safe though: while IFS= read -rd ''; do ...; done < <(find ... -print0) Pierre PS: next time consider trimming your use case, to save us from having to search for your problem.
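The full safe pattern looks like this (a sketch; the find arguments are placeholders):

files=()
while IFS= read -rd '' f; do
  files+=("$f")      # filenames may contain anything but NUL, including newlines
done < <(find . -name some_file -print0)
printf '%s\n' "${files[@]}"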
Re: Questions to bash "read" builtin functionality (and supposed problem with while).
On Fri, Jan 18, 2013 at 1:38 PM, Linus Swälas wrote: > On Fri, Jan 18, 2013 at 6:57 AM, Pierre Gaston > wrote: >> >> On Thu, Jan 17, 2013 at 4:24 PM, Linus Swälas >> wrote: >>> >>> I have a similar problem to report as Fiedler Roman regarding read and >>> also >>> another problem regarding while. Maybe the while case is intended behavior >>> though. =) > >>> # while can't handle nulls, this doesn't work: >>> # while read -d \x00 cfg >>> # while this does work: >>> # read -d \x00 test < <(find . -name some_file -print0) ; echo $test >>> >> >> \x00 doesn't mean anything special for bash, it's just an "x" followed by 2 >> zeros (echo and printf can interpret it, and it has a special meaning inside >> $''). >> Even if it did, you cannot really pass the null byte as an argument: bash >> uses null-delimited strings, so the best you can do is to pass the empty >> string: >> read -d '' >> The good thing is that it works to read null-delimited input! You need a bit >> more work to be fully safe though: >> while IFS= read -rd ''; do ...; done < <(find ... -print0) > > Yes, I see that now, I managed to fool myself with vim's syntax highlighting. > =) > In combination with the fact that I only find one file with no x in > it's name, and that > read then never can find it's delimiter, x in this case, gives the > appearance of not > being able to handle nulls. > That links back to the real question though, one that got overlooked, > that read seem > to discard input. > Cut from the earlier example, with superfluous comments etc removed: > > while : > do > # Now, read from the console, with \n as delimiter. > read -t 2 data > ret=$? > > # Problem is here, $data is empty when read has timed > out, i.e, not > # read a complete line. I expect data to contain the > line read so > # far. > echo "$data" >&2 > > [[ $ret -gt 128 ]] || continue > > # This is a futile attempt at getting the data I miss > from the timed out > # read above, this is just to prove that the data > isn't there, even if I look > # for these two other delimiters, I know that either > one or the other will > # be there. > read -d ':' -t 1 data > # Data is still empty here ... > if [[ -n "$data" ]] > then > [[ "$data" =~ "login" ]] && return 0 > fi > > read -d '#' -t 1 data > # ... and empty here too. > if [[ -n "$data" ]] > then > [[ "$data" =~ 'root@' ]] && return 1 > fi > > # And data is empty on the next iteration of the loop too, > thus > # read discarded the incomplete line of input, the > line that holds > # the login: prompt. And that line does not hold a > newline so read > # times out. And discards that data. =( > > done < "$xl_console_output" > > > > >> Pierre >> PS: next time consider trimming your use case, to save us from having to search >> for your problem. > > Sorry, just wanted to be thorough, hope this new trimmed example is better, > and thanks for pointing out my mistake regarding while. =) > > / Linus From what I can see after a quick look, you are calling "read" inside a "while read; do ... done < something" loop. The read inside the "do ... done" will read from the same standard input as the one after "while", and it is connected to "something", not to the terminal. Maybe you want something like: while IFS= read -rd '' -u3 line; do read; done 3< <(find ... -print0) using another file descriptor (3) as input to the outer loop.
Re: Q on Bash's self-documented POSIX compliance...
On Sun, Jan 27, 2013 at 5:52 AM, John Kearney wrote: > On 27.01.2013 01:37, Clark WANG wrote: >> On Sat, Jan 26, 2013 at 1:27 PM, Linda Walsh wrote: >> >>> I noted on the bash man page that it says it will start in posix >>> compliance mode when started as 'sh' (/bin/sh). >>> >>> What does that mean about bash extensions like arrays and >>> use of [[]]? >>> >>> Those are currently not-POSIX (but due to both Bash and Ksh having >>> them, some think that such features are part of POSIX now)... >>> >>> If you operate in POSIX compliance mode, what guarantee is there that >>> you can take a script developed with bash, in POSIX compliance mode, >>> and run it under another POSIX compliant shell? >>> >>> Is it such that Bash can run POSIX compliant scripts, BUT, cannot be >>> (easily) used to develop such, as there is no way to tell it to >>> only use POSIX? >>> >>> If someone runs in POSIX mode, should bash keep arbitrary bash-specific >>> extensions enabled? >>> >>> I am wondering about the rational, but also note that some people believe >>> they are running a POSIX compatible shell when they use /bin/sh, but would >>> get rudely surprised is another less feature-full shell were dropped in >>> as a replacement. >>> >> I think every POSIX compatible shell has its own extensions so there's no >> guarantee that a script which works fine in shell A would still work in >> shell B even if both A and B are POSIX compatible unless the script writer >> only uses POSIX compatible features. Is there a pure POSIX shell without >> adding any extensions? > dash is normally a better gauge of how portable your script is, than > bash in posix mode. It is, but it still has a couple of extensions over the standard. There's also posh around. As for the rationale: making it strictly compatible in order to test scripts probably requires quite a bit more work, and I bet Chet would not be against a --lint option or something like that, but it may not be his primary objective.
Re: eval doesn't close file descriptor?
On Tue, Feb 12, 2013 at 1:54 AM, wrote: > With the script below, I'd expect any fd pointing to /dev/null to be > closed when the second llfd() is executed. Surprisingly, fd 3 is closed, > but fd 10 is now open, pointing to /dev/null, as if eval copied it instead > of closing it. Is this a bug? > > Thanks, > M > > > $ bash -c 'llfd () { ls -l /proc/$BASHPID/fd/; }; x=3; eval "exec > $x>/dev/null"; llfd; eval "llfd $x>&-"' > total 0 > lrwx-- 1 matei matei 64 Feb 11 18:36 0 -> /dev/pts/2 > lrwx-- 1 matei matei 64 Feb 11 18:36 1 -> /dev/pts/2 > lrwx-- 1 matei matei 64 Feb 11 18:36 2 -> /dev/pts/2 > l-wx-- 1 matei matei 64 Feb 11 18:36 3 -> /dev/null > lr-x-- 1 matei matei 64 Feb 11 18:36 8 -> /proc/4520/auxv > total 0 > lrwx-- 1 matei matei 64 Feb 11 18:36 0 -> /dev/pts/2 > lrwx-- 1 matei matei 64 Feb 11 18:36 1 -> /dev/pts/2 > l-wx-- 1 matei matei 64 Feb 11 18:36 10 -> /dev/null > lrwx-- 1 matei matei 64 Feb 11 18:36 2 -> /dev/pts/2 > lr-x-- 1 matei matei 64 Feb 11 18:36 8 -> /proc/4520/auxv > $ bash --version > GNU bash, version 4.2.24(1)-release (x86_64-pc-linux-gnu) > Copyright (C) 2011 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later < > http://gnu.org/licenses/gpl.html> > > This is free software; you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. > $ > Note that the same happens without using eval: $ llfd 3>&- total 0 lrwx-- 1 pgas pgas 64 Feb 12 08:00 0 -> /dev/pts/0 lrwx-- 1 pgas pgas 64 Feb 12 08:00 1 -> /dev/pts/0 l-wx-- 1 pgas pgas 64 Feb 12 08:00 10 -> /dev/null lrwx-- 1 pgas pgas 64 Feb 12 08:00 2 -> /dev/pts/0 lrwx-- 1 pgas pgas 64 Feb 12 08:00 255 -> /dev/pts/0 But you need to consider what process you are examining: you use a function, and you examine the file descriptors of the process where this function runs. A function runs in the same process as the parent shell; if bash simply closed 3, there would no longer be an fd open on /dev/null in the parent shell when the function returns. So what bash does is a little juggling with the file descriptors, moving 3 temporarily to be able to restore it.
Re: eval doesn't close file descriptor?
On Tue, Feb 12, 2013 at 6:07 PM, Matei David wrote: > Ok, but I see the same behaviour when eval runs in a subshell: > > $ bash -c 'llfd () { echo "pid:$BASHPID" >&2; ls -l /proc/$BASHPID/fd/ > >&2; }; x=3; eval "exec $x>/dev/null"; llfd; echo | eval "llfd $x>&-"' > [same output, fd 10 open, pointing to /dev/null, even though it's a > subshell] > eval runs in a subshell, but it's the same thing inside this subshell, eg you could have: echo | { eval "llfd $x>&-"; echo blah >&3; } Bash could optimize this once it realizes there's only one command, but it's probably not that simple to implement. Try with a function that spawns a subshell, eg: llfd () ( echo "pid:$BASHPID" >&2; ls -l /proc/$BASHPID/fd/ >&2; ) or llfd () { bash -c 'ls -l /proc/$$/fd' ; }
gnu parallel in the bash manual
I don't quite see the point of having gnu parallel discussed in the bash reference manual. http://www.gnu.org/software/bash/manual/bashref.html#GNU-Parallel I don't argue that it can be a useful tool, but then you might as well discuss sed, awk, grep, make, find, etc. Or even tools not part of the standard toolset, since parallel is not installed by default even on the linux distributions I know: flock, fdupes, recode, convmv, rsync, etc. On top of that, the examples teach incorrect things, eg "the common idioms that operate on lines read from a file" (sic): for x in $(cat list); do doesn't even read lines! I'd say this should be removed.
Re: Should this be this way?
On Tue, Feb 26, 2013 at 3:03 AM, Linda Walsh wrote: > My login shell is /bin/bash (i.e. not /bin/sh); SHELL=/bin/bash as well. > Typing 'which bash' gives /bin/bash, and whence bash: bash is /bin/bash. > > I had the foll0wing script which acts differently based on > whether or not it has a #!/bin/bash at the top: (i.e., as it is > displayed below, it fails; one need remove the [] from the first > line for it to work. > > #[!/bin/bash] > while read fn;do > base=${fn%.*} > if [[ -e $base ]]; then > if [[ $base -ot $fn ]]; then echo "compressed version ($fn) seems newer" > elif [[ $base -nt $fn ]]; then echo "uncompressed version ($base) > seem newer" > else echo "both versions ($base) are same age" > fi > else > echo "No uncompressed version of $base exists" > fi > done < <(find . -type f -name \*.[0-9].\*[zZ]\* ) > - > The error: > ./manscan.sh: line 12: syntax error near unexpected token `<' > ./manscan.sh: line 12: `done < <(find . -type f -name \*.[0-9].\*[zZ]\* )' > > Why would this script behave differently if the first line > exists or not? (Putting the !shell in square brackets, > made it a comment, not an interpreter spec, thus the same > effect as if it wasn't there ('cept the line number of the error is 1 > less if you don't have the line! ;-)). > > So...is this correct behavior for some[inane POSIX] reason? > Seems a bit odd to me. > I don't seem to be able to reproduce it with my default configuration. However I can reproduce it by setting (but not exporting) POSIXLY_CORRECT
Re: Should this be this way?
On Tue, Feb 26, 2013 at 11:22 AM, Roman Rakus wrote: > On 02/26/2013 02:03 AM, Linda Walsh wrote: >> >> My login shell is /bin/bash (i.e. not /bin/sh); SHELL=/bin/bash as well. >> Typing 'which bash' gives /bin/bash, and whence bash: bash is /bin/bash. > > which is not always correct. Use type builtin. > >> >> I had the foll0wing script which acts differently based on >> whether or not it has a #!/bin/bash at the top: (i.e., as it is >> displayed below, it fails; one need remove the [] from the first >> line for it to work. >> >> #[!/bin/bash] > > I think the line above will produce unspecified behavior. I think the kernel will try to find a shebang and not recognize it, then the current shell will try to run it. Man bash says: If this execution fails because the file is not in executable format, and the file is not a directory, it is assumed to be a shell script, a file containing shell commands. A subshell is spawned to execute it. This subshell reinitializes itself, so that the effect is as if a new shell had been invoked to handle the script, with the exception that the locations of commands remembered by the parent (see hash below under SHELL BUILTIN COMMANDS) are retained by the child. SUS says If the execve() function fails due to an error equivalent to the [ENOEXEC] error defined in the System Interfaces volume of POSIX.1-2008, the shell shall execute a command equivalent to having a shell invoked with the pathname resulting from the search as its first operand, with any remaining arguments passed to the new shell, except that the value of "$0" in the new shell may be set to the command name. If the executable file is not a text file, the shell may bypass this command execution. In this case, it shall write an error message, and shall return an exit status of 126.
Re: Should this be this way?
On Thu, Feb 28, 2013 at 7:09 PM, Andreas Schwab wrote: > Bob Proulx writes: > >> I say that somewhat tongue-in-cheek myself. Because sourcing files >> removes the abstraction barriers of a stacked child process and >> actions there can persistently change the current shell. Not good as >> a general interface for random actions. Normal scripts are better. > > You can still put the sourcing in a subshell if you don't want > persistent changes. ehe, or just "bash script"
Re: ignoring current shell and always running posix shell? Re: Should this be this way?
On Mon, Mar 11, 2013 at 7:11 PM, Linda Walsh wrote: > Pierre Gaston wrote: >> On Tue, Feb 26, 2013 at 11:22 AM, Roman Rakus wrote: >>> I think the line above will produce unspecified behavior. > >> Man bash says: >> If this execution fails because the file is not in executable >> format, and the file is not a directory, it is assumed to be a shell >> script, a file containing shell commands. A subshell is spawned to >> execute it. This subshell reinitializes itself, so that the effect is >> as if a new shell had been invoked to handle the script, >> with the exception that the locations of commands remembered by the >> parent (see hash below under SHELL BUILTIN COMMANDS) are retained by >> the child. > > I doubt that starting a different shell than the one you are > running under is going to preserve commands in the same way as the parent > UNLESS the parent is the same shell. Correct, that's why bash doesn't do that (at least an upstream, unpatched version, which is what the manual documents). > Has passing such hashed args been standardized between zsh/tcsh,ksh > /sh/bash? I don't know. >> SUS says >> If the execve() function fails due to an error equivalent to the >> [ENOEXEC] error defined in the System Interfaces volume of >> POSIX.1-2008, the shell shall execute a command equivalent to having a >> shell invoked with the pathname resulting from the search as its first >> operand, with any remaining arguments passed to the new shell, except >> that the value of "$0" in the new shell may be set to the command >> name. If the executable file is not a text file, the shell may bypass >> this command execution. In this case, it shall write an error message, >> and shall return an exit status of 126. > > > It is likely that the document is assuming you are running on > a POSIX compliant system where all users use the same shell so there is > only 1 shell, thus the use of the word 'the' when referring to the shell. > Of course, it's the posix specification for the posix shell.
Re: ignoring current shell and always running posix shell? Re: Should this be this way?
On Tue, Mar 12, 2013 at 12:37 AM, Linda Walsh wrote: > Pierre Gaston wrote: >>> >>> >>> It is likely that the document is assuming you are running on >>> a POSIX compliant system where all users use the same shell so there is >>> only 1 shell, thus the use of the word 'the' when referring to the shell. >>> >> Of course, it's the posix specification for the posix shell > > What does that say about bash (in nonposix mode), perl, python, > rbash, etc i.e. -- the case that I ran into was NOT me running in posix > mode. Not much; the bash part was covered by quoting the manual. No part of this email was about you; I was merely answering Roman about the fact that it is specified in the 2 documents that are somehow relevant on this mailing list. > It would make no sense for posix to take the stance that any > unknown script without a shebang at the top, presented to any interpreter > shell > be ignored by the interpreter and instead shall be run under /bin/sh. > > Posix used to claim they were "descriptive", not "prescriptive", > though > they are becoming more of the latter with each new update, I'd find it hard to > think they'd try to enforce all script languages to default sources to > /bin/sh. > Afaik Posix doesn't impose anything on anyone but those who want to claim being posix compliant. (And yeah, perl and python don't try to be compliant with the posix shell specification.)
Re: If rbash is worthless, why not remove it and decrease bloat?
On Sat, Mar 16, 2013 at 6:28 PM, Chris Down wrote: > On 2013-03-16 12:13, Chet Ramey wrote: >> > If it cannot be removed, then some people are using it with the false >> > expectation that it provides some increased security. Better to get >> > rid of that than have someone think it is worth the extra bytes it takes >> > to implement. >> >> Folks cling tightly to their ideas about what should and should not be in >> bash and how it should behave. I'm comfortable with leaving the restricted >> shell feature in the current state and allowing users or distributions to >> disable it at their option. The `bloat' is not significant enough to be a >> factor. > > I agree in general, however, I would be in favour of at least adding something > to the man page that indicates rbash should not be considered secure except in > very specific implementations. I've dealt with too many people that falsely > think it increases security (although, whether these are the sort of people to > read man pages over ill-informed garbage on some guy's "Linux blog", I don't > know). > > Chris I don't think the manual gives this impression as it is. It doesn't say "secure" but "more controlled", and I think the way it is described really forces the prospective user to think about what rbash actually provides.
Re: Bug/limitation in 'time'
On Sun, Mar 17, 2013 at 4:33 AM, Bruce Dawson wrote: > Thanks -- good to know that there is a fast and POSIX compliant method of > doing this. I should have included my optimized counting loop -- it's what > we switched to when we realized that $(expr) was a problem. Here it is now: > > # This code performs quite well > function BashCount() { > i=$1 > while [ $i -gt 0 ]; do > (( i-- )) > done > echo Just did $1 iterations using bash math > } > time BashCount 15 > > It's a *lot* faster, of course. BTW, I've poked around in the 'time' source > code enough to know that it is just displaying the results of wait3(), so > the misleading CPU consumption information is ultimately a wait3()/kernel > issue. However showing this in the documentation would be great. At least the man page of time on my ubuntu system is pretty clear about what it does. The result doesn't strike me as impossible though: I can imagine a lot of real time spent waiting for the scheduler to run expr and then to run bash again. I tried a little experiment that I think shows the importance of the scheduler on the real time result: I ran this little loop at the same time with different "niceness":

i=0;time while ((i++<1));do /bin/echo -n;done
sudo nice -n 19 bash -c 'i=0;time while ((i++<1));do /bin/echo -n;done' 2>&1| sed s/^/19:\ / &
sudo nice -n -20 bash -c 'i=0;time while ((i++<1));do /bin/echo -n;done' 2>&1| sed s/^/-20:\ /

I get:

-20: real 0m9.331s
-20: user 0m0.468s
-20: sys  0m1.504s
19: real  0m14.004s
19: user  0m0.532s
19: sys   0m1.660s

so the nicer loop takes twice as much real time, indicating that much of the real time is spent waiting for the process to run.
Re: Bug/limitation in 'time'
On Sun, Mar 17, 2013 at 5:58 PM, Bruce Dawson wrote: > The man page is clear that it is displaying the results of wait3(). However > it doesn't mention that this means that sub-process startup time is not > accounted for. That's what I feel should be clarified. Otherwise a CPU bound > task may appear to not be CPU bound. > > My expectation is that the sum of 'user' and 'sys' time should equal the > elapsed time because the overall task is 100% CPU bound (I've confirmed > this). It is unfortunate that the sub-process startup time is not accounted > for in 'user' or 'sys' time, and I think it would be appropriate to document > this. I'm not sure these are not taken into account; my guess is that the difference between real and sys+user may well be due to the other processes of your system trying to run.
Re: Bug/limitation in 'time'
On Sun, Mar 17, 2013 at 9:07 PM, Bob Proulx wrote: > Bruce Dawson wrote: >> The man page is clear that it is displaying the results of wait3(). > > Man page for time? You mean the time section of the man page for > bash. No. > If you are looking at the time man page then you are looking at > the standalone /usr/bin/time command and not the bash builtin time > command. The OP in his original email showed that he was aware of that.
Re: Bug/limitation in 'time' (kernel setings?)...
On Tue, Mar 19, 2013 at 5:03 AM, Bruce Dawson wrote: > I'll give those a try. > > BTW, I just posted the blog post to share what I'd found. You can see it > here: > > http://randomascii.wordpress.com/2013/03/18/counting-to-ten-on-linux/ > > I hope it's accurate, and I do think it would be worth mentioning the issue > in the documentation for 'time' and the bash 'time' internal command. > For what it's worth, I still think that time is not lying (though the man page warns about possible inaccuracies). Your loop with expr might be "cpu bound" but it does not run often because other processes are given a chance to run.
Re: unfamiliar construct...
On Sat, Mar 23, 2013 at 3:15 AM, Linda A. Walsh wrote: > In reading some suse startup code (*shiver*), > > I came across this construct > > > > func() { > local comm ## command from /proc/$pid/stat > > for comm; do > test -s comm || continue > ppid = pidofproc $comm > parents="${parents:+parents:}${ppid}" > done > } > --- > Is that valid code? does for have some arcane usage > to test if the contents of it exists as a file or something? `for comm; do' is like `for comm in "$@"; do'. The 2 lines after that are definitely wrong though and should probably look like:

test -s $comm || continue
ppid=$(pidofproc $comm)

(without trying to guess the purpose of this function)
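The implicit form is easy to check (a sketch):

args() { for arg; do echo "got: $arg"; done; }   # same as: for arg in "$@"
args one two three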
Re: weird problem -- path interpretted/eval'd as numeric expression
On Fri, Mar 29, 2013 at 5:10 PM, John Kearney wrote: > consider > dethrophes@dethace ~ > $ read -ra vals -d '' <<< $'lkjlksda\n adasd\n:sdasda:' > > dethrophes@dethace ~ > $ echo ${vals[0]} > lkjlksda > > I meant to update your wiki about it but I forgot. > I guess read uses gets not fread and that truncates the line anyway. > You're missing the IFS part:

IFS=: read -ra vals -d '' <<< $'lkjlksda\n adasd\n:sdasda:'
echo "${vals[0]}"

(IFS contains \n by default)
off topic IFS=: read changing the global IFS
On Fri, Mar 29, 2013 at 5:18 PM, John Kearney wrote: > Oh and FYI > IFS=: read > may change the global IFS on some shells I think. > Mainly thinking of pdksh right now. it seems ok on this netbsd machine: PD KSH v5.2.14 99/07/13.2 IFS=f read
Re: Local variables overriding global constants
On Wed, Apr 3, 2013 at 11:03 AM, Chris Down wrote: > On 2013-04-03 11:00, Nikolai Kondrashov wrote: > > >>>It doesn't work because you are trying to redefine an existing > > >>>readonly variable. > > >> > > >>Yes, but I'm explicitly redefining it locally, only for this function. > > >>And this works for variables previously defined in the calling > function. > > > > > >You're not redefining it locally, you are unsuccessfully trying to > override a > > >global. > > > Still, Nikolai has a point. It's not clear why a readonly variable can be overridden when it is declared readonly in the scope of an enclosing function but not when it is declared readonly in the global scope. $ bash -c 'a() { v=2;echo "$v"; }; b () { declare -r v=1; a; echo "$v"; }; b' bash: v: readonly variable The variable is local to b, but the readonly flag is preserved in a. $ bash -c 'a() { declare -r v=2;echo "$v"; }; b () { declare -r v=1; a; echo "$v"; }; b' 2 1 The variable is local to b, but you can redeclare it local to a even if it has the readonly flag. $ bash -c 'declare -r v=2; b () { declare -r v=1; echo "$v"; }; b' bash: line 0: declare: v: readonly variable 2 It looks the same as the first case except that the variable is declared readonly in the global scope. (Also readonly differs from declare -r: $ bash -c 'a() { declare -r v=2;echo "$v"; }; b () { readonly v=1; a; echo "$v"; }; b; v=2' bash: line 0: declare: v: readonly variable 1 1 bash: v: readonly variable. I seem to recall this has been discussed on this list at some point.)