declare(1) not executing inside source'd script
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-redhat-linux-gnu' -DCONF_VENDOR='redhat' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -I. -I. -I./include -I./lib -D_GNU_SOURCE -DRECYCLES_PIDS -DDEFAULT_PATH_VALUE='/usr/local/bin:/usr/bin' -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -Wno-parentheses -Wno-format-security
uname output: Linux student.eeconsulting.net 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-redhat-linux-gnu

Bash Version: 4.4
Patch Level: 23
Release Status: release

Description:
    A script contains a function definition followed by "declare -fx funcname",
    but the declare(1) doesn't execute and the function is not exported after
    source'ing the file. In addition, executing 'declare -p -F | grep "funcname"'
    does not produce any output (making me think that declare(1) isn't being
    executed at all).

Repeat-By:
    $ cat <<__EOF__ >/tmp/bashbug.bash
    > function myfunc {
    >     echo "Running..."
    > }
    > declare -fx myfunc
    > declare -p -F | grep "myfunc"
    > __EOF__
    $ source /tmp/bashbug.bash

    The function is now defined, but is not exported. And the output of the
    last command never appears, but if the same command is executed now -- at
    the interactive shell prompt -- it does show that 'myfunc' is defined.

Fix:
    Strangely, if I create another script that sources the first one, then
    execute the declare commands in that second script, everything works when
    I source the second file?!
Re: declare(1) not executing inside source'd script
Agreed. But the bash installation does create a man page for it, does it not?
So a command like "man 1 declare" does work...? I always thought this was
somewhat incongruous.

--
Frank Edwards
Edwards & Edwards Consulting, LLC
Hours: 10A-6P ET
Phone: (813) 406-0604

Sent from my iPhone. No electrons were harmed in the creation or transmission
of this message.

> On Jul 27, 2018, at 10:09 AM, Greg Wooledge wrote:
>
>> On Fri, Jul 27, 2018 at 12:26:06AM -0400, fr...@eec.com wrote:
>> produce any output (making me think that declare(1)
>> isn't being executed at all).
>
> For the record, declare is a builtin, not an external command with a
> man page in the (1) section, so writing "declare(1)" is extremely
> misleading.
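For reference, a quick way to check how a name resolves and to read the
builtin documentation without relying on man section numbers (a sketch; the
output shown is typical, not verbatim):

    type -a declare    # reports "declare is a shell builtin"; there is no file in $PATH
    help declare       # builtin help text maintained by bash itself
    man bash           # the full description is in bash(1), under SHELL BUILTIN COMMANDS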
Re: declare(1) not executing inside source'd script
Hm. Strange. I will examine the environment for weirdness regarding BASH_ENV
or anything else that might be an issue. Given that it happens for me in both
the stock bash 4.2.46 and the locally built 4.4 (on CentOS 7.4), there must be
something common to both and thus external to the shell itself.

I have installed bashdb on this system as well, but I don't expect that to
have affected anything, since it's an add-on that Bash shouldn't have noticed
without the use of --debugger.

Thanks for your help.

PS: So the official line is to not specify the man section for builtins to the
shell? I will not use section numbers in the future. :)

--
Frank Edwards
Edwards & Edwards Consulting, LLC
Hours: 10A-6P ET
Phone: (813) 406-0604

Sent from my iPhone. No electrons were harmed in the creation or transmission
of this message.

> On Jul 27, 2018, at 10:17 AM, Greg Wooledge wrote:
>
>> On Fri, Jul 27, 2018 at 12:26:06AM -0400, fr...@eec.com wrote:
>> Repeat-By:
>>     $ cat <<__EOF__ >/tmp/bashbug.bash
>>     > function myfunc {
>>     >     echo "Running..."
>>     > }
>>     > declare -fx myfunc
>>     > declare -p -F | grep "myfunc"
>>     > __EOF__
>>     $ source /tmp/bashbug.bash
>>
>>     The function is now defined, but is not exported.
>>     And the output of the last command never appears,
>>     but if the same command is executed now -- at
>>     the interactive shell prompt -- it does show that
>>     'myfunc' is defined.
>
> I cannot reproduce this, either in Debian's bash 4.4, or in bash 5.0-alpha.
>
> wooledg:~$ exec bash-5.0-alpha
> wooledg:~$ cat foo
> function myfunc {
>     echo "Running"
> }
> declare -fx myfunc
> declare -p -F | grep myfunc
> wooledg:~$ source ./foo
> declare -fx myfunc
> wooledg:~$ bash -c myfunc
> Running
>
> On my system, I see the output from "declare -p -F" upon sourcing the
> file, and the function is definitely exported. I get the same results
> using Debian's bash 4.4(.12) as well.
Corrupted multibyte characters in command substitutions
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wall
uname output: Linux mars 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64 GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 5.1
Patch Level: 4
Release Status: release

Description:
    Bash sometimes corrupts multibyte characters in command substitutions.

    I found the bug with the bash version shipped with Debian bullseye, but it
    can be reproduced with an unmodified bash as well. The attached script
    shows how to build it (the configure options given there seem necessary to
    trigger the bug, except "-g", which I needed for debugging).

    The bug is very fragile and depends heavily on things like the length of
    filler characters in the script, of environment variables (even unrelated
    ones), and even of the current working directory name. Therefore the
    attached script is a wrapper that tries to reproduce the conditions for
    the bug to occur. I'm not sure if anything else from my environment is
    relevant; if it doesn't reproduce the bug for you, you can try playing
    with things like environment variables, filler lines in the script, etc.

    The wrapper then calls the actual buggy script (a trimmed-down version of
    my actual script exhibiting the bug), which is the lower here-document in
    the script. It's meant to read input from stdin (here, 511 spaces and a
    2-byte UTF-8 character, so it crosses a 512-byte boundary) and output it
    unchanged (with a trailing newline, which is irrelevant here), so the
    expected output is:

        20 ... 20 c3 a4 0a

    But when the bug occurs, it gives:

        20 ... 20 c3 90 a4 0a

    (The wrongly inserted byte may be something else instead of "90".)

    I traced the bug to subst.c:6244:

        mblen = mbrtowc (&wc, bufp-1, bufn+1, &ps);

    Here, bufn+1 is too big by 1, so the function will overrun the input data,
    and thus here the buf array, so UB. (That's why the bug is so fragile;
    that stuff needs to dirty exactly the memory location which is wrongly
    read here.)

    However, I'd say the actual cause of the bug is rather the handling of
    bufn in the read loop. After a char is consumed from the buffer (6207),
    bufn is not decremented until the next loop iteration, 6199:

        if (--bufn <= 0)

    This means (a) bufn is decremented once too many at the start (which is
    compensated for by using "<=" where otherwise "==" would do), and (b) bufn
    is too big by 1 for the rest of the loop. So far, the only place where it
    matters is the mblen call above, so the bug could be avoided by
    subtracting 1 there, but I think it's more robust to decrement bufn when
    consuming the char to avoid this pitfall for future changes, so that's
    what my patch does.

Repeat-By:
    Running the attached wrapper script.

Fix:
    I've included my patch in the wrapper script, activated by setting
    "patched=y", so it can easily be tested in the same environment; you can
    just extract it from there.

[Attachment: bash-utf8-bug -- binary data]
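For completeness, a minimal way to exercise the same buffer-boundary condition
from a shell prompt (a sketch only, not the attached wrapper; as described
above, the corruption is fragile and may well not trigger on a given system):

    # 511 spaces plus a 2-byte UTF-8 character ("ä", bytes c3 a4), so the
    # multibyte sequence straddles a 512-byte read boundary inside the
    # command substitution; on an affected build a stray byte may appear.
    out=$(printf '%511s\xc3\xa4' "")
    printf '%s' "$out" | od -An -tx1 | tail -n 1
    # expected ending: ... 20 c3 a4; a corrupted result shows an extra byte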
Re: Corrupted multibyte characters in command substitutions
Chet Ramey wrote:

> On 1/1/22 7:02 PM, Frank Heckenbach wrote:
>
> Thanks for the report. This is a pretty good in-depth analysis of the issue.
>
> This was fixed back in March, 2021 in the devel branch as a result of
> https://savannah.gnu.org/patch/?10035 (though the fix is different from
> yours).
>
> It's queued up as one of the next set of bash-5.1 patches.

I'm not happy with the way this bug is handled. After all, we're talking about
silent data corruption, and now I learn the bug has been known for almost a
year, the fix is known, and it still hasn't been released, not even as an
official patch.

In the meantime, the buggy version has made it into a Debian stable release
(and I assume many other distributions) and caused me (and I assume many other
users) a lot of trouble. I spent many hours, first debugging my own script,
then bash, which could have been spent more productively! I'll now have to
spend more time installing a patched bash on each system I've upgraded since
then. Also, any data that were processed by a bash script are now potentially
corrupt, with no easy way of checking!

PS: I still think my patch is better, as it addresses the root cause instead
of leaving a minefield for the next one to make a change there, but anyway, at
least get either patch out now ASAP, please!

Frank
Re: Corrupted multibyte characters in command substitutions
Chet Ramey wrote:

> > After all, we're talking about silent data corruption, and now I
> > learn the bug is known for almost a year, the fix is known and still
> > hasn't been released, not even as an official patch.
>
> If you use the number of bug reports as an indication of urgency,

Why would you? Aren't you able to assess the severity of a bug yourself?
Silent data corruption is certainly one of the most severe kinds of bugs (next
to security bugs -- which this one might also be; I don't know, I'm no expert
in writing exploits).

> this is rarely encountered.
> Yours is the second (maybe the third?) report.

Obviously not. You yourself gave me a link to another report. That one
mentions the bug also affects the building of the FSFE website. A quick search
found other reports.

But I know what you really mean. It only affects those strange non-ASCII
locales, so it must be rare. (Anti-American rant skipped for politeness.)

> > In the meantime, the buggy version has made it into a Debian stable
> > release (and I assume many other distributions) and caused me (and I
> > assume many other users) a lot of trouble.
>
> I wouldn't make any assumptions beyond your own experience.

But I do, since I read about others' experiences; see above.

> > I spend many hours, first debugging my own script, then bash, which
> > could have been spent more productively!
>
> I appreciate that you did.

I certainly won't when I find the next bash bug. Instead, I'll ask everyone I
know to send a separate bug report to better suit your metric of urgency.

PS: One reason I'm so angry is this isn't the first time I've reported a bug
with an easy fix to some FS package, or found a bug and discovered someone
else had done this, and the fix sat there for a long time. How do you really
expect people to contribute (you know, ESR's bazaar and stuff) if all the
effort goes to waste and important fixes just rot in some archives?

It's different when only a bug is reported and someone needs to find the cause
and fix it, but when the patch is there and tested, it's a matter of minutes
(if you have a moderately sane build system) to apply it and save your users a
lot of trouble.

Frank
Re: Corrupted multibyte characters in command substitutions
Ángel wrote:

> I think that had you tested the devel branch instead of the last
> release, you could have skipped a lot of testing (but how would you
> have known? it's an easy thing to miss).
> https://savannah.gnu.org/patch/?10035 seems to have gone the "easy
> fix", which you discarded to get a more thorough one.

Well, the hard part was the analysis. After I found the problem, the fix
wasn't that hard either way.

> I was impressed as well by your careful analysis.
>
> Chet, I think you should consider if Frank's patch isn't better than the
> previous one.
> I agree however that it should be published as an official patch.
> 1/512th chance of corruption, and only on certain bash versions, is
> unlikely to be noticed easily. Which doesn't mean this isn't really
> important.

1/512 may be rare (and thus the more surprising) for many users. In my case,
it was (luckily?) more common, since my script processed a number of UTF-8
strings, which increases the chance of hitting it. Indeed, by varying the
environment it was roughly as likely to work correctly or crash at one of
three points or so.

> Think for instance what could happen with this affecting a
> pass(1) wrapper.

Probably. But any script that processes data (and doesn't just pipe them from
one external command to the other) is potentially affected, and one may not
notice the corrupt output until much later.

> By the way, your reproducer is not working for me with an unpatched 5.1.8:

Well, as I wrote in my original mail, it may depend on other factors of my
environment, and it would take more work to identify them. Anyway, the point
is moot now; my test works on my system and shows that the bug is present in
5.1.12 and fixed in 5.1.16.

> As for patching the systems, I think this deserves being patched even
> on stable distros. Albeit I would prefer that Chet released an official
> patch first.

That's been done now (5.1.16), thanks! Of course, I agree that stable distros
should be patched as soon as possible.

Best regards,
Frank
Re: Corrupted multibyte characters in command substitutions fixes may be worse than problem.
> On 2022/01/02 17:43, Frank Heckenbach wrote:
>
> > Why would you? Aren't you able to assess the severity of a bug
> > yourself? Silent data corruption is certainly one of the most severe
> > kind of bugs ...
> ---
> That's debatable, BTW, as I was reminded of a similar
> passthrough of what one might call 'invalid input' w/o warning,

I think you misunderstood the bug. It was not about passing through invalid
input or fixing it. It was about bash corrupting valid input (if an internal
buffer boundary happened to fall within a UTF-8 sequence).
Re: Corrupted multibyte characters in command substitutions fixes may be worse than problem.
> In the case of bash with environment having LC_CTYPE: C.UTF-8 or en_US.UTF-8
> read: 0xC3 (len=1) i.e. Ã ('A' w/tilde in a legacy 8-bit latin-compatible
> charset), but invalid if bash processes the environment setting of
> en_US.UTF-8.
>
> Should bash process it as legacy input or invalid UTF8?
> Either way, what should it return? a UTF-8 char
> (hex 0xc3 0x83) transcoded from the latin value of A-tilde, or
> keep the binary value the same (return 0x83),
> should it return a warning message? If it does, should
> it return NUL for the returned value because the input was erroneous?

Assuming Latin-1 when nothing in the environment points to it seems
questionable. It might just as well be a Cyrillic character in ISO-8859-5 or
whatever.

Email filters were mentioned. Emails may use charsets different from the
current environment -- even several different ones within a mail (I've sent
such mails myself). So if bash were to "fix" input depending on the
environment, even writing a pass-through filter would require parsing the
Content-Type headers and changing the environment accordingly (or else, using
an 8-bit-clean charset throughout).

So I don't think bash should change the input (unintentionally as with the
original bug, or intentionally as discussed here) unless and until it needs to
do charset-dependent operations.
"trap" output from "if" statement redirected wrongly
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wall
uname output: Linux mars 5.10.0-12-amd64 #1 SMP Debian 5.10.103-1 (2022-03-07) x86_64 GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 5.1
Patch Level: 16
Release Status: release

Description:
    This script writes "foo" to bar, rather than to stdout as I'd expect.

    It's triggered by the "if" statement (which doesn't even cause running in
    a subshell, so it's not that).

    #!/bin/bash
    set -e
    trap 'echo foo' 0
    #false > bar # "foo" written to stdout correctly
    if true; then false; else false; fi > bar # "foo" written to bar
Re: "trap" output from "if" statement redirected wrongly
> On Wed, Apr 13, 2022 at 7:59 AM Frank Heckenbach wrote:
>
> > This script writes "foo" to bar rather than stdout as I'd expect.
> >
> > It's triggered by the "if" statement (which doesn't even cause
> > running in a subshell, so it's not that).
> >
> > #!/bin/bash
> > set -e
> > trap 'echo foo' 0
> > #false > bar # "foo" written to stdout correctly
> > if true; then false; else false; fi > bar # "foo" written to bar
>
> I don't believe this is a bug, though Chet is welcome to chime in
> with an authoritative answer. The script exits after the else clause
> because false returns 1 and errexit is set. At this point in the program
> stdout has been redirected to the file bar, so when the trap echoes to
> stdout its content goes to bar as well. I don't see anything in the
> docs about traps resetting their output file descriptors before they
> are run.

I don't see anything to the contrary either. Syntactically, the redirection
applies to the main statement, not the trap. When and how redirections are set
up and reset is an implementation detail.

Moreover, I don't see anything that would make a difference between the "if"
statement and the plain one, so if the behaviour you describe is correct, the
bug is that the non-"if" statement actually writes to stdout. :P

> To work around this you need to save the original stdout file
> descriptor.

I know how to work around it. But that makes me wonder: what is actually
required to use a trap statement safely (meaning, in a non-surprising way)?
I know (for an EXIT trap) to save "$?" and "rethrow" it; now it seems one
needs to save all FDs it might use (stdout, stderr, maybe stdin or other FDs
one might redirect somewhere). What else is required? Is this documented
anywhere?
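For the record, a sketch of the kind of defensive setup discussed here
(assuming bash >= 4.1 for the {var} redirection syntax; the variable name is
arbitrary): duplicate the original stdout before any redirection so the EXIT
trap is independent of where the failing command's output went, and preserve
"$?" explicitly.

    #!/bin/bash
    set -e
    exec {orig_out}>&1                                # save the script's original stdout
    trap 'rc=$?; echo foo >&$orig_out; exit $rc' EXIT
    if true; then false; else false; fi > bar         # trap output now goes to the saved FD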
executes statement after "exit"
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wall
uname output: Linux mars 5.10.0-12-amd64 #1 SMP Debian 5.10.103-1 (2022-03-07) x86_64 GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 5.1
Patch Level: 16
Release Status: release

Description:
    bash executes a statement after "exit". Happens with or without "set -e".
    Doesn't happen with a newline instead of ";". Similar to
    https://bugs.debian.org/819327, but perhaps not the same (as that one also
    happens with a newline).

    #!/bin/bash
    : $((08 + 0)); exit
    echo "Should not get here."

    But I guess that's also prescribed by POSIX, right?
Re: executes statement after "exit"
> > #!/bin/bash
> > : $((08 + 0)); exit
> > echo "Should not get here."
>
> It never executes `exit'.
>
> The explanation in
> https://lists.gnu.org/archive/html/bug-bash/2022-04/msg00010.html applies
> here.
>
> The arithmetic syntax error (invalid octal constant) results in a word
> expansion error (the $((...)) ), which causes the shell to abort execution
> of the current command (the command list)

"Current command" means command list? Is this actually documented somewhere?

E.g., in "3.5.6 Process Substitution" the manual says: "The process list is
run asynchronously, and its input or output appears as a filename. This
filename is passed as an argument to the current command as the result of the
expansion."

So if "current command" is the command list, in

    sleep 3; : <(echo foo >&2)

shouldn't it start the "echo" before the "sleep" (in order to pass its stdout
as a filename to the command list)? It doesn't seem to do so. So here,
"current command" apparently means a simple command (or in other cases, a
compound command), but not a command list.

> and jump back to the top level
> to continue execution with the next command (echo).

Let's see. This is a command list, so according to your explanation, due to
the expansion error, neither the "exit" nor the "echo" is run:

    : $((08 + 0)); exit; echo "Should not get here."

Alright. Now, in "3.2.4 Lists of Commands" it says: "A sequence of one or more
newlines may appear in a list instead of a semicolon to delimit commands." So
let's do this for the latter semicolon, resulting in:

    : $((08 + 0)); exit
    echo "Should not get here."

According to the above, this should still be one command list, so the "echo"
shouldn't run either, but it does.

> POSIX requires the shell to exit on a word expansion error, which bash does
> in posix mode.

This actually seems the saner behaviour, rather than continuing at some
arbitrary point -- which I guess is well-defined in some sense, but really
unobvious to the programmer and very fragile, since it changes when a block is
moved to a function, put inside an "if" statement, or just has the formatting
(";" vs. newline) changed...
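A quick way to see the three behaviours side by side (a sketch; this mirrors
what the thread describes for a 5.1-era bash, and details may vary across
versions):

    bash -c ': $((08 + 0)); echo after-semicolon'   # expansion error aborts the list; echo skipped
    bash -c ': $((08 + 0))
    echo after-newline'                             # separate list; the echo runs
    bash --posix -c ': $((08 + 0)); echo posix'     # POSIX mode: the shell exits on the error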
Missing documentation "-bash_input"
Hi,

I just noticed that the man pages are missing documentation for the "-bash"
(or better "-bash_input") parameter. I found the "-bash" parameter in a script
and couldn't find any documentation about it; after checking out the source
code I found "-bash_input", which after some testing is the code path "-bash"
is somehow also pointing to. Could some documentation about this parameter be
added to the man pages?

It is defined in y.tab.c at line 3945 as part of yy_string_get, and in the
code it is documented with "Let input come from STRING. STRING is zero
terminated.", meaning it uses the provided argument instead of stdin.
Therefore these are equivalent, but with the -bash parameter it may be more
readable when being passed as an argument itself to e.g. nspawn or docker:
`echo test1234 | bash -c "echo $@"`, `bash -c "echo $@" -bash "test1234"`,
`bash -c "echo $@" -bash_input "test1234"`.

The section in the docs could be something like:

```
-bash_input, -bash:
    Can be used in scripts to provide pipe input to bash itself. This is
    especially useful when calling bash has to be wrapped within another
    command like chroot, e.g.
    `chroot /path /bin/bash -c "echo $@" -bash "pipe input"`
```

Yours sincerely,
Klaus Frank
Re: Missing documentation "-bash_input"
Hi,

Sorry, but this is not true; I can clearly see that it exists. It may be a
distro addition though. Is it specific to Arch Linux? Because I can see it
being used, and when I try to use it on my system it also clearly works. But
against it just being a distro-specific thing is that I can also see it within
the bash source code mirror on GitHub. Where does it come from if it is not
supposed to exist? Sorry, but something is really confusing here.

Example usage:
https://gitlab.archlinux.org/archlinux/devtools/-/blob/master/src/makechrootpkg.in?ref_type=heads#L152

    arch-nspawn "$copydir" "${bindmounts_ro[@]}" "${bindmounts_rw[@]}" \
        bash -c 'yes y | pacman -U -- "$@"' -bash "${pkgnames[@]/#//root/}"

Bash source code on the GitHub mirror:
https://github.com/bminor/bash/blob/master/y.tab.c#L3967
"(--bash_input.location.string) = c;"

Yours sincerely,
Klaus Frank

On 2023-11-28 15:30:20, Andreas Schwab wrote:

> On Nov 28 2023, Klaus Frank wrote:
> > I just noticed that the man pages are missing documentation for the
> > "-bash" (or better "-bash_input") parameter.
>
> There is no such thing as a -bash or -bash_input parameter.
>
> > be better readable when being passed as an argument itself to e.g. nspawn
> > or docker: `echo test1234 | bash -c "echo $@"`, `bash -c "echo $@" -bash
> > "test1234"`, `bash -c "echo $@" -bash_input "test1234"`.
>
> The first argument passed after -c ... is just a placeholder here, it is
> used to set the $0 special parameter.
>
>     $ bash -c 'echo $0 $2 $1' foo bar mumble
>     foo mumble bar
Re: Missing documentation "-bash_input"
Hi,

Thanks for the explanation, especially the parsing one at the bottom; that
explains why my tests were false positives.

One thing though (I probably should already know this): why is a $0 needed
even though a command was already specified? Shouldn't the command itself be
$0?

On 2023-11-29 01:27:17, Lawrence Velázquez wrote:

> On Tue, Nov 28, 2023, at 5:33 PM, Klaus Frank wrote:
> > sorry, but this is not true
>
> It is true.
>
> > I can clearly see that it exists. It may be an distro addition though. Is
> > it specific to ArchLinux? Because I can see it being used and when I try
> > to use it on my system it also clearly works.
>
> You see it being used without causing errors, but that doesn't mean it's
> doing what you think it's doing. Observe that
>
>     % bash -c 'printf %s\\n "$@"' -bash 1 2 3
>
> and
>
>     % bash -c 'printf %s\\n "$@"' --no_such_option 1 2 3
>
> appear to behave identically.
>
> > But against it just being a distro specific thing is that I also can see
> > it within the bash source code mirror on GitHub. Where does it come from
> > if it is not supposed to exist? Sorry, but something is really confusing
> > here.
> >
> > Example usage:
> > https://gitlab.archlinux.org/archlinux/devtools/-/blob/master/src/makechrootpkg.in?ref_type=heads#L152
> >
> > arch-nspawn "$copydir" "${bindmounts_ro[@]}" "${bindmounts_rw[@]}" \
> >     bash -c 'yes y | pacman -U -- "$@"' -bash "${pkgnames[@]/#//root/}"
>
> You're misinterpreting that command line. As Andreas already explained,
> when you run something like
>
>     bash -c 'command string here' a b c d
>
> the first argument after the command string (in my case "a", and in your
> case "-bash") is used as $0 while executing that command string, and the
> remaining arguments are used as the positional parameters. This happens
> even if an argument looks like an option, so options are not recognized as
> such if they follow -c [*]:
>
>     % bash -x -c 'printf %s\\n "$0"'
>     + printf '%s\n' bash
>     bash
>     % bash -c 'printf %s\\n "$0"' -x
>     -x
>
> In your example, bash executes the command string 'yes ... "$@"' with
> "-bash" as $0 and the (possibly multiword) expansion of
> "${pkgnames[@]/#//root/}" as the positional parameter(s). I don't know why
> they are setting $0 to "-bash". Doing so makes the shell look like a login
> shell at a quick glance, but that doesn't make it one:
>
>     % bash -c 'printf %s\\n "$0"; shopt login_shell' -bash
>     -bash
>     login_shell off
>
> ---
> [*]: Hypothetical "-bash" and "-bash_input" options would have to come
> before -c to take effect. However, bash does not use Tcl-style options, so
> "bash -bash" would be equivalent to "bash -b -a -s -h", and "bash
> -bash_input" to "bash -b -a -s -h -_ -i -n -p -u -t". These options all
> exist already except for "-_", which is why Greg's demonstration yielded
> the error "-_: invalid option".
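To summarize the convention with a tiny example (names here are arbitrary):
with -c, the first argument after the command string only sets $0 -- a label
used in error messages -- and the remaining arguments become the positional
parameters; it is never executed.

    bash -c 'echo "label=$0 args=$*"' mylabel one two three
    # prints: label=mylabel args=one two three

    bash -c 'nosuchcmd' mylabel
    # the "command not found" error is reported as coming from "mylabel"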
internal error
% ./bash -c "set -e; f() { eval false; }; f"
./bash: line 1: pop_var_context: head of shell_variables not a function context

Might be related to
https://lists.gnu.org/archive/html/bug-bash/2022-10/msg00073.html,
but still present in 5.2.21.
Re: internal error
> On 2/10/24 9:41 PM, Frank Heckenbach wrote:
>
> > % ./bash -c "set -e; f() { eval false; }; f"
> > ./bash: line 1: pop_var_context: head of shell_variables not a function
> > context
> >
> > Might be related to
> > https://lists.gnu.org/archive/html/bug-bash/2022-10/msg00073.html,
> > but still present in 5.2.21.
>
> Thanks, it's the same (cosmetic) issue.

It's not cosmetic to the user, who doesn't know whether it indicates a real
problem, whether the script will continue to work correctly, etc.

Best regards,
Frank
PIPESTATUS inconsistent behavior in 3.0
Configuration Information [Automatically generated, do not change]:
Machine: i386
OS: linux-gnu
Compiler: i386-redhat-linux-gcc
Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='i386' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i386-redhat-linux-gnu' -DCONF_VENDOR='redhat' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -I. -I. -I./include -I./lib -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 -fasynchronous-unwind-tables
uname output: Linux twinhead.yafrank.homeip.net 2.6.12.3-min3 #1 Sat Aug 6 21:13:43 CST 2005 i686 i686 i386 GNU/Linux
Machine Type: i386-redhat-linux-gnu

Bash Version: 3.0
Patch Level: 16
Release Status: release

Description:
    Here is what I got from my fc4 box.

    $ echo $BASH_VERSION
    3.00.16(1)-release
    $ ls | bogus_command | wc
    0 0 0
    $ echo ${PIPESTATUS[@]}
    141 127 0
    $ ls | tr [:lower:] [:upper:] | bogus_command | wc
    bash: bogus_command: command not found
    0 0 0
    $ echo ${PIPESTATUS[@]}
    0 141 127 0
    $ ls | bogus_command | bogus_command2 | wc
    bash: bogus_command: command not found
    bash: bogus_command2: command not found
    0 0 0
    $ echo ${PIPESTATUS[@]}    # seems right here
    0 127 127 0

    However, in "9.1. Internal Variables" of the Advanced Bash-Scripting Guide
    3.5 by Mendel Cooper (http://www.tldp.org/LDP/abs/html/abs-guide.html):

    bash$ echo $BASH_VERSION
    3.00.0(1)-release
    bash$ ls | bogus_command | wc
    bash: bogus_command: command not found
    0 0 0
    bash$ echo ${PIPESTATUS[@]}
    0 127 0

    I don't have bash-3.00.0(1) on my box to test. If the ABS guide is right,
    then there is an inconsistency between the two patched versions.

Repeat-By:
    Run the same pipe-concatenated command with a bogus command inserted in
    bash-3.00.0(1) and bash-3.00.16(1); capturing PIPESTATUS immediately
    afterwards reports different return codes.
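One pitfall worth keeping in mind when testing this (a sketch, independent of
the version difference reported above): PIPESTATUS is overwritten by every
subsequent command, so it has to be copied into another array immediately
after the pipeline.

    ls /nonexistent 2>/dev/null | bogus_command 2>/dev/null | wc -l
    status=("${PIPESTATUS[@]}")      # copy right away; the next command resets it
    echo "exit codes: ${status[*]}"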
"test -w aquota.user" gives different exit code after quotaon run
Configuration Information [Automatically generated, do not change]:
Machine: i386
OS: linux-gnu
Compiler: i386-redhat-linux-gcc
Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='i386' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i386-redhat-linux-gnu' -DCONF_VENDOR='redhat' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -I. -I. -I./include -I./lib -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 -fasynchronous-unwind-tables
uname output: Linux twinhead.yafrank.homeip.net 2.6.12.3-min3 #1 Sat Aug 6 21:13:43 CST 2005 i686 i686 i386 GNU/Linux
Machine Type: i386-redhat-linux-gnu

Bash Version: 3.0
Patch Level: 16
Release Status: release

Description:
    A user quota management script recently always aborts on an fc3 server
    with the latest updates. Experiments show that '[ -w /usr/local/aquota.user ]'
    returning 1 after quotaon has been run is what triggers the abort. The
    same script used to work fine in early fc3. Enabling quota on my fc4 box's
    /usr/local partition and doing the following test shows similar behavior:

    # pwd
    /usr/local
    # quotaoff -aup
    user quota on /usr/local (/dev/hda9) is off
    # ls -l aquota*
    -rw------- 1 root root 11264 8月 18 18:36 aquota.user
    # [ -r aquota.user ] && echo true
    true
    # [ -w aquota.user ] && echo true
    true
    # quotaon -au
    # quotaon -aup
    user quota on /usr/local (/dev/hda9) is on
    # ls -l aquota*
    -rw------- 1 root root 11264 8月 18 18:36 aquota.user
    # [ -w aquota.user ] && echo true
    # [ -r aquota.user ] && echo true
    true

    Why does "test -w aquota.user" return a different exit code after quotaon
    is run, while ls shows the same permissions? Am I doing anything wrong
    here?

Repeat-By:
    Turn quota on and off and compare the exit code from
    "test -w /path/to/mount_point/aquota.user".
pipe buffering
I upgraded versions of Linux and bash, and I am finding that the pipe, |, is
buffering differently. Before, I could go a few pipes before I got buffered
output; now it seems that it buffers at a single pipe. I have looked for
settings in bash, as it appears to be bash-related and not command-related.

The specific command is:

    tail httpd-*.logs | cut -c-80

While I presume this is internal behavior of the CLI, I would like to be able
to turn it off.

Frank
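For what it's worth, bash itself does not add any buffering to a pipeline; the
block buffering usually comes from the stdio library inside the piped commands
once their output goes to a pipe rather than a terminal. A common workaround,
sketched here under the assumption that GNU coreutils' stdbuf is available and
shown with "tail -f" (where the delay is typically noticed), is to force
line-buffered output in each stage:

    stdbuf -oL tail -f httpd-access.log | stdbuf -oL cut -c-80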
Re: declare(1) not executing inside source'd script
Thanks for your help; I found the problem. It was a directory in my PATH that
included another copy of the script by the same name. I should have been using
./script_name when passing it to the source command.

I found this by placing "set -x" inside the script to see what was going on,
and it never ran. The which command didn't help, but whereis tracked down the
duplicate.

Thanks again for your time (this was embarrassing, as I know better :)).

Edwards & Edwards Consulting, LLC  <http://www.eeconsulting.net/>
Frank J. Edwards  sa...@eec.com
Voice: +1 813.406.0604 (rings both office and mobile)

> On Jul 27, 2018, at 11:28 AM, Frank wrote:
>
> [...]
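The underlying behaviour, for reference (a sketch; the filename is
illustrative): given a plain filename, source -- like "." -- searches the
directories in $PATH (while the default "sourcepath" shopt is enabled), so an
unrelated file of the same name earlier in PATH can be picked up silently; a
name containing a slash bypasses the search.

    source bashbug.bash      # resolved via $PATH first; may find another file of the same name
    source ./bashbug.bash    # explicit path; no PATH search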
feature request: file_not_found_handle()
Hi,

I think a file_not_found_handle(), or a modified command_not_found_handle()
that does not need an unsuccessful PATH search to be triggered, would be
useful and consistent.

I found this old (Dec 2009) discussion:
http://gnu-bash.2382.n7.nabble.com/command-not-found-handle-not-called-if-command-includes-a-slash-tp7118.html

Why are the patches not part of bash?

Use case: see command_not_found_handle()

Cheers,
Andreas
Re: feature request: file_not_found_handle()
Hi Ken,

Same reason for me: some object-oriented shell (http://oobash.sourceforge.net/).

"But given that the first entry on a command line pretty much has to be a
command, I'm not sure it makes sense to invoke file_not_found_handle()"

I think you are right. But then we go in circles...:
http://gnu-bash.2382.n7.nabble.com/command-not-found-handle-not-called-if-command-includes-a-slash-tp7118.html

@all: If there is a reason for not fixing this 'bug', I would like to hear it.

Bye,
Andreas
Re: feature request: file_not_found_handle()
Hi Chet,

I have no idea if there is "enough" demand, but I think there will be some
ideas to use this feature... I still think it is a question of consistency to
be able to handle a "No such file or directory" event if I can handle a
"command not found" event (independent of the command_not_found_handle
history).

You say you can easily test whether or not the file in the pathname exists.
And Ken's recommendation to trigger a no_such_file_or_directory_handle() is
minimally invasive. So why not?

Andreas

2013/8/18 Chet Ramey:

> On 8/14/13 7:44 AM, Andreas Gregor Frank wrote:
> > Hi,
> >
> > i think a file_not_found_handle() or a modified command_not_found_handle(),
> > that does not need an unsuccessful PATH search to be triggered, would be
> > useful and consistent.
>
> The original rationale for command_not_found_handle is that there was no
> other way to determine whether a command could be found with a PATH search.
> (well, no easy way).
>
> A PATH search is suppressed when the command to be executed contains a
> slash: the presence of a slash indicates an absolute pathname that is
> directly passed to exec(). Since there's no search done, you know exactly
> which pathname you're attempting to execute, and you can easily test
> whether or not it exists and is executable.
>
> Is there enough demand to make this feature addition worthwhile?
>
> Chet
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
> ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
Re: feature request: file_not_found_handle()
Hi Chet,

Sorry, I thought you were talking about the bash code. I didn't want to show
my own use case, but now I have to ;-):

I have a File class and can construct a File "object", for example:

    File anObjectName /etc/passwd

and then I can do e.g.

    anObjectName.getInode

(this already works with command_not_found_handle()).

But if I do a:

    File /etc/passwd /etc/passwd

and then

    /etc/passwd.getInode

(I think it would be nice if the normal files in a filesystem could be treated
like objects), then there is nothing that triggers the
command_not_found_handle() to split "object" and method... So at the moment
slashes are forbidden in object names in my fun project.

Now you know why your bash example for ckexec() isn't a solution for me.

Bye,
Andreas

2013/8/19 Chet Ramey:

> On 8/19/13 6:57 AM, Andreas Gregor Frank wrote:
> [...]
>
> That is not what I said. I said that you, the script writer, can check
> whether or not a filename containing a slash is executable before
> attempting to execute it. Maybe a function something like this (untested):
>
> ckexec()
> {
>   case "$1" in
>   */*) ;;
>   *) "$@" ; return $? ;;
>   esac
>
>   if [ -x "$1" ]; then
>     "$@"
>   else
>     other-prog "$@"
>   fi
> }
>
> Chet
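For context, a minimal sketch of this kind of dispatch (illustrative only, not
the oobash implementation; dispatch_method is a hypothetical helper):
command_not_found_handle can split an "object.method" name, but it is only
invoked for names that contain no slash, which is exactly the limitation
discussed here.

    command_not_found_handle() {
        local name=$1; shift
        if [[ $name == *.* ]]; then
            local obj=${name%%.*} method=${name#*.}
            dispatch_method "$obj" "$method" "$@"   # hypothetical dispatcher
            return
        fi
        printf 'bash: %s: command not found\n' "$name" >&2
        return 127
    }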
Re: feature request: file_not_found_handle()
Hello Greg,

This is a feature request for no_such_file_or_directory_handle(). I do not
want to talk about missing quotes in anyone's code example! And the question
of whether it makes sense to implement a command_not_found_handle() in this or
that way has nothing to do with this request, either. How someone uses a bash
feature in his scripts is not the problem of this mailing list, I hope.

Andreas

2013/8/21 Greg Wooledge:

> On Wed, Aug 21, 2013 at 02:22:24AM -0800, Ken Irving wrote:
> > $ cat $(ambler.method dispatch)
> > #!/bin/bash
> > method=$1 && shift
> > test -n "$method" || exit
> > for s in $(ls|shuf); do
> >     tob $s.$method "$@" &
> > done
>
> As far as I can tell, this is some incredibly stupid crap thrown together
> by an "object oriented" junkie to try to make one language look like some
> other language. That is ALWAYS a bad idea.
>
> If you want to do a "method" to an "object", bash already provides a
> syntax for that:
>
>     method object
>
> Not:
>
>     object.method
>
> The latter is ass-backwards. It's simply ludicrous. Stop it.
>
> Now, look at this crap:
>
> > for s in $(ls|shuf); do
>
> Do you know how hard we work every day to try to stamp out these sorts
> of bugs? This is so bad I'm laugh/crying right now.
>
>     touch 'this is a filename with spaces'
Re: feature request: file_not_found_handle()
Hi Eduardo,

Thank you very much for this constructive and honest answer. Not what I hoped
to see, but this is only a request. For me it's only a nice-to-have... so fine.

Bye,
Andreas

2013/8/21 Eduardo A. Bustamante López:

> On Wed, Aug 21, 2013 at 08:39:53PM +0200, Andreas Gregor Frank wrote:
> > Hello Greg,
> >
> > this is a feature request for no_such_file_or_directory_
> > handle(). I do not want to talk about missing quotes in anyone's code
> > example!
>
> You are free to send patches with the proposed feature. That way we
> would be able to test it, and see if it doesn't conflict with
> standards or existing codebases.
>
> There doesn't seem to be much demand for this feature, aside from you
> two. Therefore, I don't think it's worth the time of Chet to add it,
> when he has clearly better things to do with his time (fixing real
> bugs for example). Sure, a lot of things could be added to bash,
> Ksh-like discipline functions and compound variables, hey, even FP
> things like closures, anonymous functions, ... but the point is, are
> they worth the effort? If I want real OOP and FP, I use Python and
> Ruby.
>
> So, I repeat. If you're in need of the feature, implement it.
> Otherwise, you're asking too much IMHO.
>
> --
> Eduardo Bustamante
Colorizing PS4... needing PS4_AFTER ?
Hello,

I wish I could use PS4 to colorize debugging lines, like:

    PS4='\033[37m+ '

But I need to reset the color at the end of the line. I wish there were a
similar variable to be able to reset the ANSI code at the end of the line,
like:

    PS4_AFTER='\033[0m+'

Do you know a hack to reset the color at the end of each debug line?

FYI, my current hack is:

    # in bashrc:
    export PS4='+\302\240'
    alias shdebugpipe='sed -e "s/^+\o302\o240.*/\x1b[37m\0\x1b[0m/"'

    # then invoke script with:
    bash -x ./foo.sh 2>&1 | shdebugpipe

(I replaced the regular space with a non-breakable space[1] in PS4 to avoid
highlighting script output. The one I am actually using is a little more
complex: it uses a BOM[2] instead of NBSP.)

    # in bashrc:
    export PS4='\357\273\277#$LINENO:'
    alias shdebugpipe='sed -e "s/^\o357\o273\o277.*/\x1b[37m\0\x1b[0m/"'

Franklin

(Please CC me, I am not subscribed to this list.)

[1] NBSP: http://en.wikipedia.org/wiki/Non-breaking_space
[2] BOM: http://en.wikipedia.org/wiki/Byte_order_mark
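Another possible hack, sketched here on the assumption that bash >= 4.1 (for
BASH_XTRACEFD and {var} redirections) and GNU sed are available: send the
xtrace output to its own file descriptor and colorize that stream, so each
debug line gets a reset appended while normal script output stays untouched.

    exec {trace_fd}> >(sed -u -e $'s/^/\e[37m/' -e $'s/$/\e[0m/' >&2)
    BASH_XTRACEFD=$trace_fd
    set -x
    echo "script output stays uncolored"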
Told to report this bug here.
s/bash/lib/intl/bindtextdom.c:20:
/Developer/SDKs/MacOSX10.6.sdk//usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
In file included from /tmp/bash/Sources/bash/lib/intl/bindtextdom.c:23:
/Developer/SDKs/MacOSX10.6.sdk//usr/include/stddef.h:74: error: two or more data types in declaration specifiers
make[2]: *** [bindtextdom.o] Error 1
make[1]: *** [lib/intl/libintl.a] Error 1
make: *** [build] Error 2

The platform used to build is a dual 1.73 GHz PowerPC G4 Quicksilver (upgraded
from a dual 800 MHz PowerPC G4) running MacOS 10.5.8.

If there is anything more you need from me, or something you would like me to
try, feel free to contact me and I will do my best to accommodate your request.

Frank J. R. Hanstick
tro...@comcast.net