bash leaks sh-np-NNN files and pipes in /tmp/ when command substitution is used
Some of my scripts use command substitution, and now I see that there are lots of files like these in /tmp:

prw-------  1 yuri  wheel        0 Dec 10 13:32 sh-np-1386738492
-rw-r--r--  1 yuri  wheel  3278909 Dec 10 14:54 sh-np-1386721176

Besides the obvious question of why they aren't deleted in the end, two questions:

1. Shouldn't bash delete the pipe files once the child process has opened them? On most platforms the deleted pipe will function the same, with the benefit that it gets cleaned up automatically.
2. Why are some of the files /tmp/sh-np-NNN regular files, not pipes? When I look through the code I see that an mkfifo call creates them. Shouldn't it always create FIFOs and not regular files?

bash-4.2.45
FreeBSD-9.2

Yuri
Re: bash leaks sh-np-NNN files and pipes in /tmp/ when command substitution is used
On 12/11/2013 01:04, Piotr Grzybowski wrote:
> hullo, maybe post exact scripts that generate those files, together
> with some description of the environment. aren't some of those
> scripts run from su?

A simple command run under an unprivileged user creates such files:

tee >(grep "XXX" | do-filter >> my.log)

Yuri
Re: bash leaks sh-np-NNN files and pipes in /tmp/ when command substitution is used
On 12/11/2013 07:20, Ryan Cunningham wrote:
> Second, you should post the definition of 'do-filter', if 'do-filter'
> is not a binary (also post the definitions of other custom functions
> 'do-filter' calls; but do not post the complete script).

do-filter is a shell script running some other commands. I didn't think it mattered for this purpose. In any case, it is a one-line script placed in a file and made executable:

gawk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; }'

> Also, as a correction to your original message, a FIFO _is_ a file,
> just not a regular file.

Yes, it is a special kind of file, but it has the special type S_IFIFO, ls(1) shows it with the 'p' type, and it has size 0. My point was that this FIFO is sometimes created as a proper FIFO:

prw-------  1 yuri  wheel        0 Dec 11 01:51 sh-np-1386834327

and sometimes as a regular (non-FIFO) file:

-rw-r--r--  1 yuri  wheel  3721815 Dec 11 02:00 sh-np-1386805482

And the code shows that only mkfifo(2) is used. (A FIFO doesn't leave any persistent data behind when abandoned, but these files do.) I am not sure why it creates them as regular files. Doesn't that look strange?

Yuri
Re: bash leaks sh-np-NNN files and pipes in /tmp/ when command substitution is used
On 12/10/2013 23:29, Yuri wrote:
> Some of my scripts use command substitution, and now I see that there
> are lots of files like these in /tmp:
>
> prw-------  1 yuri  wheel        0 Dec 10 13:32 sh-np-1386738492
> -rw-r--r--  1 yuri  wheel  3278909 Dec 10 14:54 sh-np-1386721176
>
> Besides the obvious question of why they aren't deleted in the end,
> two questions: Shouldn't bash delete the pipe files once the child
> process has opened them? Why are some of the files /tmp/sh-np-NNN
> regular files, not pipes?

Ok, I figured the details out. The actual script that I run is this radio-playing command:

stdbuf -i 0 -o 0 -e 0 mplayer http://kqed-ice.streamguys.org:80/kqedradio-ch-e1 | \
  stdbuf -i 0 -o 0 -e 0 tee >(grep "^XXX XXX" | prepend-time >> test.log)

prepend-time is a script:

gawk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; }'

The problem only occurs with the stdbuf(1) commands inserted there, and not without them.

1. bash creates a FIFO /tmp/sh-np-N
2. bash unlinks this FIFO
3. tee is passed /tmp/sh-np-N as an argument, and it creates a regular file with the same name

This is the race condition. bash doesn't ensure that the unlink is issued after the child command opens the FIFO, not before. The order of operations 2 and 3 should be reversed, and just waiting for a while isn't enough: bash should only unlink the FIFO after it has been opened by the child command.

Please also note that this is on FreeBSD, whose behavior might differ. Linux or other OSes might be getting lucky here with timing, not sure. FreeBSD also has its own version of stdbuf(1). This bug needs to be fixed.

The original intention of stdbuf there was to make it all unbuffered. Whether this works or not isn't clear, but it doesn't matter for the purpose of this bug report.

Suggested solution: on FreeBSD, for example, there is the kqueue(2) facility, which is able to wait for a "link-change" event: the EVFILT_VNODE filter (which takes a file descriptor) allows the sub-event NOTE_LINK (the link count on the file changed). So the sequence would be like this:

1. bash creates the FIFO
2. bash opens the FIFO
3. bash waits with kqueue(2) with EVFILT_VNODE/NOTE_LINK
4. bash launches the child command
5. the child command opens the FIFO as stdio
6. bash catches the link-change event
7. bash unlinks the FIFO

Linux and other OSes must have an equivalent facility. Sounds like a fun project to do. :-) An absolute must-have if bash needs to support command substitution.

Yuri

CCing the maintainer of bash on FreeBSD
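The step-2/step-3 race can be demonstrated by hand, independent of bash (a sketch; /tmp/np is an arbitrary name assumed to be free):

mkfifo /tmp/np
rm /tmp/np               # unlink before any process has opened the FIFO
tee /tmp/np </dev/null   # tee now re-creates the name as a regular file
ls -l /tmp/np            # shows -rw-r--r--, not prw-------

This is exactly the symptom in the original report: whichever sh-np-NNN names lose the race end up as regular files holding tee's output.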
Re: bash leaks sh-np-NNN files and pipes in /tmp/ when command substitution is used
On 12/11/2013 22:53, Piotr Grzybowski wrote:
> Yuri: I have verified that under linux 3.2.0, bash-4.2.25, nothing of
> the sort takes place. tee gets /dev/fd/${somefd}; why it is not
> supported in the bsd kernel i have no idea, but I thought it was. I've
> even built mplayer and run your script (btw. there is probably an
> easier way of doing what you want), and I also performed some tests
> with a command that actually outputs what grep looks for. Also please
> note that `command substitution`, or $(command substitution), is
> slightly different from >(process substitution).

Piotr,

/dev/fd/${somefd} is quite an advanced feature, and it can be (and has been) argued that it can lead to unpredictable behavior when file descriptors appear and disappear. So FreeBSD only implements fds 0, 1, and 2, and even those can be closed. There are other systems without such a feature, and bash should be fixed to work fine without it.

Yuri
Re: bash leaks sh-np-NNN files and pipes in /tmp/ when command substitution is used
On 12/12/2013 05:02, Greg Wooledge wrote:
> If bash is using named pipes of the form /var/tmp/sh-np-* or
> /tmp/sh-np-*, then the compile-time options or detection must have
> failed to find a usable /dev/fd/ subsystem. (My HP-UX workstation is
> littered with sh-np-* FIFOs in BOTH /tmp and /var/tmp.)

The BSDs are traditionally opposed to the /dev/fd and /proc file systems. bash's configure specifically tests for /dev/fd/3 and notes that this is to detect FreeBSD, which only implements /dev/fd/{0..2}.

I also have a lot of /tmp/sh-np-* files. bash leaves them behind when the race condition doesn't occur; and when it does occur, oddly, it is because of the premature deletion of /tmp/sh-np-*. If bash kept those FIFOs in the deleted state as much of the time as possible, the leak would be minimal.

Yuri
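A quick way to see which mechanism a given bash build picked for process substitution (the fd number will vary):

echo <(:)
# /dev/fd/63      on systems with a full /dev/fd
# /tmp/sh-np-NNN  where configure fell back to named pipes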
Re: Parameter Substitution Causing Memory Leak
On 01/07/2014 12:18, Todd B. Stein wrote:
> These bugs result in gradual slowdown of indefinitely-running scripts
> which rely on parameter substitution (rather than forking sed or awk)
> for speed and efficiency. Forgive me if I used the wrong terminology,
> but whether these bugs are considered honest-to-goodness "memory
> leaks" by valgrind seems unimportant.

google-perftools can help analyze such problems. It can produce memory reports at intervals over the program's lifetime, and they graphically show who owns how much memory at particular times. In a nutshell, the run is done like this:

HEAPPROFILE=google-profile-pid$$ LD_PRELOAD=/usr/lib/libtcmalloc.so $*

Yuri
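To turn the dumped heap profiles into a readable report, something like the following should work (a sketch; the library path, script name, and profile locations are assumptions that depend on the installation):

HEAPPROFILE=/tmp/bash-heap LD_PRELOAD=/usr/lib/libtcmalloc.so ./long-running-script.sh
pprof --text /bin/bash /tmp/bash-heap.0001.heap   # top memory owners by call site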
Why doesn't bash have a bug reporting site?
I noticed that bash is in the absolute minority of projects not using any bug reporting system. Instead, users are directed to this ML to report bugs. It seems like a bug tracker could be very beneficial, so that people could track the status of issues.

Yuri
Re: Why doesn't bash have a bug reporting site?
On 01/13/2014 12:32, Eric Blake wrote:
> A mailing list IS a bug reporting system. When something receives as
> low a volume of bug reports as bash, the mailing list archives are
> sufficient for tracking the status of reported bugs. It's not worth
> the hassle of integrating into a larger system if said system won't
> be used often enough to provide more gains than the cost of learning
> it. In particular, I will refuse to use any system that requires a
> web browser in order to submit or modify the status of a bug (i.e.
> any GOOD bug tracker still needs to interact with an email
> front-end).

E-mail has quite a few vulnerabilities: spam, impersonation, etc. In a system relying on e-mail, a spam filter has to be present, and because of it you will get false positives and false negatives, resulting in lost information. By contrast, login-based website access won't lose information this way and won't get spam.

Among other benefits:
* Ability to search by various criteria. For example, a database-backed tracking system can show all open tickets, or all of your tickets. How can you do this on a mailing list?
* Ability to link with patches. GitHub, for instance, allows submitters to attach a patch, and an admin can merge it with one click, provided there are no conflicts.

Yuri
Errors in commands containing UTF8 characters are printed with UTF8 byte expansion
When I accidentally type a nonexistent command containing UTF-8 characters, the error message has the UTF-8 characters expanded:

$ ЫZZZ
bash: $'\320\253ZZZ': command not found

I think bash shouldn't discriminate against UTF-8 characters in error messages, and shouldn't expand them. If international users wish to have UTF-8 in their commands, bash should preserve it in all messages. My LANG is en_US.UTF-8, and changing it to ru_RU.UTF-8 doesn't seem to make a difference. Same with LC_ALL.

bash-4.2.45_1

Yuri
Commands containing UTF8 characters mess up bash history
I work with some files whose names contain UTF-8 characters. The only commands I run are these:

./some-cmd < ../some-dir/utf8-containing-file-name.txt
vim ../some-dir/utf8-containing-file-name.txt

After a while of running such commands, and going back and forth in history and rerunning them, the history becomes messed up. The 'unknown character' question mark appears in the middle, and some entries become concatenated. History becomes unusable.

I am sure you will be able to reproduce this; otherwise it is kind of hard to create a test case for this kind of problem. But I am sure bash doesn't treat the command, or some part of it, as UTF-8, and confuses the number of characters with the number of bytes.

bash-4.2.45_1

Yuri
Re: Commands containing UTF8 characters mess up bash history
On 03/01/2014 14:07, Ryan Cunningham wrote:
> You could use the command "history -c" to clear the history in case
> this becomes a real issue. I don't have a real fix.

The problem is that it comes back over and over again.

Yuri
Re: Commands containing UTF8 characters mess up bash history
To: bug-bash@gnu.org
Subject: Commands containing UTF8 characters mess up bash history

Configuration Information:
Machine: amd64
OS: freebsd9.2
Compiler: clang
Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='amd64' -DCONF_OSTYPE='freebsd9.2' -DCONF_MACHTYPE='amd64-portbld-freebsd9.2' -DCONF_VENDOR='portbld' -DLOCALEDIR='/usr/local/share/locale' -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -I. -I. -I./include -I./lib -fno-omit-frame-pointer -I/usr/local/include -O2 -pipe -fno-omit-frame-pointer -fno-strict-aliasing
uname output: FreeBSD eagle.yuri.org 9.2-STABLE FreeBSD 9.2-STABLE #4 r260252M: Sat Jan 4 04:09:58 PST 2014 y...@eagle.yuri.org:/usr/obj/usr/src/sys/GENERIC amd64
Machine Type: amd64-portbld-freebsd9.2

Bash Version: 4.2
Patch Level: 45
Release Status: release

Description:
I work with some files whose names contain UTF-8 characters. The only commands I run are these:

./some-cmd < ../some-dir/utf8-containing-file-name.txt
vim ../some-dir/utf8-containing-file-name.txt

After a while of running such commands, and going back and forth in history and rerunning them, the history becomes messed up. The 'unknown character' question mark appears in the middle, and some entries become concatenated. History becomes unusable.

Repeat-By: n/a
Fix: n/a
Re: Commands containing UTF8 characters mess up bash history
On 03/02/2014 14:15, Chet Ramey wrote:
> I can't reproduce this using bash-4.2.45 or bash-4.3 on Mac OS X or
> RHEL5. I'm using Mac OS X Terminal, if that makes a difference.

I just got the same problem again on 4.2.45. I wonder how I can make a test case, short of using a keylogger. I use konsole in kde4, if that makes any difference. When the problem occurs, it "eats up" my command prompt: the prompt gets shorter and shorter as I press Up/Down and the damaged history line shows up. Bash writes some wrong control sequences to the terminal. I upgraded to 4.3.0 and will see if this happens again.

Yuri
~/.profile and ~/.bash_profile aren't executed on login
Neither of these files is executed when bash is a user's default shell on FreeBSD. No special options were selected. Despite shell.c saying that they should be executed, they just aren't. Is this a bug?

Yuri
Re: ~/.profile and ~/.bash_profile aren't executed on login
On 12/09/17 14:59, Bob Proulx wrote:
> How is the user logging in? Are they logging in with 'ssh' over the
> network? Or are they logging in through an "xdm" X Display Manager
> login from a graphical login display?

The user logs in locally through the display manager.

Yuri
Re: ~/.profile and ~/.bash_profile aren't executed on login
On 12/09/17 15:01, Chet Ramey wrote:
> Since it doesn't happen on any other OS, I suspect an issue with
> either the FreeBSD port or the pathname that appears in argv[0] when
> the shell is started, which is what bash uses to detect that it's
> been invoked as a login shell.

The full path is /usr/local/bin/bash, as set in the passwd db. It doesn't run ~/.profile when I run it manually using this path.

Yuri
Re: ~/.profile and ~/.bash_profile aren't executed on login
On 12/09/17 15:24, Chet Ramey wrote:
> Of course not: that's not a login shell. As the documentation says,
> "A login shell is one whose first character of argument zero is a -,
> or one started with the --login option." The INVOCATION section of
> the manual page explains it in exhaustive detail.

Ok, but that's not what my situation is. I am just logging in, using the display manager, with /usr/local/bin/bash as the user's shell in passwd. Why doesn't it execute ~/.profile?

Yuri
Re: ~/.profile and ~/.bash_profile aren't executed on login
On 12/09/17 14:14, Yuri wrote:
> Neither of these files is executed when bash is a user's default
> shell on FreeBSD. No special options were selected. Despite shell.c
> saying that they should be executed, they just aren't.

The bug is that bash doesn't handle the login situation when it isn't linked to sh. When I have bash as a user's shell (/usr/local/bin/bash) and it isn't redirected to sh, it fails to read ~/.profile. Please fix this.

Yuri
Re: ~/.profile and ~/.bash_profile aren't executed on login
On 12/10/17 13:51, Chet Ramey wrote:
> You have not described a bug, since you have not demonstrated that
> bash is behaving other than how it is documented, nor have you
> provided answers to any of the questions you've been asked. You
> haven't even determined whether or not bash is being invoked as a
> login shell at some unspecified point in the mystery login sequence
> you're using.

bash never calls ~/.profile when invoked as a login shell:

cp ~/.profile ~/.profile.bak
echo 'echo "==> executing ~/.profile"' > ~/.profile
ln -s /usr/local/bin/bash /usr/local/bin/sh

/usr/local/bin/sh --login
==> executing ~/.profile

/usr/local/bin/bash --login

It only calls ~/.profile when it is named 'sh'. The same happens when this executable is set as the user's login shell. You should stop depending on sh being linked to bash: bash should never examine its own name, and should act as a login shell under any name.

Yuri
Re: ~/.profile and ~/.bash_profile aren't executed on login
On 12/11/17 06:03, Chet Ramey wrote:
> Bash, as documented, reads ~/.bash_profile first when it's invoked as
> a login shell, falling back to ~/.bash_login and ~/.profile if it's
> not there.

I just verified: none of these three files is executed by bash when the user logs in. /usr/local/bin/bash is set as the user's login shell in 'vipw'. So when this user logs in, it must be invoked as a login shell. Is this correct?

Yuri
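A quick way to check whether a given bash actually considers itself a login shell (a diagnostic sketch to run in the session started by the display manager):

shopt -q login_shell && echo "login shell" || echo "not a login shell"
echo "$0"   # a leading '-' (e.g. -bash) also indicates a login shell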
bash crashes on command substitution
On FreeBSD, I have this command line (1). When I move the cursor right after 'tmp' and press <Tab>, it turns into (2)! What happened? The problem is completely reproducible.

bash-4.3.33

Yuri

--- (1) begin with this ---
[root@yuri /usr/local/etc/rc.d]# [ "$(procstat $(cat /var/tmp/tor.pid)" ]

--- (2) result is this! ---
[root@yuri /usr/local/etc/rc.d]# [ "$(procstat $(cat /var/tm
bash: command substitution: line 30: unexpected EOF while looking for matching `"'
bash: command substitution: line 31: syntax error: unexpected end of file
bash: command substitution: line 30: unexpected EOF while looking for matching `"'
bash: command substitution: line 31: syntax error: unexpected end of file
bash: command substitution: line 30: unexpected EOF while looking for matching `)'
bash: command substitution: line 31: syntax error: unexpected end of file
p
Re: bash crashes on command substitution
On 04/13/2015 05:54, Eduardo A. Bustamante López wrote:
> Did you build bash yourself? You need bison when compiling it, and
> make sure to run "make distclean" before building. I also found this
> issue while building for openbsd/freebsd; it is due to some problem
> with files included in the bash distribution that have to be
> regenerated specifically for the BSDs. It should be fixed after doing
> make distclean and rebuilding.

No, it is from FreeBSD ports. Everyone has the same bash there; make distclean etc. is all done in the standard way. Are you saying that the ports build is broken?

Yuri
bash prints numeric values of unicode characters instead of their UTF8 representations
I have this line in ~/.bashrc:

PS1=$'\\[\e[0;38;5;202m\\]\u2514\u2023\\[\e[0m\\] '

My command prompt looks like this:

root2514root2023

What makes bash print the numeric values of Unicode characters?

bash-4.3.42
FreeBSD-10.3
LANG=en_US.UTF-8
terminal is konsole from kde4

Yuri
Re: bash prints numeric values of unicode characters instead of their UTF8 representations
On 01/31/2016 13:41, Yuri wrote:
> I have this line in ~/.bashrc:
> PS1=$'\\[\e[0;38;5;202m\\]\u2514\u2023\\[\e[0m\\] '

This link, http://unix.stackexchange.com/questions/25903/awesome-symbols-and-characters-in-a-bash-prompt, says: "Since bash 4.2, you can use \u followed by 4 hexadecimal digits in a $'…' string". My bash-4.3.42 interprets \u as the user name instead. So what could be wrong? Is \u supposed to be the user name, or the hexadecimal prefix of a Unicode codepoint?

Yuri
Re: bash prints numeric values of unicode characters instead of their UTF8 representations
On 02/03/2016 13:13, Chet Ramey wrote:
> Sigh. You are mixing two things that perform backslash-escape
> character processing. If there is no character corresponding to a
> particular unicode value in the current character set, the escape
> sequence is left unchanged. So you get through a round of expansion
> with the $'...' processing, and the \u2514 is preserved in the
> result. The PS1 expansion code sees the \u and turns it into the
> current username.

At least U+2023 is a valid character; it should be printed in UTF-8 as a Unicode codepoint. My locale is UTF-8. And why is the same escape character interpreted in two different ways within the same piece of software?

Yuri
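One way to keep the two backslash processors apart (a sketch, assuming the $'\u...' expansion itself works on the build, i.e. the codepoints are convertible): expand the codepoints once, outside PS1, so the prompt expander never sees a literal \u:

arrow=$'\u2514\u2023'                        # expanded here by $'...' quoting
PS1="\[\e[0;38;5;202m\]${arrow}\[\e[0m\] "   # PS1 now sees only literal characters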
Re: bash prints numeric values of unicode characters instead of their UTF8 representations
On 02/03/2016 14:06, Greg Wooledge wrote:
> Works for me.
>
> wooledg@wooledg:~$ PS1=$'\u2023 \w\$ '
> ? ~$
>
> I just can't show it in this
> cross-system-X2X-with-different-character-sets setup. But it works
> for me, on Debian GNU/Linux with LANG=en_US.UTF-8.

I believe you. It does work for you; it just doesn't for me. Running echo $'\u2514\u2023' also prints '\u2514\u2023'. No idea what is broken. (Running on FreeBSD-10.3.)

>> And why is the same escape character interpreted in two different
>> ways within the same piece of software?
>
> Welcome to Bash. It's got layers upon layers of new features,
> deprecated features, historical features, features mandated by
> external "standards", etc.

Doesn't sound like a positive thing to me :-)

Yuri
Re: bash prints numeric values of unicode characters instead of their UTF8 representations
On 01/31/2016 13:41, Yuri wrote:
> What makes bash print the numeric values of Unicode characters?

I found what the problem is: --disable-nls causes HAVE_ICONV to be undefined, and the \u feature doesn't work. This is a bug, because "nls" refers to translations; I usually turn them off because I don't want messages in all possible languages installed. Interpretation of the general Unicode characters that appear in scripts shouldn't be affected by NLS=off. Since \u is a general feature, iconv should always be required, and failure to find it should make the configure stage fail. Is there a place where I should create a bug report for this?

Yuri
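A quick way to check what a given build decided (a sketch against a bash source tree; the exact comment wording in config.h may differ between versions):

./configure --disable-nls >/dev/null
grep ICONV config.h
# a '/* #undef HAVE_ICONV */' line means $'\uNNNN' escapes are passed through unchanged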
Re: bash prints numeric values of unicode characters instead of their UTF8 representations
On 02/05/2016 11:13, Chet Ramey wrote:
> AM_GNU_GETTEXT is the autoconf macro that adds the --disable-nls
> option to configure. It handles checking for iconv by calling
> AM_ICONV. If you disable it by calling configure with --disable-nls,
> it doesn't look for iconv.

Well, this is wrong in the bash context, because bash needs iconv in a context unrelated to gettext.

Yuri
Re: bash prints numeric values of unicode characters instead of their UTF8 representations
On 02/07/2016 13:14, Chet Ramey wrote:
> Sorry, I misread the autoconf macro. This isn't the cause. You should
> look at config.log to see what happens when configure tests for
> iconv.
> Chet

Sorry for the delay. Unicode characters without the nls option in bash-4.3.46 are still a problem. config.log seems to find the iconv declaration fine, but it doesn't test for the actual library; see the log below. HAVE_ICONV is undefined. I think it comes from AM_GNU_GETTEXT.

The problem is that bash shouldn't even have an optional HAVE_ICONV. Since Unicode characters are a major feature, iconv's presence should be an unconditional requirement. You probably need to use AM_ICONV and fail when it isn't present. Please fix this bug.

--- begin config.log ---
configure:8117: checking for iconv
configure:8139: cc -o conftest -O2 -pipe -fno-omit-frame-pointer -DUSE_MKTEMP=1 -DUSE_MKSTEMP=1 -DIMPORT_FUNCTIONS_DEF=0 -Wl,-export-dynamic -DLIBICONV_PLUG -fstack-protector -fno-strict-aliasing -fno-omit-frame-pointer -DLIBICONV_PLUG -I/usr/local/include -fstack-protector conftest.c >&5
configure:8139: $? = 0
configure:8171: result: yes
configure:8192: checking for iconv declaration
configure:8221: cc -c -O2 -pipe -fno-omit-frame-pointer -DUSE_MKTEMP=1 -DUSE_MKSTEMP=1 -DIMPORT_FUNCTIONS_DEF=0 -Wl,-export-dynamic -DLIBICONV_PLUG -fstack-protector -fno-strict-aliasing -fno-omit-frame-pointer -DLIBICONV_PLUG conftest.c >&5
cc: warning: -Wl,-export-dynamic: 'linker' input unused
configure:8221: $? = 0
configure:8231: result: extern size_t iconv (iconv_t cd, char * *inbuf, size_t *inbytesleft, char * *outbuf, size_t *outbytesleft);
configure:8243: checking for nl_langinfo and CODESET
--- end config.log ---

Yuri
Re: bash prints numeric values of unicode characters instead of their UTF8 representations
On 06/28/2016 02:14, Yuri wrote:
> Sorry for the delay. Unicode characters without the nls option in
> bash-4.3.46 are still a problem.

Correction: it was a stray patch in the FreeBSD port that left HAVE_ICONV undefined. Without it, iconv is detected fine without nls. However, I still believe it is a bug in bash that HAVE_ICONV is conditional on the presence of libiconv. You should make the behavior unconditional, remove the dependency on HAVE_ICONV, and fail the build when iconv is missing.

Yuri
Re: The 'source x' command doesn't keep variables set by x when source output is piped into other command
On 11/13/24 14:02, Greg Wooledge wrote:
> The commands within a pipeline are executed in subshells (child
> processes), so all variable changes are discarded when the subshell
> exits.

This sounds like an implementation detail that should be invisible, yet it affects how the 'source' feature works. It makes 'source' very confusing from the user's perspective.

Yuri
Re: The 'source x' command doesn't keep variables set by x when source output is piped into other command
Hi !microsuxx,

But I need to save the output of the user script into a dedicated log file. This script should run, save its output into a dedicated log, and then many other commands should use the environment variables it sets. Their logs can't be combined into one.

Yuri

On 11/13/24 15:36, #!microsuxx wrote:
> what u need to do with the vars
> include all code in tee
> or try
>
> exec > >( tee -a log ) 2>&1
> set -x
> . my.bash
> ...
>
> what i firstly meant:
>
> #!/bin/bash
> tee -a log < <(
>   all code here
>   set -x ; . code ; other code that needs the vars set
>   maybe needs a 2> smth
> )
>
> try
>
> ( set -x
> . mybash
> vars_code ) |& tee -a log
Re: The 'source x' command doesn't keep variables set by x when source output is piped into other command
On 11/13/24 14:45, #!microsuxx wrote:
> depending on actual purpose instead bs demo code, there are several
> approaches to code running code

The original code in my project runs 'source x.sh > log', where x.sh is some user-provided script. I wanted to trace the code using 'set -x' in order to report error locations in scripts to the user. However, the trace also goes to the log instead of stdout. When I changed that to 'source x.sh | tee log', the environment variables that the user script x.sh sets disappeared, due to the problem in the SUBJECT. This problem makes it very inconvenient to implement features around scripts using 'source'.

Yuri
The 'source x' command doesn't keep variables set by x when source output is piped into other command
This script:

#!/usr/bin/env bash

# write child script
echo "export XVAR=xx" > child.sh
echo "YVAR=yy" >> child.sh

# source is piped into tee
source child.sh | tee /dev/null
echo "XVAR=$XVAR"
echo "YVAR=$YVAR"

# source is un-piped
source child.sh
echo "XVAR=$XVAR"
echo "YVAR=$YVAR"

prints this:

XVAR=
YVAR=
XVAR=xx
YVAR=yy

The first 'source' command didn't set the variables set or exported in the child.sh script, even though 'source' executes in the current script's context and should behave as if those commands were executed in place. Piping source's output into another command shouldn't change its variable-related behavior.

bash-5.2.32

Thanks,
Yuri
Re: The 'source x' command doesn't keep variables set by x when source output is piped into other command
On 11/13/24 15:44, Greg Wooledge wrote:
> If the "user script" runs quickly enough, then:
>
> source userscript >logfile 2>&1
> cat logfile

This would fail to save the user script's output in case it executed "exit 1".

Yuri
Re: The 'source x' command doesn't keep variables set by x when source output is piped into other command
On 11/13/24 14:48, #!microsuxx wrote:
> 1st source cmd , no extra cmds
> 2. { . foo ; code ; code ; } | tee
> 3rd tee < <( . foo ; cmds )

I need to (1) source the user script, (2) save the script's output to a dedicated file, (3) keep the same output on stdout, and (4) keep the environment variables that the user script sets for later commands. It doesn't look like any of the above three lines meets all four requirements.

Yuri
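For what it's worth, process substitution (rather than a pipeline) keeps 'source' in the current shell, which seems to satisfy all four requirements (a sketch; 'user.sh' and the log path are placeholders):

source user.sh > >(tee /path/to/logfile) 2>&1   # source runs in the current shell
wait $!             # bash 4.4+: wait for the tee process substitution to flush
echo "XVAR=$XVAR"   # variables set by user.sh are still visible here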
Re: The 'source x' command doesn't keep variables set by x when source output is piped into other command
Hi Martin,

On 11/13/24 20:40, Martin D Kealey wrote:
> If you need to separate the output of `set -x` from your script's
> other output, consider setting BASH_XTRACEFD:
>
> exec 3>> /path/to/xtrace-logfile.txt
> BASH_XTRACEFD=3
> set -x
> source your_file
> set +x
> exec 3>&-

I didn't know about BASH_XTRACEFD. If I ever really need to help users diagnose their scripts down to the line number, I'll use it. Thank you for this very practical and relevant advice.

Yuri
Having an alias and a function with the same name leads to some sort of recursion
Hi,

#!/usr/bin/env bash
set -eu

alias cmd=echo
cmd() {
  echo "$@"
}

set -x
cmd a b c

$ ./a.sh
+ echo a b c
+ echo a b c
...
+ echo a b c
+ echo a b c
Command terminated

This sounds like a bug. I'd expect it to notice the alias, turn "cmd a b c" into "echo a b c", and print the letters.

Regards,
Yuri
Re: Having an alias and a function with the same name leads to some sort of recursion
Indeed, it appears I had enabled expand_aliases. I'll probably reconsider that.
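For the record, the runaway recursion presumably comes from aliases being expanded when the function definition itself is read: with expand_aliases on, the name 'cmd' in the definition line is alias-expanded too, so the script effectively defines a function named 'echo' whose body calls 'echo'. A sketch of the assumed mechanism:

shopt -s expand_aliases   # required for aliases in non-interactive shells
alias cmd=echo
cmd() { echo "$@"; }      # 'cmd' expands here as well: this defines a function 'echo'
type echo                 # reports "echo is a function"; calling it recurses forever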
Random crashes when setting locale data
Hi everyone. I've stumbled on a bug in either bash or gettext which is very difficult to reproduce, and of course one that I cannot ignore.

The configuration is as follows: I'm running some calculations using "solar" (a bioinformatics program), which in turn calls "gunzip", which is implemented as a very short bash script:

#!/bin/bash
PATH=${GZIP_BINDIR-'/bin'}:$PATH
exec gzip -d "$@"

This script fails roughly 3% of the time for me, only when running on a loaded system, and then again only when run through "solar". I've tried to run bash manually, reproducing the *exact* environment by dumping "env", but no luck.

I've rebuilt bash using the latest source version:

GNU bash, version 4.2.0(1)-release (x86_64-pc-linux-gnu)

using the following configuration:

--with-curses --enable-largefile --prefix=/usr --infodir=/usr/share/info --mandir=/usr/share/man --host=x86_64-linux-gnu host_alias=x86_64-linux-gnu CC=gcc-4.4 CFLAGS="-O0 -g -ggdb3" YACC="bison -y" --with-included-gettext

I've copied the configuration from ubuntu's 10.4 LTS package (since that's what I'm using anyway, but I wanted to avoid their patches); note that I've also tried rebuilding using gcc-4.6 with the same issue. I've also tried rebuilding several other bash versions (up to 4.1.x as in ubuntu) with the same issue. Also note that I'm building the bundled gettext and disabling optimizations for easier debugging.

All being set, I point PATH to a local copy of bash to get a decent stack trace:

Core was generated by `/home/ydelia/debug/bin/bash /home/ydelia/debug/bin/gunzip /home/ydelia/debug/file'.
Program terminated with signal 11, Segmentation fault.
#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:31
31  ../sysdeps/x86_64/multiarch/../strlen.S: No such file or directory.
    in ../sysdeps/x86_64/multiarch/../strlen.S
(gdb) where
#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:31
#1  0x7f73230671dd in _nl_make_l10nflist (l10nfile_list=<optimized out>, dirlist=<optimized out>, dirlist_len=<optimized out>, mask=<optimized out>, language=<optimized out>, territory=<optimized out>, codeset=0x7fff8fac0566 "UTF-8", normalized_codeset=0x0, modifier=0x0, filename=0x7f7323168e77 "LC_IDENTIFICATION", do_allocate=0) at l10nflist.c:200
#2  0x7f73230606f3 in _nl_find_locale (locale_path=0x7f7323183f00 "/usr/lib/locale", locale_path_len=16, category=12, name=<optimized out>) at findlocale.c:145
#3  0x7f732305fcf6 in *__GI_setlocale (category=12, locale=<optimized out>) at setlocale.c:303
#4  0x0047cb20 in set_default_locale () at locale.c:71
#5  0x00420e70 in main (argc=3, argv=0x7fff8fac08e8, env=0x7fff8fac0908) at shell.c:399

I also occasionally (in less than 1% of the cases) get this gem:

/home/ydelia/debug/bash: xmalloc: variables.c:405: cannot allocate 61 bytes (16384 bytes allocated)

I'm starting to think that either there's a bug in libintl, or there is corruption prior to set_default_locale. Running bash under valgrind doesn't seem to trigger any obvious problem. Can you please help? Thanks.
Re: Random crashes when setting locale data
On Wed, 2 Nov 2011 20:37:35 +0100, Yuri D'Elia wrote:
> Core was generated by `/home/ydelia/debug/bin/bash
> /home/ydelia/debug/bin/gunzip /home/ydelia/debug/file'.
> Program terminated with signal 11, Segmentation fault.
> [... full stack trace as in the original message ...]
>
> I also occasionally (in less than 1% of the cases) get this gem:
>
> /home/ydelia/debug/bash: xmalloc: variables.c:405: cannot allocate 61
> bytes (16384 bytes allocated)
>
> I'm starting to think that either there's a bug in libintl, or there
> is corruption prior to set_default_locale. Running bash under
> valgrind doesn't seem to trigger any obvious problem.

Any takers? I can still reproduce the problem at will. I'm currently not using bash anymore because of this.
Re: Random crashes when setting locale data
On Sun, 13 Nov 2011 20:32:07 -0500, Chet Ramey wrote:
> > This script fails roughly 3% of the time for me, only when running
> > on a loaded system, and then again only when run through "solar".
> > I've tried to run bash manually, reproducing the *exact*
> > environment by dumping "env", but no luck.
>
> It appears to me that this is a problem with the environment solar
> provides to the shell script, and the included version of libintl
> chokes on something about it (the value of $LC_IDENTIFICATION?).
> You've tried several different versions of bash, and they all seem to
> fail inside libintl.

Apparently, yes. Though I also get weird "xmalloc" failures before that (although more rarely).

> Without knowing more about the data in the stack traceback, like the
> values passed to strlen, it's hard to say more. It might be an
> interaction between libc and the version of libintl included with
> bash; it might be that libc isn't calling out to the bash libintl at
> all.

I don't know what exactly is going on, but I've built everything with -O0 -ggdb and yet cannot get a decent stack trace (maybe there are some hardcoded flags in the makefiles or whatnot). I didn't spend too much time on it, however. Since I'd like to investigate this issue further, what would your suggestion be for digging deeper?
Output redirection?
Hello,

I probably need a bash-help mailing list, but I could not find one. Here is the bash script fragment:

LOG_FILE="./logfile"
...
>$LOG_FILE

I supposed that the statement above redirects stdout to the logfile, but the following 'echo' statement prints on the screen. The logfile is opened. What is the real purpose of the statement above?

Thanks,
Yuri
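For reference: a redirection with no command just opens the file (creating or truncating it); it does not change where later commands send their output. A short sketch of the difference (standard bash behavior):

LOG_FILE="./logfile"
> "$LOG_FILE"        # creates or empties ./logfile; stdout is unchanged
echo hello           # still prints to the terminal
exec > "$LOG_FILE"   # this is how you redirect stdout for the rest of the script
echo hello           # now goes into ./logfile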
bash for cross-target
Hi,

I am building bash v3.1 for a cross-target. I configured bash like this:

CC=<...> AR=<...> RANLIB=<...> ./configure --host=<...> --target=<...> --build=i686-pc-linux-gnu --disable-nls --prefix=<...>

1. To complete the build I had to comment out running the tests on the cross-target.
2. The build picks up headers from /usr/include, not from the directory where the cross toolchain is installed.

I have no problem building applications for my target using the cross-target toolchain. Am I missing something while configuring bash for a cross-target?

Thank you,
Yuri
RE: bash for cross-target
> Bash doesn't specify /usr/include at all. Isn't it the responsibility
> of the compiler to choose an appropriate set of default include
> directories?

When I build other applications for my target, I have no problems with headers. When I explicitly included the headers' path (CPPFLAGS="-I<...>"), it didn't help. That's why I wondered whether I had missed something during configuration.

Yuri
RE: bash for cross-target
> -----Original Message-----
> From: Mike Frysinger [mailto:[EMAIL PROTECTED]]
>
> > > 1. To complete the build I had to comment out running the tests
> > > on the cross-target.
> >
> > The usual solution for this is to create a config.cache file with
> > the correct values for the cache variables in question.
>
> or to figure out what the most sane value is based upon known
> information and default to that
>
> iirc, the configure test that failed for bash when cross-compiling
> did not come from bash, but from some autotool code ... but it's been
> a while since i cross-compiled bash ... what was the error Yuri?
> -mike

You just cannot execute cross-compiled code on the native platform. Also, the mksignames utility is built using native headers (from /usr/include), so the signal-name translation is incorrect for the cross-target. As I understand it now, I have to add my host type in configure.in (currently it has 'cygwin', 'mingw' and 'ix86-beos'). The issues with signal names and native code execution could be addressed there.

Regards,
Yuri
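For reference, the config.cache approach suggested above would look roughly like this (a sketch; the cache variable names are taken from bash's cross-compilation checks and may differ between versions, so treat them as assumptions to verify against the failing configure tests):

cat > config.cache <<'EOF'
bash_cv_job_control_missing=${bash_cv_job_control_missing=present}
bash_cv_sys_named_pipes=${bash_cv_sys_named_pipes=present}
bash_cv_func_sigsetjmp=${bash_cv_func_sigsetjmp=present}
EOF
./configure --cache-file=config.cache --host=<...> --build=i686-pc-linux-gnu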
One IFS non-whitespace is stripped at the end of a word during word splitting
Hi,

I was trying to understand the part of the documentation on word splitting, and I realized that I couldn't be sure I understood it correctly without some experiments:

https://gist.github.com/x-yuri/6c6b375e7b0721a323960baaedf2a649

One of the experiments revealed the following:

$ bash -c 'IFS=x; a=xa; f() { for arg; do echo "($arg)"; done; }; f $a'
()
(a)
$ bash -c 'IFS=x; a=ax; f() { for arg; do echo "($arg)"; done; }; f $a'
(a)
$ bash --version
GNU bash, version 5.2.37(1)-release (x86_64-pc-linux-gnu)

I.e. IFS non-whitespace characters are not stripped at the beginning of a word, but if there is one such non-whitespace character at the end, it is stripped. This looks like a bug, unless I'm missing something.

Regards,
Yuri
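For contrast, an additional experiment along the same lines: adjacent IFS non-whitespace delimiters in the middle of a word do produce an empty field, so it is only the single trailing delimiter that gets dropped:

$ bash -c 'IFS=x; a=axxb; f() { for arg; do echo "($arg)"; done; }; f $a'
(a)
()
(b)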
Re: bash doesn't build with gcc 14.x
On Sat, Dec 7, 2024 at 7:50 PM Chet Ramey wrote:
> Since you're building in a docker container, or, more specifically,
> without ncurses or any kind of termcap/termlib support, it might be
> better to run
>
> ./configure --disable-readline
>
> The ancient termcap library in lib/termcap is only there to get bash
> to link.

Well, containers can be used to run software interactively, but indeed --disable-readline may come in handy. I was also helped to realize that installing ncurses-dev makes it build. Although lib/termcap/tparam.c arguably should build in any case:

docker run --rm alpine:3.21 sh -euxc '
  apk add git build-base
  git clone https://git.savannah.gnu.org/git/bash.git
  cd bash
  git checkout 6794b5478f660256a1023712b5fc169196ed0a22
  ./configure
  make || true
  grep TERMCAP config.status
  apk add ncurses-dev
  ./configure
  make
  grep TERMCAP config.status
'
...
+ ./configure
...
checking for tgetent... no
checking for tgetent in -ltermcap... no
checking for tgetent in -ltinfo... no
checking for tgetent in -lcurses... no
checking for tgetent in -lncurses... no
checking for tgetent in -lncursesw... no
checking which library has the termcap functions... using gnutermcap
...
+ make
...
making lib/termcap/libtermcap.a in ./lib/termcap
make[1]: Entering directory '/bash/lib/termcap'
gcc -c -g -O2 -DHAVE_CONFIG_H -I. -I../.. -I../.. -I../../lib -I. termcap.c
gcc -c -g -O2 -DHAVE_CONFIG_H -I. -I../.. -I../.. -I../../lib -I. tparam.c
tparam.c: In function 'memory_out':
tparam.c:67:3: error: implicit declaration of function 'write'; did you mean 'wprintf'? [-Wimplicit-function-declaration]
   67 |   write (2, "virtual memory exhausted\n", 25);
      |   ^~~~~
      |   wprintf
make[1]: *** [Makefile:56: tparam.o] Error 1
make[1]: Leaving directory '/bash/lib/termcap'
make: *** [Makefile:702: lib/termcap/libtermcap.a] Error 1
+ true
+ grep TERMCAP config.status
S["TERMCAP_DEP"]="./lib/termcap/libtermcap.a"
S["TERMCAP_LIB"]="./lib/termcap/libtermcap.a"
+ apk add ncurses-dev
...
+ ./configure
...
checking for tgetent... no
checking for tgetent in -ltermcap... no
checking for tgetent in -ltinfo... yes
checking which library has the termcap functions... using libtinfo
...
+ make
...
gcc -L./builtins -L./lib/readline -L./lib/readline -L./lib/glob -L./lib/tilde -L./lib/malloc -L./lib/sh -rdynamic -g -O2 -o bash shell.o eval.o y.tab.o general.o make_cmd.o print_cmd.o dispose_cmd.o execute_cmd.o variables.o copy_cmd.o error.o expr.o flags.o jobs.o subst.o hashcmd.o hashlib.o mailcheck.o trap.o input.o unwind_prot.o pathexp.o sig.o test.o version.o alias.o array.o arrayfunc.o assoc.o braces.o bracecomp.o bashhist.o bashline.o list.o stringlib.o locale.o findcmd.o redir.o pcomplete.o pcomplib.o syntax.o xmalloc.o -lbuiltins -lglob -lsh -lreadline -lhistory -ltinfo -ltilde -lmalloc lib/intl/libintl.a -ldl
ls -l bash
-rwxr-xr-x 1 root root 4687096 Dec 8 02:33 bash
size bash
   text    data    bss     dec    hex filename
1293031   48528  45632 1387191 152ab7 bash
make[1]: Entering directory '/bash/support'
rm -f man2html.o
gcc -c -DHAVE_CONFIG_H -DSHELL -I/bash -I.. -Wno-parentheses -Wno-format-security -g -O2 man2html.c
gcc -rdynamic -rdynamic -g -O2 man2html.o -o man2html -ldl
make[1]: Leaving directory '/bash/support'
+ grep TERMCAP config.status
S["TERMCAP_DEP"]=""
S["TERMCAP_LIB"]="-ltinfo"
D["HAVE_TERMCAP_H"]=" 1"
bash doesn't build with gcc 14.x
gcc 14.1.0 turned -Wimplicit-function-declaration into an error:

https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=55e94561e97ed0bce4774aa1c6b5d5d82209a379
https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=a1adce82c17577aeaaf6b9736ca8a1455d1164cb

The bug report:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91092

The porting guide, which includes a section on -Wimplicit-function-declaration:

https://gcc.gnu.org/gcc-14/porting_to.html#implicit-function-declaration

A way to reproduce the issue:

$ docker run --rm alpine:3.21 sh -euxc '
  apk add git build-base
  git clone https://git.savannah.gnu.org/git/bash.git
  cd bash
  git checkout 6794b5478f660256a1023712b5fc169196ed0a22
  ./configure
  gcc --version
  make
'
...
+ gcc --version
gcc (Alpine 14.2.0) 14.2.0
...
+ make
...
making lib/termcap/libtermcap.a in ./lib/termcap
make[1]: Entering directory '/bash/lib/termcap'
gcc -c -g -O2 -DHAVE_CONFIG_H -I. -I../.. -I../.. -I../../lib -I. termcap.c
gcc -c -g -O2 -DHAVE_CONFIG_H -I. -I../.. -I../.. -I../../lib -I. tparam.c
tparam.c: In function 'memory_out':
tparam.c:67:3: error: implicit declaration of function 'write'; did you mean 'wprintf'? [-Wimplicit-function-declaration]
   67 |   write (2, "virtual memory exhausted\n", 25);
      |   ^~~~~
      |   wprintf
make[1]: *** [Makefile:56: tparam.o] Error 1
make[1]: Leaving directory '/bash/lib/termcap'
make: *** [Makefile:702: lib/termcap/libtermcap.a] Error 1

Under alpine:3.20 (gcc 13.2.1) it builds.

Regards,
Yuri
Re: bash doesn't build with gcc 14.x
Meanwhile, the workaround is:

./configure CFLAGS=-Wno-implicit-function-declaration && make