Minor problems with bash-3.2-33.el5_11.4
Hi,

I have a minor issue with the newest bash-3.2-33.el5_11.4 (RedHat) package. The same happens on Debian Wheezy (unfortunately I do not know that version number right now).

I usually define in my .bash_profile some functions called "..", "...", "...." and so on. After the latest package was installed I continuously get error messages shown above when I start a (sub-)bash from the login bash:

bash: error importing function definition for `BASH_FUNC_..'
bash: error importing function definition for `BASH_FUNC_...'
bash: error importing function definition for `BASH_FUNC_....'

etc... And these functions cannot be used in the child bash. But no problem is shown when bash is a login shell (called as "bash -l") and the definitions are read from the .bash_profile.

My bash function family is defined in my .bash_profile similar to the lines below:

..   () { command cd ../"$1"; }
...  () { command cd ../../"$1"; }
.... () { command cd ../../../"$1"; }
export -f .. ... ....

In all previous versions this worked well, so I could go to the parent directory by typing ".." instead of "cd ..". Is this by design, or just an unforeseen side effect of the bug fix?

*** Version number of Bash:
$ bash --version
GNU bash, version 3.2.25(1)-release (i686-redhat-linux-gnu)
$ rpm -qa|grep bash
bash-3.2-33.el5_11.4

*** The hardware and operating system:
$ cat /proc/cpuinfo
...
model name : Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.6 (Tikanga)

*** The compiler used to compile Bash:
???

*** A description of the bug behavior:
See above description.

*** A short script or 'recipe' which exercises the bug and may be used to reproduce it:
$ ,,() { echo TEST;}
$ export -f ,,
$ ,,
TEST
$ bash
bash: error importing function definition for `BASH_FUNC_,,'
$ ,,
bash: ,,: command not found

Thank you in advance!

Kind regards,
Tamás Tajthy
Re: Bash-4.3 Official Patch 30
Hi,

> + char *
> + parser_remaining_input ()
> + {
> +   if (shell_input_line == 0)
> +     return 0;
> +   if (shell_input_line_index < 0 || shell_input_line_index >= shell_input_line_len)
> +     return '\0';	/* XXX */

Do you mean return ""; ?

enami.
Re: Bash-4.3 Official Patch 30
On 10/5/14, 9:45 PM, Ryan Cunningham wrote:
> This patch contains statements that add references to the patch directory,
> "bash-4.3-patched". You should reissue the patch without such statements if
> you find it feasible to do so.

Why?  I want those pathnames in there, rather than the ones that contain references to a specific previous version, to make possible future patches easier.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
Re: Bash-4.3 Official Patch 30
On 10/6/14, 1:35 AM, tsugutomo.en...@jp.sony.com wrote:
> Hi,
>
>> + char *
>> + parser_remaining_input ()
>> + {
>> +   if (shell_input_line == 0)
>> +     return 0;
>> +   if (shell_input_line_index < 0 || shell_input_line_index >= shell_input_line_len)
>> +     return '\0';	/* XXX */
>
> Do you mean return ""; ?

Yes, good catch.  It doesn't make a difference: clang and gcc both accept it as written and it behaves as desired.  However, I'll change it for the next version.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
Re: Minor problems with bash-3.2-33.el5_11.4
On 10/6/14, 4:04 AM, Tajthy, Tamas wrote:
> Hi,
>
> I have a minor issue with the newest bash-3.2-33.el5_11.4 (RedHat) package.
> The same happens on Debian Wheezy (unfortunately I do not know that version
> number right now).
>
> I usually define in my .bash_profile some functions called "..", "...",
> "...." and so on. After the latest package was installed I continuously get
> error messages shown above when I start a (sub-)bash from the login bash:
>
> bash: error importing function definition for `BASH_FUNC_..'
> bash: error importing function definition for `BASH_FUNC_...'
> bash: error importing function definition for `BASH_FUNC_....'
>
> etc... And these functions cannot be used in the child bash. But no problem
> is shown when bash is a login shell (called as "bash -l") and the definitions
> are read from the .bash_profile.

You should open a bug report with Red Hat and Debian.  They used a stricter version of the patches that result in bash refusing to import shell functions whose names are not valid shell identifiers.  The official patches I released, while imposing this requirement early (bash32-052), allow names that are not valid identifiers when the shell is not running in Posix mode (as of bash32-054).

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
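For illustration, a minimal sketch of the difference being described (this is not taken from the original report; the exact diagnostic depends on which patch set the vendor shipped):

  ..() { command cd ..; }     # ".." is not a valid shell identifier
  export -f ..
  bash -c 'type ..'           # official patches, non-POSIX mode: the import succeeds
                              # stricter Red Hat/Debian patch: import is refused and
                              #   ".." is not found in the child shell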
RE: Minor problems with bash-3.2-33.el5_11.4
Dear Chet,

Thanks for your kind answer!  Then I'm going to try to open these bugs for Red Hat and Debian.

Good byte!
Tamás

-----Original Message-----
From: Chet Ramey [mailto:chet.ra...@case.edu]
Sent: Monday, October 06, 2014 1:52 PM
To: Tajthy, Tamas; bug-bash@gnu.org
Cc: chet.ra...@case.edu
Subject: Re: Minor problems with bash-3.2-33.el5_11.4

On 10/6/14, 4:04 AM, Tajthy, Tamas wrote:
> Hi,
>
> I have a minor issue with the newest bash-3.2-33.el5_11.4 (RedHat) package.
> The same happens on Debian Wheezy (unfortunately I do not know that version
> number right now).
> [...]

You should open a bug report with Red Hat and Debian.  They used a stricter version of the patches that result in bash refusing to import shell functions whose names are not valid shell identifiers.  The official patches I released, while imposing this requirement early (bash32-052), allow names that are not valid identifiers when the shell is not running in Posix mode (as of bash32-054).

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
Re: Bash-4.3 Official Patch 30
Thank you for the clarification.

--
Sent from my iPad

> On Oct 6, 2014, at 4:12 AM, Chet Ramey wrote:
>
>> On 10/5/14, 9:45 PM, Ryan Cunningham wrote:
>> This patch contains statements that add references to the patch directory,
>> "bash-4.3-patched". You should reissue the patch without such statements if
>> you find it feasible to do so.
>
> Why?  I want those pathnames in there, rather than the ones that contain
> references to a specific previous version, to make possible future patches
> easier.
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
> ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
bash uses tmp files for inter-process communication instead of pipes?
In running a startup script, I am endeavoring not to use tmp files where possible.  As part of this, I sent the output of a command to stdout, where I read it using the "variable read" syntax:

  while read ifname hwaddr; do
    printf "ifname=%s, hwaddr=%s\n" "$ifname" "$hwaddr"
    act_hw2if[$hwaddr]="$ifname"
    act_if2hw[$ifname]="$hwaddr"
    printf "act_hw2if[%s]=%s, act_if2hw[%s]=%s\n" "${act_hw2if[$hwaddr]}" "${act_if2hw[$ifname]}"
  done <<<"$(get_net_IFnames_hwaddrs)"

Note, I used the <<<"$()" form to avoid process substitution -- as I was told on this list that the <<< form didn't use process substitution.

So I now get:

  >>/etc/init.d/boot.assign_netif_names#192(get_net_IFnames_hwaddrs)> echo eth5 a0:36:9f:15:c9:c2
  /etc/init.d/boot.assign_netif_names: line 203: cannot create temp file for here-document: No such file or directory

Where am I using a HERE doc?  More importantly, at this point, where is it trying to write (or read)?

Simple Q: why isn't it using some small fraction of the 90+G of memory that is free for use at this point?

This really feels like "Whack-a-mole".
Re: bash uses tmp files for inter-process communication instead of pipes?
On Mon, Oct 06, 2014 at 12:14:57PM -0700, Linda Walsh wrote:
>    done <<<"$(get_net_IFnames_hwaddrs)"

> Where am I using a HERE doc?

<<< and << both create temporary files.
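One way to observe this directly on the bash versions discussed in this thread (a sketch, assuming strace is available; the sh-thd- temp-file prefix matches the trace shown later in the thread):

  strace -f -e trace=file bash -c ': <<< hello' 2>&1 | grep sh-thd
  # shows the open()/unlink() calls on a /tmp/sh-thd-* file backing the here-string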
Re: bash uses tmp files for inter-process communication instead of pipes?
Greg Wooledge wrote:
> On Mon, Oct 06, 2014 at 12:14:57PM -0700, Linda Walsh wrote:
>>    done <<<"$(get_net_IFnames_hwaddrs)"
>> Where am I using a HERE doc?
>
> <<< and << both create temporary files.

According to Chet, the only way to do a multi-var assignment in bash is

  read a b c d <<<$(echo ${array[@]})

Forcing a simple assignment into using a tmp file seems Machiavellian -- it does exactly the thing the user is trying to avoid, through unexpected means.

The point of grouping assignments is to save space (in the code) and have the group initialized at the same time -- and more quickly than using separate assignments.  So why would someone use a tmp file to do an assignment?

Even the gcc chain is able to use "pipe" to send the results of one stage of the compiler to the next without using a tmp.  That's been around for at least 10 years.  So why would a temp file be used?

Creating a tmp file to do an assignment, I assert, is a bug.  It is entirely counter-intuitive that such an assignment wouldn't use the same mechanism as LtR ordered pipes.  I.e.

  cmd1 | cmd2

hasn't used tmp files on modern *nix systems for probably 20 years or more (I think DOS was the last shell I knew that used tmp files...), so why would

  cmd2 < <(cmd1 [|])

not use the same paradigm?  Worse,

  cmd1 >& MEMVAR

-- the output is already in memory... so why would

  read a b c <<<${MEMVAR}

need a tmp file if the text to be read is already in memory?
Re: bash uses tmp files for inter-process communication instead of pipes?
On Mon, Oct 6, 2014 at 10:38 PM, Linda Walsh wrote:
> According to Chet, the only way to do a multi-var assignment in bash is
>
>   read a b c d <<<$(echo ${array[@]})
>
> Forcing a simple assignment into using a tmp file seems Machiavellian --
> as it does exactly the thing the user is trying to avoid through
> unexpected means.
> [...]
> so why would read a b c <<<${MEMVAR} need a tmp file if the text to be
> read is already in memory?

Because it's not a simple assignment: it's using one mechanism to send data to an external program and another one to read from a stream of data.  Some shells use the buffer of a pipe as an optimization when the amount of data is small (which is probably the case for most heredocs/herestrings).
Re: bash uses tmp files for inter-process communication instead of pipes?
On Mon, Oct 06, 2014 at 12:38:21PM -0700, Linda Walsh wrote:
> According to Chet, the only way to do a multi-var assignment in bash is
>
>   read a b c d <<<$(echo ${array[@]})

The redundant $(echo...) there is pretty bizarre.  Then again, that whole command is strange.  You have a nice friendly array and you are assigning the first 3 elements to 3 different scalar variables.  Why?  (The fourth scalar is picking up the remainder, so I assume it's the rubbish bin.)

Why not simply use the array elements?

> Forcing a simple assignment into using a tmp file seems Machiavellian --
> as it does exactly the thing the user is trying to avoid through
> unexpected means.

You are working in a severely constrained environment.  Thus, you need to adapt your code to that environment.  This may (often will) mean you must forsake your usual practices, and fall back to simpler techniques.

I don't know precisely what your boot script is attempting to do, or precisely when (during the boot sequence) it runs.  Nor on what operating system you're doing it (though I'll go out on a limb and guess "some flavor of Linux").

Maybe the best solution here is to move your script to a different part of the boot sequence.  If you run it after all the local file systems have been mounted, then you should be able to create temporary files, which in turn means << and <<< become available, should you need them.

> The point of grouping assignments is to save space (in the code) and have
> the group initialized at the same time -- and more quickly than using
> separate assignments.

Elegance must be the first sacrifice upon the altar, here.

> So why would someone use a tmp file to do an assignment.

It has nothing to do with assignments.  Temp files are how here documents (and their cousin here strings) are implemented.  A here document is a redirection of standard input.  A redirection *from a file*, in fact, albeit a file that is created on the fly for you by the shell.

> So why would a temp file be used?

Historical reasons, and because many processes will expect standard input to have random access capabilities (at least the ability to rewind via lseek).  Since bash has no idea what program is going to be reading the here document, why take chances?

> Creating a tmp file to do an assignment, I assert is a bug.

You are not *just* doing an assignment.  You are doing a redirection.  Furthermore, you have not even demonstrated the actual reason for the here document yet.

> cmd1 | cmd2 -- that hasn't used tmp files on modern *nix systems for
> probably 20 years or more (I think DOS was the last shell I knew that used
> tmp files...)

Correct.  Pipelines do not use temp files.

> so why would "cmd2 < <(cmd1 [|])" not use the same paradigm -- worse, is

Process substitution uses either a named pipe, or a /dev/fd/* file system entry, or a temp file, depending on the platform.

> cmd1 >& MEMVAR -- output is already in memory...

Now you're just making things up.  That isn't bash syntax.

> so why would read a b c <<<${MEMVAR} need a tmp file if the text to be
> read is already in memory?

Because you are using generalized redirection syntax that is intended for use with *any* command, not just shell builtins.

Your example has now changed, from

  read a b c d <<< "${array[*]}"

to

  read a b c <<< "$scalar"

I don't know which one your *real* script is using, so it's hard to advise you.

If you want to replace the former, I would suggest this:

  a=${array[0]} b=${array[1]} c=${array[2]} d="${array[*]:3}"

If you want to replace the latter, I would suggest something like this:

  shopt -s extglob
  temp=${scalar//+([[:space:]])/ }
  a=${temp%% *} temp=${temp#* }
  b=${temp%% *} c=${temp#* }

This may be overkill, since the consolidation of whitespace might not be necessary.  It's impossible to tell since you have not shown the actual input, or the actual desired output, or even used actual variable names in your examples.
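As a usage illustration of the first replacement (the data here is hypothetical, not taken from the thread):

  array=(eth0 00:15:17:bf:be:b2 up extra stuff)
  a=${array[0]} b=${array[1]} c=${array[2]} d="${array[*]:3}"
  printf '%s\n' "$a" "$b" "$c" "$d"
  # eth0
  # 00:15:17:bf:be:b2
  # up
  # extra stuff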
Re: bash uses tmp files for inter-process communication instead of pipes?
Greg Wooledge wrote:
> On Mon, Oct 06, 2014 at 12:38:21PM -0700, Linda Walsh wrote:
>> According to Chet, the only way to do a multi-var assignment in bash is
>>
>>   read a b c d <<<$(echo ${array[@]})
>
> The redundant $(echo...) there is pretty bizarre.
> [...]
>> Forcing a simple assignment into using a tmp file seems Machiavellian --
>> as it does exactly the thing the user is trying to avoid through
>> unexpected means.
>
> You are working in a severely constrained environment.

That isn't the problem: the assignment using a tmp file is.

  > strace -p 48785 -ff
  Process 48785 attached
  read(0, "\r", 1)= 1
  write(2, "\n", 1) = 1
  socket(PF_NETLINK, SOCK_RAW, 9) = 3
  sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=}, msg_iov(2)=[{"*\0\0\0d\4\1\0\0\0\0\0\0\0\0\0", 16}, {"read a b c d <<<${arr[@]}\0", 26}], msg_controllen=0, msg_flags=0}, 0) = 42
  close(3)= 0

Um... it used a socket.. to transfer it, then it uses a tmp file on top of that?!:

  rt_sigaction(SIGINT, {0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0}, {0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0}, 8) = 0
  open("/tmp/sh-thd-110678907923", O_WRONLY|O_CREAT|O_EXCL|O_TRUNC, 0600) = 3
  write(3, "one two three four", 18) = 18
  write(3, "\n", 1) = 1
  open("/tmp/sh-thd-110678907923", O_RDONLY) = 4
  close(3)= 0
  unlink("/tmp/sh-thd-110678907923") = 0
  fcntl(0, F_GETFD) = 0
  fcntl(0, F_DUPFD, 10) = 10
  fcntl(0, F_GETFD) = 0
  fcntl(10, F_SETFD, FD_CLOEXEC) = 0
  dup2(4, 0) = 0
  close(4)= 0
  ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7fff85627820) = -1 ENOTTY (Inappropriate ioctl for device)
  lseek(0, 0, SEEK_CUR) = 0
  read(0, "one two three four\n", 128)= 19
  dup2(10, 0) = 0
  fcntl(10, F_GETFD) = 0x1 (flags FD_CLOEXEC)
  close(10) = 0

Why in god's name would it use a socket (still of arguable overhead, when it could be done in a local lib), but THEN duplicate the I/O in a file?

> Thus, you need to adapt your code to that environment.  This may (often
> will) mean you must forsake your usual practices, and fall back to
> simpler techniques.

The above is under a normal environment.  It's still broken.

> Maybe the best solution here is to move your script to a different part
> of the boot sequence.  If you run it after all the local file systems
> have been mounted, then you should be able to create temporary files,
> which in turn means << and <<< become available, should you need them.

Theoretically, they ARE mounted.  What I think may be happening is that $TMP is not set, so it is trying to open the tmp dir in:

  "//sh-thd-183928392381" -- a network address.

> Elegance must be the first sacrifice upon the altar, here.

Correctness before elegance.  1) Use memory before OS services.  2) Use in-memory services before file services.  3) Don't use uninitialized variables (TMP) -- verify that they are sane values before usage.  4) Don't use the network for tmp when /tmp or /var/tmp would have worked just fine.

>> So why would someone use a tmp file to do an assignment.
>
> It has nothing to do with assignments.  Temp files are how here documents
> (and their cousin here strings) are implemented.  A here document is a
> redirection of standard input.  A redirection *from a file*

Nope:

  This type of redirection instructs the shell to read input from the
  current source until a line containing only delimiter (with no trailing
  blanks) is seen.  All of the lines read up to that point are then used
  as the standard input for a command.

"The current source" -- that can be anything that input can be redirected from, including memory.

> in fact, albeit a file that is created on the fly for you by the shell.

Gratuitous expectation of slow resources...  Non-conservation of resources, not for the user, but for itself.

Note:

  a=$(<foo)
  echo "$a"
  one
  two
  three

--- no tmp files used, but it does a file read on foo.

  b=$a

-- the above uses no tmp files.

  b=$(echo "$a")

--- THAT uses no tmp files.  But

  b=<<<$a

uses a tmp file.  That's ridiculously lame.

> So why would a temp file be used?
>
> Historical reasons, and because many processes will expect standard input
> to have random access capabilities (at least the ability to rewind via
> lseek).  Since bash has no idea what program is going to be reading the
> here document, why take chances?
Re: bash uses tmp files for inter-process communication instead of pipes?
On Mon, Oct 06, 2014 at 02:00:47PM -0700, Linda Walsh wrote:
> How much of the original do you want?... wait... um...
> But the point it it should already work... I think it is trying to read
> from the network.

In the code below, you have a function that *generates* a stream of data out of variables that it has directly in memory ("$nm") and the contents of files that it reads from disk ("$(<"$nm"/address)").  Then you feed this output to something else that tries to parse the fields back *out* of the stream of data, to assign them to variables inside a loop, so that it can operate on the separate variables.

You're just making your life harder than it has to be.  Combine these two blobs of code together.  Iterate directly over the variables as you get them, instead of doing this serialization/deserialization step.

  netdev_pat=...     # (and other variable assignments)
  cd "$sysnet" &&
  for ifname in ...; do
      hwaddr=$(<"$ifname"/address)
      act_hw2if[$hwaddr]="$ifname"
      act_if2hw[$ifname]="$hwaddr"
  done

On a side note, you are going out of your way to make your bash code look like perl (alias my=declare and so on).  This is a bad idea.  It means you are putting round pegs in square holes.  It's best to treat each language as its own language, with its own style and its own strengths and weaknesses.  All these extra layers you are introducing just make things worse, not better.

> shopt expand_aliases
> alias dcl=declare  sub=function
> alias int=dcl\ -i  map=dcl\ -A  hash=dcl\ -A  array=dcl\ -a
> alias lower=dcl\ -l  upper=dcl\ -u  string=dcl  my=dcl
>
> sub get_net_IFnames_hwaddrs () {          # get names + addrs from /sys
>   vrfy_drivers
>   array pseudo_devs=(br bond ifb team)
>   string pseudo_RE="^+(${pseudo_devs[@]})$"
>   pseudo_RE=${pseudo_RE// /|}
>   string netdev_pat="+([_0-9a-z])+([0-9])"
>   sysnet=/sys/class/net
>   ( cd "$sysnet" &&
>     for nm in $( printf "%s\n" $netdev_pat | grep -Pv "$pseudo_RE"); do
>       echo "$nm" "$(<$nm/address)"
>     done )
> }
>
> map act_hw2if    # actual values (to be read in)
> map act_if2hw
> map XIF          # tmp array to hold exchanged IF's
>
> sub read_actuals () {                     # parse output stream from above
>   my ifname hwaddr
>   while read ifname hwaddr; do
>     act_hw2if[$hwaddr]="$ifname"
>     act_if2hw[$ifname]="$hwaddr"
>   done <<<"$(get_net_IFnames_hwaddrs)"
> }
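A self-contained version of that suggestion might look like the sketch below (assumptions: the /sys/class/net layout described in the thread; the pseudo-device filtering from the original script is omitted for brevity):

  #!/bin/bash
  declare -A act_hw2if act_if2hw
  sysnet=/sys/class/net

  for ifpath in "$sysnet"/*; do
      ifname=${ifpath##*/}                  # e.g. eth0
      [[ -r $ifpath/address ]] || continue  # skip entries without an address
      hwaddr=$(<"$ifpath/address")          # read the MAC straight from /sys
      act_hw2if[$hwaddr]=$ifname
      act_if2hw[$ifname]=$hwaddr
  done

  declare -p act_hw2if act_if2hw            # no here-string, no temp file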
Re: bash uses tmp files for inter-process communication instead of pipes?
Greg Wooledge wrote:
> On Mon, Oct 06, 2014 at 02:00:47PM -0700, Linda Walsh wrote:
>> How much of the original do you want?... wait... um...
>> But the point it it should already work... I think it is trying to read
>> from the network.
>
> In the code below, you have a function that *generates* a stream of data
> out of variables that it has directly in memory ("$nm") and the contents
> of files that it reads from disk ("$(<"$nm"/address)").

That's because the 1st function parsed the output of a command like 'ip'...  It has later been optimized to not call it but to use /sys directly...  BUT there is no guarantee that I won't have to go back to the other format for some reason.  When I split things up, it's because of NOT wanting to take shortcuts.

> On a side note, you are going out of your way to make your bash code look
> like perl (alias my=declare and so on).  This is a bad idea.

I don't think it is perl.  I think it is something that saves me on typing and increases clarity, not perl:

             shell equiv
  array      declare -a
  map        declare -A
  hash       declare -A
  int        declare -i
  string     declare

I threw in 'my' because it is a hell of a lot shorter than declare.

> This is a bad idea.  It means you are putting round pegs in square holes.

Extra layers?  Where?  I don't throw in extra layers to make things look like perl; I throw them in to make the scripts maintainable.  I used to code in the same fashion as all the shell books...  It was unmaintainable and illegible.

FWIW, I tried to move perl toward more legibility and that went over less well than an icecube's chance in hell.  For them, being forced to upgrade to a POSIX standard would be an improvement (and you know how I feel: PSX is a LowestCD) -- a barely passing grade, like a "D".  While trying to do that with shell is, admittedly, trying to put round pegs in square holes -- one does the best one can do with the tools available.

Trying to focus on the "why"s of my algorithms and style, when you and others have indicated no desire to see the whole context in the past, seems a bit like telling me I'm not watching the show the "right way" and that I should "ignore the man over there behind the curtain".
Fwd: bash not using pipes or /tmp @ boot?
Not sure how, but this went off into space, sorta...

Greg Wooledge wrote:
> On Mon, Oct 06, 2014 at 12:14:57PM -0700, Linda Walsh wrote:
>>    done <<<"$(get_net_IFnames_hwaddrs)"
>> Where am I using a HERE doc?
>
> <<< and << both create temporary files.

Yeah... where?

  assign_netif_names=/etc/init.d/boot.assign_netif_names start
  /etc/init.d/boot.assign_netif_names#208(read_actuals)> typeset -p act_hw2if
  declare -A act_hw2if='()'
  /etc/init.d/boot.assign_netif_names#209(read_actuals)> typeset -p act_if2hw
  declare -A act_if2hw='()'
  /etc/init.d/boot.assign_netif_names#210(read_actuals)> du -s /tmp /var/tmp
  27776   /tmp
  13156   /var/tmp
  /etc/init.d/boot.assign_netif_names#211(read_actuals)> date +%Y%m%d.%H%M
  /etc/init.d/boot.assign_netif_names#211(read_actuals)> touch /tmp/boot-20141006.1418
  /etc/init.d/boot.assign_netif_names#212(read_actuals)> ls -l /tmp/boot-20141006.1418
  -rw-r--r-- 1 root root 0 Oct  6 14:18 /tmp/boot-20141006.1418

^^^ /tmp is writeable (so is /var/tmp).

The output (6 lines...):

  /etc/init.d/boot.assign_netif_names#204(read_actuals)> get_net_IFnames_hwaddrs
  /etc/init.d/boot.assign_netif_names#193(get_net_IFnames_hwaddrs)> echo eth0 00:15:17:bf:be:b2
  /etc/init.d/boot.assign_netif_names#193(get_net_IFnames_hwaddrs)> echo eth1 00:15:17:bf:be:b3
  /etc/init.d/boot.assign_netif_names#193(get_net_IFnames_hwaddrs)> echo eth2 00:26:b9:48:71:e2
  /etc/init.d/boot.assign_netif_names#193(get_net_IFnames_hwaddrs)> echo eth3 00:26:b9:48:71:e4
  /etc/init.d/boot.assign_netif_names#193(get_net_IFnames_hwaddrs)> echo eth4 a0:36:9f:15:c9:c0
  /etc/init.d/boot.assign_netif_names#193(get_net_IFnames_hwaddrs)> echo eth5 a0:36:9f:15:c9:c2
  /etc/init.d/boot.assign_netif_names: line 204: cannot create temp file for here-document: No such file or directory

Maybe if bash didn't create tmp files in non-standard locations, this never would have been an issue (or just use pipes)..
Re: Bash-4.3 Official Patch 30
On 10/6/2014 6:43 AM, Chet Ramey wrote:
> On 10/6/14, 1:35 AM, tsugutomo.en...@jp.sony.com wrote:
>>> + char *
>>> + parser_remaining_input ()
>>> + {
>>> +   if (shell_input_line == 0)
>>> +     return 0;
>>> +   if (shell_input_line_index < 0 || shell_input_line_index >= shell_input_line_len)
>>> +     return '\0';	/* XXX */
>>
>> Do you mean return ""; ?
>
> Yes, good catch.  It doesn't make a difference: clang and gcc both accept
> it as written and it behaves as desired.  However, I'll change it for the
> next version.

Changing it to return 0 instead of '\0' would probably be more clear.  No need to return a pointer to a static empty string.

Regards,
-John
wb8tyw@qsl.network
Re: bash uses tmp files for inter-process communication instead of pipes?
Greg Wooledge wrote:
>   netdev_pat=...     # (and other variable assignments)
>   (cd "$sysnet" &&
>   for ifname in ...; do
>       hwaddr=$(<"$ifname"/address)
>       act_hw2if[$hwaddr]="$ifname"
>       act_if2hw[$ifname]="$hwaddr"
>   done)

Except that either act_hw2if and its pair were just assigned to in the subprocess that was meant to isolate the change of directory from the rest of the program, OR we aren't in the right directory when we do the reads.  Either way, the code as you have suggested won't work without overcoming another set of side effects.
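A small demonstration of the side effect being described (the interface name and address here are hypothetical): an assignment made inside ( ... ) does not survive in the parent shell.

  declare -A act_hw2if
  ( cd /sys/class/net && act_hw2if[00:15:17:bf:be:b2]=eth0 )
  echo "${#act_hw2if[@]}"    # prints 0: the subshell's assignment is gone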
Re: bash uses tmp files for inter-process communication instead of pipes?
On Tue, Oct 7, 2014 at 12:00 AM, Linda Walsh wrote:
> Greg Wooledge wrote:
>> You are working in a severely constrained environment.
>
> That isn't the problem: the assignment using a tmp file is:
>
>   open("/tmp/sh-thd-110678907923", O_WRONLY|O_CREAT|O_EXCL|O_TRUNC, 0600) = 3
>   write(3, "one two three four", 18) = 18
>   [...]
>
> Why in god's name would it use a socket (still of arguable overhead, when
> it could be done in a local lib), but THEN duplicate the I/O in a file?
>
> [...]
>
>   b=$(echo "$a")
>
> --- THAT uses no tmp files.  But
>
>   b=<<<$a
>
> uses a tmp file.  That's ridiculously lame.

b=<<<$a is not doing anything, so I wonder how much value this example has.

A pipe means 2 different processes; a tempfile for a heredoc does not.

Besides, your comparison is simply ludicrous: bash provides several mechanisms to do several very different things.
Re: bash uses tmp files for inter-process communication instead of pipes?
Pierre Gaston wrote:
> b=<<<$a is not doing anything, so I wonder how much value this example has.

I wondered about that.. I think that was meant to be the b=<<<($a) w/o the copy that Greg said was pointless.

> A pipe means 2 different processes; a tempfile for a heredoc does not.

First) we are talking 2 separate processes...  That I fit those into a heredoc paradigm was already a kludge.

Originally I had (and wanted) a producer-func, call it prod(), and a process-func that stores into globals for future use, i.e.

  producer | process-func -> vars

...but I want the vars in the parent... so

  processfunc <|<(producer)

-- now process can store results, but it's using bogus magic called "process substitution"... naw... it's a LtR pipe instead of a RtL pipe, or a

  parent <|<(child) pipe

instead of the standard

  child | parent pipe

the parent being the one whose vars "live" on afterwards.

I thought I was getting rid of this bogus problem (which shouldn't be a problem anyway -- since it's just another pipe, but with the parent receiving the data instead of the child) by storing it in a variable transfer form <<<($VAR)... cuz I was told that didn't involve a voodoo process-substitution.

But it involves a heredoc that, even though /tmp and /var/tmp are writeable, BASH can't use.  Either it's writing to //network or trying to open a tmp file in the current (/sys) directory... either way is buggy.

But a heredoc is just reading from an in-memory buffer.  That bash is going the inefficient route and putting the buffer in "tmp" first is the 2nd problem you are pointing out -- why do people who want a heredoc need a fork OR a tmpfile?  The output I generated is < 512 bytes.  Would it be too silly to simply keep any heredoc output in memory rather than write it to any file?  shopt heredocsize=XXX[KMG]... and overflow to tmp if needed.

But more basically (no limits need be implemented): why is there a difference between

  parent producer >|> child reader
vs.
  parent reader <|< child producer

???

If you want to ask your Q, you should be asking why fork OR *tmpfile.  In my case, why do I need a voodoo process that can't be done with pipes.. cuz I'm pretty sure a parent can spawn children and then read from them just as easily as write to them.

> Besides, your comparison is simply ludicrous: bash provides several
> mechanisms to do several very different things.

Having several that would work would be nice.  Um... having 1 that would work, even...  How do you turn around parent/child I/O and get the vars on the other side without heredoc or procsub?  Why no pipes?
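For what it's worth, one possible arrangement (a sketch, not something proposed in the thread; get_net_IFnames_hwaddrs is the producer function from earlier in the thread): feed the producer to a while-read loop in the parent via process substitution.  On Linux this is backed by /dev/fd (a pipe rather than a temp file), and the loop body runs in the parent shell, so the arrays persist afterwards.

  declare -A act_hw2if act_if2hw
  while read -r ifname hwaddr; do
      act_hw2if[$hwaddr]=$ifname
      act_if2hw[$ifname]=$hwaddr
  done < <(get_net_IFnames_hwaddrs)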
Re: bash uses tmp files for inter-process communication instead of pipes?
On Tue, Oct 7, 2014 at 2:07 AM, Linda Walsh wrote:
> Pierre Gaston wrote:
>> b=<<<$a is not doing anything, so I wonder how much value this example has.
>
> I wondered about that.. I think that was meant to be the b=<<<($a) w/o the
> copy that Greg said was pointless.
> [...]
> Having several that would work would be nice.  Um... having 1 that would
> work, even...  How do you turn around parent/child I/O and get the vars on
> the other side without heredoc or procsub?  Why no pipes?
Re: bash uses tmp files for inter-process communication instead of pipes?
On Tue, Oct 7, 2014 at 2:25 AM, Dave Rutherford wrote:

**it.. sorry for the fat finger post.  Gmail puts the tiny formatting options right next to the big SEND button.  Ratzen fracken.