On Mon, Oct 6, 2014 at 10:38 PM, Linda Walsh <b...@tlinx.org> wrote:
> Greg Wooledge wrote:
>
>> On Mon, Oct 06, 2014 at 12:14:57PM -0700, Linda Walsh wrote:
>>
>>>     done <<<"$(get_net_IFnames_hwaddrs)"
>>
>>> Where am I using a HERE doc?
>>
>> <<< and << both create temporary files.
>
> According to Chet, the only way to do a multi-var assignment in bash is
>
>     read a b c d <<<$(echo ${array[@]})
>
> Forcing a simple assignment into using a tmp file seems Machiavellian --
> as it does exactly the thing the user is trying to avoid, through
> unexpected means.
>
> The point of grouping assignments is to save space (in the code) and to
> have the group initialized at the same time -- and more quickly than
> using separate assignments.
>
> So why would someone use a tmp file to do an assignment?
>
> Even the gcc chain is able to use "pipe" to send the results of one stage
> of the compiler to the next without using a tmp.  That's been around for
> at least 10 years.  So why would a temp file be used?
>
> Creating a tmp file to do an assignment, I assert, is a bug.  It is
> entirely counter-intuitive that such wouldn't use the same mechanism as
> LtR-ordered pipes.  I.e.,
>
>     cmd1 | cmd2
>
> hasn't used tmp files on modern *nix systems for probably 20 years or
> more (I think DOS was the last shell I knew that used tmp files...), so
> why would "cmd2 < <(cmd1 [|])" not use the same paradigm?  Worse:
>
>     cmd1 >& MEMVAR
>
> already puts the output in memory, so why would
>
>     read a b c <<<${MEMVAR}
>
> need a tmp file if the text to be read is already in memory?

Because it's not a simple assignment: it uses one mechanism to send data to
an external program and another one to read from a stream of data.
Some shells use the buffer of a pipe as an optimization when the amount of data is small (which is probably the case for most heredocs/here-strings).
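
For what it's worth, here is one way to observe the difference from the
command line.  This is only a sketch: it assumes a Linux system with /proc
mounted and a bash that implements <<< with a temp file, as Greg described;
the exact temp file name will vary.

    # With a here-string, the child's stdin is a (deleted) temporary file:
    readlink /proc/self/fd/0 <<< "some data"
    # prints the path of a deleted temp file, e.g. /tmp/sh-thd-... (deleted)

    # With a plain pipe, stdin is an anonymous pipe and no file is created:
    echo "some data" | readlink /proc/self/fd/0
    # prints something like: pipe:[123456]

And if the goal is only to split a value that is already in a variable (the
MEMVAR of the example above) into several variables without any redirection
at all, the positional parameters can stand in -- keeping in mind that word
splitting and globbing apply to the unquoted expansion, so this too is just
a sketch:

    set -- $MEMVAR
    a=$1 b=$2 c=$3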