Re: "here strings" and tmpfiles

2019-04-07 Thread L A Walsh
On 3/20/2019 5:19 AM, Greg Wooledge wrote:
> On Wed, Mar 20, 2019 at 07:49:34AM +0700, Robert Elz wrote:
>   
>> However, using files for here docs makes here docs unusable in a shell
>> running in single user mode with no writable filesystems (whatever is
>> mounted is read only, until after file system checks are finished).
>> 
>
> Meanwhile, proposals based around /dev/fd/* would also make here docs
> unusable in a shell running early in the boot process, before all
> file systems are mounted.
>
> Just like that one time L. Walsh tried to write a bash boot script that
> used <() to populate an array, and it failed because she was running
> it too early in the boot sequence, and /dev/fd/ wasn't available yet.
>   

---
/dev/fd was available, and so was the /proc that it symlinked to.
What wasn't available was "/tmp" mounted as a writable
file system -- i.e., exactly the case we are talking about being
a problem *AGAIN*.

Various boot processes use /dev and /proc before any file systems
are mounted.  Requiring a mounted, writeable file system to run a shell
script during boot was the reason I had problems.
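For reference, a quick probe (a sketch, assuming a Linux system where /dev/fd symlinks to /proc/self/fd) shows exactly which path process substitution hands the command -- and therefore what has to exist for it to work:

```shell
#!/usr/bin/env bash
# Process substitution expands to a /dev/fd/N path; the command only
# works if that filesystem is mounted and visible at that point in boot.
echo <(true)                      # prints a path such as /dev/fd/63

# Populating an array via process substitution, as in the boot script:
mapfile -t arr < <(printf '%s\n' one two three)
echo "${arr[2]}"                  # prints: three
```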



> So, my counterpoints are:
>
> 1) Leave it alone.  It's fine.
>   

No, it's not -- it's been biting people for the past four
years or more.
> 2) Don't use bash for scripts that run early in the boot sequence.
>   
---
Unacceptable, as bash is *THE* de facto Linux shell.
> 3) Whatever features you *do* use in boot scripts, make sure they're
>available at the point in the boot sequence when the script runs.
>   

Pipes are available in the OS before any user scripts are run.

> 4) Whatever features you use in scripts *in general*, make sure you
>understand how they work.
>   

No... Do you need to understand how your TV works to watch it?  Or
your microwave, in order to heat food?  This attitude is why so
many people have resisted using computers -- because the programmers
who made "friendly user interfaces" were outnumbered in the 1990s by
those who got liberal arts degrees and thought that qualified them
as software programmers.  They could often write programs that
worked, but those programs required more support and user training,
because most of their authors didn't know how to design something friendly.

The features should behave according to the documentation.  That's
why in some cases I've tried to get the wording improved -- like the
person recently who couldn't find documentation for the bare
'+ = - ?' parameter-expansion operators alongside ':+ := :- :?',
because it was buried in a passing sub-clause in a prior section.
(I never could find it either, and assumed it was some old shell
practice that will be supported to the end of time but is no longer
'in favor', like $[integer exp] vs. using $((integer exp)).)
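For the curious, both arithmetic forms still evaluate identically in current bash, though only the $(( )) form is documented (a small sketch; the old $[ ] form is deprecated and could be removed someday):

```shell
#!/usr/bin/env bash
echo $(( 6 * 7 ))   # documented form: prints 42
echo $[ 6 * 7 ]     # old, undocumented form: also prints 42 in bash
```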
> Even if Chet changed how here docs work in bash 5.1, nobody would
> be safe to use those features in their "I'm feeding a password with
> a here string" scripts for at least 20 years, because there will
> still be people running older versions of bash for at least that long.
>   
---
So untrue.  If the system boots on bash 5.1, because that's what
ships on Linux 5.x from vendors, then that's what will be there.
We aren't porting OS boot scripts from Linux to machines that can't
run current software requirements.

Your script doesn't have to support Bourne Shell 1.0.  It might
have to support some POSIX implementation -- but bash doesn't even
enable alias expansion in non-interactive shells by default, as
POSIX compatibility requires.  That means anyone relying on aliases
to work, because they treat the POSIX requirements as a minimum, will
be surprised when a user runs bash and aliases are broken (don't
work, not enabled) by default.  Aliases work in any POSIX-compatible
shell, which bash claims to be, yet bash ties its POSIX mode to a set
of unrelated defaults, such that toggling the posix option resets
multiple features and doesn't save and restore those features when
toggling it back.  It's as if its POSIX mode was designed not to
play well with normal bash function -- like it was designed to
be broken.
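The alias behavior is easy to demonstrate (a minimal sketch; the alias name 'll' is just an example).  Note that alias expansion happens when a line is *read*, so the alias must be defined on an earlier line than the one that uses it:

```shell
#!/usr/bin/env bash
# In a non-interactive bash shell, aliases are not expanded unless
# expand_aliases is enabled first.
shopt -s expand_aliases
alias ll='echo expanded'
ll                        # prints: expanded
```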

If you enter an optional mode and later exit it, it's a
basic software 'given' that the previous mode should be
restored by default.  Global effects are generally considered
poor practice because of their tendency to cause unexpected effects
"at a distance" (far from the point where they were changed).



> Thus, leave it alone.
>
>   



Re: "here strings" and tmpfiles

2019-04-07 Thread L A Walsh
On 3/22/2019 6:49 AM, Chet Ramey wrote:
> Yes, that's how bash chooses to implement it. There are a few portable
> ways
> to turn a string into a file descriptor, and a temp file is one of them (a
> child process using a pipe is another, but pipes have other issues).
>   
Such as?  Issues that are more common than having no writable /tmp?

Pipes are the first thing that most unix programmers using
a unix-like shell on a unix-like OS think of.  From
Tuesday, 2015-Oct-13 13:51:03 (-0700) on this list:
Subject: my confusion on various I/O redirections syntaxes and indirect
methods

Chet Ramey wrote:
> On 10/12/15 7:39 PM, Linda Walsh wrote:
>> Does it also use a tmp file and use process-substitution, or is
>> that only when parens are present?
> 
> Here-documents and here-strings use temporary files and open them as
> the standard input (or specified file descriptor) for the command.
> 
>> read a < <( echo x)
>>
>> I'm under the impression, uses a tmp file.
> 
> Why would you think that? 



Well, we have
"<< xxx"
as a HERE DOC using a tmp file.  Some time ago, the ability to do
"multiple assignments" in one statement was added (when I asked how
to do that), and I was told to use:

read x y z <<< "one two three"

   (which I initially equated to something like:
(x y z)=(one two three)

That would be like the regular assignment:
xyz=(one two three)

but with the array syntax on the left, it would do word
splitting on the right and assign to the individual vars;
that was what I was searching for -- a way to do multiple
assignments in one statement).
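The multiple-assignment idiom itself is simple enough (a minimal sketch):

```shell
#!/usr/bin/env bash
# 'read' word-splits the here-string and assigns one word per variable;
# any leftover words all land in the last variable.
read x y z <<< "one two three"
echo "$y"        # prints: two

read a b <<< "one two three"
echo "$b"        # prints: two three
```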

Then along came a way to run a process in the background and
read & process its data in the main (foreground) process,
with this syntax:

readarray -t foregnd < <(echo $'one\ntwo\nthree')

which I envisioned as implemented something like this (a C-ish
example off the top of my head, based on perl code I wrote to
do the same):

  int savein  = dup(0);          /* save the real stdin  */
  int saveout = dup(1);          /* save the real stdout */

  int inout[2];                  /* inout[0]=read end, inout[1]=write end */
  pipe(inout);

  pid_t pid = fork();

  if (pid == 0) {                /* child */
      close(inout[0]);
      dup2(inout[1], 1);         /* child's stdout goes into the pipe */
      shell("echo $'a\nb\nc'");  /* output goes out on inout[1] */
      exit(0);
  } else {                       /* parent */
      close(inout[1]);           /* so stdin sees EOF when the child exits */
      dup2(inout[0], 0);         /* read the pipe as stdin */
      shell("readarray -t xyz"); /* reads from pipe:inout[0] */
      dup2(savein, 0);           /* restore stdin/stdout */
      dup2(saveout, 1);
  }

  /* parent continues -- no tmpfiles or named fifo's needed. */
---

So I didn't realize that, instead of doing it simply with
native pipes as above, it was implemented some other way.
I didn't understand why "< <(...)" would need a named pipe
or a /dev/fd entry.

These examples and concepts came up when I was trying to
write a bash script [running in early boot] that threw
out error cases like "/dev/fd/99 not found..." [or
"/tmp/tmpx2341 not found..."].

> The documentation clearly says it uses a named
> pipe or a file descriptor associated with a /dev/fd filename (which happens
> to be a pipe in this case).

yeah, with the "clear and unambiguous"[sic] syntax
of:

   <<  xxx
   <<< xxx
   <<< $(echo 'xxx')
   < <(xxx)

I can't imagine why I'd ever have been confused -- or,
given the pipe example above, why any of the above
had to use [diskfile]-based I/O.
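On Linux you can probe which of these hand the command a regular (temp) file and which hand it a pipe, by readlink-ing the shell's own stdin (a sketch; note that bash 5.1 and later may use a pipe instead of a temp file for short here-documents and here-strings):

```shell
#!/usr/bin/env bash
# What is stdin, really?  /proc/self/fd/0 tells us.
readlink /proc/self/fd/0 <<< "here string"   # older bash: a deleted /tmp file;
                                             # bash >= 5.1: often pipe:[NNN]
readlink /proc/self/fd/0 < <(echo hi)        # a pipe: pipe:[NNN]
```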

So the fact that I get confused about which extra complexity
is used for which syntax isn't that surprising to me --
is it that surprising to you that, given the complexities
chosen for the implementation, some people might be
confused about remembering the details of each, when
they all could have been done without any [diskfile]
confusion??

==
(end quoted email)

Using tmp files instead of pipes is what MSDOS did to
emulate the pipes unix had.  It was slow, clunky, and unreliable,
because the underlying file system wasn't always writable.