Re: bug with edit-and-execute-command and multiline commands

2021-08-06 Thread Bob Proulx
Emanuele Torre wrote:
> Also here is a paste with some examples by rwp:
> https://paste.debian.net/plain/1206849

That paste is going to expire so here are its contents.

I wanted to show exactly what characters were in the file being edited
and therefore set VISUAL to od in order to dump it.

$ env VISUAL='od -tx1 -c' bash

$ echo $BASH_VERSION
5.1.8(1)-release

$ if true
> then echo yes
> else echo no
> fiC-xC-e
000  69  66  20  74  72  75  65  3b  20  74  68  65  6e  20  65  63
  i   f   t   r   u   e   ;   t   h   e   n   e   c
020  68  6f  20  79  65  73  3b  20  65  6c  73  65  20  65  63  68
  h   o   y   e   s   ;   e   l   s   e   e   c   h
040  6f  20  6e  6f  3b  20  66  69  0a
  o   n   o   ;   f   i  \n
051
if true; then echo yes; else echo no; fi
yes
> C-xC-e
000  69  66  20  74  72  75  65  3b  20  74  68  65  6e  20  65  63
  i   f   t   r   u   e   ;   t   h   e   n   e   c
020  68  6f  20  79  65  73  3b  20  65  6c  73  65  20  65  63  68
  h   o   y   e   s   ;   e   l   s   e   e   c   h
040  6f  20  6e  6f  3b  20  66  69  0a
  o   n   o   ;   f   i  \n
051
if true; then echo yes; else echo no; fi
yes
> C-xC-e
000  69  66  20  74  72  75  65  3b  20  74  68  65  6e  20  65  63
  i   f   t   r   u   e   ;   t   h   e   n   e   c
020  68  6f  20  79  65  73  3b  20  65  6c  73  65  20  65  63  68
  h   o   y   e   s   ;   e   l   s   e   e   c   h
040  6f  20  6e  6f  3b  20  66  69  0a
  o   n   o   ;   f   i  \n
051
if true; then echo yes; else echo no; fi
yes
> C-d bash: syntax error: unexpected end of file
exit

I edited in the C-xC-e and C-d so that it would be visible where I
sent those commands.

$ env VISUAL="ed -p 'ed> '" bash

$ if true
> then echo yes
> else echo no
> fiC-xC-e
41
ed> %p
if true; then echo yes; else echo no; fi
ed> q
if true; then echo yes; else echo no; fi
yes
> C-xC-e
41
ed> %p
if true; then echo yes; else echo no; fi
ed> q
if true; then echo yes; else echo no; fi
yes
> C-d bash: syntax error: unexpected end of file
exit

Just another rendition of the same thing.

Bob



Re: [OT] Linux Out-Of-Memory Killer

2010-11-03 Thread Bob Proulx
Marc Herbert wrote:
> I suggest that Linux kids do not try this at home: the OutOfMemory killer
> killed a few random processes of mine!

This is off-topic for this mailing list, but if that happened then you
should configure your Linux kernel to avoid memory overcommit.  I have
previously ranted about this in some detail here:

  http://lists.debian.org/debian-user/2007/08/msg00022.html

And more recently with a little more rationale here:

  http://lists.debian.org/debian-user/2008/04/msg02554.html
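The short version is to turn off overcommit with the vm sysctls.  A
minimal sketch, untested here and with the exact values a matter of
taste (see the kernel's overcommit documentation):

  # as root; mode 2 means do not overcommit beyond swap + ratio% of RAM
  sysctl -w vm.overcommit_memory=2
  sysctl -w vm.overcommit_ratio=100
  # make it persistent across reboots
  echo 'vm.overcommit_memory=2' >> /etc/sysctl.conf
  echo 'vm.overcommit_ratio=100' >> /etc/sysctl.conf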

Bob



Re: Trouble with PS1 special characters between \[ and \]

2010-11-05 Thread Bob Proulx
Lluís Batlle i Rossell wrote:
> I don't think this problem is related to any recent bash version
> only. I've seen this since years I think.
> 
> Nevertheless I'm using GNU bash, version 4.0.17(1)-release
> (x86_64-unknown-linux-gnu) now.

I am using 4.1.5(1)-release and I could not recreate your problem.

> My PS1 uses the "Change window title" sequence, to set the xterm
> window title to something close to the prompt text. Hence my PS1 is:
> 
> PS1='\[\033]2;\h:\u:\w\007\]\n\[\033[1;31m\]...@\h:\w]\$\[\033[0m\] '

You may have better luck if you put the window title setting sequences
into PROMPT_COMMAND instead.  Then you would not need the \[...\]
around those non-visible parts of PS1 since they won't be in PS1 at
all.  That would leave just your color changes needing them.  This may
be enough to avoid the problem you are having regardless of its source.

The PROMPT_COMMAND won't expand the PS1 \u \h \w sequences though and
you would want to use the full variable names for them.  I have had
the following in my .bashrc for a long time.

  case $TERM in
    xterm*)
      PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}:${PWD/$HOME/~}\007"'
      ;;
  esac

Then add something like this for your adjusted PS1 prompt.

  PS1='\n\[\033[1;31m\][\u@\h:\w]\$\[\033[0m\] '

That leading \n gives me pause though...

Bob



Re: ionice bash pid in bash_profile Fails

2010-11-22 Thread Bob Proulx
Roger wrote:
> # See ionice manfile - give high priority to Bash
> ionice -c 2 -n 0 -p `echo $$`

You don't need to use backticks to echo out the value of $$.  Just use
it directly.

  ionice -c 2 -n 0 -p $$

Bob



Re: Clear Screen

2010-12-01 Thread Bob Proulx
Ajay Jain wrote:
> I use bash on Xterm.
> While working you press Ctrl-L, so that the screen gets cleared and
> you see the currently line only.

That is what clearing the screen does.  It operates the same as the
'clear' command line program in clearing the screen.

> But you may want to see the last outputs/prints. However, if you do
> a Ctrl-L/clear command, these prints go away.

Correct.  If you don't want the screen cleared then you will need to
avoid using Control-l to clear it.

> In that case, what can you use so that you clear the screen of the
> prints/outputs from last command. But in case you want to see the
> last output, you can just go scroll up/pageup.

Try Control-C instead.  By default that will interrupt the current
command and redraw the prompt leaving the previous output available.

If that doesn't do what you want then I don't think the functionality
you request exists.

Fundamentally there isn't any delineation between output from the
previous command and anything else you see on the screen.  The
terminal was built as a serial device with characters being typed in
from the keyboard and characters being output from the computer to the
display device.  The stream of characters being output is displayed
one after the other.  There isn't any knowledge there of what is a
prompt, what is one command output versus another command output.
Nothing keeps track of the screen at the level you are requesting to
clear characters but not to clear the characters from the previous
command.  That information does not exist.  Adding a way to track that
information is not easy and would introduce many problems and errors.
Therefore what you are asking for is unlikely ever to be implemented.

Bob



Re: Clear Screen

2010-12-01 Thread Bob Proulx
Greg Wooledge wrote:
> ... I can see where his question is coming from.  If you're on an
> rxvt or something, with a reasonably-sized scrollback buffer, and
> you do a "tput clear" or its equivalent, you only lose the lines
> that were on the visible part of the terminal at that time -- the
> rest of the scrollback buffer is still there.

XTerm behaves the same, FWIW.  Personally when I want that I just hold
down the Enter key and push everything up and off the top.

> Perhaps the best solution for the original question is to send ${LINES:-24}
> newline characters to the terminal instead of clearing it.  That should
> push the information that's currently visible "up" into the scrollback
> buffer, where he wants it to be.

That seems possible.  Create a routine that does exactly that and then
bind it to the clear-screen function currently attached to C-l.  Then
it would work as you describe.  I have never done that, don't know
how, but it seems reasonable to me. :-)
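For example, something along these lines might do it (an untested
sketch; bind -x runs a shell command when the key is pressed):

  # make C-l push the screen contents into the scrollback instead of
  # erasing them, by printing one newline per screen row
  bind -x '"\C-l": printf "\n%.0s" $(seq 1 "${LINES:-24}")'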

Bob



Re: Recursively calling a bash script goes undetected and eats all system memory

2010-12-09 Thread Bob Proulx
Diggory Hardy wrote:
> With a simple script such as that below, bash can enter an infinite
> loop of eating memory until the system is rendered unusable:
> 
> #!/bin/bash
> PATH=~
> infinitely-recurse

Yes, of course.  It calls itself repeatedly and every copy consumes
just a little more system resources until they are all consumed.

> Save this as infinitely-recurse in your home directory and run - and
> make sure you kill it pretty quick.

This is a simpler variation on the classic fork bomb.

  http://en.wikipedia.org/wiki/Fork_bomb

> OK, so an obvious bug when put like this, though it bit me recently
> (mistakenly using PATH as an ordinary variable and having a script

Have you ever cut yourself on a kitchen knife?  I am sure the answer
is yes because we all have at one time or another.  As a thought
experiment design a kitchen knife in which it would be impossible to
cut yourself.  Take some time to think about it.  You could install
some type of safety guard.  You could use some type of box.  Perhaps
some pull cords to keep hands back in some way.

Now for the important question.  After having designed the safe kitchen
knife that will prevent you from ever cutting yourself, would you
actually use such a knife yourself?

> with the same name as a system program). Would it not be simple to
> add some kind of protection against this — say don't let a script
> call itself more than 100 times?

Detecting and avoiding this is actually not a simple thing to do at
all.  This is really a much harder problem than you think it is and
isn't something that can be fixed in bash.  The best result is simply
learning to avoid the problem.

Bob



Re: cd with multiple arguments?

2010-12-15 Thread Bob Proulx
Marc Herbert wrote:
> If the shell was "real" programming language, then we would not have
> such a massive ban on setuid scripts (I am not saying setuid is a
> great feature, this is not the point here; the point is why is the
> shell the only language under such a ban?)

The shell isn't the only one that introduces a security vulnerability
on most systems when setuid.  All interpreters are the same in that
regard.  On systems where you shouldn't suid scripts then you
shouldn't suid any of the set of sh/perl/python/ruby scripts either.
I think most people would consider at least one of those in that set a
real programming language. :-)

Bob



Re: cd with multiple arguments?

2010-12-16 Thread Bob Proulx
Marc Herbert wrote:
> Bob Proulx a écrit :
> > The shell isn't the only one that introduces a security vulnerability
> > on most systems when setuid.  All interpreters are the same in that
> > regard.  On systems where you shouldn't suid scripts then you
> > shouldn't suid any of the set of sh/perl/python/ruby scripts either.
> > I think most people would consider at least one of those in that set a
> > real programming language. :-)
> 
> None of these other languages has the same quoting complexity. You can
> find some FAQs saying: "Never setuid a shell script, use something
> less dangerous instead like Perl for instance".

I didn't say anything about quoting.  The topic here was security
vulnerabilities of an suid script.  For example the classic race
condition between stat'ing the #! interpreter and launching the
privileged process on the file.  If the system has that behavior then
any #! interpreter (including non-interpreters such as 'ls') are
vulnerable to an attack of slipping a different interpreter in at the
last moment.

That has nothing to do with quoting and is not specific to any
particular interpreter.  All that is required is that it not be
directly machine executable binary code such that exec(2) can't invoke
it directly but must instead invoke the specified program upon it.

If an FAQ reports that using Perl is okay to be setuid in that
environment then I think it is wrong.  Or at least not completely
general and portable because it is certainly dangerous on Unix
systems.  But it has been so many years since I have looked at that
problem that I don't remember the details.  I do remember using the
exploit on HP-UX systems years ago but I don't remember the specific
behavior here of all of the different kernels in popular use.  Please
don't make me expend precious brain cells remembering it.

Bob



Re: cd with multiple arguments?

2010-12-17 Thread Bob Proulx
Marc Herbert wrote:
> Sorry I did not know about this race condition. This is more or less
> the type of problems I had in mind:
> 
>  http://hea-www.harvard.edu/~fine/Tech/cgi-safe.html

In addition to the fine recommendations from the others I wanted to
specifically point out that the problems on that page are not from
launching a setuid script and providing a privilege escalation path.
I just had time to skim it briefly but I didn't see setuid mentioned
there at all.  It is talking about other things.

Instead they stem from a script running unverified user provided
input.  CGI scripts are not normally setuid but are running as the web
server process owner and they usually allow connections from anonymous
attackers on the hostile internet.  By consuming and executing
untrusted input they allow an attack against the web server process
owner.  It is a problem, and a big one, but completely different from
having a local user attack against a setuid script and be able to
gain the privilege of the script owner.

> The number of security recommendations on this page is impractical for
> any programmer but an expert one. This is just too complicated. I see
> this as yet another demonstration that shell scripting is very good
> for interactive use and relatively small system administration tasks but
> does not scale beyond that. Actually, I doubt any language could do
> that. Safety and "scalability" are more often than not opposed to
> convenience.

Using user provided input as commands is a problem no matter what
language you use.

> (OK: maybe Perl is just as bad)

Perl and Ruby and others do provide a taint mode that tracks data flow
through the program.  That does help significantly.  But it *is* still
complicated.  That is why there have been so many successful attacks
in the past.  There isn't any magic sauce to make all of the
complication go away.  Attackers are as clever as you.  It is a
classic battle between armorer and weapons maker.

Bob

There are two types of complicated programs.  Those that were built up
from smaller simpler ones and those that do not work.



Re: Bug in shell: buffer overflow.

2010-12-31 Thread Bob Proulx
n...@lavabit.com wrote:
> echo $((256**8))
> echo $((72057594037927936*128))
> echo $((1))
> etc.

Unless great effort is made to perform math in arbitrary precision

  http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic

all computer arithmetic is done using native machine integers.  As
documented in the bash manual in the ARITHMETIC EVALUATION section:

  ARITHMETIC EVALUATION
   Evaluation is done in fixed-width integers with no check for
   overflow, though division by 0 is trapped and flagged as an
   error.

Your expressions above are overflowing the value of your system's
maximum integer size.  You can read the system's maximum integer size
using getconf.

  $ getconf INT_MAX
  2147483647
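For example, on a typical 64-bit system the second expression above
silently wraps around to a negative number instead of producing the
mathematically correct value:

  $ echo $(( 72057594037927936 * 128 ))
  -9223372036854775808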

Fixed integer calculations are native to the system hardware, very
fast, and sufficient for most tasks.

If you need larger numbers then you would need to use a tool that
supports arbitrary precision arithmetic such as 'bc'.  The bc tool may
be called from the shell to perform your calculations.  It is a
powerful calculator.

  $ man bc

  $ echo $(echo "72057594037927936*128" | bc -l)
  9223372036854775808

Bob



Re: Bug in shell: buffer overflow.

2011-01-01 Thread Bob Proulx
Stephane CHAZELAS wrote:
> Bob Proulx wrote:
> [...]
> > Your expressions above are overflowing the value of your system's
> > maximum integer size.  You can read the system's maximum integer size
> > using getconf.
> >
> >   $ getconf INT_MAX
> >   2147483647
> [...]
> 
> POSIX requires that arithmetic expansion be using at least
> signed longs, so getconf INT_MAX wouldn't necessarily be
> correct.

Ah...  I didn't know that.  Thank you for that correction.

Unfortunately LONG_MAX is not available from getconf.  But on my
system in limits.h LONG_MAX is defined as 9223372036854775807, and at
least one of the previous example calculations was:

echo $((72057594037927936*128))

which according to bc works out to be:

  9223372036854775808

Bob



Re: client server date sync

2011-01-12 Thread Bob Proulx
JNY wrote:
> I do not have access to NTP.
> I have a script which will write the date of my server to a file and scp it
> to my clients.  Is there a way to use the contents of this file to update
> the date of the clients?

I hesitate to help you with this issue since stepping the clock as you
propose is quite bad and has the potential for many problems.
Sometimes programs (think cron) will see the same time twice.
Sometimes they will not see a time at all.  It is a problem.

> something like:
> date 'cat datefile'

Because I can't stop myself from helping anyway I will note that GNU
date has the --set=STRING option that allows you to set the time from
a date string.  If you can ssh (since you mentioned scp) then you can
set the time using the string.  Use an unambiguous format such as that
provided by date -R.

Pull:
  date --date="$(ssh remoteserverhost date -R)"

Push:
  date -R | ssh remoteclienthost date --date='"$(cat)"'

The --date= just prints the time.  After you are happy with the
results change --date= to --set= which actually sets the system clock.

I think it would be much more productive use of your resources to
enable ntp instead.

Bob



Re: Arithmetic expansion peculiarity

2011-01-14 Thread Bob Proulx
Joe Lightning wrote:
> Description:
> Bash doesn't like leading zeros in arithmetic expansion.
> 
> Repeat-By:
> echo $(( 09 - 1 ))

This is bash FAQ E8.

  http://www.faqs.org/faqs/unix-faq/shell/bash/

Leading zeros denote octal.  In octal the valid digits are 0-7 with 8
and 9 outside of the octal range.
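If you need to treat a string with leading zeros as decimal you can
give the base explicitly with the base# prefix.  For example:

  $ echo $(( 10#09 - 1 ))
  8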

Bob



Re: multi-line commands in the history get split when bash is quit

2011-02-05 Thread Bob Proulx
Slevin McGuigan wrote:
> I am unsure whether or not this a bug. Anyhow, it is pretty annoying...
> 
> I use simple multi-line scripts very often in bash and use vi mode
> to edit them. By using
> # shopt -s cmdhist
> # shopt -s lithist
> I can achive multi-line editing. Which is fine.
> 
> But this ability "breaks" as soon as I close bash and open it again.
> 
> Is this a bug?
> Are there suggestions for workarounds?

Are you thinking that setting shopts should in some way be persistent
across program invocations?  That would be pretty annoying and a
severe bug if it were.

Are you forgetting to put your desired configuration into ~/.bashrc
where it is loaded when bash starts?

Are you forgetting to put
  source "$HOME/.bashrc"
into your ~/.bash_profile where it will source your .bashrc when you
log into your account?
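For example, assuming those are the options you want to persist,
something like this in your ~/.bashrc would do it:

  # history options for multi-line commands
  shopt -s cmdhist
  shopt -s lithist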

Bob



Re: [BUG] Bash not reacting to Ctrl-C

2011-02-08 Thread Bob Proulx
Oleg Nesterov wrote:
>   $ sh -c 'while true; do /bin/true; done'

Be careful that 'sh' is actually 'bash'.  It isn't on a lot of
machines.  To ensure that you are actually running bash you should
call bash explicitly.  (At least we can't assume you are running bash
otherwise.)

Is the behavior you observe any different for this case?

  $ bash -c 'while true; do /bin/true || exit 1; done'

Or different for this case?

  $ bash -e -c 'while true; do /bin/true; done'

Bob



Re: [BUG] Bash not reacting to Ctrl-C

2011-02-09 Thread Bob Proulx
Oleg Nesterov wrote:
> Bob Proulx wrote:
> > Is the behavior you observe any different for this case?
> >   $ bash -c 'while true; do /bin/true || exit 1; done'
> > Or different for this case?
> >   $ bash -e -c 'while true; do /bin/true; done'
> 
> The same.

I expected that to behave differently for you because I expected that
/bin/true was being delivered the signal while its exit status was
being ignored in your test case.  If /bin/true caught the SIGINT then
I would expect the loop to continue.  Since you said it was continuing,
that is what I expected was happening.

> I do not know what "-e" does (and I can't find it in man), but how
> this can make a difference?

The documentation says this about -e:

  -e  Exit immediately if a pipeline (which may consist
  of a single simple command), a subshell command
  enclosed in parentheses, or one of the commands
  executed as part of a command list enclosed by
  braces (see SHELL GRAMMAR above) exits with a
  non-zero status.  The shell does not exit if the
  command that fails is part of the command list
  immediately following a while or until keyword,
  part of the test following the if or elif
  reserved words, part of any command executed in
  a && or list except the command following the
  final && or, any command in a pipeline but the
  last, or if the command's return value is being
  inverted with !.  A trap on ERR, if set, is
  executed before the shell exits.  This option
  applies to the shell environment and each
  subshell environment separately (see COMMAND
  EXECUTION ENVIRONMENT above), and may cause
  subshells to exit before executing all the
  commands in the subshell.

Using -e would cause the shell to exit if /bin/true returned a
non-zero exit status.  /bin/true would exit non-zero if it caught a
SIGINT signal.

Bob



Re: [BUG] Bash not reacting to Ctrl-C

2011-02-09 Thread Bob Proulx
Ingo Molnar wrote:
> Could you try the reproducer please?
> 
> Once you run it, try to stop it via Ctrl-C, and try to do this a
> couple of times.

I was not able to reproduce your problem using your (I believe to be
slightly incorrect) test case:

  bash -c 'while true; do /bin/true; done'

It was always interrupted with a single control-C on my amd64 Debian
Squeeze machine.  I expect this means that by chance it was always
bash running as the foreground process and /bin/true never happened to
be there at the right time.

> Do you consider it normal that it often takes 2-3 Ctrl-C attempts to
> interrupt that script, that it is not possible to stop the script
> reliably with a single Ctrl-C?

Since the exit status of /bin/true is ignored I think that test case
is flawed.  At the least it needs to check the exit status of the
/bin/true process.

  bash -c 'while true; do /bin/true || exit 1; done'

Bob



Re: [BUG] Bash not reacting to Ctrl-C

2011-02-09 Thread Bob Proulx
Oleg Nesterov wrote:
> That is why I provided another test-case, let me repeat it:

Sorry but I missed seeing that the first time through or I would have
commented.

>   #!./bash
>   perl -we '$SIG{INT} = sub {exit}; sleep'
>   echo "Hehe, I am going to sleep after ^C"
>   sleep 100

This test case is flawed in that as written perl will eat the signal
and ignore it.  It isn't fair to explicitly ignore the signal.

Instead try this improved test case with corrected signal handling.

  #!/bin/bash
  perl -we '$SIG{INT}=sub{$SIG{INT}="DEFAULT";kill(INT,$$);}; sleep' || exit 1
  echo "Hehe, I am going to sleep after ^C"
  sleep 100
  exit 0

Does this get interrupted after one SIGINT now that it isn't being
caught and ignored?

To be clear I am simply trying to make sure the test cases are not
themselves creating the problem.

Bob



Re: Can someone explain this?

2011-02-11 Thread Bob Proulx
Dennis Williamson wrote:
> Yes, do your quoting like this:
> ssh localhost 'bash -c "cd /tmp; pwd"'

I am a big fan of piping the script to the remote shell.

  $ echo "cd /tmp && pwd" | ssh example.com bash
  /tmp

This has two advantages.  One is that you can pick your shell on the
remote host.  Otherwise it runs as whatever is configured for that
user in the password file.  If the remote host has csh configured then
this overrides it and provides a known shell on the remote end.  Two
is that since this is stdin it avoids having two different shells
parse the command line.  Quoting is then much simplified.
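For longer scripts a quoted here-document works nicely too, since
nothing inside it is expanded by the local shell (same example.com
placeholder as above):

  $ ssh example.com bash <<'EOF'
  cd /tmp && pwd
  EOF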

Bob



Re: how to workaroun 'nl' being added to input-line after 49 characters....

2011-02-13 Thread Bob Proulx
Linda Walsh wrote:
> I'm having a problem, I think,  due to my setting the prompt in
> 'root' mode, to a different color.  This results in me being able to
> enter only 49 characters on the input line before it wraps to the next
> line.

It sounds like you have forgotten to enclose non-printing characters
in your prompt with \[...\].

>#specific to linux console compat emulations
>_CRed="$(echo -en "\033[31m")"  #Red
>_CRST="$(echo -en "\033[0m")"   #Reset
>_CBLD="$(echo -en "\033[1m")"   #Bold
>export _prompt_open=""
>export _prompt_close=">"
>[[ $UID -eq 0 ]] && {
>_prompt_open="$_CBLD$_CRed"
>_prompt_close="#$_CRST"
>}
>export PS1='${_prompt_open}$(spwd "$PWD")${_prompt_close} ';
>
> Is there some easy way to tell bash either to not keep track of what
> it thinks is the screen width (and just allow it to wrap, if that's
> possible), or to reset bash's idea of where it thinks it is on the
> line?

The bash manual says:

  PROMPTING
  \[ begin  a sequence of non-printing characters, which could
 be used to embed a terminal  control  sequence  into  the
 prompt
  \] end a sequence of non-printing characters

Therefore you want something like:

   _CRed="$(echo -en "\033[31m")"  #Red
   _CRST="$(echo -en "\033[0m")"   #Reset
   _CBLD="$(echo -en "\033[1m")"   #Bold
   export _prompt_open=""
   export _prompt_close=">"
   [[ $UID -eq 1000 ]] && {
   _prompt_open="$_CBLD$_CRed"
   _prompt_close="#$_CRST"
   }
   export PS1='\[${_prompt_open}\]$(pwd "$PWD")\[${_prompt_close}\] ';

But I didn't test the above.

Bob



Re: how to workaroun 'nl' being added to input-line after 49 characters....

2011-02-13 Thread Bob Proulx
Linda Walsh wrote:
> Thanks for all the great suggestions on how to do the
> encoding differently -- how about ideas on the input line
> length being truncated?

You seem to have misunderstood.  Each and every one of those responses
has addressed your issue of input line length problems.  The solution
to your line length problems is that you need to wrap non-printable
characters such as your color sequences in quoted square brackets in
order for readline to know that you are emitting non-printable
characters.  Otherwise it will count them as having taken up a column.
This is what all of the discussion around the \[...\] has been about.
Unless I have completely misunderstood your problem but I don't think
so.

Bob



Re: how to workaroun 'nl' being added to input-line after 49 characters....

2011-02-13 Thread Bob Proulx
Linda Walsh wrote:
> But anyway, something else is is awry.
> 
> Now my root prompt, instead of being red, looks like:
> 
> "\[\033[1m\]\[\033[31m\]Ishtar:root#\[\033[0m\] "
> 
> ;-/

That will be due to incorrect quoting.  Which suggestion did you
implement?  There were several.

> What version of bash has the \[ \] characters to keep it from
> counting chars?

All of them.

Try this *exactly* as I post it.  I did test this.  It probably isn't
optimal.  But it does what you are asking.  And it is only a slight
variation from what you are currently using.

   _CRed=$(tput setaf 1)  #Red
   _CRST=$(tput sgr0)   #Reset
   _CBLD=$(tput bold)   #Bold
   _prompt_open=""
   _prompt_close=""
   _prompt=">"
   [[ $UID -eq 0 ]] && {
   _prompt_open="$_CBLD$_CRed"
   _prompt_close="$_CRST"
   _prompt="#"
   }
   PS1='\[$_prompt_open\]$(pwd "$PWD")$_prompt\[$_prompt_close\] ';

Bob



Re: how to workaroun 'nl' being added to input-line after 49 characters....

2011-02-14 Thread Bob Proulx
Greg Wooledge wrote:
>   red=$(tput setaf 1) bold=$(tput bold) reset=$(tput sgr0)
>   PS1='\[$red\]\h\[$reset\]:\[$bold\]\w\[$reset\]\$ '
> 
> I tested that.  It works.

Nicely cleaned up!

>   PS1='\h:\w\$ '

For what it is worth I use something similar:

  PS1='\u@\h:\w\$ '

Bob



Re: bash tab variable expansion question?

2011-02-25 Thread Bob Proulx
gnu.bash.bug wrote:
> Andreas Schwab wrote:
> > Greg Wooledge wrote:
> > > directory again, I typed "cd $OLDPWD/foo" -- and the $OLDPWD
> > > became \$OLDPWD and did not do as I wished.
> >
> > Or use ~-/.
>
> What do you mean?
> 
> ~-/.  is no equal to $PWD

No.  But it is similar to $OLDPWD which is what Greg had written about.
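A quick illustration, since ~- tilde-expands to the value of $OLDPWD:

  $ cd /tmp
  $ cd /etc
  $ echo ~-
  /tmp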

Bob



Re: Problem with open and rm

2011-03-16 Thread Bob Proulx
Barrie Stott wrote:
> The script that follows is a cut down version of one that came from
> elsewhere.

Thank you for your bug report but neither 'open' nor 'rm' have
anything to do with bash.  This is not a bug in the bash shell.  This
mailing list is for bug reports in the bash shell.

> cp /tmp/x.html /tmp/$$.html
> ls /tmp/$$.html
> [ "$DISPLAY" ] && open /tmp/$$.html
> ls /tmp/$$.html
> rm -f /tmp/$$.html

You have invoked open and then not waited for it to finish but instead
removed the file it is going to work on.  That is the problem.  Open
is launched in the background.  Before it has a chance to run, ls and
rm are called in the script.  Eventually, some thousands of CPU cycles
later, the action associated with open is invoked and by that time the
file has been removed by the script.

> My two questions are: 
> Why?
> How can I change the script so that I can both view the file and
> have it removed?

See the man page for the Apple 'open' command.  That is an Apple
specific command not found on other systems.  I do not have it
available to me on my Debian GNU/Linux system for example.  But I see
an online man page for it here:

  
http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/open.1.html

Using that man page I see that open has a -W option.

 -W  Causes open to wait until the applications it opens (or that were
 already open) have exited.  Use with the -n flag to allow open to
 function as an appropriate app for the $EDITOR environment variable.
 

Therefore I infer that if you add the -W option to open that it will
then wait for the application.

Try this:

  #!/bin/sh
  cp /tmp/x.html /tmp/$$.html
  ls /tmp/$$.html
  [ "$DISPLAY" ] && open -W /tmp/$$.html
  ls /tmp/$$.html
  rm -f /tmp/$$.html

I do not have a Mac and have no way to test the above but the
documentation leads me to believe that it will work.

Bob



Re: Bash

2011-04-27 Thread Bob Proulx
David Barnet wrote:
> I am new to Ubuntu and Linux. I am trying to install Bash Shell

If you are new then the easiest way is to use the package manager to
install the software distribution's bash package.  Something like the
following will invoke sudo to get the access privilege and then APT to
install the bash package.

  $ sudo apt-get install bash

But I think it likely that bash is already installed on your system.

> and I get this error and cannot find a solution. When I type make
> install I get this error. If you can give a direction to head in to
> correct this, i would appreciate it. thanks
> 
> mkdir -p -- /usr/local/share/man/man1
> mkdir: cannot create directory `/usr/local/share/man/man1': Permission denied
> make: *** [installdirs] Error 1

On Ubuntu the /usr/local directory tree is writable only by the
system's superuser account.  In order to install you would need to use
the 'sudo' command there.

However if you are not familiar with permissions then I feel it is
safer to install into your $HOME directory instead.  Then you don't
need superuser access.

To set this up add the --prefix=$HOME option to configure.

  $ ./configure --prefix=$HOME
  $ make
  $ make check
  $ make install

That will set it up to install into $HOME/bin/ and $HOME/share/man/man1/
and so forth.  Since this will be in your home directory you will have
permission for the action.  Until you are more familiar with system
users, groups and permissions this is the safer course of action.

Bob



Re: a "script-relative" version of env

2011-05-27 Thread Bob Proulx
E R wrote:
>  - I've noticed that the argument passed to the she-bang interpreter
> always seems to be an absolute path when commands are executed from a
> shell. Are there cases where that will not be true?

You might find reading Sven Mascheck's documentation interesting:

  http://www.in-ulm.de/~mascheck/various/shebang/

Bob




Re: last argument expansion has different output using the sh interpreter

2011-05-27 Thread Bob Proulx
Jacoby Hickerson wrote:
> Although, I am curious, is this is a matter of sh being continually
> updated to exclude all bash extensions or perhaps previously bash
> didn't interpret #!/bin/sh to be the POSIX compliant interpreter?

When bash is invoked as sh then bash complies with the POSIX sh.  That
is both required and desired.

On some GNU/Linux systems /bin/sh is a symlink to /bin/bash and bash
will support running as a POSIX sh.  But this isn't universal.  On
other systems /bin/sh is an actual native POSIX sh, on others it is a
symlink to /bin/ksh or /bin/zsh or /bin/dash or others.  But in all
cases /bin/sh is supposed to be a POSIX sh and conform to the
standard.  Bash tries to be compliant but some differences will creep
in regardless.  Doing so may cause problems for people who unknowingly
use features on one system that are not available on other systems.
Generally the wisdom of years is to be strict in what you write rather
than generous in what you rely on.  It makes portability easier.

Having found that the bash specific feature isn't available in /bin/sh
this is a good thing for you since now your script will either need to
specify that it needs bash explicitly or you could use portable only
constructs and continue to use /bin/sh.  Personally I use bash for my
command line but /bin/sh and portable constructs for shell scripts.

Bob




Re: Copy in bash

2011-06-07 Thread Bob Proulx
vin_eets wrote:
> I am using windows 

Then you will need to be aware of Windows-specific conventions.

> I wanna copy a file in folder from one location to another folder in another
> drive.
> 
> Its working fine but not when file name contain spaces i.e. if the filename
> is a b c
> 
> What would be script to acheive this functionality

You will need to quote the filename to preserve the spaces.

  $ cp "a b c" /some/other/directory/

Bob



Re: Yet Another test option

2011-07-03 Thread Bob Proulx
Bruce Korb wrote:
> I wouldn't know.  I use it myself a bit and I am now playing with
> Lustre fs code where they get it wrong because it is inconvenient to
> get it right.  After seeing that, I thought I'd suggest it.
> ...
> P.S. this check is really for any version below 2.6.27:
> 
> - case $LINUXRELEASE in
> - # ext4 was in 2.6.22-2.6.26 but not stable enough to use
> - 2.6.2[0-9]*) enable_ext4='no' ;;
> - *)  . ;;
> 
> and might have been done correctly with a version compare operator.

Note that on Debian systems and derivatives you can use dpkg with the
--compare-versions option to perform this test.

  $ set -x
  $ dpkg --compare-versions 2.6.27 le $(uname -r) ; echo $?
  ++ uname -r
  + dpkg --compare-versions 2.6.27 le 2.6.39-2-amd64
  + echo 0
  0

  dpkg --compare-versions 2.6.27 le $(uname -r) || enable_ext4='no'

I seem to recall a similar command on Red Hat based systems but off
the top of my head the details escape me.
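Another option that does not depend on the packaging system is GNU
sort's version ordering, if your sort is new enough to have -V (a
sketch, untested against the build system in question):

  lowest=$(printf '%s\n' 2.6.27 "$(uname -r)" | sort -V | head -n1)
  test "$lowest" = "2.6.27" && echo "kernel is 2.6.27 or newer"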

Bob



Re: Built-in printf Sits Awkwardly with UDP.

2011-07-19 Thread Bob Proulx
Ralph Corderoy wrote:
> ...  But a regular file ./foo on disk does look different and it
> still seems odd that
> printf '\n\n\n\n\n\n\n\n\n\n\n\n' >foo
> does a dozen one-byte write(2)s.

But the only reason you know that there is a long string of newlines
is that your eye is looking over the entire string.  It has all of the
data all at once.  You have a non-causal relationship with the data
because you are looking over all of the past history of the data.  By
the time we see it here it has all already happened.  You are not
looking at the data as it arrives.

In other words...  Are you trying to suggest that a program should try
to look ahead at future characters?  Characters that may not have been
written yet?  That would be a cool trick of time travel to be able to
know what is going to happen in the future.  When you as the program
see a newline character, do you know whether the next character, which
has not yet been generated, will be a newline?  Should it wait for a
timeout until characters have stopped appearing?

It really isn't a problem that can always be solved perfectly in every
case.

Bob



Re: How to do? Possible?

2011-07-25 Thread Bob Proulx
Linda Walsh wrote:
> DJ Mills wrote:
> >> Because a subshell cannot access the global variables of the parent.
> > 
> > A subshell can access its parent's variables.  foo=bar; ( echo "$foo" )
> > 
> > A sub process cannot, unless the variables are exported.  It does
> > not sound like you need to do so here.
>
>   I'm not sure about your terminology -- a subshell is a
> subprocess.  By my understanding, "{}" defines a complex statement
> and "()" defines both a subshell which is in a separate process,

Yes, but it is a fork(2) of the parent shell and all of the variables
from the parent are copied along with the fork into the child process
and that includes non-exported variables.  Normally you would expect
that a subprocess wouldn't have access to parent shell variables
unless they were exported.  But with a subshell a copy of all
variables are available.
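A small demonstration of the difference (a subshell sees the copied
variable, a separate process only sees exported variables):

  $ foo=bar
  $ ( echo "subshell: $foo" )
  subshell: bar
  $ bash -c 'echo "child process: $foo"'
  child process: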

Bob



Re: How to do? Possible?

2011-07-25 Thread Bob Proulx
Linda Walsh wrote:
> I didn't know why it behaved differently, but as you informed me
> the difference is 'one's well-defined, and the other is not, I can
> see why there 'could' be a difference... ;-)
> 
> (which of course could change tomorrow, I suppose..)

Not the second, well-defined case.  It can't change without being in
conflict with the standards.  You would always be safe using it in
any standard-conforming shell.

If you are still not convinced then consider these next two examples.

  #!/bin/sh
  printfoovar() { echo $foo ;}
  foo="bar"
  ( printfoovar )

  #!/bin/sh
  printfoovar() { eval echo \$$foo ;}
  bar="hello"
  foo="bar"
  ( printfoovar )

Bob



Re: How to do? Possible?

2011-07-25 Thread Bob Proulx
Dennis Williamson wrote:
> Linda Walsh wrote:
> > GLOBAL="hi there"
> > {foo=GLOBAL echo ${!foo}; }

Note that this tickles a problem since foo isn't set before ${!foo} is
expanded.  Use the following with a semicolon instead:

  GLOBAL="hi there"
  { foo=GLOBAL; echo ${!foo}; }

> You had a missing dollar sign.
>
> I'm assuming you meant:
> 
> GLOBAL="hi there"
> {foo=$GLOBAL echo ${!foo}; }

Er..  No.  There is a missing semicolon as described above and in
other messages in the thread but the dollar sign is intended to be
excluded so that foo contains the string "GLOBAL" and ${!foo} will
indirect through it.

The magic is the ${!parameter} syntax.  If you look at the bash
documentation for ${parameter} you will find the following
documentation.  The second paragraph explains the '!' part and is the
part needed to understand the indirection.

   ${parameter}
  The value of parameter is substituted.  The braces are
  required when parameter is a positional parameter with
  more than one digit, or when parameter is followed by a
  character which is not to be interpreted as part of its
  name.

   If the first character of parameter is an exclamation point
   (!), a level of variable indirection is introduced.  Bash uses
   the value of the variable formed from the rest of parameter as
   the name of the variable; this variable is then expanded and
   that value is used in the rest of the substitution, rather than
   the value of parameter itself.  This is known as indirect
   expansion.  The exceptions to this are the expansions of
   ${!prefix*} and ${!name[@]} described below.  The exclamation
   point must immediately follow the left brace in order to
   introduce indirection.

And so as you can see from this ${!foo} expands the value of foo to be
the string "GLOBAL" and then continues using that as an indirection
and expands the value of $GLOBAL.  With echo ${!foo} it would be very
similar to saying 'eval echo \$$foo'.  It would dynamically indirect
through and pick up the value of GLOBAL.

You can tell the difference by changing the value of GLOBAL between
echo statements.

Static direct binding:

  #!/bin/bash
  GLOBAL="one"
  foo=$GLOBAL
  { echo ${foo}; }
  GLOBAL="two"
  ( echo ${foo} )

Emits:

  one
  one

Dynamic indirect binding:

  #!/bin/bash
  GLOBAL="one"
  foo=GLOBAL
  { echo ${!foo}; }
  GLOBAL="two"
  ( echo ${!foo} )

Emits:

  one
  two

And the difference between using {...} and (...) is back to the
original discussion of the thread that subshells have access to
copies of all parent variables including variables that were not
explicitly exported.

Hope that helps to clarify things.

Bob



Re: The mailing list software interfered with my content

2011-08-03 Thread Bob Proulx
Eric Blake wrote:
> What, me urgent? wrote:
> >In my most recent post, the mailing list software replaced the string
> >"address@hidden" for a section of code snippet!
> 
> Not the list software, but merely the web archiver that you are
> viewing the mail in.  If you are actually subscribed to the list,
> rather than viewing a web archiver, your post came through just
> fine.  ...

And if the mangling isn't there then people complain about it not
mangling email addresses.  Personally I don't like mangling at all.
Let's face it, email addresses are useless if not known and once known
cannot be hidden.  But just the same I have experienced that people
complain vehemently if you don't mangle addresses in the web archive.
And then problems such as the above occur.  Sigh.  It is impossible to
please both sets of people.

Bob



Re: bug: return doesn't accept negative numbers

2011-08-05 Thread Bob Proulx
Eric Blake wrote:
> Linda Walsh wrote:
> >I guess I don't use negative return codes that often in shell, but
> >I use them as exit codes reasonably often.

For all of the reasons Eric mentioned, however, you won't ever
actually be able to see a negative exit code.

> >'return' barfs on "return -1"...
> >
> >Since return is defined to take no options, and ONLY an integer,
> >as the return code, it shouldn't be hard to fix.
> 
> According to POSIX, it's not broken in the first place.  Portable
> shell is requires to pass an unsigned decimal integer, no greater
> than 255, for defined behavior.
> http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#return

The reason for that is compatibility.  Originally the exit status was
returned in a 16 bit integer with the upper 8 bit byte holding the
exit code and the lower 8 bit byte holding the termination status.
The macros WEXITSTATUS, WTERMSIG, et al. were used to extract that
information.  (I don't
know if all of that is still true but that is the way it used to be.)
And so in the C programming man page for exit(3) it says:

  man 3 exit

   The exit() function causes normal process termination and the
   value of status & 0377 is returned to the parent (see wait(2)).

Note the "status & 0377" part.  That is an octal 0377 or 255 decimal
or 0xFF hexadecimal or just the lower 8 bits.  Trying to return a
value that won't fit into 8 bits just isn't possible.  The traditional
man pages always described the size as an 8 bit value and specified
the range as 0 through 255 implying that it was an unsigned value.
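A quick demonstration of that masking; bash passes along whatever you
give it, but the parent process only ever sees the low 8 bits:

  $ bash -c 'exit 257'; echo $?
  1
  $ bash -c 'exit -1'; echo $?
  255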

  Historical documentation for exit(3)
  
http://www.freebsd.org/cgi/man.cgi?query=exit&apropos=0&sektion=3&manpath=2.9.1+BSD&format=html

Now you might argue that -1 is always going to be all ones in two's
complement.  Sure.  But traditionally it has always been unsigned.

Bob



Re: bug: return doesn't accept negative numbers

2011-08-07 Thread Bob Proulx
Linda Walsh wrote:
>   How about portable code using:
> 
>   (exit -1); return
> 
>   It's ugly, but would seem to be the portable/official way to
> do this.

Exit codes should be in the range 0-255.

Bob



Re: bug: return doesn't accept negative numbers

2011-08-07 Thread Bob Proulx
Linda Walsh wrote:
> Bob Proulx wrote:
> >Exit codes should be in the range 0-255.
> ---
>   I suppose you don't realize that 'should' is a subjective opinion that
> sometimes has little to do with objective reality.

Sigh.  Okay.  Keep in mind that turn about is fair play.  You are
giving it to me.  Please be gracious on the receiving end of it.  You
do realize that you should comply with documented interfaces.  Why?
Because that is the way the interface specification is documented.  If
you want things to work easily then most generally the easiest way is
to follow the documented practice.  Unless you have an exceptional
reason for doing something different from the documented interface
then actively doing anything else is actively trying to be broken.

>   It is true, that when you display a signed char, non-signed-
> -extended, in a 16, 32 or 64 bit format, you see 255, and certainly, no
> one (including myself) would suggest 'sign extending' an error code, as that
> would make it "other than what it was".  But just because it displays
> only as 0-255 under most circumstances is no reason for ignoring the
> implementation in all the main languages when the standard was written
> as accepting signed short integers.

You appear to be confusing programming subroutine return values with
program exit values.  Those are two completely different things.  I
agree they are related concepts.  We often try to force fit one into
the other.  But just the same they are different things.  In the
immortal words of Dr. Egon Spengler, "Don't cross the streams."

On the topic of exit values...  Back in the original V7 Unix the
exit(2) call was documented simply as:

  The low-order 8 bits of status are available to the parent process.

And that is all that it said.  And the code was written in assembly
without reference to whether it should be signed or not.  The oldest
BSD version of the sources I have access to was a little more helpful
and said:

  The low-order 8 bits of status are available to the parent
  process. (Therefore status should be in the range 0 - 255)

Since V7 dates to 1979 and the BSD version I looked at dates from that
same era, it has been this way for a very long time.  I
grep'd through the V7 sources and found not one instance where a
negative value was returned.  (However there are some instances where
no value was specified and the accumulator value was returned.)

>   The same 'open group' that wrote posix wrote a C standard , and they
> defined it's exit val as taking a  short int as well   So they were
> inconsistent.

People sometimes read the POSIX standard today and think it is a
design document.  Let me correct that misunderstanding.  It is not.
POSIX is an operating system non-proliferation treaty.  At the time it
was created there were different and divergent systems that were making
it impossible to write code for one system that would work on another
system.  They were all Unix systems.  But they weren't the same Unix
system.  Code from one system would break on another system.

What started out as small variances created very large problems.
People found it very difficult to write portable code because these
differences were getting out of hand.  POSIX was intended to document
the existing behavior so that if you followed the specification then
you would have some hope of being able to run successfully on multiple
systems.  And it was intended to curb the differences from becoming
greater than they were already.  It tries to rein in the divergent
behaviors so that there won't be more differences than already exist.

This is what POSIX says about it:

  void exit(int status);

  The value of status may be 0, EXIT_SUCCESS, EXIT_FAILURE, or any
  other value, though only the least significant 8 bits (that is,
  status & 0377) shall be available to a waiting parent process.

It says that because that was the existing practice common to all Unix
systems at the time.  (My understanding is that VMS uses a different
exit status convention.  And so that is the reason for using the
EXIT_SUCCESS and EXIT_FAILURE macros so that the same source could
compile on VMS and still function correctly there.)

My finishing gratuitous comment is that unfortunately people are now
in the present day using POSIX as a design document.  And design by
committee never produces great results.  But it is what we have to
work with today because there isn't any better alternative.

Bob



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Bob Proulx
Jon Seymour wrote:
> Has anyone ever come across an equivalent to Linux's readlink -f that
> is implemented purely in bash?
> 
> (I need readlink's function on AIX where it doesn't seem to be available).

Try this:

  ls -l /path/to/some/link | awk '{print$NF}'

Sure it doesn't handle whitespace in filenames but what classic AIX
Unix symlink would have whitespace in it?  :-)

Bob



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Bob Proulx
Jon Seymour wrote:
> readlink -f will fully resolve links in the path itself (rather than
> link at the end of the path), which was the behaviour I needed.

Ah, yes, well, as you could tell that was just a partial solution
anyway.

> It seems cd -P does most of what I need for directories and so
> handling things other than directories is a small tweak on that.

You might try cd'ing there and then using pwd -P to get the canonical
directory name.  I am thinking something like this:

  #!/bin/sh
  p="$1"
  dir=$(dirname "$p")
  base=$(basename "$p")
  physdir=$(cd "$dir"; pwd -P)
  realpath=$(cd "$dir"; ls -l "$base" | awk '{print$NF}')
  echo "$physdir/$realpath" | sed 's|//*|/|g'
  exit 0

Again, another very quick and partial solution.  But perhaps something
good enough just the same.

Bob



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Bob Proulx
Jon Seymour wrote:
> I always use sed for this purpose, so:
> 
>$(cd "$dir"; ls -l "$base" | sed "s/.*->//")
> 
> But, with pathological linking structures, this isn't quite enough -
> particularly if the target of the link itself contains paths, some of
> which may contain links :-)

Agreed!  Symlinks with arbitrary data, such as holding small shopping
lists in the target value, are so much fun.  I am more concerned that
arbitrary data such as "->" might exist in there more so than
whitespace.  That is why I usually don't use a pattern expression.
But I agree it is another way to go.  But it is easier to say
whitespace is bad in filenames than to say whitespace is bad and oh
yes you can't have "->" in there either.  :-)

Bob



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Bob Proulx
Jon Seymour wrote:
> readlink_f()
> {
> local path="$1"
> test -z "$path" && echo "usage: readlink_f path" 1>&2 && exit 1;

An extra ';' there that doesn't hurt but isn't needed.

> local dir
> 
> if test -L "$path"
> then
> local link=$(ls -l "$path" | sed "s/.*-> //")

I would be inclined to also look for a space before the " -> " too.
Because it just is slightly more paranoid.

local link=$(ls -l "$path" | sed "s/.* -> //")

> if test "$link" = "${link#/}"
> then
> # relative link
> dir="$(dirname "$path")"

As an aside you don't need to quote assignments.  They exist inside
the shell and no word splitting will occur.  It is okay to assign
without quotes here and I think it reads slightly better without.

dir=$(dirname "$path")

> readlink_f "${dir%/}/$link"
> else
> # absolute link
> readlink_f "$link"
> fi
> elif test -d "$path"
> then
> (cd "$path"; pwd -P) # normalize it
> else
> dir="$(cd $(dirname "$path"); pwd -P)"
> base="$(basename "$path")"

Same comment here about over-quoting.  If nothing else it means that
syntax highlighting is different.

dir=$(cd $(dirname "$path"); pwd -P)
base=$(basename "$path")

> echo "${dir%/}/${base}"
> fi
> }

And of course those are just suggestions and nothing more.  Feel free
to ignore.

Note that there is a recent movement to change that dash greater-than
combination into a true unicode arrow graphic emitted by 'ls'.  I think
things paused when there were several different bike shed suggestions
about which unicode arrow symbol people wanted there.  I haven't seen
any actual movement for a while and I think that is a good thing.

Bob



Re: Coproc usage ... not understanding

2011-08-09 Thread Bob Proulx
Chet Ramey wrote:
> Linda Walsh wrote:
> > Ideas?
> 
> You're probably running into grep (and sort, and sed) buffering its
> output.  I haven't been able to figure out a way past that.

This may be a good point to mention this reference:

  http://www.pixelbeat.org/programming/stdio_buffering/

And the 'stdbuf' command that came out of it.

  
http://www.gnu.org/software/coreutils/manual/html_node/stdbuf-invocation.html#stdbuf-invocation
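A typical use is to force line buffering on each stage of a pipeline,
for example something like:

  tail -f logfile | stdbuf -oL grep pattern | stdbuf -oL sed 's/foo/bar/'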

Bob



Re: Coproc usage ... not understanding

2011-08-10 Thread Bob Proulx
Greg Wooledge wrote:
> Linda Walsh wrote:
> > Bob Proulx wrote:
> > >This may be a good point to mention this reference:
> > >
> > >  http://www.pixelbeat.org/programming/stdio_buffering/
> 
> > Does it only work with gnu programs?   I.e. how would they know to 
> > not buffer
> 
> Sounds like the GNU version of "unubuffer" from the Tcl expect package.
> http://expect.sourceforge.net/

Similar in result.  Different in implementation.

The unbuffer expect script sets up a tty around the called program so
that instead of a pipe the program detects a tty.  Programs that
normally do not buffer when writing to a tty will then work as if they
were interactive even if they were not.  Wrapping commands is a core
functionality of expect scripts and so unbuffer is using it for the
side-effect.

The stdbuf utility works by setting up an LD_PRELOAD library,
libstdbuf.so, that intercepts the libc stdio calls and overrides their
buffering.

Bob



Re: Why bash command "cd //" set path "//" ?

2011-08-26 Thread Bob Proulx
Andrey Demykin wrote:
> Why bash command "cd //" set path "//" ?
> 
> double slashes only in first position 
> 
> I found this in all version of the bash.
> Excuse me , if it is not a bug.
> Possibly it is a feature.

It is a feature.  In the old days of the Apollo OS two leading slashes
identified a network path.  In the consolidation of operating system
feature sets that feature was preserved and is still possible
depending upon the system.  It is allowed but not required.  Your
system may not use the feature but others do.  More recently Cygwin is
making good use of it in the Cygwin environment for the same purpose.

Therefore the standards say that two leading slashes are significant
and are handled specially.  It might be a network path.  But three or
more leading slashes are not significant and anywhere else in the path
multiple slashes are not significant.

Bob



Re: date command

2011-10-04 Thread Bob Proulx
s.pl...@juno.com wrote:
> Bash Version: 4.1
> 
> When using date command with -d option, if the date is between
> "2010-03-14 02:00" and "2010-03-14 02:59" inclusive, it gives an
> "invalid date" error.  You can test this with the following command:
> echo $(date -d "2010-03-14 02:00" +%s)
> Dates outside this range work fine.

First, 'date' is a coreutils command, not a 'bash' command.  Bugs in
'date' should be reported to the bug-coreutils mailing list not the
bug-bash mailing list.

Second, what you are seeing is almost certainly due to daylight saving
time being in effect and skipping over that time interval.  In my
US/Mountain timezone those dates are invalid and do not exist.

  $ TZ=US/Mountain date -d '2010-03-14 02:59'
  date: invalid date `2010-03-14 02:59'

The above is correct behavior since by Act of Congress DST changed
then and skipped that hour.

  $ zdump -v US/Mountain | grep 2010
  US/Mountain  Sun Mar 14 08:59:59 2010 UTC = Sun Mar 14 01:59:59 2010 MST isdst=0 gmtoff=-25200
  US/Mountain  Sun Mar 14 09:00:00 2010 UTC = Sun Mar 14 03:00:00 2010 MDT isdst=1 gmtoff=-21600
  US/Mountain  Sun Nov  7 07:59:59 2010 UTC = Sun Nov  7 01:59:59 2010 MDT isdst=1 gmtoff=-21600
  US/Mountain  Sun Nov  7 08:00:00 2010 UTC = Sun Nov  7 01:00:00 2010 MST isdst=0 gmtoff=-25200

See this FAQ entry for more information:

  
http://www.gnu.org/software/coreutils/faq/#The-date-command-is-not-working-right_002e

Bob



Re: Time delay on command not found

2011-10-10 Thread Bob Proulx
Bill Gradwohl wrote:
> when I typo a command, bash comes back with a command not found but
> hangs the terminal for way too long.

When you try to launch a command that does not exist anywhere on PATH
then bash must search every directory component of PATH in order to
look for the command.

> How do I get rid of the delay. I want it to release the terminal
> immediately.

Does your PATH contain directories on NFS or other networked
fileservers?  If so then those are typical sources of delay.

To easily see your directories in PATH:

  echo $PATH | tr : "\012"

Bob



Re: problem with tail -f that need to add to each line constant string at the beginning of each line

2011-10-29 Thread Bob Proulx
dan12341234 wrote:
> im running 4 scripts that perform ssh and tail -f on a file exist on 4
> servers and display on the screen.
> (1 script per servers)
> 
> ssh server1 tail -f /var/log/apache/access_log
> ssh server2 tail -f /var/log/apache/access_log
> ssh server3 tail -f /var/log/apache/access_log
> ssh server4 tail -f /var/log/apache/access_log
> 
> the problem is that the display dosent show any identification from which
> server each line
> 
> what i need is something with an echo servername, before each line printed
> to the screen

There are many specialized programs to do exactly what you are asking
such as dsh (distributed shell) and other tools.  You might want to
investigate those.

But you can do what you want by using sed to prepend to the line.

  ssh -n server1 'tail -f /var/log/syslog | sed --unbuffered "s/^/$(hostname): /"'
  ssh -n server2 'tail -f /var/log/syslog | sed --unbuffered "s/^/$(hostname): /"'

You might consider applying the commands as standard input instead.  I
think it is easier to avoid quoting problems that way.

  echo 'tail -f /var/log/syslog | sed --unbuffered "s/^/$(hostname): /"' | ssh -T server1

Bob



Re: problem with tail -f that need to add to each line constant string at the beginning of each line

2011-10-30 Thread Bob Proulx
dan sas wrote:
> what about if i want to run both commands simultaneous and send the
> output to a file how can i do that

This is getting further off topic for the bug bash mailing list.  It
really isn't too much about bash but just general help with how things
work.  A better place to ask these types of questions would be the
help-gnu-ut...@gnu.org mailing list.  If you have more questions in
the future let's please have the discussion there.

You could put the commands in the background.  To put commands in the
background use the '&' character at the end of the command line.  

   If  a  command  is terminated by the control operator &, the shell exe-
   cutes the command in the background in a subshell.  The shell does  not
   wait  for  the command to finish, and the return status is 0.  Commands
   separated by a ; are executed sequentially; the shell  waits  for  each
   command  to terminate in turn.  The return status is the exit status of
   the last command executed.

And also redirect the output to a file.

  ssh -n server1 'tail -f /var/log/syslog | sed --unbuffered "s/^/$(hostname): /"' >>syslog.tail.out &
  ssh -n server2 'tail -f /var/log/syslog | sed --unbuffered "s/^/$(hostname): /"' >>syslog.tail.out &

By this point it looks very much like you are trying to re-invent the
remote logging feature of a syslogd.  Because I now expect a new
question about how to start and restart this connection automatically.
(Am I right? :-)  Instead please investigate using one of the syslog
programs that does exactly that already.  See for example 'rsyslogd'
which supports remote logging.

Good luck!
Bob



Re: script to provide responses to prompts (badly written C program)

2011-10-31 Thread Bob Proulx
Fernan Aguero wrote:
> please accept my apologies, as this is my first post here. I'm sure
> I'm asking a very stupid questions, but I'm kind of stuck with this
> ...

The bug-bash mailing list is not quite the right place for this type
of question.  Better to ask it in help-gnu-utils instead.  The
bug-bash list is for all aspects concerning the development of the
bash shell.  That sort'a overlaps with the *use* of the shell and
shell *utilities* but not really.  The developers would be overwhelmed
with outside discussion that doesn't have anything to do with the
development of the shell.

I am going to send an answer to you but in another message.  I will CC
the help-gnu-utils mailing list.  Let's move any discussion and
followup to help-gnu-utils instead of here.

Bob



Re: bash-completion between do and done

2011-11-04 Thread Bob Proulx
Peng Yu wrote:
> Current, bash doesn't do command completion between do and done (for loop).
> I'm wondering if this feature can be added.

Of course bash does do command completion between do and done.  Can
you give an exact example test case?  On what version of bash?

Bob



Re: bash-completion between do and done

2011-11-05 Thread Bob Proulx
Chris F.A. Johnson wrote:
>I can confirm it on 3.05b, 3.0 and 4.2:
>
> while [ ${n:=0} -lt 5 ]
> do
>   se
>
>   All I get is a beep.

Hmm...  It is still completing.  But not command completion.  It is
doing filename completion instead of command completion.  It is out of
sync with the current action point.

Of course I was trying filename completion.  The simpler case is:

  while false; do l

But other completions do work fine.

  while false; do ls --so
  while false; do ls ~/.ba

As a workaround you can force command completion using 'C-x !' and
'M-!'.

Bob



Re: exit status issue

2011-11-18 Thread Bob Proulx
Dallas Clement wrote:
> Geir Hauge wrote:
> > Add ''set -x'' at the start of the function and examine the output
> > to see if it actually runs touch from PATH.
> 
> The strace output is showing that the correct 'touch' is being executed.

It would be a lot easier to use the 'sh -x' trace than using strace
for seeing what is going on in your script.  Try it first and see
what commands are being run.  Because you are setting CHK_RESULT=1 as a
default all that it would take for your script to exit 1 would be to
not be setting that value to zero at a time that you think it should.

  sh -x ./scripttodebug

Your script is complicated enough that it isn't immediately obvious by
a casual inspection whether it is correct or not.  Almost certainly in
these cases it is an error in the programming of the script.  I would
much sooner suspect it than bash at this point.

>TMP=`grep $1 /proc/mounts|awk '{print $1}'`

You do not show how this is called but $1 option argument to grep may
be insufficiently quoted.
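
A more defensively quoted version might look something like this (just
a sketch; the -- keeps a leading dash in $1 from being taken as a grep
option):

  TMP=$(grep -- "$1" /proc/mounts | awk '{print $1}')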

>echo "*** fsaccesstest failed to unmount $1. ***" >/dev/console

Writing directly to /dev/console is a little harsh.  You might
consider using the 'logger' program and writing to the syslog instead.

Your script makes me think that you might be using an NFS automounter
and trying to correct nfs client problems.  (shrug)

Bob



Re: exit status issue

2011-11-18 Thread Bob Proulx
DJ Mills wrote:
> Bob Proulx wrote:
> >  sh -x ./scripttodebug
> 
> I'm guessing you mean bash -x, not sh -x.  Two different shells.

It is a bash list so I probably should have said bash just to be
politically correct.  But the script doesn't use any bash specific
constructs so sh should be fine.  Saying sh is habit for me since I
always use sh when possible.

Bob




Re: help-b...@gnu.org mailing list

2011-11-21 Thread Bob Proulx
Chet Ramey wrote:
> Clark J. Wang wrote:
> > Chet Ramey wrote:
> > I just created help-b...@gnu.org .  I hope
> > that it becomes the list where
> > folks ask questions about bash and shell programming.  Please socialize
> > its existence and subscribe if you like.
> > 
> > So the "About bug-bash" description should be updated accordingly.
> 
> Maybe, but I don't admin that list, and it's not attached to the bash
> project on Savannah.

Hmm...  That's interesting.  It isn't attached to Savannah.  I will
try to figure out why not.

In the meantime this is what I changed the description of bug-bash to
say.  The last paragraph saying "There are no other GNU mailing lists
or gnUSENET newsgroups for BASH." has been removed.  The last two
paragraphs in the below were added to replace it.

  This list distributes, to the active maintainers of BASH (the Bourne
  Again SHell), bug reports and fixes for, and suggestions for
  improvements in BASH.  User discussion of BASH also occurs here.

  Always report the version number of the operating system, hardware,
  and bash (flag -version on startup or check the variable $BASH_VERSION
  in a running bash).

  The help-b...@gnu.org is a different list for discussion of bash and
  shell programming.

  General questions and discussion about GNU utilities may also be
  conducted on help-gnu-ut...@gnu.org.

And in a similar way I added the following description to help-bash.

  Bash is the GNU Project's Bourne Again SHell.  This list is for
  asking questions about bash and shell programming.  Also available
  is htlp-gnu-ut...@gnu.org for general questions about the shell
  utilities.

Any comments?

Bob



Re: help-b...@gnu.org mailing list

2011-11-22 Thread Bob Proulx
Clark J. Wang wrote:
> >  This list distributes, to the active maintainers of BASH (the Bourne
> >  Again SHell), bug reports and fixes for, and suggestions for
> >  improvements in BASH.  User discussion of BASH also occurs here.
> 
> My understanding is Chet wants "User discussion of BASH" to go to
> help-b...@gnu.org so I think "User discussion of BASH also occurs here."
> should be removed from here. Or people will be confused which list they
> should post to.

Good comment.  I removed that sentence.  When I read that I saw the
"of BASH" part and that seemed reasonable.  The problem is that people
have different definitions.  If they were really talking about bash
itself then that would be fine.  Discussion of bash itself.  But
instead they are talking about all manner of other things that are only
vaguely related to bash.

In addition to that removal I added another.  To encourage use of the
new list help-bash I added the second sentence to this paragraph as
encouragement.

  The help-b...@gnu.org is a different list for discussion of bash and
  shell programming.  If in doubt, post there first.

And in case anyone is curious as to what text we have been talking
about it is the text that shows up here, and related places:

  https://lists.gnu.org/mailman/listinfo/bug-bash

Bob



Re: exit status issue

2011-11-22 Thread Bob Proulx
Dallas Clement wrote:
> Okay, I simplified the script slightly, fixed the quoting and re-ran
> with set -x.  Here is the script and the output for the failure
> scenario.

Much more useful!  Thanks.

> + touch /mnt/array1/.accesstest
> + CHK_RESULT=1

It looks to me that touch is failing and reporting the failure and
that bash is handling it correctly.

> + touch /mnt/array1/.accesstest
> + CHK_RESULT=0

And then on a subsequent pass touch is reporting success.

This doesn't (yet) look like a problem with bash.

> The purpose of this function is simply to try and create or modify a
> test file on a RAID share.  It's just a periodic integrity check.

If I ever had even one single failure on a raid filesystem I would
start replacing hardware.  I wouldn't be trying to retry.  I wouldn't
be trying to umount and mount it again.  ANY failures would mean
something so severely broken that I would consider the system unusable
until repaired.

On the other hand, do you really mean an NFS mounted filesystem?
Perhaps using the nfs automounter?  If so then say so.  Don't say
"share" which implies a MS Windows SMB / CIFS network mount.  The NFS
client subsystem is notoriously bad.  The nfs automounter is worse.  I
am guessing you probably really mean that you are using an nfs mounted
filesystem using the automounter.  Pretty much everyone in that
situation has tried to recover from problems with it at one time or
another.

> The set -x output shows that the 'touch' command failed, but it
> doesn't show why.

That sounds like a bug in touch.  If there is an error then it should
emit an error message.

Perhaps you should try an alternate command to create the file.  You
could use bash itself.

  : >> "$1"/.accesstest

Or you could use a helper command.  Using a different program other
than 'touch' may cause it to emit a better error message.

Bob



Re: exit status issue

2011-11-22 Thread Bob Proulx
Dallas Clement wrote:
> > This doesn't (yet) look like a problem with bash.
> 
> Admittedly bash seems to do the right thing if you go by the set -x
> execution trace.  If you go by that, it would indeed seem that the
> call to touch is failing.

I have a higher level of trust in -x output since I haven't found any
problems with it.  But I have found a lot of programs that do not
handle file errors correctly.  Those are way too common.  Therefore I
am much more suspicious of them.  For example:

  $ perl -le 'print "hello";' >/dev/full
  $ echo $?
  0

Perl ate that error and never said anything about it.  One of a very
many programs with this problem.  BTW...  Bash gets it right.

  $ echo "hello" >/dev/full
  bash: echo: write error: No space left on device
  $ echo $?
  1

> But the strace output tells a different story.  According to it, the
> system calls that touch makes are succeeding.

Your test case is still too complex to be passed on to others.

> In fact, I created my own 'touch' and 'stat' programs, both of which
> log any errors.

That's excellent!

> I never see any errors logged, so I'm pretty sure they are getting
> executed which jives with what I'm seeing in the strace output.

Since it is your own code you would want to see that they logged the
successful result of the operation and also logged that they were
calling exit(0) and at that time show that bash logged them as a
non-zero exit.

For that test you could use 'false' to simulate the unsuccessful program.

> Is this a lower level scheduling problem?  It certainly could be.  I'm
> testing on Linux 2.6.39.4.   I'll rollback to an older kernel, libc,
> and libpthread and see if that makes any difference.

Scheduling?  Now you are scaring me.  I think you are going off into
the weeds with that idea.  I can't imagine how this can be anything
other than simple inline flow of control through your script.  Your
script as you have shown to us does not have any background
asynchronous processing.  Therefore it will be executing all of the
commands in order.  There isn't any out-of-order execution magically
happening.

I see by this that you are grasping at straws.  You are "just trying
things".  I have empathy for you and your troubles.  But that isn't
the scientific method.

  http://en.wikipedia.org/wiki/Scientific_method#Elements_of_scientific_method

> Not doing anything with NFS mounting thankfully.  It's a RAID array on
> a NAS device which is typically used for SMB / CIFS network mount.

Your system is mounting it using an SMB mount?  Then perhaps your
network protocol layer is causing trouble.  I think a bug in the SMB
layer is much more likely.

Bob



Re: Severe memleak in sequence expressions?

2011-11-30 Thread Bob Proulx
Marc Schiffbauer wrote:
> Greg Wooledge schrieb:
> > Marc Schiffbauer wrote:
> > > echo {0..1000}>/dev/null
> > > 
> > > This makes my system starting to swap as bash will use several GiB of
> > > memory.
> >
> > In my opinion, no.  You're asking bash to generate a list of words from 0
> > to 100 all at once.  It faithfully attempts to do so.
> 
> Yeah, ok but it will not free the mem it allocated later on (see
> other mail)

You are obviously not going to like the answers from most people about
what you are considering a bug.  But that is just the way that it is
going to be.  I think the shell is doing okay.  I think what you are
asking the shell to do is unreasonable.

Basically the feature starts out like this.  Brace expansion is a new
feature.  It didn't exist before.  Then csh comes along and thinks, it
would be really nice to have a way to produce a quick expansion of
strings in the order listed and the result is the "metanotation" of
"a{b,c,d}e" resulting in brace expansion.  More time goes by and bash
thinks, it would be really nice if that brace expansion feature also
worked for sequences and the result is the {n..m[..incr]} added to
brace expansion where it expands to be all terms between n and m
inclusively.  A lot of people use it routinely in scripts to generate
sequences.  It is a useful feature.  The shell is better to have it
than to not have it.
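
For example, both forms in action (the ..incr form needs a reasonably
recent bash):

  $ echo a{b,c,d}e
  abe ace ade

  $ echo {3..7}
  3 4 5 6 7

  $ echo {0..10..2}
  0 2 4 6 8 10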

Then, BAM, someone comes along and says, I tried putting a number so
large it might as well be infinity into the end condition.  Bash both
consumed a lot of memory trying to generate the requested argument
list to pass to the program.  Then afterward bash didn't give the
memory it allocated back to the operating system.

To generate 0..9 uses 20 bytes.  10..99 uses 270 bytes.  Let's work
out a few:

  0..9                          20
  10..99                       270
  100..999                   3,600
  1000..9999                45,000
  10000..99999             540,000
  100000..999999         6,300,000
  1000000..9999999      72,000,000

In total to generate all of the arguments for {0..10000000} consumes
at least 78,888,899 bytes or 75 megabytes of memory(!) if I did all of
the math right.  Each order of magnitude added grows the amount of
required memory by an *order of magnitude*.  This should not in any
way be surprising.  Scale that up by another ten orders of magnitude
and it might consume 7.8e7 * 1e10 equals 7.8e17 bytes ignoring the
smaller second order effects.  That is a lot of petabytes of memory!
And it is terribly inefficient.  You would never really want to do it
this way.  You wouldn't want to burn that much memory all at once.
Instead you would want to make a for-loop to iterate over the sequence
such as the "for ((i=1; i<=100; i++)); do" construct that Greg
suggested.  That is a much more efficient way to do a loop over that
many items.  And it will execute much faster.  Although a loop that
large will take a long time to complete.

Put yourself in a shell author's position.  What would you think of
this situation?  Trying to generate an unreasonably large number of
program arguments is, well, unreasonable.  I think this is clearly an
abuse of the feature.  You can't expect any program to be able to
generate and use that much memory.

And as for whether a program should return unused memory back to the
operating system for better or worse very few programs actually do it.
It isn't simple.  It requires more accounting to keep track of memory
in order to know what can be returned.  It adds to the complexity of
the code and complexity tends to create bugs.  I would rather have a
simple and bug free program than one that is full of features but also
full of bugs.  Especially the shell where bugs are really bad.
Especially in a case like this where that large memory footprint was
only due to the unreasonably large argument list it was asked to
create.  Using a more efficient language construct avoids the memory
growth, which is undesirable no matter what, and once that memory
growth is avoided then there isn't a need to return the memory it
isn't using to the system either.

If you want bash to be reduced to a smaller size try exec'ing itself
in order to do this.

  $ exec bash

That is my 2 cents worth plus a little more for free. :-)

Bob



Re: help-b...@gnu.org mailing list

2011-12-12 Thread Bob Proulx
Timothy Madden wrote:
> Is the 'htlp-gnu-ut...@gnu.org' list on the help-bash info page
> meant to be 'help-gnu-ut...@gnu.org' (note 'help' instead of 'htlp'
> in the list name) ?
> Can the list info on the web be updated ?

Thanks for reporting that error.  Fixed now.

Bob



Re: Ill positioned 'until' keyword

2011-12-14 Thread Bob Proulx
Peng Yu wrote:
> I looks a little wired why 'until' is the way it is now.
> ...
> until test-commands; do consequent-commands; done
> while ! test-commands; do consequent-commands; done

In the original Bourne shell there is no '!' operator.  The 'until'
was a way to negate the expression without using a '!' which didn't
exist in that shell.  An 'if' could operate using the 'else' clause.
But there wasn't any other way to do it in a while loop.  The addition
of '!' to the language was one of the best features.  IMNHO.  I use it
all of the time now.

Bob



Re: return values of bash scripts

2011-12-20 Thread Bob Proulx
Mike Frysinger wrote:
> kc123 wrote:
> > For example, my script below called crond.sh:
> > ...
> > content=`ps auxw | grep [c]rond| awk '{print $11}'`
> > ...
> > and output is:
> > CONTENT: /bin/bash /bin/bash crond
> > 
> > Why are there 2 extra arguments printed (/bin/bash) ?
> 
> because you grepped your own script named "crond.sh"
> 
> make the awk script smarter, or use pgrep

You are using a system that supports various ps options.  The
equivalent of the BSD 'ps aux' is the SysV 'ps -ef'.  They are
similar.  But then instead of using 'ps aux' BSD style try not
printing the full path by using 'ps -e'.  You are matching your own
grep becuase it is in the argument list.

Then this can be made smarter by simply matching it as a string
instead of as a pattern.

  ps -e | awk '$NF=="crond"'

  ps -e | awk '$NF=="crond"{print$1}'

Bob




Re: Is the description of set -- missing in man bash or at least difficult to find?

2011-12-22 Thread Bob Proulx
Greg Wooledge wrote:
> Peng Yu wrote:
> > As a reasonable search strategy to search for how to set $@ is to
> > search for '$@' in man bash.
> 
> There is a "Special Parameters" section.  All of the parameters are listed
> there without their leading $ prefix.  For example, the documentation for
> $@ is found in the section marked with an @ under Special Parameters.
> 
> I submitted a patch to try to get the sections to be labeled with $@
> (etc.) so that people could find them, but the patch was not accepted.

+1 vote on getting the parameters listed with a leading dollar sign.
The individual single character is difficult to search for but the
combination of "$@" and so forth for the others is a useful search
string.  I have often wanted the manual to include the "$@"
combination instead of just the "@" name.

Bob



Re: how to understand echo ${PATH#*:}

2011-12-25 Thread Bob Proulx
lina wrote:
> how to understand
> echo ${PATH#*:}
> 
> the #*:
> I don't get it. why the first path before : was gone.

This is really a help-bash question.  Please send all follow-ups
there.

The documentation says:

   ${parameter#word}
   ${parameter##word}
  Remove matching prefix pattern.  The word is expanded to produce
  a pattern just as in pathname expansion.  If the pattern matches
  the  beginning of the value of parameter, then the result of the
  expansion is the expanded value of parameter with  the  shortest
  matching  pattern  (the ``#'' case) or the longest matching pat-
  tern (the ``##'' case) deleted.  If parameter is  @  or  *,  the
  pattern  removal operation is applied to each positional parame-
  ter in turn, and the expansion is the resultant list.  If param-
  eter  is  an array variable subscripted with @ or *, the pattern
  removal operation is applied to each  member  of  the  array  in
  turn, and the expansion is the resultant list.

Since PATH is a series of directories separated by colons "#*:" will
match and therefore remove the first element of the PATH.

  $ foo=one:two:three
  $ echo ${foo#*:}
  two:three

And using two pound signs "##" would match the longest pattern and
remove all up through the last one.

  $ echo ${foo##*:}
  three

Bob



Re: minor bug in bash

2012-01-17 Thread Bob Proulx
Zachary Miller wrote:
>   If write() is interrupted by a signal after it successfully writes
>   some data, it shall return the number of bytes written.
> 
> consider SIGSTOP, which is non-maskable.  when the process continues, wouldn't
> this be a situation where the write was interrupted and thus could 
> legitimately
> return fewer bytes?

I am going to get myself in trouble by responding from memory instead
of researching and quoting W. Richard Stevens but...

When reading and writing there is a concept of high speed devices and
slow speed devices.  As I recall high speed devices include
filesystems (slow speed would be tty devices) and high speed devices
as I recall cannot be interrupted.  Therefore as a practical matter
this could never appear when reading or writing a file on a
filesystem.  This could only appear when reading or writing a "slow"
device such as a serial port or tty or other.  Assuming I recall this
correctly of course.

Bob



Re: test if shell is interactive

2012-01-22 Thread Bob Proulx
tapczan wrote:
> #!/bin/bash
> echo $-
> 
> Execution:
> 
> # ./a.sh
> hB
> 
> There is no 'i' so the session is non-interactive?
> It was invoked from interactive.
> Am I missing something?

Shell scripts are not interactive.  So what you are seeing above is
correct.

Bob



Re: test if shell is interactive

2012-01-22 Thread Bob Proulx
tapczan wrote:
> Bob Proulx wrote:
> > Shell scripts are not interactive.  So what you are seeing above is
> > correct.
> 
> So, is there any way to test if script (a.sh) was invoked from interactive
> session (human) or not (e.g. from cron)?

I usually check if the standard input file descriptor is attached to a
tty device or not.

  #!/bin/sh
  if [ -t 0 ]; then
    echo has a tty
  else
    echo does not have a tty
  fi
  exit 0

Or something like:

  $ test -t 0 && echo yes tty || echo no tty

Note: This discussion thread is much better suited for help-bash since
it isn't talking about a bug in bash.  In the future if you are just
asking questions that would be the better list to send them to.

Bob



Re: excess braces ignored: bug or feature ?

2012-02-17 Thread Bob Proulx
Greg Wooledge wrote:
> Mike Frysinger wrote:
> > can't tell if this is a bug or a feature.
> > 
> > FOO= BAR=bar
> > : ${FOO:=${BAR}}}}
> > echo $FOO
> > 
> > i'd expect an error, or FOO to contain those excess braces.  instead, FOO 
> > is 
> > just "bar".
> 
> imadev:~$ : ${FOO:=BAR}
> imadev:~$ echo "$FOO"
> BAR
> 
> It looks OK to me.  You've got an argument word which happens to contain
> a substitution-with-side-effects as part of it.

Or slightly differently expressed it is this too:

  $ echo ${FOO:=BAR}
  BAR

  $ echo ${FOO:=BAR}}}}
  BAR}}}

  $ echo ${FOO:=${BAR}}}}}
  BAR}}}

Seems reasonable to me.  In that context the bracket isn't special in
any way and is just another character in the string.  Just like this:

  $ echo }}}
  }}}

Bob



Re: Inconsistent quote and escape handling in substitution part of parameter expansions.

2012-02-28 Thread Bob Proulx
John Kearney wrote:
> Eric Blake wrote:
> >> [ "${test}" = "${test//"'"/"'"}" ] || exit 999
> > 
> > exit 999 is pointless.  It is the same as exit 231 on some shells,
> > and according to POSIX, it is allowed to be a syntax error in other
> > shells.
>
> I was going for || exit "Doomsday" i,e. 666 = 999 = Apocalypse.

Yes.  But...  As we all know exit codes are only eight bits and that
limits you to 0-255 only!  Anything else and you have "jumped the
tracks" with implementations doing implementation defined things.
Maybe one of them will invoke the game of rogue!  :-)
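
In a shell that accepts the out-of-range value at all the truncation
to eight bits is easy to see:

  $ bash -c 'exit 999'; echo $?
  231
  $ bash -c 'exit 666'; echo $?
  154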

Bob



Re: Bash scripting and large files: input with the read builtin from a redirection gives unexpected result with files larger than 2GB.

2012-03-04 Thread Bob Proulx
Chet Ramey wrote:
> Jean-François Gagné wrote:
> > uname output: Linux  2.6.32-5-amd64 #1 SMP Tue Jun 14 09:42:28 UTC 
> > 2011 x86_64 GNU/Linux
> > Machine Type: x86_64-pc-linux-gnu
>
> Compile and run the attached program.  If it prints out `4', which it does
> on all of the Debian systems I've tried, file offsets are limited to 32
> bits, and accessing files greater than 2 GB is going to be unreliable.

Apparently all of the Debian systems you have tried are 32-bits
systems.  On the reporter's 64-bit amd64 system it will print out 8.

Additionally the bash configure script includes the AC_SYS_LARGEFILE
macro which will test the ability of the system to use large files and
if the system is capable it will define _FILE_OFFSET_BITS=64 and in
that case the size off_t will be 8 bytes too.  If you compile the test
program with -D_FILE_OFFSET_BITS=64 the result will also be 8 even on
32-bit systems.

By default the 32-bit bash will be large file aware on all systems
that support large files and will have been compiled with
_FILE_OFFSET_BITS=64.  I just looked in the config.log from a build of
bash and it included these lines in the resulting config.log file.

  configure:4710: checking for special C compiler options needed for large files
  configure:4805: result: no
  configure:4811: checking for _FILE_OFFSET_BITS value needed for large files
  ...
  configure:4922: result: 64
  ...
  ac_cv_sys_file_offset_bits=64
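
For anyone who wants to check their own build environment, something
along these lines shows the effect of the flag (a sketch, not the test
program that was attached; the file name here is made up):

  { printf '%s\n' '#include <stdio.h>' \
                  '#include <sys/types.h>' \
                  'int main(void) { printf("%d\n", (int) sizeof(off_t)); return 0; }'
  } > off_t_size.c
  cc off_t_size.c -o off_t_size && ./off_t_size
  cc -D_FILE_OFFSET_BITS=64 off_t_size.c -o off_t_size && ./off_t_size

On a 32-bit system the first run typically prints 4 and the second 8.
On a 64-bit system both print 8.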

Bob



Re: using the variable name, GROUPS, in a read list

2012-03-07 Thread Bob Proulx
Greg Wooledge wrote:
> Jim Meyering wrote:
> > Is there a moral here, other than to avoid using special variable names?
> > Probably to prefer lower-case variable names.
> 
> You've nailed it.  Or more precisely, avoid all-upper-case variable names,
> because they tend to collide with environment variables and special shell
> variables.  Any variable name with at least one lower-case letter should
> be safe.
> 
> There is an unfortunate long-lived bad habit that's floating around out
> there, where people think they're supposed to use all-caps variable names
> in shell scripts.  That is not a good idea, and you've just stumbled upon
> one of the many reasons why.

I always used to use upper case shell variable names too.  (For
example TMPDIR, but that collides with other behavior.)  But due to
the collision potential I have been trying to train myself out of that
habit for the last few years.  Now I always use lower case variable
names.  (For example tmpdir, which shouldn't collide.)

So for this I would say the script should use lower case names instead
of upper case names.

Bob



Re: compgen is slow for large numbers of options

2012-03-14 Thread Bob Proulx
Richard Neill wrote:
> I don't know for certain if this is a bug per se, but I think
> "compgen -W" is much slower than it "should" be in the case of a
> large (10000+) number of options.

I don't think this is a bug but just simply a misunderstanding of how
much memory must be allocated in order to generate `seq 1 50000`.

> For example (on a fast i7 2700 CPU), I measure:
> 
> compgen -W "`seq 1 50000`" 1794               # 3.83 s
> compgen -W "`seq 1 50000 | grep 1794`" 1794   # 0.019 s
> 
> In these examples, I'm using `seq` as a trivial way to generate some
> data, and picking 1794 as a totally arbitrary match.

Right.  But in the first case you are generating 288,894 bytes of data
and in the second case 89 bytes of data.  That is a large difference.

You could probably speed it up a small amount more by using grep -x -F
and avoid the common substring matches.  And perhaps more depending
upon your environment with LC_ALL=C to avoid charset issues.
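
For instance, one way to combine those suggestions with the timing
example above (a sketch; -x is left out here so that the longer
1794... candidates still complete):

  compgen -W "$(seq 1 50000 | LC_ALL=C grep -F -- 1794)" 1794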

> In the first example, compgen is doing the filtering, whereas in the
> 2nd, I obtain the same result very much faster with grep.

Yes, since grep is reducing the argument size to 89 bytes.  That makes
perfect sense to me.  Anything that processes 89 bytes is going to be
much faster than anything that processes 288,894 bytes.

> If I increase the upper number by a factor of 10, to 500000, these
> times become,  436 s (yes, really, 7 minutes!) and 0.20 s

That is an increase in argument size to 3,388,895 bytes and there will
be associated memory overhead to all of that increasing the size on
top of that value too.  Three plus megabytes of memory just for the
creation of the argument list and then it must be processed.  That
isn't going to be fast.  You would do much better if you filtered that
list down to something reasonable.

> respectively.  This suggests that the algorithm used by compgen is
> O(n^2) whereas the algorithm used by grep is O(1).

On the contrary it suggests that the algorithm you are using, one of
fully allocating all memory, is inefficient.  Whereas using grep as a
filter to reduce that memory to 89 bytes is of course more efficient.

I wrote a response on a similar issue previously.  Instead of posting
it again let me post a pointer to the previous message.

  http://lists.gnu.org/archive/html/bug-bash/2011-11/msg00189.html

Bob



Re: Possible bug: Race condition when calling external commands during trap handling

2012-05-03 Thread Bob Proulx
tillmann.crue...@telekom.de wrote:
> I have produced the following script as a small example:

A good small example!  I do not understand the problem but I do have a
question about one of the lines in it and a comment about another.

> trap "kill $?; exit 0" INT TERM

What did you intend with "kill $?; exit 0"?  Did you mean "kill $$"
instead?

>   local text="$(date +'%Y-%m-%d %H:%M:%S') $(hostname -s) $1"

Note that GNU date can use "+%F %T" as a shortcut for "%Y-%m-%d %H:%M:%S".
It is useful to save typing.

And lastly I will comment that you are doing quite a bit inside of an
interrupt routine.  Typically in a C program it is not safe to perform
any operation that may call malloc() within an interrupt service
routine since malloc isn't reentrant.  Bash is a C program and I
assume the same restriction would apply.

Bob



Re: cd // produces unexpected results

2012-06-23 Thread Bob Proulx
Stefano Lattarini wrote:
> A little more info, quoting from the Autoconf manual:

And I will respond to the quoted part from the autoconf manual
realizing that Stefano isn't saying it but just passing the message
along. :-)

>   POSIX lets implementations treat leading // specially, but requires leading
>   /// and beyond to be equivalent to /.  Most Unix variants treat // like /.
>   However, some treat // as a "super-root" that can provide access to files
>   that are not otherwise reachable from /.  The super-root tradition began
>   with Apollo Domain/OS, which died out long ago, but unfortunately Cygwin
>   has revived it.

I don't think that should say "unfortunately" there.  Apollo Aegis
started it with an "unfortunate" network model.  But that was Apollo
not Cygwin.  And because of that it was swept into the POSIX standard.
It existed in the wild with Aegis Domain/OS and therefore the Portable
Operating System Interface standard needed to incorporate the behavior
as existing behavior in what was at one time a popular system.  And
then with the passage of time Domain/OS is now no longer seen.
However the standard still exists.

Cygwin had a very similar behavior model that needed to be handled.
Cygwin I think did okay by making use of the existing standard rather
than creating something new.  The alternative would have been a
different, unique, non-standard syntax and that would have been bad.
But they fit within the standard.  Any previously standard conforming
and well behaved script would still be standard conforming and well
behaved.  That being the whole point of the POSIX standard I see this
as a good thing.

Bob



Re: square bracket vs. curly brace character ranges

2012-09-14 Thread Bob Proulx
Marcel Giannelia wrote:
> But the other day I was on a fresh install (hadn't set
> LC_COLLATE=C yet, so I was in en_US.UTF-8), and this happened:
> 
> $ ls
> a  A  b  B  c  C

> $ ls {a..c}
> a  b  c

The above has nothing to do with file glob expansion.  Using 'ls'
there I think confuses the discussion.  You can see the same result
with 'echo' and avoid the confusing use of ls like this:

  $ echo {a..c}
  a  b  c

> $ ls [a-c]
> a  A  b  B  c

The expression [a-c] is really like saying [aAbBc] in your active
locale.  Using locale based character ranges outside of the C locale
is problematic and I always avoid them.
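
For instance, forcing C collation shows the difference (with the same
six files as above; the exact en_US.UTF-8 result depends on the locale
data installed):

  $ echo [a-c]
  a A b B c

  $ LC_COLLATE=C bash -c 'echo [a-c]'
  a b c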

This result is due to the matching of file globs against files in the
directory.  If you were to remove files for example you would see
fewer matches.  (I am sure you already knew that but I wanted to say
it explicitly.  Since using ls in that context adds confusion.)

> Curly brace range expressions behave differently from square-bracket
> ranges. Is this intentional?

Yes.  Because bracket expansion is a file glob expansion and dependent
upon the locale.  Brace expansion is a bash extension and can do what
it wants.  (I am going to get in trouble for saying it that way,
because it isn't really correct, and I expect at least three
corrections. :-)

> The sheer number of threads we've got complaining about
> locale-dependent [a-c] suggests to me that the software should be
> changed to just do what people expect, especially since nothing is
> really lost by doing so.

I know that some projects are doing just that.  I don't know the plans
for bash.  I would like to see it addressed in libc so that it would
be uniform across all projects.  But that isn't likely to happen.  But
if libc isn't going to do it then it is beneficial if projects do it
themselves outside of libc.  Eventually in the future when libc
addresses the problem then those hacks can be removed.

Bob



Re: square bracket vs. curly brace character ranges

2012-09-15 Thread Bob Proulx
Hi Arnold,

Aharon Robbins wrote:
> You are assuming here that everyone uses GLIBC.

I don't know so I will ask.  Isn't the problem endemic to glibc?  Do
other libc's such as HP-UX or AIX or other have this same issue?  I am
out of touch on the details of them these days.

> Not so.  The projects that are doing something about it* will have to
> maintain their own code anyway.  (More's the pity.)

As you say, what a pity.  But in spite of the effort I hope they all
do the rational range interpretation anyway.  It would be so much
better for users.

However it does mean that some programs will work as expected and some
won't.  There will still be differences and those will be confusing.

> * I take credit here, having started it in gawk and pushed grep into it :-)
> I think sed will eventually pick it up, and bash too. Karl Berry coined
> the lovely phrase "Rational Range Interpretation".  The campaign for Rational
> Range Interpretation is in full swing! :-)

I knew you were involved but I didn't remember the details and so not
wanting to slight someone just referred to it in the abstract.  Chet
of course had the detail and pointed a reference.  I am very happy to
see things progress in that direction.  Thanks!

Bob



Re: a recursion bug

2012-10-02 Thread Bob Proulx
Yuxiang Cao wrote:
> I use ulimit -s to find stack size, which is 8192kbytes. Then I use
> valgrind to record the stack size which give me this information.
> test.sh: xmalloc: ../bash/unwind_prot.c:308: cannot allocate 172 bytes 
> (8359936 bytes allocated)
> So from the above information I think this is not a stack overflow,
> and that is a real fault in this program.

I disagree.  Mostly because if you change the stack size ulimit then
the example program will recurse more or less.  Therefore it is
definitely a stack limit policy that is limiting the behavior of the
example program and not a bash bug.

  $ ulimit -s
  8192

  $ ./deep-stack-trial | wc -l
  5340

  $ ulimit -s 4096
  $ ulimit -s
  4096

  $ ./deep-stack-trial | wc -l
  2668

Now a smaller stack size.  Now if the example test code is run it will
be stopped sooner.

  $ ulimit -s 16384
  bash: ulimit: stack size: cannot modify limit: Operation not permitted

Prevented by operating system policy.  Use a slightly smaller size.

  $ ulimit -s 16000
  $ ulimit -s
  16000

  $ ./deep-stack-trial | wc -l
  10441

Now a larger stack size.  Now if the example test code is run it will
be stopped later.

It is the operating system stack size policy limit that is stopping
the program.  If you have sufficient permission then you may increase
this value even to "unlimited".

  $ su -
  # ulimit -s unlimited
  # ulimit -s
  unlimited

I would not advise this however.  Those limits are placed there for
the reason of containing unreasonable programs from accidentally
creating unreasonable situations.  Or at least unexpected ones.

Just the same however if this is a limit that you personally disagree
with then it is a limit that you may change on your system.  If you
want you may change your system to allow an unlimited level of
recursion.  Then if your system has the memory resources for it your
program will be able to run to completion.  If your system resources
truly become exhausted then of course the program will still fail to
complete successfully.  But it won't be artifically limited by the
system policy.

Bash here in this context is simply running within the operating
system limits imposed by the policy of the system as reflected in the
stack size limits.

Bob



Re: a recursion bug

2012-10-02 Thread Bob Proulx
Linda Walsh wrote:
> Greg Wooledge wrote:
> >Linda Walsh wrote:
> >>>Yuxiang Cao wrote:
> test.sh: xmalloc: ../bash/unwind_prot.c:308: cannot allocate
> 172 bytes (8359936 bytes allocated)
> >
> >>Why shouldn't bash fail at the point it hits resource exhaustion and return
> >>an error condition -- like EAGAIN, ENOBUFS, ENOMEM... etc.
> >
> >It did fail, and it gave that message quoted up above.  (Bash isn't a
> >system call, so it can't "return ENOMEM".  Those constants are only
> >meaningful in the context of a C program inspecting errno, not a
> >utility returning an exit status.)

Careful.  That error message came from valgrind not bash.  The OP ran
the program under valgrind to produce that message.

> >>Bash should catch it's own resource allocation faults and not rely on
> >>something external to itself to clean up it's mess.
> >
> >It did.
> ---
>   If it is catching the error in it's own code, then
> why does it not return a non-zero status = to ERRNO, for the command that
> caused the problem in the user shell script?

Note that it was *valgrind* and not bash that reported that error at this point.

> >>Dumping core means bash lost control.
> >
> >Where do you see a core dump?
> ---
> I see a system message from the original posters program:
> Segmentation fault (core dumped)

Core dumping is disabled by default on most modern systems.  But it
may be enabled explicitly or the operating system policy may be to
enable it generally.  That is certainly outside the context of bash.

  $ ulimit -c
  0
  $ ulimit -c unlimited
  $ ulimit -c
  unlimited

You must have either enabled it explicitly or are running on a system
where dumping is globally allowed by operating system policy.  If you
don't like that then you are able to change it or to use a different
system.

> Finding it is left as an exercise to the reader.

Eureka!  I found it!  :-)

> It is noted that many programs catch fatal signals in order
> to not leave evidence of their crashing on the user's system as
> coredumps are considered 'bad form'/unsightly.

Yes.  If I were you I would figure out why ulimit -c isn't zero for
you and then fix it.  Unless you are set up to debug the resulting
core then it isn't useful.  Also dumping a large core over the network
to an NFS mounted directory, an action that takes place in the kernel
and on at least some systems is not interruptable, a few times will
convince you that you don't want to wait for it.

Bob



Re: square bracket vs. curly brace character ranges

2012-10-02 Thread Bob Proulx
Linda Walsh wrote:
> Chet Ramey wrote:
> >http://lists.gnu.org/archive/html/bug-bash/2012-05/msg00086.html
> >...Campaign For Rational Range Interpretation...
>
> The next version of bash will have a shell option to enable this
> behavior.  It's in the development snapshots if anyone wants to try
> it out now.

> The above relies upon a hack to the algorithm -- use *USEFUL* hack
> in most cases, but still a hack.

Why do you think it is a hack?  Because it isn't in libc?  I might
agree with that.  But if it isn't in libc then in the application is
the only reasonable alternative.

> Note...before bash broke UTF-8 compatiblity, I could use
> en_US.UTF-8, but now I assert the current need to do the above
> is a bug.

This has been discussed a very great many times in a very great many
places and you have been part of that discussion many times.  Please
don't rehash all of that discussion again.  It isn't useful or
productive.

> I will make no claim about en_US.iso88591 or other locale-specific
> charsets.  However, UTF-8 defines collation order the same as ASCII in
> the bottom 127 chars.

No it doesn't.  That is the root of the problem.

> Bash ignores UTF-8's collation order.

No it doesn't.  It is using the system libc code which is obeying the
locale collation ordering.  Since that misconception is the basis of
your complaint and that basis is false it isn't useful for more
chatter about more details of it.

> For some reason, I am not allowed to use LC_COLLATE=UTF-8:
> -bash: warning: setlocale: LC_COLLATE: cannot change locale (UTF-8):
> No such file or directory

You are not allowed to use it because your system does not provide
such a file.  If it did then you would.

You might argue that the error message in that case could be more
friendly.  That would be a bash specific discussion and perhaps even a
useful one.  See 'man 3 setlocale' for more details.  On my system
locale files are stored down /usr/share/locale/* paths.

Bob



Re: a recursion bug

2012-10-03 Thread Bob Proulx
Linda A. Walsh wrote:
> Steven W. Orr wrote:
> >I think there's a fundamental misunderstanding between the
> >difference of an error code returned by a system call and the exit
> >status of a process. They're two completely different things.
> 
>   It's not a fundamental misunderstanding.  It's a fundamental belief
> in using data bandwidth and not wasting it.  If 0=ok, as it does in bash and
> with errno, and, it is the case (still is),  that errno's fit in 1 byte,
> there's no reason not to return the exact failure mode from a util...

I still think there is the misunderstanding of the OP's case between
bash and the run of it under valgrind.  I think you are still talking
about the valgrind part of the run.  But in any case, is there
anything in there that is about bash?  If so then we need an exact test
case.  If not then let's not discuss it here.

>   That's not to say that many or most do -- some even return a status
> of '0' on fatal errors (xfs_mkfile -- on running out of room returns a status 
> 0).

What xfs utils do or don't do is off topic for the bug-bash list.

> >Just for fun, look at the man page for grep. It is advertised to
> >return a 0, 1 or 2. The actual values of errno that might happen
> >in the middle are a separate problem.
> 
>   Like I said, it's a fundamental waste of bits.
> 
>   But -- if it encountered an error, should it issue a SEGV and
> coredump message, or should it terminate the wayward script/function
> and return to the prompt?

This is just the result of "Worse is Better".  It is one of the
fundamentals that made Unix the wonderful system that we know it to be
today.  You may not like it but it is a principle that has strongly
shaped the system.

  http://en.wikipedia.org/wiki/Worse_is_better

The problem you are fighting is that every program on the system is
governed by kernel stack limits.  If the program exceeds the policy
limit then it will get a SIGSEGV for stack growth failure.

Now it is possible for a program to go to extreme efforts to trap that
case and to deal explicitly with it.  But it isn't worth it.  Worse is
better.  Better is an easy to maintain portable program.  Trying to
deal with every possible problem on every possible system in every
possible context will yield a worse solution.  Don't do it.

In the immortal words of Smith & Dale:

  SMITH: Doctor, it hurts when I do this.
  DALE: Don't do that.

>   Hey you can do whatever, but if the linux kernel crashed on every
> resource strain, most people would consider that bad.

This is a reductio ad absurdum ("reduction to absurdity") argument
that doesn't apply here.  The linux kernel was not crashing.  This is
off the topic.

Bob



Re: a recursion bug

2012-10-03 Thread Bob Proulx
Greg Wooledge wrote:
> imadev:~$ bash-4.2.28 -c 'a() { echo "$1"; a $(($1+1)); }; a 1' 2>&1 | tail
> Pid 4466 received a SIGSEGV for stack growth failure.
> Possible causes: insufficient memory or swap space,
> or stack size exceeded maxssiz.
> 6534
> 6535
> 6536
> 6537
> 6538
> 6539
> 6540
> 6541
> 6542
> 6543
> imadev:~$ ls -l core
> -rw---   1 wooledgpgmr   19908052 Oct  3 08:38 core
> imadev:~$ file core
> core:   core file from 'bash-4.2.28' - received SIGSEGV
>
> That was executed on HP-UX 10.20.  I agree that bash should try not
> to dump core in this case, if it's reasonable to prevent it.

HP-UX is a traditional Unix system and traditionally the system always
enabled core dumps.  How useful those were to people is another
question.

So by the above you are suggesting that bash should be re-written to
use and maintain a userspace stack?  That would convert stack memory
use into heap memory use.

Or are you suggesting that bash should add code to trap SIGSEGV,
determine if it was due to insufficient stack memory and if so then
exit with a nicer message?

Or are you suggesting that bash should specify its own stack area so
as to avoid the system stack size limitation?

I could see either of those first two solutions being reasonable.

Bob



Re: a recursion bug

2012-10-03 Thread Bob Proulx
Chet Ramey wrote:
> > Pid 4466 received a SIGSEGV for stack growth failure.
> > Possible causes: insufficient memory or swap space,
> > or stack size exceeded maxssiz.
> 
> There's not actually anything you can do about that except use ulimit to
> get as much stack space as you can.

Well...  There is the gnulib c-stack module:

  http://www.gnu.org/software/gnulib/MODULES.html

  c-stack   Stack overflow handling, causing program exit.

Bob



Re: Clarification needed on signal spec EXIT

2012-10-16 Thread Bob Proulx
Francis Moreau wrote:
> --
> main_cleanup () { echo main cleanup; }
> submain_cleanup () { echo sub cleanup; }
> 
> trap main_cleanup EXIT
> 
> task_in_background () {
> echo "subshell $BASHPID"
> 
> while :; do
> # echo "FOO"
> sleep 1
> done
> echo "subshell exiting..."
> }
> 
> {
> trap submain_cleanup EXIT
> trap
> task_in_background
> } &
> 
> echo exiting...
> --
> 
> Sending TERM signal to the subshell doesn't make "submain_cleanup()"
> to be called.

And it does in ksh93.  Hmm...  And it does if I comment out the line
"trap main_cleanup EXIT".  It seems to only set the trap if no trap
handler was previously set.

Bob



Re: documentation bug (uid resetting in posix mode)

2012-10-30 Thread Bob Proulx
Stefano Lattarini wrote:
> Anyway, my /bin/sh is bash ...
>   $ ls -l /bin/sh
>   lrwxrwxrwx 1 root root 4 Jul  8  2010 /bin/sh -> bash
> I'm on Debian Unstable BTW (sorry for not specifying that earlier).

Let me say this aside on the issue since there is opportunity for some
confusion.  On Debian the default for new installations is that
/bin/sh is a symlink to dash.  But existing systems that are upgraded
will not get this change automatically and will remain as a symlink to
bash.  It must be specifically selected if desired.  Therefore the
selection of /bin/sh is bi-modal with many systems of either
configuration.  The dash NEWS file says:

  * The default system shell (/bin/sh) has been changed to dash for
new installations.  When upgrading existing installations, the
system shell will not be changed automatically.
  * One can see what the current default system shell on this machine
is by running 'readlink /bin/sh'.
  * Change it by running 'dpkg-reconfigure dash'. 

The 'dpkg-reconfigure dash' will present a question with the previous
value allowing the admin to reconfigure it to a different value.  The
selected value is sticky and will persist through upgrades until changed.

Bob



Re: documentation bug (uid resetting in posix mode)

2012-10-30 Thread Bob Proulx
Stefano Lattarini wrote:
> Hi Bob, thanks for the tips.  However ...
> 
> Bob Proulx wrote:
> > Stefano Lattarini wrote:
> >> Anyway, my /bin/sh is bash ...
> >>   $ ls -l /bin/sh
> >>   lrwxrwxrwx 1 root root 4 Jul  8  2010 /bin/sh -> bash
> >> I'm on Debian Unstable BTW (sorry for not specifying that earlier).
> > 
> > Let me say this aside on the issue since there is opportunity for some
> > confusion.  On Debian the default for new installations is that
> > /bin/sh is a symlink to dash.  But existing systems that are upgraded
> > will not get this change automatically and will remain as a symlink to
> > bash.  It must be specifically selected if desired. 
> >
> ... I'm not so sure all of scripts on my system are exempt from
> bashisms; so rather than risking obscure bugs, I'll keep bash as
> my system shell (for the current system, at least).  If it ain't
> broken, don't fix it ;-)

I wasn't suggesting that you change it.  I was simply noting that the
symlink has two standard values.  Which has caused confusion elsewhere
concerning whether it was bash behavior or dash behavior.  There isn't
a canonical configuration.

Knowledge is power, and all of that.

Bob





Re: documentation bug (uid resetting in posix mode)

2012-10-30 Thread Bob Proulx
Andreas Schwab wrote:
> Stefano Lattarini writes:
> > If it ain't broken, don't fix it ;-)
> 
> As you found out, it _is_ broken.

Okay.  But broken which way?  Which of these are you saying:

 1. Broken because bash normally drops privileges?
Or:
 2. Broken because called as /bin/sh Debian patched it to not drop
privileges?
Or:
 3. ??

Bob



Re: Any chance of multi-dimensional arrays?

2012-11-25 Thread Bob Proulx
Rene Herman wrote:
> All I want additionally is multi-dimensional arrays...

There are various naming conventions and schemes to simulate
multi-dimensional arrays using single dimension arrays.  Since you
want to continue with the shell and the shell has not (yet) provided
multi-dimensional arrays then the only option for you is to simulate
them.  That isn't too difficult and if you search the web there are
many different implementations with the biggest difference being
whether ordering matters or not.
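
One common scheme, as a sketch only, uses a bash 4 associative array
with a made-up "row,column" key convention:

  declare -A matrix
  for ((i = 0; i < 3; i++)); do
    for ((j = 0; j < 3; j++)); do
      matrix[$i,$j]=$(( i * 3 + j ))   # key is "i,j"
    done
  done
  echo "${matrix[1,2]}"                # prints 5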

Bob



Re: Cron jobs, env vars, and group ID ... oh my

2012-11-28 Thread Bob Proulx
Mun wrote:
> #! /bin/bash
> newgrp group1
> id -g -n // This shows my login group ID, not group1

The 'newgrp' command spawns a new child shell.  After that child shell
exits the new group evaporates.  The rest of the script continues with
the previous id.  This is a common misunderstanding of how newgrp
works.  People often think it changes the current shell.  It does
not.  It stacks a child shell.

Basically, newgrp does not do what you thought it did.  It can't be
used in this way.

You might try having the newgrp shell read commands from a secondary
file.

  newgrp group1 < other-script-file
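
Or feed it a here-document so that everything stays in the one script
(a sketch; note that the EOF delimiter must start in column one when
actually run):

  newgrp group1 <<'EOF'
  id -g -n
  EOF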

Or you might consider using 'sudo' or 'su' for that purpose too.

Bob



Re: Question about the return value of 'local'

2012-12-13 Thread Bob Proulx
Francis Moreau wrote:
> I found that the return value of 'local' keyword is counter intuitive
> when the value of the assignment is an expression returning false. In
> that case the return value of local is still true. For example:
> 
>   local foo=$(echo bar; false)
> 
> returns true

Yes.  The creation of the local variable foo was successful.

> whereas:
> 
>   foo=$(echo bar; false)
> 
> returns false, that is removing the 'local' keyword has the opposite 
> behaviour.

The "local" function itself is either there or it isn't.  If it is
there then the return value is the return from local.  If it isn't
there then it isn't there and the return value is of whatever you are
checking.

If the local value is there then you may use it to assign multiple
values.  How does your thinking change when thinking about having
multiple values to local?

  local v1=true v2=false v3="green" v4=42

If the entire operation is successful then it returns 0.  If any of
the operands fail then it returns non-zero.
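
If what you want is the exit status of the command substitution then
the usual idiom is to declare and assign in two steps.  A small sketch:

  f() {
    local foo
    foo=$(echo bar; false)
    echo "assignment status: $?"   # reflects the substitution status, 1
  }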

> The help of 'local' is rather obscure about the description on its return 
> value:
> 
> Returns success unless an invalid option is supplied, an
> error occurs, or the shell is not executing a function.
> 
> "an error occurs" is rather meaningless IMHO.
> 
> Could anybody explain me why 'local' returns true in this case ?

It returns 0 because the local variable was successfully created.

> Also I tried to find in the documentation, where the specification of
> the return value of an asignment is but have failed. Could anybody
> point me out the location ?

The bash manual contains this:

   local [option] [name[=value] ...]
  For each argument, a local variable named name is
  created, and assigned value.  The option can be any of
  the options accepted by declare.  When local is used
  within a function, it causes the variable name to have a
  visible scope restricted to that function and its
  children.  With no operands, local writes a list of
  local variables to the standard output.  It is an error
  to use local when not within a function.  The return
  status is 0 unless local is used outside a function, an
  invalid name is supplied, or name is a readonly
  variable.

See also 'export'.  Compare and contrast.

   export [-fn] [name[=word]] ...
   export -p
  The supplied names are marked for automatic export to
  the environment of subsequently executed commands.  If
  the -f option is given, the names refer to functions.
  If no names are given, or if the -p option is supplied,
  a list of all names that are exported in this shell is
  printed.  The -n option causes the export property to be
  removed from each name.  If a variable name is followed
  by =word, the value of the variable is set to word.
  export returns an exit status of 0 unless an invalid
  option is encountered, one of the names is not a valid
  shell variable name, or -f is supplied with a name that
  is not a function.

Bob



Re: |& in bash?

2013-01-22 Thread Bob Proulx
Greg Wooledge wrote:
> On Tue, Jan 22, 2013 at 06:56:31AM -0500, Steven W. Orr wrote:
> > By that logic,
> > [...alternate possibility omitted...]
> 
> So, in chronological order:
> [...list of actions omitted...]

I think the take-away here is that the shell evolved to require 142
unique command line scanning actions (obviously I am greatly
exaggerating that number) in order to perform the complex task that it
must perform.  That is more steps than the human brain is casually
comfortable remembering.

As tired and sleepy humans our brains would like the number of actions
to be no greater than 2 in order to easily hold it in our head.  We
stretch it to 3, to 4, to 5, but at some point we run out of fingers
and the number might as well actually be 142.  By the time we reach
the number of steps that are really needed (what the shell actually
does), it is too many and doesn't stick in our heads.

Perhaps there is a one-sheet poster hint sheet with all of the shell
command line scanning steps nicely illustrated that we could post on
the wall so that we can glance up and refresh our memory of it when
needed.  Something like the [fill in your favorite programming
language here] operator precedence table.  (If you looked over at your
copy next to you then you know what I am talking about.)

Bob



Re: |& in bash?

2013-01-22 Thread Bob Proulx
Greg Wooledge wrote:
> It kinda reminds me of the Linux newcomers who don't know how to do
> gzip -dc foo.tar.gz | tar xvf - (and so on) because they've been trained
> to use GNU tar's "z" flag instead, and therefore that piece of their
> education hasn't been absorbed yet.

Just like:

  grep -r PATTERN

Has replaced:

  find . -type f -exec grep PATTERN {} +

And therefore they don't know how to write other directory traversal
tasks either.

  find . -type f -exec sed -n '/PATTERN/s/THIS/THAT/gp' {} +

Bob



Re: When cd'd into a symlinked directory, directory completion can sometimes erroneously parse ../

2013-02-06 Thread Bob Proulx
Chet Ramey wrote:
> i336 wrote:
> > mkdir -p dir1/dir2
> > ln -s dir1/dir2 ouch
> > touch idontexist
> > ls # ^ to see dir & files & prove they really exist :P
> > cd ouch
> > ls ../ido
> > ...
> 
> Filename completion uses whatever file system view -- logical or physical
> -- that you have selected for bash.  The default is a logical view.  `ls',
> on the other hand, doesn't know anything about that.  This is not solvable
> unless you decide to use a physical view consistently (set -o physical).

I always use 'set -o physical' everywhere to have consistent behavior.
The evil truth is better than a good lie.
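
A small illustration of the difference, not from the original mail:

  mkdir -p dir1/dir2
  ln -s dir1/dir2 ouch
  cd ouch
  pwd               # logical view (the default): path ends in .../ouch
  pwd -P            # physical view: path ends in .../dir1/dir2
  set -o physical   # make cd, completion, etc. use the physical view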

Bob



Re: Shouldn't this script terminate on ^C?

2013-02-19 Thread Bob Proulx
Nikolaus Schulz wrote:
> Please consider this bash script:
> 
>   : | while true; do sleep 1; done
>   echo "After loop"
> 
> If I hit ^C while it runs, shouldn't it terminate?
> 
> I have tested bash versions 4.2.37(1)-release, 4.1.5(1)-release,
> and 3.2.39(1)-release. (Debian Sid, Squeeze and Lenny.)
> 
> All these bash versions output "After loop".
> zsh and dash do exit immediately.  Which behaviour is correct?
> I couldn't find what POSIX says about this.

Read this reference for an analysis of this issue.

  http://www.cons.org/cracauer/sigint.html
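
The short version of the convention described there, written here as
a hedged sketch and not as part of the original mail: a script that
traps SIGINT for its own cleanup should re-deliver the signal to
itself afterwards so its caller can tell it died from the interrupt.

  #!/bin/bash
  tmpfile=$(mktemp)                       # hypothetical work file
  trap 'rm -f "$tmpfile"; trap - INT; kill -INT "$$"' INT
  sleep 60                                # placeholder for real work
  rm -f "$tmpfile"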

Since this is being asked as a general question this would be a better
topic to start discussing first on the help-bash mailing list.  As
currently stated it is too non-specific to be a bug report to
bug-bash.

Bob



Re: export in posix mode

2013-02-27 Thread Bob Proulx
Eric Blake wrote:
> James Mason wrote:
> > I certainly could be doing something wrong, but it looks to me like bash
> > - when in Posix mode - does not suppress the "-n" option for export.  
> > The version of bash that I'm looking at is 3.2.25.
> 
> So what?  Putting bash in posix mode does not require bash to instantly
> prohibit extensions.  POSIX intentionally allows for implementations to
> provide extensions, and 'export -n' is one of bash's extensions.
> There's no bug here, since leaving the extension always enabled does not
> conflict with subset of behavior required by POSIX.

If you are trying to detect non-portable constructs then you will
probably need to test against various shells, including ash.  (If
on Debian then use dash.)

  https://en.wikipedia.org/wiki/Almquist_shell
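
One low-tech way to do that, offered as a hedged sketch rather than
anything from the original mail, is simply to run the same script
under each shell and compare the results:

  for sh in bash dash posh; do
    printf '== %s ==\n' "$sh"
    "$sh" ./myscript.sh || printf '%s exited with status %s\n' "$sh" "$?"
  done

(./myscript.sh is a stand-in for whatever script is being vetted.)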

The posh shell was constructed specifically to be as strictly
conforming to posix as possible.  (Making it somewhat less than useful
in Real Life but it may be what you are looking for.)  It is Debian
specific in origin but should work on other systems.

  http://packages.debian.org/sid/posh
  http://anonscm.debian.org/gitweb/?p=users/clint/posh.git;a=summary

Bob



Re: export in posix mode

2013-02-27 Thread Bob Proulx
Chet Ramey wrote:
> Keep in mind that posh is overly strict in some areas (e.g., it throws
> an error on `exit 1').  It may not be useful in isolation.

As I did mention I have found that posh is somewhat less than useful
in Real Life.  But you say it throws an error on exit 1?

  $ cat >/tmp/trial <<'EOF'
  #!/bin/posh
  echo "Hello from posh"
  exit 1
  EOF
  $ chmod a+x /tmp/trial
  $ /tmp/trial
  Hello from posh
  $ echo $?
  1

I see no error when using 'exit 1'.  Other than the expected exit
code.  What am I missing?

Bob



Re: export in posix mode

2013-02-27 Thread Bob Proulx
James Mason wrote:
> We considered setting up another shell as the implementation of
> "/bin/sh", but that's hazardous in the context of vast amounts of
> boot-time initialization scripting that hasn't been vetted as to
> avoidance of bash-isms.

You appear to be doing product QA.  Awesome!  Have you considered
setting up a chroot system or a VM in different configurations and
running continuous integration testing upon them?  Within limits of
kernel compatibility chroots are very lightweight.  I am a fan due to
the simplicity.  VMs are also quite accessible these days for testing
the full system, kernel included, now that most recent CPUs support
full virtualization.

> Changing product script code - just so you can look for these sorts
> of things - isn't practical (or safe) either.

Things generally are not that bad.  I have tried this type of thing on
occasion and usually, while there are a few errors, nothing is bad
enough to prevent the system from booting to a useful state.  And
if it is a test VM without a lot of non-essential stuff then the core
functionality is usually fine.

> So I guess if you take the view that bash POSIX mode exists only to
> make bash accept POSIX scripts, and not to preclude/warn about
> behavior that isn't going to be acceptable elsewhere, then you're
> right - it's not a bug.   If you care about helping people to be
> able to write scripts that work various places and don't exceed the
> POSIX specification, you're unhelpfully wrong (and you might
> contemplate why "bashisms" gives > 50K google hits).

It is not a bash issue any more than it is an issue specific to csh,
ksh, pdksh, zsh or any other specific featureful shell.  If it is a
bug in one then it is a bug in all.

While the bash project does not exist for the purpose you describe
this does not prevent other projects such as dash and posh from
existing and being useful.  Whether the operating system as an entity
decides to use bash or dash or other as a /bin/sh is a decision for
them to make for themselves.  If your OS does something different then
I say take the issue up with your operating system.

Debian for example uses /bin/sh symlinked to dash for just that
reason.  Works great.  I don't see any problem.  Scripts that wish to
use bash features specify #!/bin/bash as the interpreter.  Scripts
that specify #!/bin/sh and use bash specific features will fail.  As a
local option the local admin may change the /bin/sh symlink from dash
to bash and run scripts from other systems that include bash features.
But that isn't by default.  However it is a useful option for people
stuck needing to run code that isn't cleanly written.
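
For reference, a hedged illustration rather than part of the original
mail: on a Debian system of that era you could see which shell
provides /bin/sh with a plain ls, and switch it on releases where the
dash package still asks the question.

  ls -l /bin/sh                # e.g. /bin/sh -> dash
  sudo dpkg-reconfigure dash   # asks whether dash should provide /bin/sh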

Bob



Re: export in posix mode

2013-02-27 Thread Bob Proulx
Chet Ramey wrote:
> I don't know what version you're using; I have 0.11.
> 
> $ ./posh
> \[\]${HOST}($SHLVL)\$ exit 1
> ./posh: exit: bad number
> 
> $ ./posh
> \[\]${HOST}($SHLVL)\$ exit 10
> ./posh: exit: bad number
> $ echo $?
> 1

I am using 0.11 too.  I was using the Debian packaged version.  Since
this originated with Debian it is packaged using "native" packaging
where the tar.gz file is used without any patches to an upstream since
upstream is Debian.  That means 0.11 without a 0.11-number -number on
the end.

I just now pulled the source code and did a build and the locally
compiled copy worked fine.

I can only assume this is some type of portability bug in the sources
compiled on your system.  I can't imagine that many people compile
that program on other systems.  It can't have gotten that much
exposure.

Bob



Re: Should this be this way?

2013-02-28 Thread Bob Proulx
Chet Ramey wrote:
> Linda Walsh wrote:
> > Greg Wooledge wrote:
> >>> How often, when at a terminal, do you type #!/bin/bash before every line?
> >>
> >> When I've put the contents into a file?  Every. single. time.
> > ---
> > Then when I press 'v' to edit the command line in a text editor --
> > maybe 'bash' should insert such a line?  It's converted your command line
> > into an editable file.  But it hasn't put the #!/bin/bash at the front.
> 
> This is a bad example.  The file that is the result of the vi-mode `v'
> command is run as if it were sourced with `.'.  It's not run as if it
> were a shell script.

Ah!  There is the answer.  Don't run it as a script.  Always source
these files instead.  ". ./file"  When sourced they will run in the
context of the current bash shell and the behavior will be as
expected.
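
A quick way to see the difference, sketched here rather than taken
from the original mail:

  printf 'FOO=bar\n' > /tmp/frag
  bash /tmp/frag; echo "${FOO-unset}"   # child process: prints "unset"
  . /tmp/frag;    echo "${FOO-unset}"   # sourced: prints "bar"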

I say that somewhat tongue-in-cheek myself.  Because sourcing files
removes the abstraction barriers of a stacked child process and
actions there can persistently change the current shell.  Not good as
a general interface for random actions.  Normal scripts are better.

Bob

Who still remembers when, if the exec(2) failed, the shell examined
the first character?  If it was a '#' then the shell ran the file
through csh.  If ':' then through ksh.  If neither then sh.  This may
have been a local hack though.  Clearly the Berkeley #! hack is better.



Re: If rbash is worthless, why not remove it and decrease bloat?

2013-03-15 Thread Bob Proulx
Linda Walsh wrote:
> Greg Wooledge wrote:
> > Honestly, a "restricted shell" is usually a pitiful thing that would be
> > a joke, except it's not even funny.  
>   Chet answered this in context:
> Chet Ramey wrote:
> > Posix has chosen not to standardize the restricted shell, either `rsh' or
> > `set -r'.
> 
> I had the erroneous belief that 'rbash' was something useful to some
> people or was part of the POSIX standard.

It may actually be useful to someone.  But no one has sighted one of
those someones in decades.  It would be great if we could actually
find a living person who uses it and would tell us why.  They must be
very rare.  They might be extinct.

It isn't impossible to use in a useful and secure way.  It is just so
difficult to use in a useful and secure way that I have never seen a
successful implementation.  However I have been on the receiving end
of unsuccessful implementations.  And they were trivial to break out
of so that I could do useful work.  Which in the end was good.

The problem is that it may actually be useful.  It basically just
creates a "petting zoo" type of low fence.  It may be argued that it
is useful that way.  It makes people feel good that they have a
barrier to keep the kids in.  But people in the know can still get
real work done.
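
To make the "low fence" concrete, here is a hedged illustration that
is not part of the original mail (error messages paraphrased):

  rbash -c 'cd /tmp'         # cd: restricted
  rbash -c '/bin/ls'         # cannot specify '/' in command names
  rbash -c 'PATH=/usr/bin'   # PATH: readonly variable

  # The classic escapes go through any reachable program that will
  # happily start an unrestricted shell or interpreter for you.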

> As it is neither and provides little or no increased security over
> chrooting a process as Chris mentioned:
> 
> Chris Down wrote:
> > For the record running rbash without a chroot does not make any sense
> > in reality, it's usually easy to break out of. 

I think that was about using rbash together _with_ a chroot, and not
just a chroot by itself.

> Perhaps it would be doing a favor to users and allow some minor code
> cleanup to simply get rid of the 'rbash'/restricted functionality.

I know I wouldn't shed any tears to see it go away.  And there is
always rksh for those people who still want a restricted shell.

> It sounds like the idea isn't worth the increased bloat.
> 
> If it cannot be removed, then some people are using it with the false
> expectation that it provides some increased security.  Better to get
> rid of that than have someone think it is worth the extra bytes it takes
> to implement.

But is it worth the trouble to remove?  That would create work too,
and would potentially create an uproar in reverse from people who are
using the feature and would have it removed out from under them.

Features are easy to put in.  They are hard to take out.  Because
taking them out always hits someone.

Bob


