Re: bash sends SIGHUP to disowned children in non-interactive mode

2011-12-28 Thread Philip
On Wed, 28 Dec 2011 14:48:45 -0500, Greg Wooledge wrote:

> 
> If you want to disown something, you have to *stop* doing this double-fork
> nonsense.
> 
> #!/bin/bash
> set -m
> xterm &
> disown -h
> 
> Do not put (...) around the background job.  When you do that, the main
> bash process loses its parenthood over the xterm process, so it won't be
> able to manage it.
> 
> I'm not 100% sure if the set -m will be required to enable "disown" to work,
> but I'd try it that way.
> 
That "double-fork nonsense" was just to ensure that the forked process was not a
child of the shell. And I still don't understand why it sends a SIGHUP to such a
process on exit. Or maybe I do now..
The 'set -m' is exactly what is needed for everything to work. Then disown works
as expected and so does the double-fork. Without 'set -m' neither of the two 
ways
work properly.
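
For reference, a minimal sketch of the two variants that behave once job
control is on (assumes an X session so xterm can start):

#!/bin/bash
set -m          # enable job control in this non-interactive shell

# variant 1: keep xterm as a job of the shell, but mark it no-SIGHUP-on-exit
xterm &
disown -h %+

# variant 2: double-fork; the subshell exits at once and xterm is reparented
# to init, so it was never a job of this shell in the first place
(xterm &)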
I thought that using the header
#!/bin/bash -i
enables job control? Maybe that's the bug Stas Sergeev is talking about in the
previous thread?
Well, thanks a lot for the input. I thought I had job control covered with the
"-i" flag.

best regards



Re: bash sends SIGHUP to disowned children in non-interactive mode

2011-12-28 Thread Philip
> It may receive a SIGHUP, but you haven't demonstrated that it's bash
> sending the SIGHUP.  Since job control isn't enabled, it's most likely
> that the SIGHUP xterm sends to the process group when you hit the close
> button gets to xclock because it's in the same process group as bash.
> 
> Bash only sends SIGHUP to running jobs of an interactive login shell when
> the `huponexit' option is set.  It will resend SIGHUP to running jobs
> if they haven't been disowned, but the background-process-inside-a-subshell
> business means xclock isn't a child process or job of the shell anyway.
Yes, that's why I used the background-process-inside-a-subshell trick: I thought
it was a good way to make sure the process would not be a job of the shell.
I haven't gotten anywhere with strace yet, but I did figure out (thanks to you
and Greg) that I get different behavior with job control enabled.

So let's take these two scripts, clock1.sh and clock2.sh.
I know it sounds boring but please bear with me.
--
#!/bin/bash
set -m
xclock &
sleep 2
--
#!/bin/bash -i
xclock &
sleep 2
--
I run them with: xterm -hold -e './clock1.sh'
(the -hold keeps the xterm window open after the script finishes).
The first script works fine: xclock stays, and I don't even have to disown it.
With the second script, xclock appears and then disappears after 2 seconds; it
received a SIGHUP when the script ended. The xterm window itself is still there
because of the -hold parameter.
I get the exact same behavior if I use "(xclock &)" instead.

So without job control, bash might (I know, we don't know for sure) send the
SIGHUP to all processes started from the shell, whether they are its children
by definition or not?
Shouldn't the -i flag enable job control, by the way?
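
One way to check that process-group theory (a quick sketch; assumes
procps-style ps and pgrep are available):

#!/bin/bash
# with job control off, both xclocks end up in the script's process group,
# so a SIGHUP sent to that group reaches them whether they are children or not
xclock &
(xclock &)
sleep 1
ps -o pid,ppid,pgid,comm -p "$$,$(pgrep -d, xclock)"

With 'set -m' added, each background job should instead show its own PGID.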

Regards,
Philip



echo '-e' doesn't work as expected, or does it?

2012-01-21 Thread Philip
Hi! Short question this time..

$ echo '-e'
does not print -e

$ echo '-e '
does print "-e ".

Is this the way it ought to be? GNU bash, version 4.1.5(1)-release (Debian
stable)

Regards



Re: echo '-e' doesn't work as expected, or does it?

2012-01-21 Thread Philip
On Sat, 21 Jan 2012 12:24:03 +0100, Philip wrote:

> Hi! Short question this time..
> 
> $ echo '-e'
> does not print -e
> 
> $ echo '-e '
> does print "-e ".
> 
> Is this the way it ought to be? GNU bash, version 4.1.5(1)-release (Debian
> stable)
> 
> Regards
> 

Never mind! Just found an older thread about this.
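
For the record: bash's echo treats a first word of -e as an option, while
'-e ' (with the trailing space) is not a recognized option string, so it gets
printed literally. The unambiguous way to print such strings is printf:

$ printf '%s\n' '-e'
-e

This behaves the same no matter what the string contains.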



Highlighting in bash

2011-03-10 Thread Philip Prindeville

Hi.

First off, this isn't a bug report so much as a feature request.

I do a lot of cross-compilation of Linux and various packages for embedded
distros.

Version bumps are always perilous because cross-compilation often suffers
regressions.

The problem is that a lot of the regressions don't cause the build to fail...
they just generate invalid results.

Sometimes (not often, but sometimes) an innocuous clue will be buried deep
within thousands (or tens of thousands) of lines of "make all" output.

And by output, I mean STDERR.

My request is simple.  Using termcap/ncurses info (which you need anyway for the
readline stuff), it would be nice to have the option of running commands in a
pseudo-tty and then bracketing the output from STDERR with the terminal's
standout (highlighting) escape sequences.

Of course, that also implies that your writes to wherever STDERR eventually 
goes are atomic and won't be interspersed with output from STDOUT.  I'll let 
someone more intimate with the particulars of stdio and tty drivers and line 
disciplines figure that bit out.
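
As a crude approximation available today (a sketch, not the requested pty-based
feature; stdout/stderr interleaving is still not atomic, and it assumes a
terminal with standout support):

make all 2> >(sed "s/.*/$(tput smso)&$(tput rmso)/" >&2)

Here bash's process substitution feeds stderr through sed, which wraps each
line in the terminal's standout-on/standout-off sequences.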

This would be nice because it would allow one to quickly identify and isolate 
potentially detrimental error messages from mundane but profuse output that 
logs commands being invoked, etc.

Does this seem doable?

Thanks,

-Philip




Re: Highlighting in bash

2011-03-10 Thread Philip Prindeville

On 3/10/11 11:53 AM, Micah Cowan wrote:

(03/10/2011 11:42 AM), Philip Prindeville wrote:

My request is simple.  Using termcap/ncurses info (which you need anyway
for the readline stuff), it would be nice to have the option of running
commands in a pseudo-tty and then bracketing the output from STDERR with the
terminal's standout (highlighting) escape sequences.


This doesn't strike me as remotely within the domain of responsibilities
that should belong to bash, mainly because:

   1. it has nothing to do with anything that shells normally do.
   2. putting it in bash specifically would rob other, non-bash-related
commands of the possibility of having such a feature.

It wouldn't be difficult to write as a separate program, which is really
how this should be handled. You could redirect a pipeline's STDOUT and
STDERR to individual named pipes (FIFOs), and have a separate program
read from both pipes, inserting highlights around any data it copies
from the STDERR pipe.
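
A minimal sketch of that named-pipe approach (assuming GNU sed and a terminal
with standout support; two reader processes stand in for the single reader
program Micah describes):

mkfifo out err
cat out &                                     # pass stdout through untouched
sed "s/.*/$(tput smso)&$(tput rmso)/" <err &  # highlight every stderr line
make all >out 2>err
wait
rm out err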



You could, but that would entail modifying every Makefile (a couple of
hundred), plus automake, etc.

-Philip




Re: IMPLEMENTATION [Re: Highlighting in bash]

2011-03-10 Thread Philip Prindeville

On 3/10/11 12:50 PM, Micah Cowan wrote:

(03/10/2011 12:28 PM), Micah Cowan wrote:

(03/10/2011 11:53 AM), Micah Cowan wrote:

(03/10/2011 11:42 AM), Philip Prindeville wrote:

My request is simple.  Using termcap/ncurses info (which you need anyway
for the readline stuff), it would be nice to have the option of running
commands in a pseudo-tty and then bracketing the output from STDERR with the
terminal's standout (highlighting) escape sequences.





It wouldn't be difficult to write as a separate program, which is really
how this should be handled. You could redirect a pipeline's STDOUT and
STDERR to individual named pipes (FIFOs), and have a separate program
read from both pipes, inserting highlights around any data it copies
from the STDERR pipe.

The idea intrigued me somewhat, so I hacked up a Perl script that
attempts to do this (no guarantees of error-free code).

Find it at http://micah.cowan.name/hg/demarcate/raw-file/tip/demarcate

Set it to executable, and then try it out like:

mkfifo out
mkfifo err
./demarcate &
some-command >out 2>err

Note that you can also just do:

   exec >out 2>err

instead of running a specific program under it; but note that

   1. this works somewhat poorly if your prompt is colorized, or clears
      graphical attributes
   2. your prompt will now be highlighted, since readline emits it to
      stderr.
   3. bash can apparently do line-editing this way; ksh93 seems to have
      a problem doing so.

Basically, I don't recommend this, but it can work for some needs.

(Idea for improving this program: allow for giving a shell command as an
argument, eliminating the need for silly named pipes; just have the
script redirect the specified command through normal pipes.)
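
In that spirit, a hypothetical bash wrapper taking the command as its arguments
(the helper and its highlighting choice are illustrative, not Micah's actual
script):

#!/bin/bash
# demarcate-ish: run "$@" and highlight whatever it writes to stderr
hl=$(tput smso) off=$(tput rmso)
"$@" 2> >(while IFS= read -r line; do
            printf '%s%s%s\n' "$hl" "$line" "$off"
          done >&2)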



A lot of programs don't behave properly (or perhaps, "don't behave the same")
when they detect that stdout isn't a terminal.  But I think someone else
mentioned this already.

-Philip




read builtin breaks autocompletion

2005-12-02 Thread Philip Rowlands
Configuration Information [Automatically generated, do not change]:
Machine: i586
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i586' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i586-mandrake-linux-gnu' 
-DCONF_VENDOR='mandrake' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' 
-DSHELL -DHAVE_CONFIG_H  -I.  -I.. -I../include -I../lib  -D_GNU_SOURCE  -O2 
-fomit-frame-pointer -pipe -march=i586 -mtune=pentiumpro 
uname output: Linux medusa-s2 2.6.13.4-x86_01doc #1 SMP Fri Oct 28 15:59:59 BST 
2005 i686 Intel(R) Xeon(TM) CPU 2.00GHz unknown GNU/Linux
Machine Type: i586-mandrake-linux-gnu

Bash Version: 3.0
Patch Level: 16
Release Status: release

Description:

Using the "read" builtin with readline and timeout breaks autocompletion.
On their own, timeout and readline options are fine; this bug only occurs
when they're used together.
I've tried this with bash 2 and 3, and across different Linux vendors.

Repeat-By:
[ effects due to readline's "show-all-if-ambiguous off" not shown ]
$ bash[TAB]
bash     bash3    bashbug
$ read -e -t 1
[type nothing, times out after 1 second]
$ bash[TAB]
[ nothing happens, no completions listed ]




Re: read builtin breaks autocompletion

2005-12-02 Thread Philip Rowlands

On Fri, 2 Dec 2005, Chet Ramey wrote:


I can't reproduce it, using bash-2.05b and bash-3.0 on MacOS X and Red
Hat Linux.  I don't use any custom completions or the bash-completion
package, though, so I don't know what effect those might have.


Hmm, I did take pains not to have any .inputrc or completion 
side-effects. A little more digging:


SHELL 1
$ echo $$
$ read -t 1

(gdb) print /x rl_readline_version
$1 = 0x500
(gdb) print /x rl_readline_state
$2 = 0x4000e

RL_STATE_INITIALIZED
RL_STATE_TERMPREPPED
RL_STATE_READCMD
RL_STATE_TTYCSAVED


SHELL 2
$ echo $$
$ read -e -t 1

(gdb) print /x rl_readline_state
$1 = 0x4800e

RL_STATE_INITIALIZED
RL_STATE_TERMPREPPED
RL_STATE_READCMD
RL_STATE_SIGHANDLER
RL_STATE_TTYCSAVED

Does this help narrow down the cause? If not, I'll have to go diving 
around in the bash source.



Cheers,
Phil




Re: read builtin breaks autocompletion

2005-12-07 Thread Philip Rowlands

In builtins/read.def (from bash-3.0.tar.gz):

  677   old_attempted_completion_function = rl_attempted_completion_function;
  678   rl_attempted_completion_function = (rl_completion_func_t *)NULL;
  679   ret = readline (p);
  680   rl_attempted_completion_function = old_attempted_completion_function;


I suspect that, in the case of a timeout, the old function is never restored
and remains NULL. (I'm puzzled that this isn't repeatable for others, unless
readline is somehow completely disabled.)


I hope this makes sense to somebody, because the signal-handling code and
add_unwind_protect look... scary, and out of my depth to patch.



Phil




Re: read builtin breaks autocompletion

2006-01-22 Thread Philip Rowlands

Ping

On Wed, 7 Dec 2005, Philip Rowlands wrote:


In builtins/read.def (from bash-3.0.tar.gz):

  677   old_attempted_completion_function = rl_attempted_completion_function;
  678   rl_attempted_completion_function = (rl_completion_func_t *)NULL;
  679   ret = readline (p);
  680   rl_attempted_completion_function = old_attempted_completion_function;


I suspect that, in the case of a timeout, the old function is never restored
and remains NULL. (I'm puzzled that this isn't repeatable for others, unless
readline is somehow completely disabled.)


I hope this makes sense to somebody, because the signal-handling code and
add_unwind_protect look... scary, and out of my depth to patch.



Phil





Wanted: bash enhancement... non-blocking 'wait'

2010-09-03 Thread Philip Prindeville

I wanted to check in and see if there was a chance of this feature being
accepted upstream before I spent any time on it... so here goes.

The "wait [n]" command is handy, but would be even handier is:

wait [[-a] n]

instead, which asynchronously checks to see if process 'n' has completed.

I.e. this would be like calling waitpid() with options=WNOHANG, instead of 0.

Why bother?

Well, sometimes in scripts (especially /etc/init.d/ scripts) it's useful to do 
something like:

timeout=30
interval=1

pppd ... updetach &
pid=$!

declare -i n=0

while [ $n -lt $timeout ]; do
  wait -a $pid      # proposed flag: non-blocking, like waitpid() w/ WNOHANG
  status=$?

  # (hypothetical convention: status 128 would mean "still running")
  if [ $status -ne 128 ]; then break; fi

  sleep $interval
  n=$((n + interval))
done

if [ $status -ne 0 ]; then
  echo "Couldn't start PPPD or failed to connect." >&2
  exit 1
fi


as an example. (In this case, the original instance of pppd would have forked
and detached once it successfully connected, causing it to exit with 0;
anything else would return non-zero.)

Does this seem like a reasonable enhancement?

Thanks,

-Philip




Re: Wanted: bash enhancement... non-blocking 'wait'

2010-09-03 Thread Philip Prindeville

 On 9/3/10 10:44 AM, Eric Blake wrote:

On 09/02/2010 04:44 PM, Philip Prindeville wrote:

I wanted to check in and see if there was a chance of this feature being
accepted upstream before I spent any time on it... so here goes.

The "wait [n]" command is handy, but would be even handier is:

wait [[-a] n]

instead, which asynchronously checks to see if process 'n' has completed.


What's wrong with using the existing 'kill -0 pid' to check if pid still 
exists, rather than inventing a new argument to 'wait'?



Well, in theory, if you waited long enough (to look for the process) and it
had exited and a new process with that ID had been created, you'd detect the
wrong process.

At least with 'wait' you're guaranteed it's still one of your own children,
right?

Besides, 'wait' is just a lot easier to read and understand... not everyone
knows the system-call semantics of kill().




[BUG] Bad call to dircolors in /etc/bash.bashrc on LMDE

2018-11-01 Thread Philip Hudson
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
-DSHELL -DHAVE_CONFIG_H   -I.  -I../. -I.././include -I.././lib
-D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector-strong -Wformat
-Werror=format-security -Wall
uname output: Linux asdf 3.16.0-7-amd64 #1 SMP Debian 3.16.59-1
(2018-10-03) x86_64 GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.3
Patch Level: 30
Release Status: release

Description:
At line 44 of /etc/bash.bashrc the line

   eval $(dircolors)

is incorrect: it errors if $SHELL is tcsh, because dircolors picks its output
syntax (Bourne vs. csh) from $SHELL when no option is given. All the other
calls correctly specify 'dircolors -b'.

Note: This arises on Linux Mint Debian Edition 2. I have not been able to
determine whether this report is more properly addressed to GNU or downstream
to Debian or Linux Mint. Apologies if it's not you.

Repeat-By:
In a tcsh interactive shell session, invoke 'bash' or 'bash -i'.

Fix:
Change line 44 to:

   eval $(dircolors -b)

Consistent with all the other calls to dircolors.
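
A quick way to see the difference (a sketch; the LS_COLORS payload is elided):

   $ SHELL=/bin/tcsh dircolors | head -n1
   setenv LS_COLORS '...'
   $ SHELL=/bin/tcsh dircolors -b | head -n1
   LS_COLORS='...';

The first form is csh syntax, which bash's eval chokes on; -b forces Bourne
syntax regardless of $SHELL.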


-- 
Phil Hudson  http://hudson-it.ddns.net
Pretty Good Privacy (PGP) ID: 0x4E482F85



Brokenness in bash-3.2 Makefile

2010-01-12 Thread Philip A. Prindeville
I'm trying to cross-build bash-3.2 with certain cross-compilation flags (like
LDFLAGS="--sysroot=$(STAGING_DIR)/usr"), but since LDFLAGS_FOR_BUILD is set to
$(LDFLAGS), this isn't working.

It would be nice to be able to set the two orthogonally.
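
A possible stopgap (a sketch; relies on GNU make letting command-line variable
assignments override the Makefile's LDFLAGS_FOR_BUILD = $(LDFLAGS) line):

make LDFLAGS="--sysroot=$STAGING_DIR/usr" LDFLAGS_FOR_BUILD=

Here the host link flags carry the sysroot while the build-tool link flags are
left empty, keeping the two orthogonal.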

Anyone have a patch I could test?

Thanks,

-Philip