Re: xtrace output on new file descriptor

2009-12-10 Thread Brian J. Murrell
On Thu, 2009-12-10 at 22:20 -0500, Chet Ramey wrote: > > Nothing good. Pity. > The next version of bash will allow you to specify an arbitrary > file descriptor where the xtrace output will be written. Cool. I wonder how long it will take the distros to pick that up though. Sure, I could build
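
For reference, the feature Chet mentions shipped as BASH_XTRACEFD in bash 4.1. A minimal sketch of how it is used, with trace.log standing in for whatever destination you prefer:

    exec 4> trace.log        # open a dedicated fd for the trace output
    BASH_XTRACEFD=4          # xtrace now writes to fd 4 instead of stderr
    set -x
    echo "stdout and stderr are untouched"
    set +x
    unset BASH_XTRACEFD      # bash closes the trace fd and xtrace reverts to stderr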

xtrace output on new file descriptor

2009-12-10 Thread Brian J. Murrell
I'm wondering if anyone has any tricks to preserve stderr on file descriptor 2 and get xtrace output on a different file descriptor. I've been pulling my hair out trying to get the redirection right for this but just can't come up with the right combination. I'd imagine it involves stashing away fd 2, duplicating fd
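
For anyone searching the archive later, a rough sketch of the fd juggling involved, assuming the trace should end up in a file called xtrace.log. The catch that makes this ugly is that ordinary commands still send their own diagnostics to fd 2, i.e. into the trace file, unless each one is redirected back to the saved descriptor:

    exec 7>&2                # stash the real stderr on a spare fd
    exec 2> xtrace.log       # fd 2, and therefore the xtrace output, goes to the log
    set -x
    grep pattern missing-file 2>&7   # per-command redirection keeps genuine errors on the old stderr
    set +x
    exec 2>&7 7>&-           # restore stderr and drop the spare fd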

Re: preventing pipe reader from exiting on writer exiting

2009-10-01 Thread Brian J. Murrell
On Wed, 2009-09-30 at 23:13 +0200, Andreas Schwab wrote: > > Just make sure the write side of the pipe is not closed prematurely. Hrm. Yes, of course. John's solution of having a null writer keeping it open is one way -- which I might just use. > $ (n=0; while [ $n -lt 10 ]; do cat /dev/zero;
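
A minimal sketch of the null-writer idea, assuming the FIFO lives at /tmp/fifo as in the original example:

    mkfifo /tmp/fifo
    cat < /tmp/fifo &            # the long-lived reader
    exec 3> /tmp/fifo            # "null" writer: holds the write side open, never writes
    cat /dev/zero > /tmp/fifo &  # the real writer; it can be killed and restarted freely
    # ... later, when the reader really should see EOF:
    exec 3>&-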

preventing pipe reader from exiting on writer exiting

2009-09-30 Thread Brian J. Murrell
Let's say I have the following (contrived, simplified example): $ mknod /tmp/fifo p $ cat /dev/zero > /tmp/fifo & $ cat < /tmp/fifo When the first cat exits (i.e. is terminated), the second cat stops. The problem is that I want to be able to restart the first cat and have the second cat just keep r

trying to make sense of BASH_LINENO

2009-03-20 Thread Brian J. Murrell
I'm trying to write a "stack trace" function and BASH_LINENO doesn't make sense at times and doesn't appear accurate at others. Here's my test script: shopt -s extdebug trap 'backtrace' ERR set -E backtrace() { echo "FUNCNAME: ${FUNCNAME[@]}" echo "BASH_SOURCE: ${BASH_SOURCE[@]}"
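
The convention that makes the arrays line up is that ${FUNCNAME[i]} was called from ${BASH_SOURCE[i+1]} at line ${BASH_LINENO[i]}. A sketch of a backtrace built on that rule (not the poster's script, just an illustration):

    #!/bin/bash
    set -E
    shopt -s extdebug
    backtrace() {
        local i
        echo "--- backtrace ---" >&2
        for ((i = 1; i < ${#FUNCNAME[@]}; i++)); do
            # frame i is reported at the line recorded for the frame below it
            echo "  ${FUNCNAME[$i]} at ${BASH_SOURCE[$i]}:${BASH_LINENO[$((i-1))]}" >&2
        done
    }
    trap backtrace ERR
    inner() { false; }
    outer() { inner; }
    outer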

Re: manually pipe processes

2009-03-05 Thread Brian J. Murrell
On Thu, 2009-03-05 at 13:47 -0500, Greg Wooledge wrote: > > That might be a little more heavy-handed than you were looking for, > but since you're already hitting /tmp it shouldn't be terrible to add > a FIFO there. Yeah. I really didn't want to use a named pipe for this. Thanx for the help tho
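
Greg's message is only excerpted above, so this is just a guess at its shape: the general named-pipe pattern, with producer and consumer as placeholders for whichever two pipeline stages get split apart:

    # the two ends of one of the pipeline's legs become independent
    # processes joined by a FIFO under /tmp
    mkfifo /tmp/pipe
    consumer < /tmp/pipe &     # placeholder for the reading stage
    producer > /tmp/pipe       # placeholder for the writing stage
    wait                       # reap the backgrounded reader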

manually pipe processes

2009-03-05 Thread Brian J. Murrell
Hi, I want to effect this pipeline: tar cf - /etc | tar xf - | tee /tmp/outfile manually. It seems that some form of file descriptor manipulation (i.e. moving, duplication, etc.) should be able to achieve this, but I can't seem to figure it out. Why would I want to do this? Because I want (sp
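
One way to build the same pipeline "by hand" with file descriptor plumbing rather than the | operator is process substitution; a sketch, not necessarily what the (truncated) underlying requirement called for:

    # each >(...) runs its command with stdin connected to the redirected output;
    # note the shell does not wait for the >(...) processes to finish
    tar cf - /etc > >(tar xf - > >(tee /tmp/outfile))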

Re: Bash RFE: Goto (especially for jumping while debugging)

2008-09-22 Thread Brian J. Murrell
On Mon, 2008-09-22 at 20:44 +0100, Richard Neill wrote: > How about... > --- > #!/bin/bash > > #initialisation stuff goes here. > if false; then > > #lots of stuff here that I want to skip. > #Bash doesn't have a multi-line comment feature. > #Even if it did, one can't do a mu
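
Filling in the shape of Richard's suggestion for readers of the archive, with expensive_setup standing in for whatever is being skipped; the one caveat worth adding is that the skipped block must still parse as valid shell:

    #!/bin/bash
    # initialisation stuff goes here

    if false; then        # flip to "true" (or test a SKIP_SETUP variable) to re-enable
        expensive_setup   # lots of stuff to jump over while debugging
    fi

    # debugging continues here, with the setup skipped but still syntax-checked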

Re: broken pipe

2008-02-13 Thread Brian J. Murrell
On Wed, 2008-02-13 at 16:00 -0500, Brian J. Murrell wrote: > > find / -type f -print 2>&1 | head -20 || true Doh! This of course won't work. The first solution should though. b.

Re: broken pipe

2008-02-13 Thread Brian J. Murrell
On Wed, 2008-02-13 at 14:56 -0600, Michael Potter wrote: > Bash Bunch, > > I googled a bit and I see this problem asked several times, but I > never really saw a slick solution: > > given this: > > set -o pipefail > find / -type f -print 2>&1 | head -20 > echo ${PIPESTATUS[*]} > > prints this:
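
find is killed with SIGPIPE (128 + 13 = 141) as soon as head has read its 20 lines, and with pipefail that 141 becomes the pipeline's exit status; one way to tolerate it is to capture PIPESTATUS immediately and allow that one code:

    set -o pipefail
    find / -type f -print 2>&1 | head -20
    ps=("${PIPESTATUS[@]}")      # copy it before the next command resets it
    # 141 = 128 + SIGPIPE(13): expected once head exits early
    if (( ps[0] != 0 && ps[0] != 141 )); then
        echo "find really failed: ${ps[*]}" >&2
    fi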

time limiting command execution

2008-01-17 Thread Brian J. Murrell
I am trying to write a function to limit a command's (wall-clock) execution time, but damned if I can eliminate all of the races. Here is my current iteration of the function: 1 timed_run() { 2 local SLEEP_TIME=$1 3 shift 4 5 set +o monitor 6 7 # start command running 8
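
A sketch of one common shape for this, not claiming it closes every race the original function is fighting: run the command in the background, arm a watchdog that kills it after the limit, and cancel the watchdog if the command finishes first.

    timed_run() {
        local limit=$1; shift
        "$@" &                                          # the command under a time limit
        local cmd=$!
        ( sleep "$limit"; kill "$cmd" 2>/dev/null ) &   # the watchdog
        local dog=$!
        wait "$cmd"; local rc=$?
        kill "$dog" 2>/dev/null                         # cancel the watchdog (its sleep may linger briefly)
        return "$rc"
    }
    # usage: timed_run 5 some_slow_command    (143 = killed by the watchdog's SIGTERM)

On systems that have coreutils timeout(1), "timeout 5 some_slow_command" does the same job without the hand-rolled plumbing.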

waiting for process in process substitution

2007-03-14 Thread Brian J. Murrell
to illustrate that the cat should exit here: exec 3>&-; while [ -f foo ]; do sleep 1; done; exit. But it's just so ugly. For what it's worth, the process in the substitution is expect and what I am feeding it from the script is an expect script
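
A named FIFO avoids the problem entirely, because the consumer is then an ordinary background job that wait understands; a sketch, with feed_expect.exp standing in for the real expect script:

    fifo=$(mktemp -u) && mkfifo "$fifo"
    expect feed_expect.exp < "$fifo" &    # what used to be the process substitution
    pid=$!
    exec 3> "$fifo"
    # ... write the generated expect input to fd 3 ...
    exec 3>&-                             # EOF for the consumer
    wait "$pid"                           # now the script really waits for expect to exit
    rm -f "$fifo"

Later bash releases reportedly also set $! after a process substitution and allow wait on it, but the FIFO route works on the bash of this era.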

Re: reading the first colums of text file

2007-02-04 Thread Brian J. Murrell
k I was purely considering the cost of fork/exec. In any case, dead horses and all. b.

Re: reading the first colums of text file

2007-02-04 Thread Brian J. Murrell
On Sat, 2007-02-03 at 23:30 -0500, Paul Jarc wrote: > "Brian J. Murrell" <[EMAIL PROTECTED]> wrote: > > < <(cat $file) > > http://partmaps.org/era/unix/award.html LOL. Too right. I am just so used to using process redirection to solve the old "but my

Re: reading the first colums of text file

2007-02-03 Thread Brian J. Murrell
1 rest; do echo $column1 done Or if you need the data in the calling shell's context: while read column1 rest; do # the goodies are in $column1 done < <(cat $file) Probably a dozen other ways to do it too. b.
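
Rounding off the excerpt (and the award link in the reply above): the cat adds nothing, since the file can be redirected straight into the loop and the variables still land in the calling shell:

    while read -r first rest; do
        echo "$first"          # first column of each line
    done < "$file"             # $file as in the thread; no cat, no subshell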

BASH_COMMAND

2006-08-25 Thread Brian J. Murrell
ND' ERR $ false false given the explanation in the manpage. I must be misunderstanding something. b.
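
For anyone comparing notes with this thread, the documented behaviour is that BASH_COMMAND inside a trap handler names the command that was executing when the trap fired, so the trap string has to be single-quoted to delay the expansion; a generic illustration (not a resolution of the confusion above):

    trap 'echo "ERR from: $BASH_COMMAND" >&2' ERR
    false                 # prints: ERR from: false
    ls /no/such/path      # prints: ERR from: ls /no/such/path (after ls's own error)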

PATCH: direct xtrace to a file descriptor

2006-07-28 Thread Brian J. Murrell
rintf (xtrace_stream, "%s", (arg2 && *arg2) ? arg2 : "''"); } - fprintf (stderr, " ]]\n"); + fprintf (xtrace_stream, " ]]\n"); + fflush (xtrace_stream); } #endif /* COND_COMMAND */ @@ -785,11 +825,12 @@ { WORD_LIST *