Pesky here-document warnings

2010-11-16 Thread Mun
Hi,

I often get the following e-mailed to me by my system with the subject
of "Output from your job ":

-- Delimiter BEGIN 

sh: line 497: warning: here-document at line 494 delimited by end-of-file (wanted ``(dd if=/dev/urandom count=200 bs=1 2>/dev/null|LC_ALL=C tr -d -c [:alnum:])`')

--- Delimiter END -

I have no idea which job could be producing this error.  Does it look
familiar to anyone?  Any ideas on how I can track down the culprit?
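
For reference, even a trivial unterminated here-document reproduces this
style of warning, so presumably some generated script is ending before its
delimiter is seen (this snippet is just an illustration, not the actual job):

   #!/bin/sh
   # The closing MARKER never appears, so at end-of-file the shell prints a
   # warning like the one above and hands cat whatever it has read so far.
   cat <<MARKER
   some text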

Regards,

-- 
Mun



Re: Pesky here-document warnings

2010-11-16 Thread Mun
Hi Chet, Greg,

Thanks for your replies.

On Tue, Nov 16, 2010 at 01:22 PM PST, Chet Ramey wrote:
CR> > Hi,
CR> > 
CR> > I often get the following e-mailed to me by my system with the subject
CR> > of "Output from your job ":
CR> > 
CR> > -- Delimiter BEGIN 

CR> > 
CR> > sh: line 497: warning: here-document at line 494 delimited by end-of-file (wanted ``(dd if=/dev/urandom count=200 bs=1 2>/dev/null|LC_ALL=C tr -d -c [:alnum:])`')
CR> > 
CR> > --- Delimiter END -
CR> > 
CR> > I have no idea which job could be producing this error.  Does it look
CR> > familiar to anyone?  Any ideas on how I can track down the culprit?
CR> 
CR> The shell thinks you have an unterminated here document.  Have you looked at
CR> line 494 of the script producing this error message?

That's just it, I don't have any cron scripts that use here-documents.
Most of my cron jobs are fairly trivial scripts less than 100 lines long.

I'm running on a Red Hat Enterprise Linux 5.5 system, and my guess is
that the system is running something on my behalf.  But I don't know
what.

At first I was able to ignore the messages because I could not detect
any anomalies.  But now I've reached my threshold where the messages
(sometimes up to about ten in a day) are starting to annoy me.

Thanks again for the replies.

Regards,

-- 
Mun



Re: Pesky here-document warnings

2010-11-19 Thread Mun
Hi Greg,

On Wed, Nov 17, 2010 at 05:28 AM PST, Greg Wooledge wrote:
GW> On Tue, Nov 16, 2010 at 03:13:13PM -0800, Mun wrote:
GW> > That's just it, I don't have any cron scripts that use here-documents.
GW> 
GW> at jobs.

I don't have any 'at' jobs (that I know of).

GW> > Most of my cron jobs are fairly trivial scripts less than 100 lines long.
GW> 
GW> Not cron.  (Unless RHEL's cron spits out emails that look like at's?  I've
GW> never seen a job ID number in a cron email.)
GW> 
GW> > I'm running on a Red Hat Enterprise Linux 5.5 system, and my guess is
GW> > that the system is running something on my behalf.  But I don't know
GW> > what.
GW> 
GW> On HP-UX:
GW> 
GW> List scheduled jobs:
GW>   at -l [job-id ...]
GW> 
GW>   at -l -q queue
GW> 
GW> On Debian:
GW> 
GW>   atq [-V] [-q queue]

I ran atq as myself as well as root and nothing showed up.  So now I'm
wondering what could possibly create an at job on the fly.

GW> > At first I was able to ignore the messages because I could not detect
GW> > any anomalies.  But now I've reached my threshold where the messages
GW> > (sometimes up to about ten in a day) are starting to annoy me.
GW> 
GW> Then you should have about 10 intervals per day to find a scheduled job.
GW> (Hint: do it as whoever's receiving the e-mail.)

Maybe I'll have to kick off a cron job to run atq until I can figure out
the culprit.
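
Something along these lines, I suppose (illustrative; the log file path is
just an example):

   # crontab entry: snapshot the at queue every minute with a timestamp, so
   # even a short-lived job should leave a trace in the log.
   * * * * *  { date; atq; } >> /home/mun/atq.log 2>&1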

GW> You could also try to find the place on disk where the at jobs live.
GW> It might be /var/spool/at (RH 5.2) or /var/spool/cron/atjobs (Debian 5.0,
GW> HP-UX 10.20), or somewhere else entirely.

Looked there; directory is empty.

Thanks for the excellent suggestions.

-- 
Mun



Re: [bash 4.2] In vi mode, cc failed to change the whole line

2011-02-16 Thread Mun
Hi,

On Wed, Feb 16, 2011 at 08:43 PM PST, Clark J. Wang wrote:
CJW> For example, in vi insert mode, I first enter a command like this:
CJW> 
CJW> # hello world
CJW> 
CJW> Then I press ESC and type cc, the cursor just moves to the beginning (under
CJW> the char `h') and the whole line is not emptied. If I type more chars after
CJW> cc, only the first `h' char is replaced and following `ello world' keeps
CJW> unchanged. Note that other vi mode commands like cw and c$ work fine.

I noticed a similar phenomenon when the vi command 'dd' is entered.
Bash no longer deletes the entire line; it just moves the cursor to the
first char.  I'm using readline v6.2 and bash v4.2 on RHEL 5.6.

Regards,

-- 
Mun


CJW> I'm using Debian 6.0 (i686) and here's some of my system info:
CJW> 
CJW> # bash --version
CJW> GNU bash, version 4.2.0(1)-release (i686-pc-linux-gnu)
CJW> Copyright (C) 2011 Free Software Foundation, Inc.
CJW> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
CJW> 
CJW> This is free software; you are free to change and redistribute it.
CJW> There is NO WARRANTY, to the extent permitted by law.
CJW> # dpkg -l | grep -E 'readline|ncurses'
CJW> ii  libncurses5      5.7+20100313-5   shared libraries for terminal handling
CJW> ii  libncurses5-dev  5.7+20100313-5   developer's libraries and docs for ncurses
CJW> ii  libncursesw5     5.7+20100313-5   shared libraries for terminal handling (wide character support)
CJW> ii  libreadline5     5.2-7            GNU readline and history libraries, run-time libraries
CJW> ii  libreadline6     6.1-3            GNU readline and history libraries, run-time libraries
CJW> ii  ncurses-base     5.7+20100313-5   basic terminal type definitions
CJW> ii  ncurses-bin      5.7+20100313-5   terminal-related programs and man pages
CJW> ii  ncurses-term     5.7+20100313-5   additional terminal type definitions
CJW> ii  readline-common  6.1-3            GNU readline and history libraries, common files
CJW> # ldd /usr/local/bash-4.2.0/bin/bash
CJW> linux-gate.so.1 =>  (0xb773)
CJW> libncurses.so.5 => /lib/libncurses.so.5 (0xb76ec000)
CJW> libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb76e8000)
CJW> libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb75a1000)
CJW> /lib/ld-linux.so.2 (0xb7731000)
CJW> #
CJW> 
CJW> By the way, I don't understand why there's no libreadline in the output of
CJW> `ldd bash'. Can anyone explain?
CJW> 
CJW> -- 
CJW> Clark



Re: [bash 4.2] In vi mode, cc failed to change the whole line

2011-02-17 Thread Mun
Hi Chet,

On Thu, Feb 17, 2011 at 06:00 PM PST, Chet Ramey wrote:
CR> On 2/16/11 11:43 PM, Clark J. Wang wrote:
CR> > For example, in vi insert mode, I first enter a command like this:
CR> > 
CR> > # hello world
CR> > 
CR> > Then I press ESC and type cc, the cursor just moves to the beginning (under
CR> > the char `h') and the whole line is not emptied. If I type more chars after
CR> > cc, only the first `h' char is replaced and following `ello world' keeps
CR> > unchanged. Note that other vi mode commands like cw and c$ work fine.
CR> 
CR> Thanks for the report.  Please try the attached patch; it fixes the problem
CR> in the testing I've done.

The patch works perfectly for me.  Thanks very much.

Regards,

-- 
Mun



Cron jobs, env vars, and group ID ... oh my

2012-11-28 Thread Mun
Hi,

I need to run a script via cron that in turn runs a script to set up the
environment variables required by a subsequent script.  Moreover, I need to
change my group ID so that the scripts called within the cron job run
properly.

I have included a representation of my script below.  It turns out that within
my cron script, I am not seeing my group ID being changed (I call "id" for
debug purposes).  And even more puzzling is the behavior I'm seeing with the
environment variables.

I am using a here-document so that once the environment variables are set by
"setUpEnvVars" (see below) they can be accessed by the successive calls (Note:
those calls are replaced by debug statements in the script below).  When I
echo or use the environment variables, they appear unset.  However, printenv
correctly lists the environment variables with proper values.  Environment
variables that get set up via my profile (like $HOME) _are_ accessible within
my here-document.

In summary, I need help with two things:

1. How can I change my group ID within the cron script so that it applies to
   all successive calls--specifically within the here-document?

2. How can I access the environment variables set up by "setUpEnvVars" (see
   below) within the here-document?

If I am approaching this problem incorrectly, please let me know.

-- Delimiter BEGIN 

#! /bin/bash

newgrp group1
id -g -n   # This shows my login group ID, not group1

export SHELL="/bin/bash -i"

/path/setUpEnvVars arg1 arg2 &> /home/mun/out.log <
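
For concreteness, the shape of what I am after is roughly this (a sketch
only; whether setUpEnvVars can simply be sourced, whether it exports what it
sets, and whether sg(1) is an acceptable substitute for newgrp here are all
assumptions on my part):

   #! /bin/bash

   # Pull the variables into this shell instead of a child process.
   source /path/setUpEnvVars arg1 arg2

   # Run the group-dependent work under the desired group via sg(1).
   sg group1 -c 'id -g -n; echo "VAR1 is: $VAR1"' &> /home/mun/out.log
   # (VAR1 is just a stand-in for one of the variables setUpEnvVars sets.)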

Re: Cron jobs, env vars, and group ID ... oh my

2012-11-28 Thread Mun
Hi Greg, Bob,

Thanks for your replies.
Please see my comments below.


On Wed, Nov 28, 2012 at 09:50 AM PST, Greg Wooledge wrote:
GW> On Wed, Nov 28, 2012 at 09:10:02AM -0800, Mun wrote:
GW> > I need to run a script via cron that in turn runs a script to set up the
GW> > environment variables required by a subsequent script.  Moreover, I need
GW> > to change my group ID so that the scripts called within the cron job run
GW> > properly.
GW> 
GW> This belongs on help-bash, not bug-bash.

Oops, my apologies; I forgot about the help-bash list.  I will post a variant
of my original post to help-bash (without this group ID problem, since you
folks have addressed that issue already).

GW> > #! /bin/bash
GW> > 
GW> > newgrp group1
GW> > id -g -n   # This shows my login group ID, not group1
GW> 
GW> Ah, the fundamental question here is "how does newgrp(1) work".

I see.  I will try Bob's suggestion to read input from another file.
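
If I've understood that suggestion correctly, it amounts to something like
this (untested, and the file name is just a placeholder):

   #!/bin/bash
   # newgrp replaces this shell with a new one running under group1; because
   # stdin is redirected from a file, that new shell reads its commands there.
   newgrp group1 < /home/mun/bin/cron_work.sh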

Best regards,

-- 
Mun

GW> 
GW> Quoting the HP-UX man page:
GW> 
GW>   The newgrp command changes your group ID without changing your user ID
GW>   and replaces your current shell with a new one.
GW> 
GW> And a demonstration (from bash):
GW> 
GW> imadev:~$ id
GW> uid=563(wooledg) gid=22(pgmr) groups=1002(webauth),208(opgmr)
GW> imadev:~$ echo $$
GW> 8282
GW> imadev:~$ newgrp opgmr
GW> imadev:~$ echo $$
GW> 4859
GW> imadev:~$ id
GW> uid=563(wooledg) gid=208(opgmr) groups=22(pgmr),1002(webauth)
GW> imadev:~$ exit
GW> imadev:~$ echo $$
GW> 8282
GW> 
GW> So, you can see that this is utterly useless in a script.  Try using
GW> sudo(1) instead if it's available.
GW> 
GW> P.S., newgrp works very differently from within ksh, where it is a
GW> shell builtin.  Still useless in a script, though.




Re: Is direxpand available yet (to fix dirspell)?

2013-01-10 Thread Mun
Hi,

On Wed, Jan 09, 2013 at 05:34 AM PST, Chet Ramey wrote:
CR> On 1/8/13 5:26 PM, John Caruso wrote:
CR> 
CR> > I forgot to mention that I've tested this with bash 4.2.10 and 4.2.24,
CR> > and neither of them appear to have the direxpand option.  I checked the
CR> > bash source but couldn't suss out (in a brief look) how minor bash
CR> > versions are accounted--there's no 4.2.10 or 4.2.24 source, just 4.2
CR> > source plus a bunch of patches, and it's not clear if those patches have
CR> > made it into an official bash release or which release number that is.
CR> 
CR> Bash-4.2.x is bash-4.2 with patches 1-x applied.

Where can one find the Bash-4.2.x downloads?  I looked on the GNU and
cwru.edu web sites and couldn't find them.

Thanks and regards,

-- 
Mun



Re: Is direxpand available yet (to fix dirspell)?

2013-01-10 Thread Mun
Hi,

On Thu, Jan 10, 2013 at 05:11 PM PST, Chet Ramey wrote:
CR> 
CR> On 1/10/13 7:21 PM, Eric Blake wrote:
CR> 
CR> > There is no upstream pre-built tarball with all of the patches applied;
CR> > you have to do it yourself (or use a pre-built downstream binary from
CR> > your distro of choice).
CR> 
CR> This is one of the things that the git repository can do for you.  Grab,
CR> for instance,
CR> 
CR> http://git.savannah.gnu.org/cgit/bash.git/snapshot/bash-master.tar.gz
CR> 
CR> and you will get the head of the master branch (which is bash-4.2.42
CR> right now).
CR> 
CR> You can go to http://git.savannah.gnu.org/cgit/bash.git/ , choose one
CR> of the branches (e.g., bash-4.2 patch 41), get to that branch's commit
CR> page, and download a tarball for that commit.  Bash-4.2.41's tarball is
CR> 
CR> http://git.savannah.gnu.org/cgit/bash.git/snapshot/bash-8dea6e878b47519840d57eb215945f3e1fac7421.tar.gz

This is exactly what I was looking for; thanks very much.
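
For anyone finding this in the archives, the patch-application route Chet and
Eric describe looks roughly like this (a sketch; 42 was the patch level at
the time):

   wget http://ftp.gnu.org/gnu/bash/bash-4.2.tar.gz
   tar xzf bash-4.2.tar.gz && cd bash-4.2
   # Official patches are applied with -p0 from inside the source directory.
   for i in $(seq -w 1 42); do
       wget -q "http://ftp.gnu.org/gnu/bash/bash-4.2-patches/bash42-0$i"
       patch -p0 < "bash42-0$i"
   done
   ./configure && make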

-- 
Mun



Query regarding ${parameter:-word} usage

2009-12-23 Thread Mun
Hi,

I am moving from ksh93 to bash and have a question regarding the usage
of ${parameter:-word} parameter expansion.

In ksh, I use ${*:-.} as an argument to commands.  For example:

   function ll
   {
      ls --color -Flv ${*:-.}
   }

This technique passes '.' as an arg to 'ls' if I didn't pass any args on
the command line (as I'm sure you all already know).  But this does not
work with bash; nor have I been able to come up with a technique that
accomplishes the same thing.  My only workaround so far is to put an
'if' statement around the 'ls' that tests $# and takes the appropriate branch
depending on the number of args (i.e., 0 or non-zero).

Any suggestions would be welcomed.  Thanks in advance.

-- 
Mun




Re: Query regarding ${parameter:-word} usage

2009-12-23 Thread Mun
Hi,

Thanks for your replies.
Please see my comments below.

On Wed, Dec 23, 2009 at 05:30 PM PST, Matthew wrote:
MW> 
MW> 
MW> Mun wrote:
MW> > I am moving from ksh93 to bash and have a question regarding the usage
MW> > of ${parameter:-word} parameter expansion.
MW> >
MW> > In ksh, I use ${*:-.} as an argument to commands.  For example:
MW> >
MW> > function ll
MW> > {
MW> >    ls --color -Flv ${*:-.}
MW> > }
MW> >
MW> > This technique passes '.' as an arg to 'ls' if I didn't pass any args on
MW> > the command line (as I'm sure you all already know).  But this does not
MW> > work with bash; nor have I been able to come up with a technique that
MW> > accomplishes the same thing.  My only workaround so far is to put an
MW> > 'if' statement around the 'ls' that tests $# and takes the appropriate branch
MW> > depending on the number of args (i.e., 0 or non-zero).
MW> >
MW> > Any suggestions would be welcomed.  Thanks in advance.
MW> 
MW> Not sure why the above doesn't work, though you probably mean to use
MW> "$@" and not $* (presence/absence of ""s is intentional). This seems to
MW> work for me:
MW> 
MW> function ll
MW> {
MW> ls --color -Flv "${@:-.}"
MW> }

I tried the above and got the following error:

bash: $@: unbound variable

Note that I am running the following version:
GNU bash, version 4.0.0(1)-release (x86_64-unknown-linux-gnu)

and my options are set as shown below.

Keep in mind that I'm porting my ksh environment to bash, so perhaps I
have something messed up in my environment.  If the above is supposed to
work, then I'll try some additional experiments after the holidays and
see if I can narrow down the issue.

allexport             on
braceexpand           on
emacs                 off
errexit               off
errtrace              off
functrace             off
hashall               on
histexpand            on
history               on
ignoreeof             on
interactive-comments  on
keyword               off
monitor               on
noclobber             on
noexec                off
noglob                off
nolog                 off
notify                off
nounset               on
onecmd                off
physical              off
pipefail              off
posix                 off
privileged            off
verbose               off
vi                    on
xtrace                off

Happy Holidays,

-- 
Mun




Re: Query regarding ${parameter:-word} usage

2009-12-23 Thread Mun
Hi Jan,

On Wed, Dec 23, 2009 at 10:52 PM PST, Jan Schampera wrote:
JS> 
JS> 
JS> Mun schrieb:
JS> 
JS> > nounset on
JS> 
JS> Something sets -u in your startup scripts (or in the script or whatever)

That's it!  I'm not sure why I had nounset turned on in my
.bash_profile, but there it was.
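
For the archives, the interaction in a nutshell (as observed with this
bash 4.0 install; I can't speak for other versions):

   set -u
   ll() { ls --color -Flv "${@:-.}"; }
   ll              # no positional parameters -> "bash: $@: unbound variable"

   set +u
   ll              # now falls back to '.' as intended
   ll /tmp /etc    # and passes explicit arguments straight through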

Thanks so much to everyone who replied for the help!

-- 
Mun




Sourcing a script renders getopts impotent (or is it just me?)

2010-09-16 Thread Mun
Hi,

Platform : Red Hat Enterprise Linux v5.5
Bash : GNU bash, version 4.1.0(1)-release (x86_64-unknown-linux-gnu)

I have a script which uses getopts that I need to source in my
interactive shell.  The problem is that if I source it, getops behaves
as if no arguments were passed into the script.  Although, if I simply
run the script in a sub-process, getopts works correctly.

As an experiment, I echo'd all args prior to the getopts statement in
the script, and when the script was sourced all args were correctly
displayed.  So I'm at a loss as to why getopts doesn't seem to work when
the script is sourced.

On a side note, there is some error checking being done within the
script.  I would like the script execution to terminate but leave the
interactive shell running upon error detection (i.e., don't exit out of
the terminal session).  Is there any way to accomplish that objective
in bash?

Regards,

-- 
Mun



Re: Sourcing a script renders getopts impotent (or is it just me?)

2010-09-17 Thread Mun
Hi Dennis,

On Thu, Sep 16, 2010 at 05:18 PM PDT, Dennis Williamson wrote:
DW> Use return instead of exit when you have an error and you're sourcing
DW> the script. You can make it conditional.
DW> 
DW> Try setting OPTIND=1 to make your script work when it's sourced.
DW> Initialize your script's variables since they will be carried over
DW> between runs when you source the script.

That was in fact the problem.  Once I set OPTIND in the script, it
worked correctly :)

DW> #!/bin/bash
DW> invoked=$_   # needs to be first thing in the script

Setting 'invoked' is a great idea.
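
For the record, the sourced script now looks roughly like this (simplified,
and the option letters are just examples):

   #!/bin/bash
   invoked=$_          # must be the very first thing in the script
   OPTIND=1            # reset so getopts starts fresh each time it's sourced
   verbose=0 file=     # re-initialize, since values persist across sourcings

   while getopts 'vf:' opt; do
       case $opt in
           v) verbose=1 ;;
           f) file=$OPTARG ;;
           *) if [[ $invoked != $0 ]]; then return 1; else exit 1; fi ;;
       esac
   done
   shift $((OPTIND - 1))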

Thanks for the quick and very helpful reply!

Regards,

-- 
Mun

DW> OPTIND=1    # also remember to initialize your flags and other variables
DW> 
DW> . . .       # do some stuff
DW> 
DW> if some error condition
DW> then
DW> if [[ $invoked != $0 ]]
DW> then
DW> return 1# the script was sourced
DW> else
DW> exit 1# the script was executed
DW> fi
DW> fi
DW> 
DW> 
DW> 
DW> On Thu, Sep 16, 2010 at 4:06 PM, Mun  wrote:
DW> > Hi,
DW> >
DW> > Platform : Red Hat Enterprise Linux v5.5
DW> > Bash     : GNU bash, version 4.1.0(1)-release (x86_64-unknown-linux-gnu)
DW> >
DW> > I have a script which uses getopts that I need to source in my
DW> > interactive shell.  The problem is that if I source it, getopts behaves
DW> > as if no arguments were passed into the script.  However, if I simply
DW> > run the script in a sub-process, getopts works correctly.
DW> >
DW> > As an experiment, I echo'd all args prior to the getopts statement in
DW> > the script, and when the script was sourced all args were correctly
DW> > displayed.  So I'm at a loss as to why getopts doesn't seem to work when
DW> > the script is sourced.
DW> >
DW> > On a side note, there is some error checking being done within the
DW> > script.  I would like the script execution to terminate but leave the
DW> > interactive shell running upon error detection (i.e., don't exit out of
DW> > the terminal session).  Is there any way to accomplish that objective
DW> > in bash?
DW> >
DW> > Regards,
DW> >
DW> > --
DW> > Mun
DW> >
DW> >
DW> 




Proper use of nohup within here-document?

2018-03-22 Thread Mun
Hi,

I'm using Bash version 4.1.2 on RedHat EL 6.8 (I realize these are old
releases).

In one of my bash scripts I have a here-document, and at the end of its
execution it pops up a gxmessage.  My problem is that when the
here-document exits, the gxmessage closes.

Within the here-document I'm essentially calling gxmessage thusly:
$ nohup gxmessage -timeout 0 -file abc.txt > /dev/null 2>&1 &
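
To give a bit more context, the overall shape is something like this
(simplified; "remote_or_child_shell" stands in for whatever shell-like
command actually reads the here-document):

   #!/bin/bash
   remote_or_child_shell <<EOF
   nohup gxmessage -timeout 0 -file abc.txt > /dev/null 2>&1 &
   EOF
   # As soon as the here-document (and the process reading it) finishes,
   # the backgrounded gxmessage disappears with it.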

Any ideas on how I can allow the here-document to exit but allow the
gxmessage process to live on?

Thanks,

-- 
Mun