Problem with overridden functions and BASH_SOURCE Array

2011-12-14 Thread dethrophes
Configuration Information [Automatically generated, do not change]:
Machine: i686
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i686' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i686-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL 
-DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include -I../bash/lib   -g -O2 -Wall
uname output: Linux modt005 2.6.38-13-generic #52-Ubuntu SMP Tue Nov 8 16:48:07 
UTC 2011 i686 i686 i386 GNU/Linux
Machine Type: i686-pc-linux-gnu

Bash Version: 4.2
Patch Level: 8
Release Status: release

Description:
Overridden functions report the wrong source file name in the
BASH_SOURCE array; see the example in the Repeat-By section.

Repeat-By:
sourced_file.sh:
    function test_function {
        echo 1
        echo "${BASH_SOURCE[0]}(${BASH_LINENO[0]})${FUNCNAME[0]}"
    }

main1.sh:
    source sourced_file.sh
    function test_function {
        echo 2
        echo "${BASH_SOURCE[0]}(${BASH_LINENO[0]})${FUNCNAME[0]}"
    }
    test_function

main2.sh:
    if true; then
        source sourced_file.sh
        function test_function {
            echo 2
            echo "${BASH_SOURCE[0]}(${BASH_LINENO[0]})${FUNCNAME[0]}"
        }
        test_function
    fi

Running main1.sh you get, as expected:
2
main1.sh(4)test_function

Running main2.sh you get, unexpectedly:
2
sourced_file.sh(4)test_function




Re: Problem with overridden functions and BASH_SOURCE Array

2011-12-19 Thread dethrophes


> The difference between main1 and main2 is the fact that bash always reads
> an entire command before executing any of it, and the if statement is a
> compound command.

Ok, that insight gives me a way to work around the problem, thanks.


> I will have to see if the function definition can do a better job of
> carrying around the source file and line information, but that's a pretty
> significant change.
Actually, from what I have seen it already seems to work for BASH_LINENO. It's
just the source file that goes weird.
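
For anyone hitting the same thing, a minimal sketch of that workaround
(main3.sh is a hypothetical file name): keep the source command and the
overriding definition out of any compound command, so each is read and
executed on its own before the function is called.

main3.sh:
    source sourced_file.sh
    function test_function {    # defined at top level, not inside the if
        echo 2
        echo "${BASH_SOURCE[0]}(${BASH_LINENO[0]})${FUNCNAME[0]}"
    }
    if true; then
        test_function           # should now report main3.sh, as in the main1.sh case
    fi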




Re: Please remove iconv_open (charset, "ASCII"); from unicode.c

2012-03-10 Thread dethrophes

Am 10.03.2012 22:22, schrieb Chet Ramey:

On 3/6/12 11:47 PM, John Kearney wrote:

Hi Chet, can you please remove the following from the unicode.c file:

localconv = iconv_open (charset, "ASCII");

Yeah, that was a typo.  It should be iconv_open ("ASCII", "UTF-8");
as originally in gnulib/coreutils.  Thanks for pointing it out; I
overlooked it.

Chet

Still pointless, though no longer dangerous :). Save the overhead and
just do a direct assignment; which is already done, so just skip it
altogether.


This is why, in the last version I sent, I had something like the
following at the start of the function, instead of doing it outside the
function, which makes it harder to keep track of what is actually going on:

if (c <= 0x7f){   /* plain ASCII: store the byte directly, no conversion needed */
  s[0] = c;
  return 1;
}




Re: Can somebody explain to me what u32tochar in /lib/sh/unicode.c is trying to do?

2012-03-10 Thread dethrophes

Am 10.03.2012 23:17, schrieb Chet Ramey:

On 3/7/12 12:07 AM, John Kearney wrote:

You really should stop using this function. It is just plain wrong, and
is not predictable.

It may encode BIG5 and SJIS, but that is more by accident than intent.

If you want to do something like this then do it properly.

Basically, all of the multibyte systems have to have a detection method
for multibyte characters; most of them rely on bit 7 to indicate a
multibyte sequence, or use vt100 SS3 escape sequences. You really can't
just inject random data into a text buffer. Even returning UTF-8 as a
fallback is a bug. The most that should be done is to return ASCII in the
error case, and I mean U+0000-U+007F only, and ignore or warn about any
unsupported characters.

Using this function is dangerous and pointless.

I mean, seriously, in what world does it make sense to inject UTF-8 into a
BIG5 string? Or indeed into an ASCII string. Code should behave like an
adult, not like a frightened kid. By which I mean it shouldn't pretend
it knows what it's doing when it doesn't; it should admit the problem so
that the problem can be fixed.

Wow.  Do you really think that personal insults are a good way to advance
an argument?

Listen: bottom line.  It's a fallback function.  It's called in the
unlikely event that iconv isn't available at all and we're not in a
UTF-8 locale.  Any fallback is as good as another, though maybe the
best one would be to return the literal \u or \U escape sequence (before
you ask, POSIX leaves the \u/\U failure cases unspecified).  The real question
is what to do with invalid input data, since any transformation is
going to "inject random data" into the buffer.  Maybe the identity
function would be better after all.  But then you'd ask whether or
not it makes sense to inject a C-style escape sequence into a big5
string.

Chet
I guess I was a bit terse; I wouldn't call it a personal insult, though.
Then again, I do have pretty thick skin, so sorry if you felt it was
meant as one.


My point is that the fallback function/handler should report an error or
warning, do nothing else, and move on.
Trying to recover an irrecoverable error just makes it more difficult
to figure out what is going on.
Basically this is a script/environment error, so report the error, don't
hide it.



It's a similar problem with the iconv fallback of returning UTF-8. If
iconv says it can't encode the Unicode value in the destination charset,
do we really know better? Again, it is better to report the error and
move on, because injecting UTF-8 into BIG5 (or whatever) is also wrong:
if UTF-8 were the destination charset, that would already have been
detected, or iconv would have worked, so contextually this is wrong.

  if (iconv (localconv, (ICONV_CONST char **)&iptr, &sn, &optr,
             &obytesleft) == (size_t)-1)
    return n;   /* You get UTF-8 if iconv fails */

Don't forget that at this point we know iconv recognises both the source
and destination charsets, so what we have is a Unicode character that is
unsupported in the destination charset.


Or here:

  n = u32toutf8 (c, s);
  if (utf8locale || localconv == (iconv_t)-1)
    return n;

If the destination charset is UTF-8, OR the destination charset is NOT
UTF-8 and iconv didn't recognise the destination charset, encode it as
UTF-8.


Let's say LC_CTYPE=BIG5 and you try to encode the Unicode character
U+F000, which is not representable in BIG5 (at least I think it isn't).

So iconv returns an error.
Now the code inserts the UTF-8 encoding of U+F000, which is an invalid
BIG5 byte sequence.

This isn't helping anybody.
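
A rough command-line illustration of the same failure mode (a sketch: it
assumes an iconv binary is installed, and the exact error wording differs
between implementations):

$ printf '\uf000' | iconv -f UTF-8 -t BIG5
iconv: (stdin): cannot convert

Here iconv reports the failure instead of silently emitting UTF-8 bytes
into the BIG5 output, which is the behavior argued for above.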





Re: Can somebody explain to me what u32tochar in /lib/sh/unicode.c is trying to do?

2012-03-11 Thread dethrophes

Am 11.03.2012 00:02, schrieb dethrophes:

[...]
Or let's put it another way.

Let's say you type
rm FileDoesntExist
Now "FileDoesntExist" doesn't exist, but instead of reporting an error rm
deletes "FileDoesExist". I think we can agree this is unexpected
behavior, though you could also argue that it is a fall-back behavior.

This is equivalent to what the function is currently doing: it has
checked and knows it shouldn't output UTF-8, so it tries to encode to
the correct charset, which doesn't work for whatever reason
(unrecognized/unsupported destination charset, or a character not
present in the destination charset); however, instead of reporting the
encoding/transcoding error it outputs UTF-8 anyway.











Re: [bug] Home dir in PS1 not abbreviated to tilde

2012-03-13 Thread dethrophes

Am 13.03.2012 06:04, schrieb Clark J. Wang:

On Mon, Mar 12, 2012 at 12:22, Yongzhi Pan  wrote:


Tested in GNU bash, version 3.00.16(1)-release and 4.1.2(1)-release.

Upon login, home dir is displayed as tilde in PS1:
pan@BJ-APN-2 ~$ echo $PS1
\[\033[35m\]\u@\h \w$ \[\033[0m\]
pan@BJ-APN-2 ~$ pwd
/export/home/pan/

After a cd command, which changes directory to $HOME (so nothing actually
changed), it is displayed as the complete path:
pan@BJ-APN-2 ~$ cd
pan@BJ-APN-2 /export/home/pan$

The reason is that my home in passwd has a trailing slash:
pan@BJ-APN-2 /export/home/pan$ grep ^$USER: /etc/passwd
pan:x:896:1::/export/home/pan/:/bin/bash


You can also reproduce this by directly setting HOME to
`/export/home/pan/'.



This is tricky to find. I hope it will display a tilde even if the home
dir entry in passwd has one or more trailing slashes.


I personally don't think this needs to be fixed. :)

I agree, or to be more accurate, I think you should fix the passwd entry.

PS: I read the source code and do not know where this is done, maybe in
y.tab.c?



As a workaround to your problem you could have something like this in
your bashrc:

if shopt extglob &>/dev/null ; then
    HOME="${HOME/%+(\/)}"   # strip all trailing forward slashes
else
    while [ "${HOME}" != "${HOME%\/}" ] ; do
        HOME="${HOME%\/}"
    done
fi

I think it should hide your problem.






Re: [bug] Home dir in PS1 not abbreviated to tilde

2012-03-13 Thread dethrophes

Am 13.03.2012 16:18, schrieb Roman Rakus:

On 03/13/2012 04:08 PM, dethrophes wrote:

[...]





Is it all necessary?
HOME="${HOME%\/}"

RR

Not really, I was just being thorough.
The request was to remove/handle all/multiple trailing forward slashes,
e.g. /home/

HOME="${HOME%\/}"       # only removes one

HOME="${HOME%%+(\/)}"   # removes all, but only if extglob is supported
                        # and enabled

The while loop was a fallback in case extglob is either not enabled or
not present.

So it depends on your needs. I would suggest that

HOME="${HOME%\/}"
HOME="${HOME%\/}"
HOME="${HOME%\/}"

should cover most real-world cases, or this on its own isn't pretty but
does the job:

while [ "${HOME}" != "${HOME%\/}" ] ; do
    HOME="${HOME%\/}"
done




Re: [bug] Home dir in PS1 not abbreviated to tilde

2012-03-13 Thread dethrophes

Am 13.03.2012 16:27, schrieb Eric Blake:

On 03/13/2012 09:18 AM, Roman Rakus wrote:


as a workaround to your problem you could have something like this in
your bashrc
if shopt extglob &>/dev/null ; then
    HOME="${HOME/%+(\/)}"   # strip all trailing forward slashes
else
    while [ "${HOME}" != "${HOME%\/}" ] ; do
        HOME="${HOME%\/}"
    done
fi

I think it should hide your problem.





Is it all necessary?
HOME="${HOME%\/}"

That only strips one trailing slash.  If you want to strip multiple
trailing slashes, then you have to go with something more complex; but
the above if/shopt/else/loop approach is overkill, compared to this
one-liner:

$ foo=/a/b///
$ echo ${foo%%/}
/a/b//
$ echo ${foo%${foo##*[^/]}}
/a/b

Be aware that both approaches will misbehave if HOME is a root directory
(/ or //), where you _don't_ want to strip trailing slashes.  So you
really want:

case $HOME in
    *[^/]* ) HOME=${HOME%${HOME##*[^/]}} ;;
esac


Thanks, I didn't see the one-liner right away so I got lazy and did a loop.
Your solution is of course much better, though a lot harder for most
people to understand.
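
For anyone puzzling over it, here is the one-liner taken apart step by
step (same example value as above):

foo=/a/b///
echo "${foo##*[^/]}"          # ///  : remove the longest prefix ending in a
                              #        non-slash, leaving only the trailing slashes
echo "${foo%${foo##*[^/]}}"   # /a/b : strip exactly that run of trailing slashes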


I only do the if/shopt/else as pseudocode to show the options.




Re: [bug] Home dir in PS1 not abbreviated to tilde

2012-03-13 Thread dethrophes

Am 13.03.2012 16:42, schrieb Eric Blake:

On 03/13/2012 09:27 AM, Eric Blake wrote:

Be aware that both approaches will misbehave if HOME is a root directory
(/ or //), where you _don't_ want to strip trailing slashes.  So you
really want:

case $HOME in
   *[^/]* ) HOME=${HOME%${HOME##*[^/]}} ;;
esac

Actually, shortening /// to / is okay (it's only // that must not
unconditionally be shortened to /, due to POSIX specification and Cygwin
behavior of //), so a modified version would be:

case $HOME in
    *[^/]* ) HOME=${HOME%${HOME##*[^/]}} ;;
    / | // ) ;;
    *) HOME=/ ;;
esac


wouldn't this be better?

case "$HOME" in
  / | // ) ;;
  * ) HOME="${HOME%${HOME##*[^/]}}" ;;
esac




Re: [bug] Home dir in PS1 not abbreviated to tilde

2012-03-13 Thread dethrophes

Am 13.03.2012 17:53, schrieb Eric Blake:

On 03/13/2012 09:47 AM, dethrophes wrote:

[...]


wouldn't this be better?

case "$HOME" in
   / | // ) ;;
   * ) HOME="${HOME%${HOME##*[^/]}}" ;;
esac

Nope, because that strips /// into the empty string, but you really want
it collapsed into the single slash.

Also, I intentionally omitted the redundant "" around the variable
assignment.

Ok, thanks, that makes sense now: you shorten three or more forward
slashes into a single forward slash.

The "" in the case word isn't redundant though,
i.e. case "$HOME" in

In the assignment they are redundant, but I just find it good coding
practice to always quote, because it means I'm less likely to forget.

case "$HOME" in
    *[^/]* ) HOME=${HOME%${HOME##*[^/]}} ;;
    / | // ) ;;
    *) HOME=/ ;; # //+(/)
esac






Re: [bug] Home dir in PS1 not abbreviated to tilde

2012-03-13 Thread dethrophes

Am 13.03.2012 18:13, schrieb Andreas Schwab:

dethrophes  writes:


the missing "" in the case isn't redundant.
i.e. case "$HOME" in

The word is not subject to word splitting and filename expansion, so
there is no need to quote.

Andreas.


Ok thanks for clarifying that.



Re: Saving command history for non-interactive shell

2012-03-17 Thread dethrophes

Am 16.03.2012 15:56, schrieb Greg Wooledge:

On Fri, Mar 16, 2012 at 02:33:35PM +, Lars Peterson wrote:

Is there a way to configure bash so that commands from a non-interactive
shell are preserved in the history? I'm more interested in saving commands
invoked via ssh vs. shell scripts.

 From CHANGES, for bash 4.1:

l.  There is a new configuration option (in config-top.h) that forces bash to
 forward all history entries to syslog.

However, that only applies to commands that bash is already adding to
its history.  So you'd also have to do a "set -o history" command at
some point, since non-interactive shells don't do that by default.
That might be tricky to arrange.

And of course you'd have to force the ssh user to use your specially
compiled bash with the SYSLOG_HISTORY option, and not some other shell.

If the larger context is "I want to know everything my users are doing",
you're going to end up frustrated.  Unix simply wasn't designed to lock
users down.  Quite the opposite -- it was designed to give users full
power.  Users can make system calls without going through a shell, by
writing C code and so on.  They can also invoke processes without using
a shell, if processes are the thing you actually want to track, rather
than, for instance, file system operations.

If any of the above resembles your actual goal, then you need to look
into "accounting" ("process accounting", etc.).  It's a huge topic, and
logging shell commands doesn't even come close to addressing it.
Just a suggestion, but if it's only about ssh then you could chroot to a
new base and replace bash with a bash script.

Or change the default shell to something like this:

/bin/bash
    #!/bin/realbash
    exec /bin/realbash -o history "${@}"

i.e. so that history is enabled whenever bash is called. I haven't really
tried it, but I think it should be possible.

Or do something like:

/bin/bash
    #!/bin/realbash
    exec logapp /bin/realbash -o xtrace "${@}"

/bin/logbash
    #!/bin/bash
    exec logapp /bin/bash -o xtrace "${@}"

testscript.sh
    #!/bin/logbash
    do something here ...

Just some rough ideas.


I mean, depending on your setup, you could either change the account's
default shell, only allow execution of the special logging shell, or
just specify the logging shell in your shebang line.

Heck, you could even do something really crazy like manually reading the
bash input/script using read and evaluating each line of code,
something like:

while read -re ; do
    echo "${REPLY}"
    eval "${REPLY}"   ## not saying it's a good idea, just that there are
                      ## a lot of ways to skin this particular fish
done

Or even something like:

/bin/logscript
    #!/bin/bash
    trap 'echo "${BASH_COMMAND}" >> Logfile' DEBUG
    source "${1}"





Re: Saving command history for non-interactive shell

2012-03-17 Thread dethrophes

Am 17.03.2012 22:10, schrieb dethrophes:

[...]

Though if they are your scripts, you'd be better off doing it differently.

I do this:

sRunProg cmd "arg 1" "arg 2" "arg 3"
sErrorOut "message line 1" "message line 2"
sLogOut "message line 1" "message line 2"
sDebugOut "message line 1" "message line 2"

which gives me log files like this:

"rm" "--interactive=never" "/mnt/DETH00/media/New/New.Folder/sigs.sha256"  #C: Wed Feb 15 23:32:11 CET 2012 : 18336 : CoreFuncs.sh(87  ) : SimpleDelFile   :
"ln" "--symbolic" "/mnt/DETH00/media/file1" "/mnt/DETH00/media/New/New.Folder/file1"  #C: Wed Feb 15 23:32:11 CET 2012 : 18336 : move.sh (255 ) : Move_int:
#L: Fri Feb 17 14:31:13 CET 2012 : 8262  : SortFiles.sh(135 ) : main: "/home/dethrophes/scripts/bash/SortFiles.sh" "--SupportedOptions"

so I can actually source the log file as a bash script if I want to.
Furthermore, if the output is to the console I colorize the
errors/commands etc.; errors get logged, output to the console in red,
and, if GUI mode is enabled, sent to the X notify system. A function call
trace gets logged as well:

#D: Fri Jan 13 13:22:03 CET 2012 : 7319  : LogFuncs.sh (46  ) : main: "[4]/home/dethrophes/scripts/bash/GenFuncs.sh(46):source"
#D: Fri Jan 13 13:22:03 CET 2012 : 7319  : GenFuncs.sh (484 ) : source  : "[3]/home/dethrophes/scripts/bash/GenFuncs.sh(484):CleanRevision"
#D: Fri Jan 13 13:22:03 CET 2012 : 7319  : GenFuncs.sh (131 ) : CleanRevision   : "[2]/home/dethrophes/scripts/bash/GenFuncs.sh(131):ReturnString"
#D: Fri Jan 13 13:22:03 CET 2012 : 7319  : GenFuncs.sh (127 ) : ReturnString: "[1]/home/dethrophes/scripts/bash/LogFuncs.sh(127):TaceEvent"
#E: Fri Jan 13 13:22:03 CET 2012 : 7319  : GenFuncs.sh (127 ) : ReturnString: "RETURN /home/dethrophes/scripts/bash/LogFuncs.sh(127):ReturnString ELEVEL=0 \"echo \"\${@}\


I find this approach easier to work with.
Most of this is done in
https://github.com/dethrophes/Experimental-Bash-Module-System/blob/master/bash/LogFuncs.sh
if it is of interest.







Re: Saving command history for non-interactive shell

2012-03-19 Thread dethrophes

Am 19.03.2012 13:39, schrieb Greg Wooledge:

On Fri, Mar 16, 2012 at 06:15:35PM -0400, Chet Ramey wrote:

There is nothing stopping you from using history in a non-interactive
shell -- it's just not enabled by default.

Turn on history with `set -o history' and set HISTFILE and HISTSIZE as you
like.  You can probably set some of the right variables in .ssh/environment
and set BASH_ENV to a file that will run the commands you want.

The problem is, that doesn't actually work.

imadev:~$ ssh localhost bash <<'EOF'
> set -o history
> HISTFILE=~/.bash_history
> HISTFILESIZE=500
> echo hello world
> EOF

wooledg@localhost's password:
hello world
imadev:~$ tail -2 .bash_history
rm statistical.tcl.rej
less sched.tcl.rej

I blame this part of the documentation, although perhaps I should be
looking at the code instead:

   When an interactive shell exits, the last $HISTSIZE lines
   are copied from the history list to $HISTFILE.

I read that as "the HISTFILE doesn't get updated when a NON-interactive
shell exits".

This one works, though:

imadev:~$ ssh localhost bash <<'EOF'
> set -o history
> HISTFILE=~/.bash_history
> HISTFILESIZE=500
> echo hello world
> history -a
> EOF

wooledg@localhost's password:
hello world
imadev:~$ tail -6 .bash_history
rm statistical.tcl.rej
less sched.tcl.rej
HISTFILE=~/.bash_history
HISTFILESIZE=500
echo hello world
history -a

However, since the original intent was "I want to log commands that are
launched through ssh, without modifying what the client sends", it's not
clear to me how to wrap all of that prefix and postfix code around the
client's commands.





Have you tried something like this?

Create a file called /usr/bin/ssh_gatekeeper.sh, make it executable, and
put this into it:

#!/bin/bash
## Disconnect clients who try to quit the script (Ctrl-C)
trap jail INT
jail()
{
    kill -9 $PPID
    exit 0
}

[ -n "$SSH_ORIGINAL_COMMAND" ] || exit 0
case "$SSH_ORIGINAL_COMMAND" in
    bash*)
        /bin/bash --init-file /usr/logssh/bin/logbashrc
        ;;
    *) $SSH_ORIGINAL_COMMAND ;;
esac
exit 0

In /etc/sshd_config add:

ForceCommand /usr/bin/ssh_gatekeeper.sh

And in /usr/logssh/bin/logbashrc (or wherever) put:

set -o history
HISTFILE=~/.bash_history
HISTFILESIZE=500
trap 'history -a' EXIT SIGINT



Or something like that; it should work, I think, or at least point in a
direction.




Re: I think I may have found a possible dos attack vector within bash.

2012-03-20 Thread dethrophes

Am 20.03.2012 17:47, schrieb Eamonn Smyth:

Without sounding alarmist, I can break my machine using bash. I also have a
fix. I shall be officially releasing the c code this weekend at the
hackathon london.

As my code following correctly implements the logic the dos attack vector
is negated.

The replacement code

 /*Do openql maths Now*/
 //Exploiting the Fundamental Theorem of Arithmetic
 int i;
 int vcount = 0;

 for (c=0;cvcount == gptr[i]->Xsize){
 gptr[i]->vcount = 0;
 gptr[i]->get++;

 }

 if (gptr[i]->get>  (gptr[i]->begin + (gptr[i]->groupsize -1)))
 gptr[i]->get = gptr[i]->begin;


 int get = gptr[i]->get;

 printf("%s",lookup[gptr[i]->get]);//This line is writing the
machine states on turings tape.

 gptr[i]->vcount++;
 }
 if (i == levels)
 printf("\n");
 }
 //printf("End Of Turing Tape.\n");//Realized 19th March 2012  A Few
Days before the Hackathon.
}

As the maintainers of bash it should be easy for you, using your knowledge
base of bash semantics, to implement.

As opposed to me learning bash.

This will constitute my first patch contribution to linux and gnu.

Cheers.
Eamonn.

Without sounding alarmist, we can all break our machines using bash;
try "rm -R /*"  ;)
Or, if you can't elevate privileges, this will still give you a headache:
"rm -R ~/*"  ;)

The trick is breaking somebody else's machine, and even that isn't that
big a problem, so you need to be more specific as to how you broke
something.

Firstly, what version of bash are you using? Please use bashbug to get
the exact information.

Secondly, when you say dos, do you mean a Windows command prompt, or do
you actually mean DOS 6.22, dosbox, or a text box? What do you consider
dos?

What OS are you on?

What has any of this to do with Linux?

But anyway, bash isn't secure; it can't be, because of how it works. The
only context in which it is valid to talk about bash attacks is if, by
manipulating the data used by a trusted bash script, you can compromise
that script, and even in that case it's unlikely to be a problem in bash
but rather in a poorly written bash script.

Saying you've found a bash exploit is like saying you've found a C
exploit, kind of a /non sequitur/, because if you wrote the script it has
bash's privileges anyway. Unless you're talking about having used the -s
option?

Or are you just 10-ish days early?



Re: I think I may have found a possible dos attack vector within bash.

2012-03-20 Thread dethrophes

Am 20.03.2012 18:04, schrieb Greg Wooledge:

On Tue, Mar 20, 2012 at 04:47:51PM +, Eamonn Smyth wrote:

Without sounding alarmist, I can break my machine using bash. I also have a
fix. I shall be officially releasing the c code this weekend at the
hackathon london.

You included some C++ code (or possibly C code, if you're allowed to
declare variables after the start of a block now).  But I don't see any
bash commands.  How exactly is bash involved with your code or the bug
you found?

C99 adopted C++'s flexibility with variable declarations, though I
still consider it bad practice to rely on it needlessly.




Re: I think I may have found a possible dos attack vector within bash.

2012-03-20 Thread dethrophes

Thanks Greg, that makes more sense.
I would have recognised "DoS"; lowercase "dos" threw me :) showing my age,
I guess.

I'm inclined to doubt, though, that whatever it is can be defined as a
bash DoS; otherwise a lot of installation/bash scripts would be up for
the chop ;).


Am 20.03.2012 19:00, schrieb Greg Wooledge:

On Tue, Mar 20, 2012 at 06:47:17PM +0100, dethrophes wrote:

Secondly when you say dos? you mean a windows command prompt or you
actually mean DOS 6.22, dosbox, or a text box what do you consider dos?.

He meant DoS, or "Denial of Service".  He believes he has found some sort
of security bug/exploit.

He responded to me privately (his C code has nothing whatsoever to do with
the issue), and I do not believe that what he found qualifies as a denial
of service exploit.  If he wishes to bring the actual issue to the bug-bash
mailing list, then we can discuss it publicly, but I'll respect his
desire to maintain privacy at this time.






Re: which file in bash source code (tarball) contain a print output function

2012-03-20 Thread dethrophes


Not sure if it's what you're looking for, but you could look at
builtins/printf.def
as a starting point; it implements the printf builtin.

Am 20.03.2012 20:29, schrieb runicer:

I have bash-4.2.tar.gz. What is inside this? All the source code (.c/.h)
and configuration files. I want to find the function that prints standard
output and add my script, like sed 's,Hello,Hi,gI', before it is printed.
The result will be that every standard output containing the word "Hello"
will be changed to "Hi".





Re: sed problem

2012-04-02 Thread dethrophes

Am 02.04.2012 15:25, schrieb Dennis Williamson:

Wrong list. Your question is not about Bash and it's not about a bug in
Bash.

That's bash-completion, not bash:
http://bash-completion.alioth.debian.org/



Re: Expanding aliases to full command before execution

2012-04-04 Thread dethrophes

Am 04.04.2012 17:27, schrieb jrrand...@gmail.com:

On Tue, Apr 3, 2012 at 5:22 PM, jrrand...@gmail.com  wrote:

Hi everyone,

In bash, is it possible to expand my aliases either before they are executed
or when they are stored in the history file?
For example, if I have:  alias ll='ls -l'  defined in my .bashrc, when I
execute "ll" from the command line, I'd like my history file to contain "ls
-l".  Is there any way to accomplish this?

Thanks,
Justin


I seem to have constructed a solution to my own problem.  I wrote a
function that creates the desired behavior for me and saves it in
$expanded if the argument was in fact an alias.

function expand_alias() # expand an alias to full command
{
    if [ "$1" = "." ]; then
        argument="\\$1"
    else
        argument="$1"
    fi
    match=$( alias -p | grep -w "alias $argument=" )
    if [ -n "$match" ]; then
        expanded="`echo $match | sed -e s/[^=]*=// | sed 's/^.\(.*\).$/\1/'`"
    else
        expanded="$1"
    fi
}


Thanks,
Justin


Or you could just do:

function expand_alias() # expand an alias to full command
{
    if expanded=$( alias "${1}" ); then
        expanded="${expanded#*=}"
    else
        expanded="$1"
    fi
}
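
Example usage, with the alias from the original question:

alias ll='ls -l'
expand_alias ll
echo "$expanded"   # prints 'ls -l', still wrapped in the single quotes
                   # that the alias builtin prints around the value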

Though depending on what you're actually trying to do, you'd be better off
using type, as not all aliases are aliases; a lot are functions these days.




case $(type -t "$c") in
    "")
        echo No such command "$c"
        return 127
        ;;
    alias)
        c="$(type "$c"|sed "s/^.* to \`//;s/.$//")"
        ;;
    function)
        c=$(type "$c"|sed 1d)";\"$c\""
        ;;
    *)
        c="\"$c\""
        ;;
esac

bash -xvc "$c"



Re: status on $[arith] for eval arith vsl $((arith))??

2012-04-08 Thread dethrophes

Ever thought of going the deprecation route,
something like what Microsoft does with VC?

I.e. give a warning for deprecated constructs, with a hint as to how to
do it better?

Am 08.04.2012 21:30, schrieb Chet Ramey:

On 4/8/12 3:02 PM, Maarten Billemont wrote:


Any particular reason for not removing old undocumented functionality, or is 
that mostly the nature of this beast - dragging along and maintaining ancient 
code for the sake of compatibility?

Because, as Linda discovered, there is still working code out there using
it.  Maybe we'll get to a point where it's all gone, but we're not there
yet.





Re: Exit status of "if" statement?

2012-04-09 Thread dethrophes

if false; then
    echo jj
fi

always has to return 0, otherwise a lot of code using ERREXIT/ERRTRACE
would break.
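
A quick check of that claim under set -e (a sketch):

set -e
if false; then
    echo jj
fi
echo "still alive"   # reached: an if with no else returns 0, so errexit
                     # does not fire here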


If you want to handle an error case you should use elif or else.

Your example could be written like this:

if cmd1
then
    cmd2
elif cmd3
then
    cmd4
fi

or possibly like this:

if cmd1
then
    cmd2
else
    if cmd3
    then
        cmd4
    fi
fi

or, if you wanted to be more cryptic:

cmd1 && { cmd2 || true ; } || { cmd3 && cmd4 ; }

though it would be advisable to append a "|| true" to cmd2 and cmd4,
showing explicitly that you are ignoring the return value:

if cmd1
then
    cmd2 || true
elif cmd3
then
    cmd4 || true
fi

hth
John

Am 10.04.2012 04:26, schrieb bsh:

Janis Papanagnou  wrote:

Dan Stromberg wrote:

What should be the behavior of the following?

if cmd1
then
    cmd2
fi && if cmd3
then
    cmd4
fi

Hello Daniel and Janis!

If cmd1 is true then execute cmd2;
   cmd2 defines the exit code for the first if
depending on cmd2 return value,
   if true then the subsequent if is executed
 if cmd3 is true then execute cmd4;
   cmd4 defines the exit code for the second if

I see a problem, which I cannot immediately test on a
command line available to me right now.

First of all, the manpage plainly indicates:

"Usage: if if-list;then list[;elif list;then list]... [;else list];fi
... If the if-list has a non-zero exit status and there is
no else-list, then the if command returns a zero exit status."


Playing around, it appears that cmd1 and cmd3 have no
direct impact on the exit codes of the two if's, while
cmd2 and cmd4 do (if cmd1 or cmd3 evaluate true).

Yes. cmd1 and cmd3 control the if condition, and the resulting
exit code is defined by the last command executed, either cmd2
or cmd4.

... And because of this, it is impossible to discern whether
the return code is the result of a failed if-list or the
last command in the if-body code. This strikes me as poor
programming discipline.

BTW, the reason that the manpage keeps talking about command
"lists", instead of individual commands, is because the "if"
criterion is the return code of the _last_ command in a command
_list_. That is to say, the following is possible and even
recommended:

if    cmd1
      cmd2
      cmd3
then
      ...
fi

It is the return code of cmd3 which determines the flow-of-
control. This construction reinforces readable and structured
localization of code.
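
A concrete instance of the list-as-condition form (hypothetical commands;
only grep's exit status decides which branch runs):

if  cd /some/dir
    date > timestamp
    grep -q pattern datafile
then
    echo "pattern found in datafile"
fi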


Is this the defined behavior in POSIX shell?  In
bash?  In bash symlinked to /bin/sh? In dash?

I think it is defined that way in all POSIX complient shells.

Yes:

POSIX XSH: (2.9.4) Compound Commands: The if Conditional Construct:
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_04

"The exit status of the 'if' command shall be the exit status of
the 'then' or 'else' compound-list that was executed, or zero,
if none was executed."

=Brian







Re: inconsistency with "readonly" and scope

2012-04-11 Thread dethrophes

Am 11.04.2012 20:50, schrieb Greg Wooledge:

"declare" when used in a function acts like "local", and creates a variable
with scope local to that function.  So does "declare -r".  But "readonly",
which is otherwise the same as "declare -r", creates variables with global
scope.

Is this intended?

Tested with 2.05b, 3.something, and 4.2.20.

I've also noticed weird behavior with "declare -gr": the r sometimes
seems to override the g, but it's not specific to functions. It seems to
be specific either to the source file or to the compound statement. I
haven't been able to figure out exactly what's going on there, and I
haven't been able to reproduce it in a simple example. It is most readily
noticeable with set -o nounset.




Re: inconsistency with "readonly" and scope

2012-04-12 Thread dethrophes

Am 12.04.2012 14:27, schrieb Chet Ramey:

On 4/11/12 4:12 PM, dethrophes wrote:


I've also noticed weird behavior with "declare -gr" the r sometimes seems
to override the g, but not specific to functions It seems to be specific
either to the source file or to the compound statement. I haven't been able
to figure out exactly whats going on there. I haven't been able to
reproduce it in a simple example. this is most readily noticeable with set
-o nounset

An example would help.  The above is supposed to create a global readonly
variable.  I suspect that you see the `global' as being `overridden'
because it's being created in a subshell environment.

Chet



I only see the problem in large, complex cases; I've tried to reproduce a
simple example a couple of times but without success.

No, it's not a subshell problem; I avoid subshells as much as possible.
Also, when I experimented with the problem a couple of times I
discovered the following:

declare -g works
declare -gr doesn't work

My current workaround is that I've just stopped using the readonly
attribute, which works, or stopped using declare altogether,

or do something like this:

var=x
typeset -r var

I've also seen some strange behavior with declare in a conditionally
sourced global context. It only seems to happen when the code size is
1s+ lines long.
I think this is exacerbated by the fact that I source my files inside
conditional statements; the result is that I now set the global flag even
in a global context.

Any ideas as to what could be causing it would help me try to figure out
how to test for it.

Sorry about the weak description, but I've been working around this for
months now, waiting for it to make sense, and I'm no closer to figuring
out what's going on.
The problem is so erratic it has to be some sort of partially lost
context or something.

Can you give me any tips on how to debug it if I can reproduce it again?
Is there an easy way to track the validity of a variable?
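
One low-tech thing I can try the next time it shows up is dumping the
variable's attributes from a DEBUG trap (a rough sketch; MYVAR stands in
for whichever variable goes weird):

trap 'declare -p MYVAR 2>/dev/null ||
      echo "MYVAR unset at ${BASH_SOURCE[0]}:${LINENO}"' DEBUG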










Re: inconsistency with "readonly" and scope

2012-04-12 Thread dethrophes

Am 12.04.2012 22:11, schrieb Steven W. Orr:

On 4/12/2012 2:16 PM, dethrophes wrote:

[...]


It took me a second to reproduce it, but here it is:

--
#! /bin/bash

A()
{
    typeset v1=Hello

    B
    echo "IN B:$v1"
}

B()
{
    typeset -r v1=Goodbye

    :
}

typeset -r v1=abc
A
echo "v1:$v1"
--

This is 4.0.35(1).

The v1 that's set to abc is the global one.
A defines a local copy that's not readonly and B defines one that is. 
When I run it I get:


950 > ./foo.sh
./foo.sh: line 5: typeset: v1: readonly variable
./foo.sh: line 13: typeset: v1: readonly variable
IN B:abc
v1:abc

This means that the typeset failed in both A and B, and the references
in the routines fell back to the global instance of v1.


Does this help?






I only see the problem in large complex cases, I've tried to reproduce a
simple example a couple of times but without success.
no its not a subshell problem, I avoid subshells as much as possible.
also when I expereimented with the problem a couple of times I 
discovered the

following-
declare -g works
declare -gr doesn't work
my current workaround is I've just stopped using the readonly 
attribute which

works. or just stopped using declare altogether.
or do something like this
var=x
typeset -r var

I've also seen some strange behavior with
declare in a conditional sourced global context. it only seems to 
happen when

the code size is 1s+ lines long.
I think this is exasperated by the fact that i source my files inside
conditional statements. the result is that I now even in a global 
context set

the global flag.

any ideas what could be causing it would help me try to figure out 
how to test

for it.

Sorry about the weak description but I've been working around it now for
months waiting for it to make sense and I'm no closer to figuring out 
whats

going on.
the problem is so erratic it has to be some sort of partial lost 
context or

something.

can you give me any tips on how to debug it if I can reproduce it 
again? is

their an easy way to track the validity of a variable?










I don't think it helps me, but thanks for the try.
I would say that's correct behavior: the code in the functions is only
executed when you call the functions, so the first executed readonly
variable is preserved.
Anyway, my problem isn't with how readonly is preserved; it's that when
I set readonly, the variable seems to have a limited context even if I
declare it explicitly global.






Re: inconsistency with "readonly" and scope

2012-04-12 Thread dethrophes

Am 12.04.2012 22:38, schrieb Steven W. Orr:

On 4/12/2012 4:21 PM, dethrophes wrote:

[...]



Ok, but do I have a point? I get your problem, but it seems like 
having a global variable that is allowed to be reinstantiated in an 
inner (albeit dynamic) scope should behave the same regardless of 
whether the global is readonly. Inside A, I don't get why I can't 
create a new variable whose failure (or success) depends on whether a 
previous outer scope variable exists as readonly.


Note that if v1 at the global scope is not readonly, the one in A *is* 
readonly, and the one in B is not, then there's no problem.


To me, this begs the question of whether all mainline code should be 
placed into a containing 'main' function so as to prevent this problem 
from occurring. Even if I do this, it still leaves me vulnerable to 
readonly variables that are sourced in before main is defined.


[I hope that didn't come off as a rant. ;-)]


Have you tried local?
I'm not sure if it'll make a difference.
I don't agree that typeset/declare should be able to override/redefine a
readonly variable; it would defeat the purpose in a way.

However, local arguably should allow you to do it.
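
For reference, the current behavior, modeled on the repro earlier in the
thread (a quick sketch; the exact error wording varies by version):

readonly v1=abc
f() { local v1=xyz; echo "$v1"; }
f   # stderr: bash: local: v1: readonly variable
    # stdout: abc -- the local fails and the global shows through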





Re: I have a bash style question. Feedback request.

2012-04-20 Thread dethrophes

Am 20.04.2012 16:38, schrieb Steven W. Orr:
I manage a hefty collection of bash scripts. Sometimes, I make heavy 
use of pushd and popd. When problems come up, the directory stack can 
make debugging more complicated than I'd like. One thing I did to get 
better control over things was to write a dstack module that provides 
pushd and popd functions.


The dstack module gives me simple flags that I can set through an 
environment variable that allow me to dump the stack either before or 
after a pushd or a popd, and to control their stdout and stderr. The 
default is that they both redirect to /dev/null and that all calls are 
checked when they return. e.g.

pushd somewhere || die 'Failed to pushd to somewhere'

(I hope people are still reading...)

Recently I discovered context managers in python and I had already 
implemented a directory stack module there. I added a context manager 
so that instead of saying


ds.pushd(foo)
do_a_bunch_of_stuff
ds.popd()

I can now say

with ds(foo):
    do_a_bunch_of_stuff
    # The popd is now automatic.

Back to bash, I see that things like code readability are impacted by 
the frequent use of pushd / popd. There could be lots (more than a 
screen full) between the pair, and forgetting to put the popd in or 
losing track of where you are can make things more complicated. So,
I thought: How can I get the benefit of a context manager in bash? It 
came to mind that simple curly braces might help.


So now I'm thinking of two possible scenarios.

Here's S1:

pushd somewhere
{
do_a_bunch_of_stuff
}
popd

And S2 would be:

{
pushd somewhere
do_a_bunch_of_stuff
popd
}

I'd like to get feedback. Some choices for reasonable feedback might be:
a. It doesn't matter and I'm clearly overthinking this. Just pick one.
b. I like S[12] and anyone who disagrees will be met with a jihad.
c. Here's a better solution that I didn't think of.

If you got this far, have feedback and are in the Boston area, there's 
a beer with your name on it.


TIA :-)


Have you tried something like this?

function RunCmdDir {
    pushd "${1}" || return $?
    "${@:2}"
    popd
}

RunCmdDir "Work/Dir" do_a_bunch_of_stuff

Ok, so you'd have to put the in-between code in a function (if you want
more than one instruction), but that would probably help code readability
anyway.



You could use a subshell, if do_a_bunch_of_stuff doesn't need to modify
the global context:

(
    cd "Path"
    do_a_bunch_of_stuff
)
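
Another option in the same spirit, sketched as a hypothetical with_dir
helper: run the command in a subshell so the directory change can never
leak back out, context-manager style.

with_dir() {
    ( cd "${1:?usage: with_dir dir cmd [args...]}" && shift && "$@" )
}

with_dir "Work/Dir" do_a_bunch_of_stuff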







autocomplete error doesn't look to be in bash-complete so I'm reporting it here.

2013-08-16 Thread dethrophes
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL 
-DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include -I../bash/lib  
-D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat 
-Wformat-security -Werror=format-security -Wall
uname output: Linux dethrophes-Mint-VM 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 
10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.2
Patch Level: 25
Release Status: release

Description:
Autocomplete error; it doesn't look to be in bash-completion, so I'm
reporting it here.

Repeat-By:
  touch '>(pygmentize -l text -f html )'
  rm >[Press tab]

  rm >\>\(pygmentize\ -l\ text\ -f\ html\ \)
     ^ Note the leading >





Re: child_pid of background process? (not in manpage?)

2013-08-19 Thread dethrophes
   I actually had that problem as well; I found the description in the end,
   it just took longer than I expected.

   Sent from my BlackBerry 10 smartphone.

   From: Linda Walsh
   Sent: Monday, August 19, 2013 19:53
   To: bug-bash
   Subject: Re: child_pid of background process? (not in manpage?)

   Chris Down wrote:
   > On 2013-08-18 17:46, Linda Walsh wrote:
   >> I don't find the variable for the process ID of the
   >> last started background process documented in the bash manpage...
   >>
   >> Am I just missing it, or did it get left out by accident or
   >> where did it go?
   >
   > First of all, it would help if you gave your version. At least in
   4.2.45, it's
   > under "Special Parameters".
   >
   >> Special Parameters
   >>
   >> [...]
   >>
   >> ! Expands to the process ID of the most recently executed
   >> background (asynchronous) command.
   ---
   That explains it... In searching the output, I'm
   pretty sure that a stray '!' in the left-hand margin
   wouldn't have looked like a var.
   In this case, especially for the 1-char vars, wouldn't it be
   easier to help people see them, and to help them search for them,
   if the '$' were included before them, so they wouldn't look
   like stray punctuation? Then I could search on \$[^a-zA-Z0-9]
   and find it.
   It would make finding it a lot easier for people skimming or
   searching for it. Not that it isn't technically accurate,
   but searching for a single '!' as the name of a var in a
   text doc might lead to quite a few false positives.
   (I'm on v4.2.x bash, BTW... sorry.)


Re: inconsistent $? after &

2013-12-14 Thread dethrophes
   I thought the value was only 0 if the fork/spawn was successful,

   i.e. if it fails for lack of resources or something, it's non-zero.

   Or have I misunderstood its significance?

   From: Chet Ramey
   Sent: Samstag, 14. Dezember 2013 05:05
   To: Martin Kealey
   Reply To: chet.ra...@case.edu
   Cc: bug-bash@gnu.org; b...@packages.debian.org; chet.ra...@case.edu
   Subject: Re: inconsistent $? after &

   On 12/12/13, 7:00 PM, Martin Kealey wrote:
   > Bash Version: 4.2
   > Patch Level: 25
   > Release Status: release
   >
   > Description:
   > The value of $? after starting a backgrounded command is
   > inconsistent:
   > $? is unchanged after starting a sufficiently complex command, but
   > after starting a simpler command it is set to 0.
   Thanks for the report. The exit status of any asynchronous command is
   0.
   I will fix this for bash-4.3-release.
   Chet
   --
   ``The lyf so short, the craft so long to lerne.'' - Chaucer
   ``Ars longa, vita brevis'' - Hippocrates
   Chet Ramey, ITS, CWRU c...@case.edu http://cnswww.cns.cwru.edu/~chet/


Re: pattern substitution expands "~" even in quoted variables

2014-03-08 Thread dethrophes
backslash to escape still works?

Sent from my BlackBerry 10 smartphone.
  Original Message  
From: Chet Ramey
Sent: Samstag, 8. März 2014 01:52
To: Lars Wendler; bug-bash@gnu.org
Reply To: chet.ra...@case.edu
Cc: chet.ra...@case.edu
Subject: Re: pattern substitution expands "~" even in quoted variables


On 3/7/14, 11:21 AM, Lars Wendler wrote:

> Bash Version: 4.3
> Patch Level: 0
> Release Status: release
> 
> Description:
> 
> bash-4.3 seems to expand a "~" (tilde character) with full homepath in a 
> pattern substitution even when the variable is embraced by double quotes.

Yes, this is a fix for a bug in bash-4.2. Eduardo's excellent explanation
provides more detail. ksh93 behaves the same as bash-4.3, FWIW.

Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU c...@case.edu http://cnswww.cns.cwru.edu/~chet/




Re: extension of file-test primitives?

2017-08-23 Thread Dethrophes


On August 23, 2017 3:37:51 PM GMT+02:00, Chet Ramey  wrote:
>On 8/23/17 9:34 AM, Greg Wooledge wrote:
>> On Wed, Aug 23, 2017 at 04:22:09PM +0300, Pierre Gaston wrote:
>>>  testfile () {
>>> local OPTIND=1 f=${!#}
>>> while getopts abcdefghLkprsSuwxOGN opt;
>>>   do
>>>  case $opt in
>>>[abcdefghLkprsSuwxOGN]) test -$opt $f  || return 1;;
>> 
>> "$f"
>> 
>>>*)return 1;;
>>>  esac;
>>>done
>>>  }
>>>
>>> if testfile -fx file;then.
>> 
>> Add the quotes, make opt local too, and I think we have a winner.
>This has the advantage of supporting both syntax options: a single
>option with multiple operators or a series of options, each with one
>or more operators, combined with a single operand.

Not really, as it changes the meaning of

    test -f file -a -x file

which I always understood as the correct way of doing this in the first place...

The only optimisation would be to possibly cache the stat result to save on system I/O.
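
Something along these lines, maybe (a rough sketch: stat -c is GNU-specific,
and test_mode is a hypothetical helper, not an existing builtin):

    # one stat call, then answer several questions from the cached result
    test_mode(){
        local mode
        mode=$(stat -c '%F:%A' -- "${1}" 2>/dev/null) || return 1
        case ${2} in
            f) [[ ${mode} == 'regular file:'* ]] ;;
            x) [[ ${mode} == *x* ]] ;;   # any execute bit; cruder than test -x
        esac
    }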

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.



Re: extension of file-test primitives?

2017-08-23 Thread Dethrophes


>> Which I always understood as the correct way of doing this in the
>> first place...
>
>It's not as good as multiple test commands: test -f file && test -x
>file.
>There's no ambiguity and you get short-circuiting.

Only if you are using the test builtin; otherwise the latter means two
spawns/forks, however the shell in question invokes the test executable.
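
Easy enough to check which you're getting (a quick sketch):

    type -t test      # bash prints "builtin"
    command -V test   # the more portable spelling of the same question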


-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.



Re: extension of file-test primitives?

2017-08-23 Thread dethrophes

Well, technically I don't *have* to accept the performance penalty,

as I can just use the POSIX-conformant syntax, which is quicker.

And just because I was curious, I measured how much quicker it is. Now
admittedly it's not the biggest difference in performance, but I don't see
any real point in using the slower form.


    test_file_1(){
        test -f "${1:?Missing Test File}" -a -x "${1}"
    }

    test_file_2(){
        test -f "${1:?Missing Test File}" && test -x "${1}"
    }

    test_case(){
        local cnt=${1:?Missing repeat count}
        local test_func=${2:?Missing Test Function}
        local test_file=${3:?Missing Test File}

        echo "${test_func}" "${test_file}"
        while [ $((cnt -= 1)) != 0 ]; do
            "${test_func}" "${test_file}"
        done
    }

    : ${TMPDIR:=/tmp}

    setup_test(){
        touch "${TMPDIR}/file"
        touch "${TMPDIR}/exec_file"
        chmod a+x "${TMPDIR}/exec_file"
        time test_case 1 test_file_1 "${TMPDIR}/file"
        time test_case 1 test_file_2 "${TMPDIR}/file"
        time test_case 1 test_file_1 "${TMPDIR}/exec_file"
        time test_case 1 test_file_2 "${TMPDIR}/exec_file"
    }

~/ks_qnx/test_bash.sh setup_test
test_file_1 /tmp/file

real    0m0.132s
user    0m0.128s
sys 0m0.004s
test_file_2 /tmp/file

real    0m0.148s
user    0m0.116s
sys 0m0.028s
test_file_1 /tmp/exec_file

real    0m0.138s
user    0m0.128s
sys 0m0.008s
test_file_2 /tmp/exec_file

real    0m0.153s
user    0m0.128s
sys 0m0.024s


On 23.08.2017 at 16:27, Chet Ramey wrote:

> On 8/23/17 10:24 AM, Dethrophes wrote:
>>>> Which I always understood as the correct way of doing this in the
>>>> first place...
>>> It's not as good as multiple test commands: test -f file && test -x
>>> file.
>>> There's no ambiguity and you get short-circuiting.
>> Only if you are using the test builtin; otherwise the latter means two
>> spawns/forks, however the shell in question invokes the test executable.
>
> Since bash has a test builtin, this isn't exactly on point. But you have
> to accept this kind of micro-inefficiency with a shell that sacrifices
> speed for size.






Re: extension of file-test primitives?

2017-08-23 Thread dethrophes



On 23.08.2017 at 16:46, Greg Wooledge wrote:

> On Wed, Aug 23, 2017 at 04:24:55PM +0200, Dethrophes wrote:
>>>> Which I always understood as the correct way of doing this in the
>>>> first place...
>>> It's not as good as multiple test commands: test -f file && test -x
>>> file.
>>> There's no ambiguity and you get short-circuiting.
>> Only if you are using the test builtin; otherwise the latter means two
>> spawns/forks, however the shell in question invokes the test executable.
>
> The comparison was against "test -f file -a -x file" which is deprecated.
> The use of "-a" as a logical AND is not mandated by POSIX except in
> "obsolescent XSI" mode.
>
> http://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html
> http://pubs.opengroup.org/onlinepubs/9699919799/help/codes.html#OB%20XSI
>
> So if you're using test ... -a ... then you're almost certainly relying
> on test being bash's builtin version.  And since this is bug-bash, we
> generally assume you are using bash.
>
> There is also test(1) in GNU coreutils, which currently still supports
> the binary -a, but the coreutils 8.26 man page says "NOTE: Binary -a
> and -o are inherently ambiguous.  Use 'test EXPR1 && test EXPR2' or
> 'test EXPR1 || test EXPR2' instead."
>
> But, writing a script that relies on test being the one provided by
> GNU coreutils (or any other version which implements the obsolescent
> XSI deprecated feature set) is also extremely silly.

OK, I wasn't aware that it is deprecated.
Having said that, it is pretty widely supported. I make use of it in
pdksh, bash, dash, ksh and in a couple of test implementations, not just
the GNU one.





Re: extension of file-test primitives?

2017-08-23 Thread dethrophes
Yeah, I just learned that now; it's been at least a decade since I looked
at the POSIX spec on test.

Should probably update the bash help to reflect that,

as help test (in my version at least) only says

  EXPR1 -a EXPR2 True if both expr1 AND expr2 are true.
  EXPR1 -o EXPR2 True if either expr1 OR expr2 is true.

The main OSes I work on are XSI-conformant.
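
For the record, those lines can be pulled straight out of the builtin help
(assuming the help text quoted above):

    help test | grep 'EXPR1 -'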


On 23.08.2017 at 17:00, Chet Ramey wrote:

> On 8/23/17 10:49 AM, dethrophes wrote:
>> Well, technically I don't *have* to accept the performance penalty,
>> as I can just use the POSIX-conformant syntax, which is quicker.
>
> Wait, which posix-conforming syntax? Because your original example, which
> had five arguments to `test', is explicitly unspecified:
>
>     >4 arguments:
>         The results are unspecified.
>
> unless you're on an XSI-conformant system.
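
For what it's worth, the ambiguity POSIX worries about is easy to see
(a contrived sketch; '(' is just an awkward but legal filename):

    f='('
    # five arguments: POSIX leaves the parse unspecified, and shells differ
    # in how they treat the '(' operands (group delimiter vs. plain filename)
    [ -f "$f" -a -x "$f" ]
    # two three-argument invocations: always well defined
    [ -f "$f" ] && [ -x "$f" ]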






Re: extension of file-test primitives?

2017-08-23 Thread dethrophes

OK, I incorrectly inferred that bash was also deprecating it.

And just because I was curious,

I added the other syntaxes I'm aware of to see how they compared.
Looks like [[ -f file && -x file ]] is the quickest.


    test_file_1(){
        test -f "${1}" -a -x "${1}"
    }

    test_file_2(){
        test -f "${1}" && test -x "${1}"
    }

    test_file_3(){
        [ -f "${1}" -a -x "${1}" ]
    }

    test_file_4(){
        [[ -f "${1}" && -x "${1}" ]]
    }

    test_file_5(){
        [ -f "${1}" ] && [ -x "${1}" ]
    }

    test_file_6(){
        [[ -f "${1}" ]] && [[ -x "${1}" ]]
    }

    test_case(){
        local cnt=${1:?Missing repeat count}
        local test_func=${2}
        local test_file=${3}

        echo "${test_func}" "${test_file}"
        time {
            while [ $((cnt -= 1)) != 0 ]; do
                "${test_func}" "${test_file}"
            done
        }
    }

    : ${TMPDIR:=/tmp}

    setup_test(){
        touch "${TMPDIR}/file"
        touch "${TMPDIR}/exec_file"
        chmod a+x "${TMPDIR}/exec_file"
        for cFile in "${TMPDIR}/file" "${TMPDIR}/exec_file" ; do
            for cTest in test_file_{1,2,3,4,5,6} ; do
                time=$(test_case 1 "${cTest}" "${cFile}" 2>&1)
                echo ${time}
            done
        done
    }

./test_bash.sh setup_test
test_file_1 /tmp/file real 0m0.144s user 0m0.132s sys 0m0.008s
test_file_2 /tmp/file real 0m0.146s user 0m0.136s sys 0m0.008s
test_file_3 /tmp/file real 0m0.142s user 0m0.136s sys 0m0.004s
test_file_4 /tmp/file real 0m0.138s user 0m0.120s sys 0m0.016s
test_file_5 /tmp/file real 0m0.172s user 0m0.160s sys 0m0.008s
test_file_6 /tmp/file real 0m0.123s user 0m0.100s sys 0m0.020s
test_file_1 /tmp/exec_file real 0m0.138s user 0m0.116s sys 0m0.020s
test_file_2 /tmp/exec_file real 0m0.151s user 0m0.140s sys 0m0.008s
test_file_3 /tmp/exec_file real 0m0.142s user 0m0.140s sys 0m0.000s
test_file_4 /tmp/exec_file real 0m0.118s user 0m0.112s sys 0m0.004s
test_file_5 /tmp/exec_file real 0m0.162s user 0m0.148s sys 0m0.012s
test_file_6 /tmp/exec_file real 0m0.142s user 0m0.132s sys 0m0.004s



On 23.08.2017 at 17:17, Chet Ramey wrote:

> On 8/23/17 11:13 AM, dethrophes wrote:
>> Yeah, I just learned that now; it's been at least a decade since I looked
>> at the POSIX spec on test.
>>
>> Should probably update the bash help to reflect that,
>> as help test (in my version at least) only says
>>
>>    EXPR1 -a EXPR2 True if both expr1 AND expr2 are true.
>>    EXPR1 -o EXPR2 True if either expr1 OR expr2 is true.
>
> Why update the help documentation? Bash supports it. It's just deprecated
> in the posix standard.







Re: Patch for unicode in varnames...

2017-06-07 Thread Dethrophes
Instead of talking in terms of seriousness, it may be more useful to think in
terms of formality.

Even in grammatically strong and formal languages, variable and function names
are restricted in the characters they may use. This is not just because it
makes the parsing simpler but because it simplifies reading comprehension of
the code.

Bash is already a language whose subtleties are difficult for most people to
appreciate. Supporting an almost endless number of characters in variable and
function names would just make it endlessly more complex to understand.

This would be a bad idea in the same way that having control characters in
filenames is a bad idea: just because you can do something doesn't mean you
should.

This is also a question of the TCO (total cost of ownership) of code. The more
flexible you allow the coding to be, the more difficult it is to understand
other people's code. Perl (you need to be a guru to read someone else's code)
vs. Python (a noob can figure it out pretty quickly with Google).




On June 7, 2017 10:03:08 PM GMT+01:00, George  wrote:
>On Tue, 2017-06-06 at 10:20 -0400, Greg Wooledge wrote:
>> (OK, in reality, I am not taking any of this seriously.  This entire
>> proposal and discussion are like some bizarre fantasy land to me.  Bash
>> is a SHELL, for god's sake.  Not a serious programming language.  Even
>> serious programming languages are not ready for this; see the Python
>> proposal that was mentioned up-thread.)
>
>I don't think "shell" and "serious programming language" must be
>considered mutually exclusive.
>Shell scripting is "serious" in the sense that there are people who
>rely upon and actively develop shell scripts to keep their products
>working. If the language itself seems not "serious" I think that is
>because not enough effort has been made to extend and improve its
>functionality over time. (At least, not in the Bash project...  Korn
>Shell has done some good work along those lines.)
>But I think the "interactive" aspect is probably the most compelling
>argument for supporting something like this. The shell can serve as one
>of the primary tools for operating the computer. As such it should also
>aim to be a comfortable environment. If you tell a programmer that a
>shell only accepts ASCII in some contexts, there's a fair chance
>they'll understand why, and accept the limitation. For more casual
>users, this may not be the case. I think they'd be likely to wonder why
>they can't use accented characters, or other alphabets for parameter
>names. I don't think there's a particularly good reason not to allow
>it...  Though I'd personally like to see it implemented in a way that
>allows scripts to take advantage of that, and still work reliably
>regardless of the locale setting in which they're ultimately run.
>As a side note, I had expressed concern that Bash might not work
>correctly with GB-18030 encoding, but it appears that I was wrong. I
>set up a terminal to display the GB-18030 character set, set the locale
>within the terminal to GB-18030, and formed a string that I would
>expect to expose a parser bug (a string of multi-byte Chinese
>characters including 0x7C, the byte that corresponds to the ASCII
>vertical bar character), but it appeared to parse the string correctly
>according to the locale rather than break it up on the 0x7C byte. So
>that's good. :)
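
A rough reconstruction of that probe (my sketch, not George's exact test; it
assumes a zh_CN.GB18030 locale is installed, and that 0x81 0x7C is one valid
two-byte GB-18030 sequence whose trail byte happens to be 0x7C):

    word=$(printf '\x81\x7c\x81\x7c')
    LC_ALL=zh_CN.GB18030 bash -c "echo ${word}"
    # a locale-unaware parser would split the word at the 0x7C bytes as if
    # they were pipes; a locale-aware one echoes the two characters intact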

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.