Inline `ifdef style` debugging

2011-08-08 Thread Roger
I'm doing some research for one of my scripts and have always liked the C-style
ifdef inline debug statements.

The following script seems to work just fine when using the "echo" command
instead of the currently used "printf" command.

When using "printf", I get something like the following output on stdout:

$ ~/bin/script.sh
CheckingDone


What I intend to get, and do get when using "echo" and omitting \n's is:
$ ~/bin/script.sh
Checking dependencies...
Done checking dependencies.


Is there any way to fix this so "printf" can be used instead of "echo"?

Any additional ideas on performing this same task?  (I'm always concerned
about wasting CPU cycles and code readability.)

Using bash-4.2_p10 here.


---begin snip---
#!/bin/bash
DEBUG="1"

# Function to optionally handle executing included debug statements
_debug()
{
# I prefer using if/then for readability, but this is an unusual case
[ "${DEBUG}" -ne "0" ] && $@
}


_check_depends()
{
_debug printf "Checking depedencies...\n"

# ...omit more additional scripting

_debug printf "Done checking dependecies.\n\n"
}


# MAIN
_check_depends

_debug echo "TEST for kicks"

_debug printf "TEST for kicks\n"
---end snip---



-- 
Roger
http://rogerx.freeshell.org/



Re: Inline `ifdef style` debugging

2011-08-08 Thread Andreas Schwab
Roger  writes:

> [ "${DEBUG}" -ne "0" ] && $@

Missing quotes.

  [ "${DEBUG}" -ne 0 ] && "$@"

Andreas.

-- 
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."



Re: bug: return doesn't accept negative numbers

2011-08-08 Thread Eric Blake

On 08/07/2011 02:35 PM, Linda Walsh wrote:



Eric Blake wrote:

On 08/05/2011 05:41 PM, Linda Walsh wrote:

Seems to fail on any negative number, but 'exit status' is defined
as a short int -- not an unsigned value (i.e. -1 would return 255).


In bash, 'return -- -1' sets $? to 255 (note the --). But since that
is already an extension (POSIX does not require 'return' to support --
any more than it is required to support an argument of -1), I agree
with your argument that bash would be more useful if, as an extension
to POSIX, it would handle 'return -1' - in fact, that would match ksh
behavior. Conversely, since portable code already can't use it, it's
no skin off my back if nothing changes here.

---
How about portable code using:

(exit -1); return


That's not portable, either.  exit is allowed to reject -1 as invalid. 
POSIX is clear that exit and return have the same constraints - if an 
argument is provided, it must be 0-255 to be portable.


However, you are on to something - since bash allows 'exit -1' as an 
extension, it should similarly allow 'return -1' as the same sort of 
extension.  The fact that bash accepts 'exit -1' and 'exit -- -1', but 
only 'return -- -1', is the real point that you are complaining about.
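
A quick session illustrating the asymmetry (assuming the bash 4.x behavior
described above):

  $ bash -c 'exit -1'; echo $?          # accepted as an extension, masked to 8 bits
  255
  $ f() { return -- -1; }; f; echo $?   # accepted with --
  255
  $ g() { return -1; }; g               # rejected
  bash: return: -1: invalid option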


--
Eric Blake   ebl...@redhat.com   +1-801-349-2682
Libvirt virtualization library http://libvirt.org



Re: Inline `ifdef style` debugging

2011-08-08 Thread Dennis Williamson
On Mon, Aug 8, 2011 at 3:47 AM, Roger  wrote:
> I'm doing some research for one of my scripts and have always liked the C-style
> ifdef inline debug statements.
>
> The following script seems to work just fine when using the "echo" command
> instead of the currently used "printf" command.
>
> When using "printf", I get something like the following output on stdout:
>
> $ ~/bin/script.sh
> CheckingDone
>
>
> What I intend to get, and do get when using "echo" and omitting \n's is:
> $ ~/bin/script.sh
> Checking dependencies...
> Done checking dependencies.
>
>
> Is there any way to fix this so "printf" can be used instead of "echo"?
>
> Any additional ideas on performing this same task?  (I'm always concerned
> about wasting CPU cycles and code readability.)
>
> Using bash-4.2_p10 here.
>
>
> ---begin snip---
> #!/bin/bash
> DEBUG="1"
>
> # Function to optionally handle executing included debug statements
> _debug()
> {
>    # I prefer using if/then for readability, but this is an unusual case
>    [ "${DEBUG}" -ne "0" ] && $@
> }
>
>
> _check_depends()
> {
>    _debug printf "Checking depedencies...\n"
>
>    # ...omit more additional scripting
>
>    _debug printf "Done checking dependecies.\n\n"
> }
>
>
> # MAIN
> _check_depends
>
> _debug echo "TEST for kicks"
>
> _debug printf "TEST for kicks\n"
> ---end snip---
>
>
>
> --
> Roger
> http://rogerx.freeshell.org/
>
>

Another way to write the _debug() function:

#!/bin/bash
debug=true

# Function to optionally handle executing included debug statements
_debug()
{
   $debug && "$@"
}

Using this method, you don't really need a function at all. You can
just use the debug variable directly (you could use an underscore
prefix):

   $debug && printf "Checking dependencies...\n"

   $debug && {
       # it's even more like ifdef since you can
       # conditionally execute blocks of code
       foo
       bar
   }

You have to make sure that $debug contains only "true" or "false" or
you'll get errors. There are exceptions to this, but the complexity
isn't worth the effort.

I prefer using lowercase or mixed case variable names by habit to
reduce the chances of name collisions with predefined variables
(although that's not an issue with this specific script).

Since you're writing a script for Bash, you might as well use Bash
features. Here is the main line of your function a couple of more
different ways (using the original capitalization and value):

   [[ $DEBUG != 0 ]] && "$@"# string comparison
   (( DEBUG != 0 )) && "$@"  # integer comparison (note,
the $ is unnecessary)
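
Putting it together, a minimal self-contained sketch of the whole idea
(the messages and function names are just placeholders):

#!/bin/bash
debug=true               # set to false to silence the debug statements

_debug()
{
    $debug && "$@"       # $debug expands to the 'true' or 'false' builtin
}

_debug printf 'Checking dependencies...\n'
$debug && {
    # conditionally execute a whole block, ifdef-style
    echo "extra diagnostics"
}
echo "done"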

-- 
Visit serverfault.com to get your system administration questions answered.



Re: Inline `ifdef style` debugging

2011-08-08 Thread Roger
> On Mon, Aug 08, 2011 at 08:56:36AM -0500, Dennis Williamson wrote:
>On Mon, Aug 8, 2011 at 3:47 AM, Roger  wrote:
>> I'm doing some research for one of my scripts and have always liked the C-style
>> ifdef inline debug statements.

...

>Another way to write the _debug() function:
>
>#!/bin/bash
>debug=true
>
># Function to optionally handle executing included debug statements
>_debug()
>{
>   $debug && "$@"
>}
>
>Using this method, you don't really need a function at all. You can
>just use the debug variable directly (you could use an underscore
>prefix):
>
>   $debug && printf "Checking dependencies...\n"
>
>   $debug && {
>       # it's even more like ifdef since you can
>       # conditionally execute blocks of code
>       foo
>       bar
>   }
>
>You have to make sure that $debug contains only "true" or "false" or
>you'll get errors. There are exceptions to this, but the complexity
>isn't worth the effort.

This is interesting because I'm guessing it might save one or two CPU
cycles, although it still does use a test at "&&", and is still
readable, if not more readable than before.

>I prefer using lowercase or mixed case variable names by habit to
>reduce the chances of name collisions with predefined variables
>(although that's not an issue with this specific script).

I know the system defines capitalized variables, so I usually just use
all-capitalized variables for global script variables within the
header/top of the bash script.  I've heard of the risk of collision,
and have tried mixed-case variable names, but they were not as readable
and the risk seemed minimal ... unless a Linux distribution makes
up a system variable that conflicts with one of my all-capitalized
variable names.

I then use lower case for locally used variables within the Bash script or
within the functions of the script, similar to C.  It just makes good
sense, and I know where to look for predefined capitalized variable names
and their definitions.

>Since you're writing a script for Bash, you might as well use Bash
>features. Here is the main line of your function a couple of more
>different ways (using the original capitalization and value):
>
>   [[ $DEBUG != 0 ]] && "$@"   # string comparison
>   (( DEBUG != 0 )) && "$@"    # integer comparison (note, the $ is unnecessary)

Yup.  I was thinking of sticking with 0's & 1's, as text comparison
requires more CPU cycles ... although negligible these days, it's just
good programming practice.

-- 
Roger
http://rogerx.freeshell.org/



Re: Inline `ifdef style` debugging

2011-08-08 Thread Steven W. Orr

On 8/8/2011 1:09 PM, Roger wrote:

On Mon, Aug 08, 2011 at 08:56:36AM -0500, Dennis Williamson wrote:
On Mon, Aug 8, 2011 at 3:47 AM, Roger  wrote:

I'm doing some research for one of my scripts and have always liked the C-style
ifdef inline debug statements.


...


Another way to write the _debug() function:

#!/bin/bash
debug=true

# Function to optionally handle executing included debug statements
_debug()
{
   $debug && "$@"
}

Using this method, you don't really need a function at all. You can
just use the debug variable directly (you could use an underscore
prefix):

   $debug && printf "Checking dependencies...\n"

   $debug && {
       # it's even more like ifdef since you can
       # conditionally execute blocks of code
       foo
       bar
   }

You have to make sure that $debug contains only "true" or "false" or
you'll get errors. There are exceptions to this, but the complexity
isn't worth the effort.


This is interesting because I'm guessing it might save one or two CPU
cycles, although it still does use a test at "&&", and is still
readable, if not more readable than before.


I prefer using lowercase or mixed case variable names by habit to
reduce the chances of name collisions with predefined variables
(although that's not an issue with this specific script).


I know the system defines capitalized variables, so I usually just use
all-capitalized variables for global script variables within the
header/top of the bash script.  I've heard of the risk of collision,
and have tried mixed-case variable names, but they were not as readable
and the risk seemed minimal ... unless a Linux distribution makes
up a system variable that conflicts with one of my all-capitalized
variable names.

I then use lower case for locally used variables within the Bash script or
within the functions of the script, similar to C.  It just makes good
sense, and I know where to look for predefined capitalized variable names
and their definitions.


Since you're writing a script for Bash, you might as well use Bash
features. Here is the main line of your function a couple of more
different ways (using the original capitalization and value):

   [[ $DEBUG != 0 ]] && "$@"   # string comparison
   (( DEBUG != 0 )) && "$@"    # integer comparison (note, the $ is unnecessary)


Yup.  I was thinking of sticking with 0's & 1's, as text comparison
requires more CPU cycles ... although negligible these days, it's just
good programming practice.



Two things:

1. Worrying about lost cycles every time you call _debug because it checks to 
see if debug is True is pretty miserly. If you *really* have to worry about 
that kind of economics then it's questionable whether you're even in the right 
programming language. Having said that, you can certainly create a function 
that is defined at startup based on the value of debug. Remember that 
functions are not declared, their creation is executed.


if (( debug ))
then
    _debug()
    {
        "$@"
        # I do question whether this is a viable construct, versus
        # eval "$@"
    }
else
    _debug()
    {
        :
    }
fi

2. The other thing is that instead of

#!/bin/bash
debug=true

at the beginning of the file, you can just say:

#! /bin/bash
: ${debug:=0}   # or false or whatever feels good.

Then when you want to run the program with debug turned on, just say:

debug=1 prog with various args

and the environment variable will override the global variable of the same
name in the program, and only set it for the duration of the program.
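
For example, with a hypothetical script named 'prog' built that way:

  $ cat prog
  #!/bin/bash
  : ${debug:=0}            # default off; the environment can override
  (( debug )) && echo "debugging enabled"
  echo "normal output"

  $ ./prog
  normal output
  $ debug=1 ./prog
  debugging enabled
  normal output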


Is this helpful?


--
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor?Black holes are where God divided by zero. Listen to me! We are all- 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net



Re: Inline `ifdef style` debugging

2011-08-08 Thread Roger
> On Mon, Aug 08, 2011 at 01:20:25PM -0400, Steven W. Orr wrote:

>Two things:
>
>1. Worrying about lost cycles every time you call _debug because it checks to 
>see if debug is True is pretty miserly. If you *really* have to worry about 
>that kind of economics then it's questionable whether you're even in the right 
>programming language. Having said that, you can certainly create a function 
>that is defined at startup based on the value of debug. Remember that 
>functions are not declared, their creation is executed.
>
>if (( debug ))
>then
> _debug()
> {
> "$@"
> # I do question whether this is a viable construct, versus
> # eval "$@"
> }
>else
> _debug()
> {
> :
> }
>fi
>
>2. The other thing is that instead of
>
>#!/bin/bash
>debug=true
>
>at the beginning of the file, you can just say:
>
>#! /bin/bash
>: ${debug:=0}  # or false or whatever feels good.
>
>Then when you want to run the program with debug turned on, just say:
>
>debug=1 prog with various args
>
>and the environment variable will override the global variable of the same 
>name in the program, and only set it for the duration of the program.
>
>Is this helpful?

Yes.

I've decided to use Mr. Williamson's suggestion:

_debug()
{
[[ $DEBUG != 0 ]] && "$@"
}


The reply above was very helpful and might provide people searching for
"ifdef style debugging" for Bash further info on inline debugging techniques.

The above does complete the task of inline ifdef-style debugging; however, I
think it might confuse or slightly hinder readability for some beginners.

I'll flag this email and restudy it later, as I think this "ifdef style" is the
better method versus having a function read every time even though it isn't
executed at all!


-- 
Roger
http://rogerx.freeshell.org/




Bug, or someone wanna explain to me why this is a POSIX feature?

2011-08-08 Thread Linda Walsh


I was testing functions in my shell via cut/paste; the thing is, with each
paste, I'd get my dir listed (sometimes multiple times)
on each line entered.

Now I have:
shopt:
no_empty_cmd_completion on

i.e. it's not supposed to expand an empty line

but type in
function foo {
return 1
}

When I hit tab it lists out all the files in my dir -- which
explains why when I cut/paste, any tab-indented line will list
out the dir, and if it is multiply indented, it will be listed
once for each indent level!

Now I'm sure someone will come up and tell me how POSIX this or that.
But maybe POSIX needs to be retired if this is some required feature!

When I tell it not to complete an empty line, and hit tab at the
beginning of a line -- even inside a function def, I don't need my
dir listed!

So is this a bug, or can we retire POSIX??

running bash  4.1.10(1)-release (x86_64-suse-linux-gnu)
(released version in Suse 11.4).









Re: Bug, or someone wanna explain to me why this is a POSIX feature?

2011-08-08 Thread Michael Witten
On Mon, Aug 8, 2011 at 18:44, Linda Walsh  wrote:
>
> I was testing functions in my shell via cut/paste; the thing is, with each
> paste, I'd get my dir listed (sometimes multiple times)
> on each line entered.
>
> Now I have:
> shopt:
> no_empty_cmd_completion on
>
> i.e. it's not supposed to expand an empty line
>
> but type in
> function foo {
> return 1
> }
> 
> When I hit tab it lists out all the files in my dir -- which
> explains why when I cut/paste, any tab-indented line will list
> out the dir, and if it is multiply indented, it will be listed
> once for each indent level!
>
> Now I'm sure someone will come up and tell me how POSIX this or that.
> But maybe POSIX needs to be retired if this is some required feature!
>
> When I tell it not to complete an empty line, and hit tab at the
> beginning of a line -- even inside a function def, I don't need my
> dir listed!
>
> So is this a bug, or can we retire POSIX??
>
> running bash  4.1.10(1)-release (x86_64-suse-linux-gnu)
> (released version in Suse 11.4).

Don't use tab characters on the interactive command line.

I'm not sure what you're mumbling about POSIX.



Re: Bug, or someone wanna explain to me why this is a POSIX feature?

2011-08-08 Thread Linda Walsh



Michael Witten wrote:

On Mon, Aug 8, 2011 at 18:44, Linda Walsh  wrote:

I was testing functions in my shell via cut/paste; the thing is, with each
paste, I'd get my dir listed (sometimes multiple times)
on each line entered.

Now I have:
shopt:
no_empty_cmd_completion on

i.e. it's not supposed to expand an empty line

but type in
function foo {
return 1
}

When I hit tab it lists out all the files in my dir -- which
explains why when I cut/paste, any tab-indented line will list
out the dir, and if it is multiply indented, it will be listed
once for each indent level!

Now I'm sure someone will come up and tell me how POSIX this or that.
But maybe POSIX needs to be retired if this is some required feature!

When I tell it not to complete an empty line, and hit tab at the
beginning of a line -- even inside a function def, I don't need my
dir listed!

So is this a bug, or can we retire POSIX??

running bash  4.1.10(1)-release (x86_64-suse-linux-gnu)
(released version in Suse 11.4).


Don't use tab characters on the interactive command line.

I'm not sure what you're mumbling about POSIX.



If you don't know, never mind; it's irrelevant.

However, can you explain the purpose of the shopt option
'no_empty_cmd_completion'?

And if you can do that, can you explain why I shouldn't use tab as an indent
char on an empty line...?






Who's the target? Was: Inline `ifdef style` debugging

2011-08-08 Thread Steven W. Orr

On 8/8/2011 2:38 PM, Roger wrote:

On Mon, Aug 08, 2011 at 01:20:25PM -0400, Steven W. Orr wrote:



Two things:

1. Worrying about lost cycles every time you call _debug because it checks to
see if debug is True is pretty miserly. If you *really* have to worry about
that kind of economics then it's questionable whether you're even in the right
programming language. Having said that, you can certainly create a function
that is defined at startup based on the value of debug. Remember that
functions are not declared, their creation is executed.

if (( debug ))
then
 _debug()
 {
 "$@"
 # I do question whether this is a viable construct, versus
 # eval "$@"
 }
else
 _debug()
 {
 :
 }
fi

2. The other thing is that instead of

#!/bin/bash
debug=true

at the beginning of the file, you can just say:

#! /bin/bash
: ${debug:=0}   # or false or whatever feels good.

Then when you want to run the program with debug turned on, just say:

debug=1 prog with various args

and the environment variable will override the global variable of the same
name in the program, and only set it for the duration of the program.

Is this helpful?


Yes.

I've decided to use Mr. Williamson's suggestion:

_debug()
{
 [[ $DEBUG != 0 ]] && "$@"
}


The reply above was very helpful and might provide people searching for
"ifdef style debugging" for Bash further info on inline debugging techniques.

The above does complete the task of inline ifdef-style debugging; however, I
think it might confuse or slightly hinder readability for some beginners.

I'll flag this email and restudy it later, as I think this "ifdef style" is the
better method versus having a function read every time even though it isn't
executed at all!




Thanks, I think ;-)

But you raise an interesting question: When I write code, am I targeting the
person who knows less of the language, and should therefore dumb my code
down?  Or should I feel free to use the well-documented constructs that I went
out of my way to learn (and, of course, to doc what I write)?


Imagine a code base of 10s of thousands of lines of bash, where there are no 
shared modules, no arrays, almost no error checking, no local variables, no 
command line option parsing, no anything that's not completely vanilla. In 
short, only constructs that contributed to Sun's succeeding for years at 
foisting the Cshell on people.


Languages come in different sizes. C is small, PL/I is large. I'd categorize 
bash as somewhere in the middle. If you're in high school, then the name of 
the game is to write a program that prints the right answer. If you're a 
professional software engineer, then the name of the game is to write high 
quality, industrial grade code, that is well documented, and targets the 
reader of the code with all the information (s)he needs to understand just 
what this verfluchtiges thing is trying to do.


It can be very frustrating that people will go out of their way to write 
compiled code in painstaking detail, but then when they get to shell scripts, 
they throw their hands up and just get it done with no concept of doing it well.


Two examples: New person out of school is told to write something and to use 
some code that almost does what they want from Big Blue. The guy who wrote the 
code from Big Blue clearly thought that all he knew from Csh should be used in 
his bash scripts.


Only that it was full of things like:

if ( test $? -eq 0 )
...

Another case: There's a guy in a well-known Linux journal who writes a
column on bash scripting.  He wouldn't know properly written code if he got
bashed in the face with it.  He seems hell-bent on doing things with as many
processes as possible; basically pushing the limits of what a junior Bourne
shell guy might have mastered years earlier.


I didn't mean to go on a rant, but this list here should not be another place 
that discourages people from learning more of the language elements that allow 
us to do better work. Obscure features are in the mind of the uneducated. Doc 
your code well, but don't engage in elbow kissing in the misdirected intent to 
make the code simpler.


No?

--
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor?Black holes are where God divided by zero. Listen to me! We are all- 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net



Re: Bug, or someone wanna explain to me why this is a POSIX feature?

2011-08-08 Thread Davide Brini
On Mon, 08 Aug 2011 12:07:21 -0700, Linda Walsh  wrote:

> However, can you explain the purpose of the shopt option
> 'no_empty_cmd_completion'?
> And if you can do that, can you explain why I shouldn't use tab as an
> indent char on an empty line...?

Short answer: if you do

foo() {
>  # hit tab here
 
(the ">" is the prompt bash gives you if you hit enter), that's NOT an
empty line as far as bash is concerned, so the option
no_empty_cmd_completion does not come into play. In fact, you could do the
same thing with

foo() { # hit tab here

and I'm sure you wouldn't consider that an empty line.

Note that it's not related specifically to declaring a function: try for
example

ls /tmp &&   # hit enter here
>  # hit tab here, you get the completions

And as already stated, POSIX has nothing to do with all this.

-- 
D.



Re: Bug, or someone wanna explain to me why this is a POSIX feature?

2011-08-08 Thread Davide Brini
On Mon, 8 Aug 2011 21:14:50 +0200, Davide Brini  wrote:

> In fact, you could do the same thing with
> 
> foo() { # hit tab here
> 
> and I'm sure you wouldn't consider that an empty line.

I have to take that back: it looks like bash treats the above differently
depending on whether enter was pressed or not:

foo() { # hit tab here
Display all 2138 possibilities? (y or n)

foo() {  # hit enter here
> # hit tab here
Display all 112 possibilities? (y or n)

The latter only attempts completion from names in the current directory.

On the other hand, with no_empty_cmd_completion set, no completion at all is
attempted in the first case, while the second case still attempts completion
from local names.

-- 
D.



Re: Bug, or someone wanna explain to me why this is a POSIX feature?

2011-08-08 Thread Linda Walsh



Davide Brini wrote:


foo() {  # hit enter here
> # hit tab here
Display all 112 possibilities? (y or n)

The latter only attempts completion from names in the current directory.

---
Right.   That was my issue.

My understanding is it shouldn't try to perform completion on an empty
line.  Period.

Consider, at the command line: hitting tab on an otherwise blank line
doesn't display the local filenames
(assuming no_empty_cmd_completion is set)...


The POSIX reference was to a previous note I posted ... where I wanted to
know why return -1 threw an error, but you could work around it with
(exit -1); return.  People cited POSIX, but I asked for return to be
consistent with 'exit', as exit (as it currently functions) is consistent
with most other languages (i.e. they define the value as a short int, but
it is still truncated to 8 bits).
Having exit & return in bash behave similarly to other langs can only
help in shell programming.


If 'posix-only' (or whatever the option is) is set, then probably both
(exit and return) should 'throw' errors -- 'value out of range' (not
'invalid option').

Anyway... command completion! ... uh, it shouldn't be trying to complete
things from the current dir.   That's the whole point of
no_empty_cmd_completion, no?





Re: bug: return doesn't accept negative numbers

2011-08-08 Thread Linda Walsh



Eric Blake wrote:

(exit -1); return


That's not portable, either.  exit is allowed to reject -1 as invalid. 
POSIX is clear that exit and return have the same constraints - if an 
argument is provided, it must be 0-255 to be portable.


However, you are on to something - since bash allows 'exit -1' as an 
extension, it should similarly allow 'return -1' as the same sort of 
extension.  The fact that bash accepts 'exit -1' and 'exit -- -1', but 
only 'return -- -1', is the real point that you are complaining about.




Clearly(? :-)), 'return' should accept -1 in 'normal mode',
and both 'exit' and 'return' should return error "ERANGE" if "--posix" is
set and -1 is given.  'Invalid option' doesn't make as much sense in
this situation; if it was -k or -m, sure... but in this case, it's a fact
that --posix artificially limits exit values apart from what is allowed in
most prog langs (which accept negative, but still return results &0xff),
so for POSIX, it's a matter of disallowing a 'normal range', vs. it being
an invalid option.

Would that be sound reasoning?







Re: bug: return doesn't accept negative numbers

2011-08-08 Thread Linda Walsh



Bob Proulx wrote:

Linda Walsh wrote:

Bob Proulx wrote:

Exit codes should be in the range 0-255.

---
I suppose you don't realize that 'should' is a subjective opinion that
sometimes has little to do with objective reality.


Sigh.  Okay.  Keep in mind that turn about is fair play.  You are
giving it to me.  Please be gracious on the receiving end of it. 

---
I *do* love it!   I strive vociferously not to be hypocritical,
so what I put out, feel free to give back -- where I get in trouble is
that that is how I operate with other people (not from the start, but given
time, I start reciprocating their behavior and language toward me...).

Mirroring, or adopting another's form of address/speech, etc., is
usually considered a good thing to develop rapport (Ha!)... -- except
when the other person is being a jerk! ... then it causes problems...

Unfortunately, unless I catch myself, 'mirroring' is almost
unconscious for me.  But it works to my disadvantage as often or more
so in society these days (with everyone being so 'friendly' by default
[not!]).


So I try to buffer and average input and start from a 'positive',
friendly point -- things go up or down from there.



You
do realize that you should comply with documented interfaces.  Why?
Because that is the way the interface specification is documented.  If
you want things to work easily then most generally the easiest way is
to follow the documented practice.  Unless you have an exceptional
reason for doing something different from the documented interface
then actively doing anything else is actively trying to be broken.


Ok, lets look at what you say below...



POSIX was intended to document
the existing behavior so that if you followed the specification then
you would have some hope of being able to run successfully on multiple
systems.  And it was intended to curb the differences from becoming
greater than they were already.  It tries to rein in the divergent
behaviors so that there won't be more differences than already exist.

This is what POSIX says about it:

  void exit(int status);

  The value of status may be 0, EXIT_SUCCESS, EXIT_FAILURE, or any
  other value, though only the least significant 8 bits (that is,
  status & 0377) shall be available to a waiting parent process.



Exactly!

'int status' means it takes a 'signed' value.  The value will be ANDed
with 0377, but the *input* allows positive and negative numbers.
They will then be masked to 8 bits.

That's all I'm asking for.
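
The masking is easy to see from the shell, since bash's exit already
accepts negative values as an extension:

  $ echo $(( -1 & 0377 ))
  255
  $ (exit -3); echo $?
  253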






manpage clarification/addition.

2011-08-08 Thread Linda Walsh

Lest some think functions can replace aliases, there's a line in the manpage
that I feel needs amending.  Currently it says:

   "For almost every purpose, aliases are superseded by shell functions."

While true, it may too likely be read by some to mean that aliases have no
useful purpose.  So I'd suggest:

   "For most purposes, aliases are superseded by shell functions, though
aliases are still required in some situations".


---


For those who would need an example, consider the following
two examples:
---

alias sub=function  # (example 1: I couldn't see how to use a
                    #  function to define a keyword for 'function')

declare -ix Debug_Ops=1   #set to 0/1 to turn off trace of myop
declare -ixr _D_myop=0x01
sub  debug {
   local op="${1:-}"; test "$op" || return 1;
   local dop="_D_$op"
   local vop="${!dop:-0}"
   local res
   ((res=vop&Debug_Ops))   # if bit set, evals as nonzero, so returns 0
}

#   could be a function, but an alias is pretty straightforward
alias DebugPop='test -n "$save_ops" && set -$save_ops'

# But DebugPush takes a "param" that includes the debug flag for the routine.

# Since the param needs to be "subbed" into the middle
# of the string, an alias obviously won't work, so first try a plain function:


sub DebugPush { local save_ops="${-:-}"; debug $1 || set +x; }

sub xxyz {
   DebugPush myop
   echo do some code;a=1;b=2;
   DebugPop
}
---
That one won't work -- because 'save_ops', which I wanted as a local to xxyz, is
local to DebugPush.

Needed for workaround:

unset -f DebugPush   #make sure previous function is undef'd
sub DebugPush_helper { debug $1  && set +x; }
alias DebugPush='local save_ops="${-:-}"; DebugPush_helper'
sub xxyz {
   DebugPush myop
   echo do some code;a=1;b=2;
   DebugPop
}

Note -- if you run the above, you have to re-define 'sub xxyz', as alias
subbing is done at function-define time... so it needs to be redefined to
use the new definition of DebugPush.
In the 2nd example the 'local' is in xxyz's context (where it was desired).

The function only "turns off" (could be extended). tracing on some 
functions...

Just wrote it today, so haven't thought through the design that much...just
wanted to turn of some tracing in low level functions...so ... that's 
what got

implemented... ;-)









Re: Bug, or someone wanna explain to me why this is a POSIX feature?

2011-08-08 Thread Michael Witten
On Mon, 8 Aug 2011 21:40:39 +0200, Davide Brini wrote:

> On Mon, 8 Aug 2011 21:14:50 +0200, Davide Brini  wrote:
> 
>> In fact, you could do the same thing with
>> 
>> foo() { # hit tab here
>> 
>> and I'm sure you wouldn't consider that an empty line.
> 
> I have to take that back: it looks like bash treats the above differently
> depending on whether enter was pressed or not:
> 
> foo() { # hit tab here
> Display all 2138 possibilities? (y or n)
> 
> foo() {  # hit enter here
> > # hit tab here
> Display all 112 possibilities? (y or n)
> 
> The latter only attemps completion from names in the current directory.
> 
> On the other hand, with no_empty_cmd_completion set, no completion at all is
> attempted in the first case, while the second case still attempts completion
> from local names.

This behavior does indeed seem buggy.

From the following:

  $ info '(bash)The Shopt Builtin'

we have:

  `no_empty_cmd_completion'
If set, and Readline is being used, Bash will not attempt to
search the `PATH' for possible completions when completion is
attempted on an empty line.

Now, firstly, there is possibly a conceptual conflict between `empty_cmd' and
`empty line', but seeing as virtually everything that can be input is defined
as a command of some sort or another, maybe there's no problem.

At any rate, the input:

  $ foo() { # hit tab here, right after the space character.

is not only a non-empty line, but it is also an incomplete function definition
*command*, according to the nomenclature and shell grammar specified by POSIX:

  Base Definitions Issue 7, IEEE Std 1003.1-2008
  
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_05

Hence, it seems to me that completion should be attempted regardless of
whether `no_empty_cmd_completion' is set; of course, searching `PATH'
for a completion would likely result in no available completions anyway.

Moreover, given that searching `PATH' is mentioned, one could easily
imagine that the completion in question is that of the names of simple
commands. In that capacity, there is no simple command to complete (in
other words, there is an `empty' command to complete). In this case, then,
it is correct for there to be no completion when `no_empty_cmd_completion'
is set, and it is correct for there to be a list of all available commands
when `no_empty_cmd_completion' is unset (which would appear to be the
current behavior).

Now, what about the continuation of the command line?

  $ foo() {# hit enter here
  > #hit tab here.

According to the same POSIX documentation:

  PS2
  Each time the user enters a  prior to completing a
  command line in an interactive shell, the value of this variable
  shall be subjected to parameter expansion and written to standard
  error. The default value is "> " . This volume of POSIX.1-2008
  specifies the effects of the variable only for systems supporting
  the User Portability Utilities option.

and together with bash's shopt options `cmdhist' and `lithist', it would
seem that bash [mostly] treats `multi-line' commands (as bash calls them)
as one large single command line. Hence, a tab on a continuation line might
be expected to invoke only file name completion (which does happen, and is
indeed orthogonal to the setting of `no_empty_cmd_completion').

Unfortunately, such behavior is still inconsistent with the current completion
behavior as described above, because bash basically converts an interactive
newline in a compound command to a semicolon, thereby initiating another
simple command.

For instance, regardless of `no_empty_cmd_completion', the following:

  $ foo() { echo a; ech#tab

completes `echo' on my system.

When the option is set, the following:

  $ foo() { echo a; #tab

doesn't attempt any completion (visibly, anyway). However, when the option is
UNset, completion is attempted based on all available commands (thousands).

So, based on the existing semantics and descriptions, one would expect 

  $ foo() {#enter
  > #tab

to make no completion attempt when the option is set, and to make an attempt
based on all available command names when unset, but not the current behavior,
which is to suggest file name completions.

Now, of course, completion is much more complex than just command name and
file name completion, and I think it could be improved to be more useful.

For instance, according to shell grammar, any compound command is acceptable
as the body of a function, so that we could have this:

  $ i=0
  $ foo()
  > while [ $i -lt 10 ]; do i=$((i + 1)); echo $i; done
  $ foo#enter
  1
  2
  3
  4
  5
  6
  7
  8
  9
  10
  $ bar()
  > if [ $i -eq 10 ]; then echo 11; fi
  $ bar
  11

Thus, for instance, completing function definition commands should produce
results like this:

  $ foo() #tab
  ( { case  ifwhile until

Also, why doesn't `shopt' have option completion? etc.

In any case, even if `no_empty_cmd_completion' were to behave as Linda
expected, her tabs would still get eaten when pasted on the interactive
command line.

Re: Bug, or someone wanna explain to me why this is a POSIX feature?

2011-08-08 Thread Linda Walsh



Michael Witten wrote:


In any case, even if `no_empty_cmd_completion' were to behave as Linda
expected, her tabs would still get eaten when pasted on the interactive
command line.

---
Which is what Linda expects...
the function definition wouldn't hold spaces or tabs or whitespace unless
it is quoted.

She just doesn't expect, when pasting a function
that is from a source file into her shell, scads of output that is
unexpected, unwanted, and more than a bit confusing.

Fortunately, if you have the function 'well formed' and 'well defined',
it seems to make no difference as far as defining the actual function,
BUT having all the names of my current dir blatted out for each tab
char is a pain.

So don't assume or infer that Linda wanted the tabs included in bash's
internal function representation.   She just didn't want the blather.

Reasonable?  Or is someone going to tell me why blather is a desired
and wanted feature (under one standard or another! ;-))...









Re: Bug, or someone wanna explain to me why this is a POSIX feature?

2011-08-08 Thread Michael Witten
On Mon, 08 Aug 2011 15:56:30 -0700, Linda Walsh wrote:

> Michael Witten wrote:
>> 
>> In any case, even if `no_empty_cmd_completion' were to behave as Linda
>> expected, her tabs would still get eaten when pasted on the interactive
>> command line.
> 
> Which is what Linda expects... the function definition wouldn't
> hold spaces or tabs or whitespace unless it is quoted.
>
> She just doesn't expect, when pasting a function that is from a
> source file into her shell, scads of output that is unexpected,
> unwanted, and more than a bit confusing.
> 
> Fortunately, if you have the function 'well formed' and 'well
> defined', it seems to make no difference as far as defining the
> actual function, BUT having all the names of my current dir
> blatted out for each tab char is a pain.
> 
> So don't assume or infer that Linda wanted the tabs included in
> bash's internal function representation. She just didn't want the
> blather.
> 
> Reasonable? Or is someone going to tell me why blather is a
> desired and wanted feature (under one standard or another! ;-))...

Reasonable, but naive.

Interactive bash can complete variable names, even when they are quoted.

On my system, these variables are in the environment:

  TERM
  TERMINFO

Now, when I try to paste the following ("$TERM$TERMINFO"):

  foo() { echo "$TERM   $TERMINFO"; }

this is the result in my terminal:

  mfwitten$ foo() { echo "$TERM$TERMINFO"; }

which is not what I wanted; if I save that same line in a file
and then run a shell on that file, I get the expected output.

Similarly, if I paste the following ("$TERM$TERMINFO"):

  foo() { echo "$TERM   $TERMINFO"; }

the result is this in my terminal:

  mfwitten$ foo() { echo "$TERM
  $TERM  $TERMINFO  
  mfwitten$ foo() { echo "$TERM$TERMINFO"; }

Here, the 2 tabs ask bash/readline to print the possible completions
for `$TERM' (hence the scads of output) and then the rest of the
pasted input gets written in as before.

If you want to work with your functions interactively, then just
save them to a file (like `/tmp/s') and source them in:

  mfwitten$ source /tmp/s
  mfwitten$ foo
  rxvt-unicode-256color /usr/share/terminfo

or, equivalently:

  mfwitten$ . /tmp/s
  mfwitten$ foo
  rxvt-unicode-256color /usr/share/terminfo

This is much more efficient than copying and pasting when you need
to do a lot of tinkering.

If you insist on copying and pasting into an interactive terminal,
then never ever use tab characters anywhere.



Re: manpage clarification/addition.

2011-08-08 Thread Roger
> On Mon, Aug 08, 2011 at 02:28:00PM -0700, Linda Walsh wrote:
>Lest some think functions can replace aliases, there's a line in the manpage
>that I feel needs amending.  Currently it says:
>
>"For almost every purpose, aliases are superseded by shell functions."
>
>While true, it may too likely be read by some to mean that aliases have no
>useful purpose.  So I'd suggest:
>
>"For most purposes, aliases are superseded by shell functions, though
>aliases are still required in some situations".

The latter seems even trickier to read than the previous.

I would suggest scrapping both attempted clarifications and stating one (maybe
two) solid pros for each and then a con for each.  Or something of a mix within
one sentence for the sake of brevity?

I've seen & worked with both and in my opinion:

---
Aliases are really meant for CLI or bashrc usage and can be quickly written.
Aliases seem to have some limitations as to what statements they may contain
as it's a one-liner.

Functions can easily contain more complicated statements, and can also be
contained within bashrc, and utilized via CLI -- but really are used within
scripts.

As far as system resources, I've heard functions are quicker.  But I don't know
if this is accurate as functions usually contain more execution statements!
---


-- 
Roger
http://rogerx.freeshell.org/



Re: Who's the target? Was: Inline `ifdef style` debugging

2011-08-08 Thread Roger
> On Mon, Aug 08, 2011 at 03:07:15PM -0400, Steven W. Orr wrote:
>
>I didn't mean to go on a rant, but this list here should not be another place 
>that discourages people from learning more of the language elements that allow 
>us to do better work. Obscure features are in the mind of the uneducated. Doc 
>your code well, but don't engage in elbow kissing in the misdirected intent to 
>make the code simpler.

This is easily a place where everybody is right.  It's a question of how
much knowledge you have, personal preference, etc.

This is why I flagged your email for later research.  I thought it perfectly
solved the problem; however, it could be difficult if an intermediate user saw
it.  Matter of fact, I even printed the code out for later reading!

It's just how I code (work).  When I write something, I want somebody else to
be able to easily pick it up and fix/repair it.  Look at it from this
perspective: it would be like writing a Bible in Old English ... very
effectively written to display emotion, unlike our English usage today.
However, translating the text to a more easily read state loses some or most
of the meaning.


As for this feature with "ifdef C style statements", maybe it's something that
can be improved upon later within Bash code?

-- 
Roger
http://rogerx.freeshell.org/



Re: bug: return doesn't accept negative numbers

2011-08-08 Thread Chet Ramey
On 8/8/11 8:53 AM, Eric Blake wrote:

> However, you are on to something - since bash allows 'exit -1' as an
> extension, it should similarly allow 'return -1' as the same sort of
> extension.  The fact that bash accepts 'exit -1' and 'exit -- -1', but only
> 'return -- -1', is the real point that you are complaining about.

That's a reasonable extension to consider for the next release of bash.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: bug: return doesn't accept negative numbers

2011-08-08 Thread Mike Frysinger
On Monday, August 08, 2011 21:20:29 Chet Ramey wrote:
> On 8/8/11 8:53 AM, Eric Blake wrote:
> > However, you are on to something - since bash allows 'exit -1' as an
> > extension, it should similarly allow 'return -1' as the same sort of
> > extension.  The fact that bash accepts 'exit -1' and 'exit -- -1', but
> > only 'return -- -1', is the real point that you are complaining about.
> 
> That's a reasonable extension to consider for the next release of bash.

i posted a patch for this quite a while ago.  not that it's hard to code.
-mike




equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
Has anyone ever come across an equivalent to Linux's readlink -f that
is implemented purely in bash?

(I need readlink's function on AIX where it doesn't seem to be available).

jon.



Re: bash completion

2011-08-08 Thread Clark J. Wang
On Sun, Aug 7, 2011 at 11:35 PM, jonathan MERCIER
wrote:

> I have a bash completion file (see below)
> It works fine, but I would like to add a feature: do not append a space
> after the flag when it contains '='.
> Currently, when I do:
> $ ldc2 -Df
> ldc2 -Df=⊔
> I would like:
>  ldc2 -Df
> ldc2 -Df=
>
> without the space.
>
>
Try like this:

complete -o nospace -F _ldc ldc2


Re: bug: return doesn't accept negative numbers

2011-08-08 Thread Chet Ramey
On 8/8/11 9:42 PM, Mike Frysinger wrote:
> On Monday, August 08, 2011 21:20:29 Chet Ramey wrote:
>> On 8/8/11 8:53 AM, Eric Blake wrote:
>>> However, you are on to something - since bash allows 'exit -1' as an
>>> extension, it should similarly allow 'return -1' as the same sort of
>>> extension.  The fact that bash accepts 'exit -1' and 'exit -- -1', but
>>> only 'return -- -1', is the real point that you are complaining about.
>>
>> That's a reasonable extension to consider for the next release of bash.
> 
> i posted a patch for this quite a while ago.  not that it's hard to code.

Sure.  It's just removing the three lines of code that were added
between bash-3.2 and bash-4.0.  The question was always whether that's
the right thing to do, and whether the result will behave as Posix
requires.

Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: bug: return doesn't accept negative numbers

2011-08-08 Thread Linda Walsh



Chet Ramey wrote:


Sure.  It's just removing the three lines of code that were added
between bash-3.2 and bash-4.0.  The question was always whether that's
the right thing to do, and whether the result will behave as Posix
requires.


That explains why I never ran into this before!  You broke previous
compat!   Was there a note to this effect?  I don't remember it, but
could have easily missed it...


So why not limit that behavior to when "--posix" is in effect?



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Bob Proulx
Jon Seymour wrote:
> Has anyone ever come across an equivalent to Linux's readlink -f that
> is implemented purely in bash?
> 
> (I need readlink's function on AIX where it doesn't seem to be available).

Try this:

  ls -l /path/to/some/link | awk '{print$NF}'

Sure it doesn't handle whitespace in filenames but what classic AIX
Unix symlink would have whitespace in it?  :-)

Bob



Re: manpage clarification/addition.

2011-08-08 Thread Linda Walsh



Roger wrote:

On Mon, Aug 08, 2011 at 02:28:00PM -0700, Linda Walsh wrote:
Lest some think functions can replace aliases, there's a line in the manpage
that I feel needs amending.  Currently it says:

   "For almost every purpose, aliases are superseded by shell functions."

While true, it may too likely be read by some to mean that aliases have no
useful purpose.  So I'd suggest:

   "For most purposes, aliases are superseded by shell functions, though
aliases are still required in some situations".


The latter seems even more trickier to read then the previous.

I would suggest scrapping both attempts at clarifications and state one (maybe
two) solid pros for each and then a con for each.  Or something of a mix within
one sentence for the sake of brevity?

I've seen & worked with both and in my opinion:

---
Aliases are really meant for CLI or bashrc usage and can be quickly written.
Aliases seem to have some limitations as to what statements they may contain
as it's a one-liner.

Functions can easily contain more complicated statements, and can also be
contained within bashrc, and utilized via CLI -- but really are used within
scripts.

As far as system resources, I've heard functions are quicker.  But I don't know
if this is accurate as functions usually contain more execution statements!
---



-
Nah, that's too much like right!





Re: Bug, or someone wanna explain to me why this is a POSIX feature?

2011-08-08 Thread Linda Walsh



Michael Witten wrote:


Reasonable? Or is someone going to tell me why blather is a
desired and wanted feature (under one standard or another! ;-))...


Reasonable, but naive.

Interactive bash can complete variable names, even when they are quoted.

---
That's cool!

I'm talking about a case where I'm at a prompt.  PS1, PS2, PS3, PS4,
whatever, and I hit tab on an otherwise blank line typing nothing in
on the line before it.

Tab is for auto-completion.   Completing a null string is .. well
ANYTHING can be typed at that point.   So that's why the option to suppress
completing a null-string was implemented.

I think you are missing the point of my example.

I'm not talking about where you have typed something in...

If you are in quotes and you hit return, there is nothing on the line.
What would you expect it to autocomplete?  ENV vars can't have
white-space in the middle of them.  So if you have hit return, you cannot
be completing a variable.

The example you give isn't the example I'm talking about -- I was talking
about *leading* tabs (as from 'indentation')... though I would argue that
in the case where you had '$TERM followed by a tab, no autocompletion
should be done, as the tab may be part of the string (which wouldn't be
the case for "$TERM, where white space is ignored).

So can you explain again the naïveté of my not expanding NULL strings?
(The option is "no_empty_cmd_completion"; a line with "TERM on it isn't
empty.)





Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
On Tue, Aug 9, 2011 at 12:49 PM, Bob Proulx  wrote:
> Jon Seymour wrote:
>> Has anyone ever come across an equivalent to Linux's readlink -f that
>> is implemented purely in bash?
>>
>> (I need readlink's function on AIX where it doesn't seem to be available).
>
> Try this:
>
>  ls -l /path/to/some/link | awk '{print$NF}'
>
> Sure it doesn't handle whitespace in filenames but what classic AIX
> Unix symlink would have whitespace in it?  :-)
>

readlink -f will fully resolve links in the path itself (rather than
link at the end of the path), which was the behaviour I needed.

It seems cd -P does most of what I need for directories and so
handling things other than directories is a small tweak on that.

Anyway, thanks for that!

jon.



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Bob Proulx
Jon Seymour wrote:
> readlink -f will fully resolve links in the path itself (rather than
> link at the end of the path), which was the behaviour I needed.

Ah, yes, well, as you could tell that was just a partial solution
anyway.

> It seems cd -P does most of what I need for directories and so
> handling things other than directories is a small tweak on that.

You might try cd'ing there and then using pwd -P to get the canonical
directory name.  I am thinking something like this:

  #!/bin/sh
  p="$1"
  dir=$(dirname "$p")
  base=$(basename "$p")
  physdir=$(cd "$dir"; pwd -P)
  realpath=$(cd "$dir"; ls -l "$base" | awk '{print$NF}')
  echo "$physdir/$realpath" | sed 's|//*|/|g'
  exit 0

Again, another very quick and partial solution.  But perhaps something
good enough just the same.

Bob



Another new 4.0 feature? functions can't return '1', (()) can't eval to 0?

2011-08-08 Thread Linda Walsh

I have a function that returns true/false.

During development (and sometimes thereafter, depending on the script), I
run with -eu to make sure the script stops as soon as there is a
problem (well, to 'try' to make sure; many problems are caught).

But there are two instances that cause an error exit that seem pretty
unuseful and I don't remember them breaking this way before.

1)

((expr)): if expr evals to '0', it returns false=failure, and the script
stops.

I regularly use ((expr)) to do calculations -- now none of them appear
safe -- this never used to be a problem.


2)  a function returning a false value -- I tried putting the ((expr)) in
an if:

if ((expr)); then return 0; else return 1; fi

As soon as it sees the return 1, it exits -- as I returned 'false'
(error).


I know I used to be able to do calculations and test expressions with -e
on, and not have to worry about the script exiting just because I
return false from a bool function!

Are these more changes that went into recent versions or is there
something else going on or did something get seriously broken?
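
A minimal reproduction of the arithmetic case (the classic ((i++)) gotcha:
the expression evaluates to 0, so the status is 1 and -e kills the shell):

  $ bash -euc 'i=0; ((i++)); echo survived'
  $ bash -euc 'i=0; ((++i)); echo survived'
  survived
  $ bash -euc 'i=0; ((i++)) || true; echo survived'
  survived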









Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
On Tue, Aug 9, 2011 at 1:36 PM, Bob Proulx  wrote:
> Jon Seymour wrote:
>> readlink -f will fully resolve links in the path itself (rather than
>> link at the end of the path), which was the behaviour I needed.
>
> Ah, yes, well, as you could tell that was just a partial solution
> anyway.
>
>> It seems cd -P does most of what I need for directories and so
>> handling things other than directories is a small tweak on that.
>
> You might try cd'ing there and then using pwd -P to get the canonical
> directory name.  I am thinking something like this:
>
>  #!/bin/sh
>  p="$1"
>  dir=$(dirname "$p")
>  base=$(basename "$p")
>  physdir=$(cd "$dir"; pwd -P)
>  realpath=$(cd "$dir"; ls -l "$base" | awk '{print$NF}')
>  echo "$physdir/$realpath" | sed 's|//*|/|g'
>  exit 0
>
> Again, another very quick and partial solution.  But perhaps something
> good enough just the same.

> realpath=$(cd "$dir"; ls -l "$base" | awk '{print$NF}')

I always use sed for this purpose, so:

   $(cd "$dir"; ls -l "$base" | sed "s/.*->//")

But, with pathological linking structures, this isn't quite enough -
particularly if the target of the link itself contains paths, some of
which may contain links :-)

jon.



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Bob Proulx
Jon Seymour wrote:
> I always use sed for this purpose, so:
> 
>$(cd "$dir"; ls -l "$base" | sed "s/.*->//")
> 
> But, with pathological linking structures, this isn't quite enough -
> particularly if the target of the link itself contains paths, some of
> which may contain links :-)

Agreed!  Symlinks with arbitrary data, such as holding small shopping
lists in the target value, are so much fun.  I am more concerned that
arbitrary data such as "->" might exist in there more so than
whitespace.  That is why I usually don't use a pattern expression.
But I agree it is another way to go.  But it is easier to say
whitespace is bad in filenames than to say whitespace is bad and oh
yes you can't have "->" in there either.  :-)

Bob



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
On Tue, Aug 9, 2011 at 2:14 PM, Bob Proulx  wrote:
> Jon Seymour wrote:
>> I always use sed for this purpose, so:
>>
>>    $(cd "$dir"; ls -l "$base" | sed "s/.*->//")
>>
>> But, with pathological linking structures, this isn't quite enough -
>> particularly if the target of the link itself contains paths, some of
>> which may contain links :-)
>
> Agreed!  Symlinks with arbitrary data, such as holding small shopping
> lists in the target value, are so much fun.  I am more concerned that
> arbitrary data such as "->" might exist in there more so than
> whitespace.  That is why I usually don't use a pattern expression.
> But I agree it is another way to go.  But it is easier to say
> whitespace is bad in filenames than to say whitespace is bad and oh
> yes you can't have "->" in there either.  :-)
>

Ok, I think this does it...

readlink_f()
{
local path="$1"
test -z "$path" && echo "usage: readlink_f path" 1>&2 && exit 1;

local dir

if test -L "$path"
then
local link=$(ls -l "$path" | sed "s/.*-> //")
if test "$link" = "${link#/}"
then
# relative link
dir="$(dirname "$path")"
readlink_f "${dir%/}/$link"
else
# absolute link
readlink_f "$link"
fi
elif test -d "$path"
then
(cd "$path"; pwd -P) # normalize it
else
dir="$(cd $(dirname "$path"); pwd -P)"
base="$(basename "$path")"
echo "${dir%/}/${base}"
fi
}
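
A quick made-up session exercising it, with a symlinked directory in the
middle of the path:

  $ mkdir -p /tmp/real && touch /tmp/real/file.txt
  $ ln -s /tmp/real /tmp/alias
  $ ln -s /tmp/alias/file.txt /tmp/link
  $ readlink_f /tmp/link
  /tmp/real/file.txt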



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
On Tue, Aug 9, 2011 at 2:36 PM, Jon Seymour  wrote:
> On Tue, Aug 9, 2011 at 2:14 PM, Bob Proulx  wrote:
>> Jon Seymour wrote:
>>> I always use sed for this purpose, so:
>>>
>>>    $(cd "$dir"; ls -l "$base" | sed "s/.*->//")
>>>
>>> But, with pathological linking structures, this isn't quite enough -
>>> particularly if the target of the link itself contains paths, some of
>>> which may contain links :-)
>>
>> Agreed!  Symlinks with arbitrary data, such as holding small shopping
>> lists in the target value, are so much fun.  I am more concerned that
>> arbitrary data such as "->" might exist in there more so than
>> whitespace.  That is why I usually don't use a pattern expression.
>> But I agree it is another way to go.  But it is easier to say
>> whitespace is bad in filenames than to say whitespace is bad and oh
>> yes you can't have "->" in there either.  :-)
>>
>
> Ok, I think this does it...
>
> readlink_f()
> {
> ...
> }

And I make no claims whatsoever about whether this is vulnerable to
infinite recursion!

jon.



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Bob Proulx
Jon Seymour wrote:
> readlink_f()
> {
> local path="$1"
> test -z "$path" && echo "usage: readlink_f path" 1>&2 && exit 1;

An extra ';' there that doesn't hurt but isn't needed.

> local dir
> 
> if test -L "$path"
> then
> local link=$(ls -l "$path" | sed "s/.*-> //")

I would be inclined to also look for a space before the " -> " too.
Because it just is slightly more paranoid.

local link=$(ls -l "$path" | sed "s/.* -> //")

> if test "$link" = "${link#/}"
> then
> # relative link
> dir="$(dirname "$path")"

As an aside you don't need to quote assignments.  They exist inside
the shell and no word splitting will occur.  It is okay to assign
without quotes here and I think it reads slightly better without.

dir=$(dirname "$path")

> readlink_f "${dir%/}/$link"
> else
> # absolute link
> readlink_f "$link"
> fi
> elif test -d "$path"
> then
> (cd "$path"; pwd -P) # normalize it
> else
> dir="$(cd $(dirname "$path"); pwd -P)"
> base="$(basename "$path")"

Same comment here about over-quoting.  If nothing else it means that
syntax highlighting is different.

dir=$(cd $(dirname "$path"); pwd -P)
base=$(basename "$path")

> echo "${dir%/}/${base}"
> fi
> }

And of course those are just suggestions and nothing more.  Feel free
to ignore.

Note that there is a recent movement to change that dash-greater-than
combination into a true unicode arrow graphic emitted by 'ls'.  I think
things paused when there were several different bike-shed suggestions
about which unicode arrow symbol people wanted there.  I haven't seen
any actual movement for a while, and I think that is a good thing.

Bob



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
On Tue, Aug 9, 2011 at 2:51 PM, Bob Proulx  wrote:
> Jon Seymour wrote:
>> readlink_f()
>> {
>>         local path="$1"
>>         test -z "$path" && echo "usage: readlink_f path" 1>&2 && exit 1;
>
> An extra ';' there that doesn't hurt but isn't needed.
>
>>         local dir
>>
>>         if test -L "$path"
>>         then
>>                 local link=$(ls -l "$path" | sed "s/.*-> //")
>
> I would be inclined to also look for a space before the " -> " too.
> Because it just is slightly more paranoid.
>
>                local link=$(ls -l "$path" | sed "s/.* -> //")
>
>>                 if test "$link" = "${link#/}"
>>                 then
>>                         # relative link
>>                         dir="$(dirname "$path")"
>
> As an aside you don't need to quote assignments.  They exist inside
> the shell and no word splitting will occur.  It is okay to assign
> without quotes here and I think it reads slightly better without.
>
>                        dir=$(dirname "$path")
>
>>                         readlink_f "${dir%/}/$link"
>>                 else
>>                         # absolute link
>>                         readlink_f "$link"
>>                 fi
>>         elif test -d "$path"
>>         then
>>                 (cd "$path"; pwd -P) # normalize it
>>         else
>>                 dir="$(cd $(dirname "$path"); pwd -P)"
>>                 base="$(basename "$path")"
>
> Same comment here about over-quoting.  If nothing else it means that
> syntax highlighting is different.
>
>                dir=$(cd $(dirname "$path"); pwd -P)
>                base=$(basename "$path")
>
>>                 echo "${dir%/}/${base}"
>>         fi
>> }
>
> And of course those are just suggestions and nothing more.  Feel free
> to ignore.

Tips appreciated, thanks.

jon.


Re: manpage clarification/addition.

2011-08-08 Thread Roger
> On Mon, Aug 08, 2011 at 07:52:38PM -0700, Linda Walsh wrote:
...
>   Nah, that's too much like right!

... sorry, I try. :-/


-- 
Roger
http://rogerx.freeshell.org/