Re: equivalent of Linux readlink -f in pure bash?
Bob Proulx writes:
> Same comment here about over-quoting. If nothing else it means that
> syntax highlighting is different.
>
>     dir=$(cd $(dirname "$path"); pwd -P)

You are missing a pair of quotes here. :-)

Andreas.

--
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
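For readers skimming the thread, a minimal sketch of the fully quoted form Andreas is pointing at (the example path is just a stand-in; the `&&` is an extra precaution beyond the quoting point itself):

    path=/etc/hosts                               # stands in for a path that may contain spaces
    dir=$(cd "$(dirname -- "$path")" && pwd -P)   # quote the inner command substitution as well,
                                                  # so dirname's output isn't word-split before cd sees it
    printf '%s\n' "$dir"                          # -> /etc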
Re: equivalent of Linux readlink -f in pure bash?
On 09.08.2011 03:44, Jon Seymour wrote:
> Has anyone ever come across an equivalent to Linux's readlink -f that
> is implemented purely in bash?

You can find my version here:

http://sudrala.de/en_d/shell-getlink.html

As it contains some corrections from Greg Wooledge, it should handle even pathological situations. ;)

Bernd

--
http://sudrala.de
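A minimal sketch of the general approach such scripts take (this is an illustration only, not Bernd's getlink; like the scripts in this thread it still shells out to ls and dirname, and it omits the symlink-loop guard a robust version needs):

    resolve_link() {
        local path=$1 dir link
        while [[ -L $path ]]; do
            dir=$(cd "$(dirname -- "$path")" && pwd -P) || return 1
            link=$(ls -ld -- "$path") || return 1
            link=${link##*-> }                     # target is everything after the last " -> "
            [[ $link == /* ]] || link=$dir/$link   # relative targets are relative to the link's directory
            path=$link
        done
        dir=$(cd "$(dirname -- "$path")" && pwd -P) || return 1
        printf '%s\n' "$dir/${path##*/}"
    }

Note that readlink -f also canonicalizes every intermediate directory component; this sketch only resolves the final component and its containing directory.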
Re: bash completion
On Tuesday, August 9, 2011 at 10:05 +0800, Clark J. Wang wrote:
> On Sun, Aug 7, 2011 at 11:35 PM, jonathan MERCIER wrote:
>> I have a bash completion file (see below)
>> It works fine, but I would like to add a feature: do not append
>> a space after a flag when it contains '='
>> Currently, when I do:
>> $ ldc2 -Df
>> ldc2 -Df=⊔
>> I would like:
>> ldc2 -Df
>> ldc2 -Df=
>> without the space
>
> Try like this:
>
> complete -o nospace -F _ldc ldc2

Thanks a lot, works fine. [SOLVED]
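A minimal self-contained sketch of how -o nospace fits together with a completion function (the _ldc body below is hypothetical; the original completion file is not reproduced here):

    _ldc() {
        local cur=${COMP_WORDS[COMP_CWORD]}
        # offer a few illustrative flags; options taking a value end in '='
        COMPREPLY=( $(compgen -W '-Df= -Dd= -of= -od= -c' -- "$cur") )
    }
    # -o nospace stops readline from appending a space after each completion,
    # so completing '-Df' to '-Df=' leaves the cursor right after the '='
    complete -o nospace -F _ldc ldc2

A fuller version could call `compopt +o nospace` from inside _ldc for candidates that don't end in '=', so plain flags still get their trailing space; compopt is available in bash 4.0 and later.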
Re: bug: return doesn't accept negative numbers
On 08/08/2011 08:14 PM, Chet Ramey wrote:
> On 8/8/11 9:42 PM, Mike Frysinger wrote:
>> On Monday, August 08, 2011 21:20:29 Chet Ramey wrote:
>>> On 8/8/11 8:53 AM, Eric Blake wrote:
>>>> However, you are on to something - since bash allows 'exit -1' as an
>>>> extension, it should similarly allow 'return -1' as the same sort of
>>>> extension. The fact that bash accepts 'exit -1' and 'exit -- -1', but
>>>> only 'return -- -1', is the real point that you are complaining about.
>>> That's a reasonable extension to consider for the next release of bash.
>> i posted a patch for this quite a while ago. not that it's hard to code.
> Sure. It's just removing the three lines of code that were added
> between bash-3.2 and bash-4.0. The question was always whether that's
> the right thing to do, and whether the result will behave as Posix
> requires.

Yes, the result will behave as POSIX requires. POSIX requires that 'return' and 'exit' need not support '--' (since they are special builtins that do not specifically require compliance with the generic rules on option parsing), that they need not support options, and that if their optional argument is present, it need not be supported if it is not a non-negative integer no greater than 255. But they are _not_ required to reject any input outside the above constraints - therefore, an extension that supports '--', an extension that parses '-- -1' as 255, and an extension that parses any option that looks like a negative number such as 'exit -1', are ALL valid extensions permitted by POSIX, and need not be disabled by --posix, but can be available always.

ksh does just that: 'return -1' and 'return -- -1' are always accepted and both result in the same behavior as the POSIX-mandated 'return 255'; ksh also has an extension where 'return --help' prints help, although bash uses 'help return' for this purpose.

--
Eric Blake   ebl...@redhat.com   +1-801-349-2682
Libvirt virtualization library http://libvirt.org
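A quick interactive check of the behavior Eric describes (observed behavior, not a spec citation; exact version coverage varies between bash releases):

    f() { return -- -1; }
    f; echo $?                  # 255: the negative value is reduced modulo 256
    bash -c 'exit -1'; echo $?  # 255 as well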
Re: equivalent of Linux readlink -f in pure bash?
On 8/9/2011 5:29 AM, Bernd Eggink wrote:
> On 09.08.2011 03:44, Jon Seymour wrote:
>> Has anyone ever come across an equivalent to Linux's readlink -f that
>> is implemented purely in bash?
>
> You can find my version here:
>
> http://sudrala.de/en_d/shell-getlink.html
>
> As it contains some corrections from Greg Wooledge, it should handle even
> pathological situations. ;)
>
> Bernd

I'd just like to make a couple of suggestions for your script (I hope these are welcome):

*) You reset OPTIND to 1 but you didn't declare it local. This will cause any caller of getlink which uses getopts to have its OPTIND reset to 1. (I mention this because it cost me a couple of hours a while back.) When calling getopts, especially from a function that is not intended to be used at top level for processing command line options, you should declare local copies of OPTIND, OPTARG and OPTERR.

*) To remove the trailing slashes, instead of

    while [[ $file == */ ]]
    do
        file=${file%/}
    done
    file=${file##*/}    # file name

just say

    file="${file%${file##*[!/]}}"

*) Instead of

    [[ ! -d $dir ]] &&
    {
        ret=1
        break
    }

how about this for slightly cleaner?

    [[ -d $dir ]] ||
    {
        ret=1
        break
    }

--
Time flies like the wind. Fruit flies like a banana. Stranger things have
happened but none stranger than this. Does your driver's license say Organ
Donor? Black holes are where God divided by zero. Listen to me! We are all
individuals! What if this weren't a hypothetical question?
steveo at syslang.net
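A short sketch pulling the first two suggestions together (the function name and its -v option are hypothetical, not taken from the real getlink):

    getlink_sketch() {
        local OPTIND=1 OPTARG OPTERR opt verbose=0   # keep getopts state out of the caller
        while getopts "v" opt; do
            case $opt in
                v) verbose=1 ;;
            esac
        done
        shift $((OPTIND - 1))

        local file=$1
        (( verbose )) && printf 'input: %s\n' "$1" >&2
        file="${file%${file##*[!/]}}"   # strip any run of trailing slashes in one expansion
        printf '%s\n' "${file##*/}"     # the file name component
    }

Calling `getlink_sketch -v '/tmp/some/dir///'` prints `dir`, and the caller's own OPTIND is untouched.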
Re: equivalent of Linux readlink -f in pure bash?
On Tue, Aug 9, 2011 at 7:29 PM, Bernd Eggink wrote:
> On 09.08.2011 03:44, Jon Seymour wrote:
>>
>> Has anyone ever come across an equivalent to Linux's readlink -f that
>> is implemented purely in bash?
>
> You can find my version here:
>
> http://sudrala.de/en_d/shell-getlink.html
>
> As it contains some corrections from Greg Wooledge, it should handle even
> pathological situations. ;)
>
> Bernd

Thanks for that. ${link##*-> } is a neater way to extract the link.

It does seem that a link created like so:

    ln -sf "a -> b" c

is going to create problems for both your script and mine [not that I actually care about such a perverse case :-)]

jon.
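A quick demonstration of that perverse case, run in a scratch directory (the names a, b, c are just placeholders):

    cd "$(mktemp -d)"
    ln -sf "a -> b" c
    link=$(ls -ld c)
    echo "${link##*-> }"   # prints "b": the expansion strips up to the *last* " -> "
    readlink c             # prints "a -> b": readlink itself is unaffected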
Re: bug: return doesn't accept negative numbers
On 8/9/11 8:53 AM, Eric Blake wrote:
> On 08/08/2011 08:14 PM, Chet Ramey wrote:
>> On 8/8/11 9:42 PM, Mike Frysinger wrote:
>>> On Monday, August 08, 2011 21:20:29 Chet Ramey wrote:
>>>> On 8/8/11 8:53 AM, Eric Blake wrote:
>>>>> However, you are on to something - since bash allows 'exit -1' as an
>>>>> extension, it should similarly allow 'return -1' as the same sort of
>>>>> extension. The fact that bash accepts 'exit -1' and 'exit -- -1', but
>>>>> only 'return -- -1', is the real point that you are complaining about.
>>>> That's a reasonable extension to consider for the next release of bash.
>>> i posted a patch for this quite a while ago. not that it's hard to code.
>> Sure. It's just removing the three lines of code that were added
>> between bash-3.2 and bash-4.0. The question was always whether that's
>> the right thing to do, and whether the result will behave as Posix
>> requires.
> Yes, the result will behave as POSIX requires.

That's not exactly what I meant. I know what Posix says. The question is whether or not the code does that, and that's what I have to verify. The change went in (three years ago) to solve a specific issue, so I have to make sure we're not going backwards here.

Chet

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
Re: [patch] colored filename completion
ping, anyone interested in reviewing/commenting on this?
Re: Who's the target? Was: Inline `ifdef style` debugging
OK. I did a little tracing and found the suggestions from Mr. Williamson and Mr. Orr to be very similar. The difference: Mr. Williamson's example is simpler, while Mr. Orr's example puts the entire _debug function under another if/then statement. The benefit I found with Mr. Orr's example is that if _debug is a really large function with multiple statements, the surrounding if/then prevents the large function body from being read each time, which would otherwise waste system resources. That was the primary goal as well. Other than this, if the _debug function contains only a simple command, Mr. Williamson's example should be faster because it omits Mr. Orr's additional 'else' branch. (Did I trace and add up CPU cycles correctly?)

--
Roger
http://rogerx.freeshell.org/
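A sketch of the two shapes being compared (the _debug bodies here are placeholders; the original examples from Mr. Williamson and Mr. Orr are not reproduced in this thread excerpt):

    # Shape 1: test the flag on every call
    _debug() { [[ $DEBUG ]] || return 0; echo "DEBUG: $*" >&2; }

    # Shape 2: choose the function body once, up front
    if [[ $DEBUG ]]; then
        _debug() { echo "DEBUG: $*" >&2; }
    else
        _debug() { :; }    # no-op when debugging is off
    fi

With shape 2, a call with debugging off costs only a no-op function call, which is the saving described above; with shape 1 the test is re-evaluated on every call, which only matters if _debug is called very frequently.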
Re: Another new 4.0 feature? functions can't return '1', (()) can't eval to 0?
On 8/8/11 11:43 PM, Linda Walsh wrote:
> I have a function that returns true/false.
>
> during development, (and sometimes thereafter depending on the script, I
> run with -eu, to make sure the script stops as soon as there is a
> problem (well, to 'try' to make sure, many are caught.
>
> But there are two instances that cause an error exit that seem pretty
> unuseful and I don't remember them breaking this way before.

The change to make (( honor the `errexit' option came in with bash-4.1, part of the cleanup after the Posix changes to the specification of the behavior of `set -e'. Most of the other changes in this area came in with bash-4.0.

Posix changed set -e to cause the shell to exit when any command fails, not just when simple commands fail, as in versions of the standard up to and including Posix.1-2008. There are the usual exceptions (commands following if, commands preceding && and ||, and so on). This was changed for better alignment with historical versions of the shell and to reconcile differences between implementations.

> 2) a function returning a false value -- Tried putting the ((expr)) in
> an if:
>
> if ((expr)); then return 0; else return 1;
>
> As soon as it sees the return 1, it exits, -- as I returned 'false'
> (error).

This should have always been the case -- a function is a simple command, so its returning a non-zero exit status should cause the shell to exit. This was true even before Posix changed.

Chet

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
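A small script illustrating the bash-4.1+ behavior Chet describes (variable names are arbitrary):

    set -e
    count=0
    (( count += 0 )) || true   # evaluates to 0 -> status 1; the || guard keeps the script alive
    if (( count == 0 )); then  # test context: exempt from errexit
        echo "count is zero"
    fi
    (( count++ ))              # post-increment evaluates to 0 -> status 1; the script exits here
    echo "never reached"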
Re: Bug, or someone wanna explain to me why this is a POSIX feature?
On 8/8/11 2:44 PM, Linda Walsh wrote:
> I was testing functions in my shell, I would cut/paste,
> thing is, with each paste, I'd get my dir listed (sometimes multiple times)
> on each line entered.
>
> Now I have:
> shopt:
> no_empty_cmd_completion on
>
> i.e. it's not supposed to expand an empty line

Not quite. It's supposed to suppress command completion in those contexts where bash would attempt it if the word to be completed is empty. That includes empty lines, but there are other cases as well. The `empty' refers to the command name, not the command line.

> but type in
> function foo {
>     return 1
> }
>
> When I hit tab it lists out all the files in my dir -- which
> explains why when I cut/paste, any tab-indented line will list
> out the dir, and if it is multiply indented, it will be listed
> once for each indent level!

Yes. It's not attempting command completion, since it doesn't think it's in a context where a command name is expected (the completion line parsing is pretty ad-hoc -- it doesn't use the shell parser). It's attempting readline's default filename completion. I'll have to see whether it can be taught that this is a context where a command name is valid, even though it's the second line of a multi-line construct.

If this is really a big deal, you can temporarily turn off command line editing while cutting and pasting text into the shell. That way you won't be surprised.

Chet

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
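What that workaround can look like in practice (a sketch; it assumes the default emacs editing mode, substitute vi if that's what you use):

    set +o emacs     # disable line editing, so pasted TABs are inserted literally
    # ... paste the multi-line function definition ...
    set -o emacs     # turn editing back on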
Re: Coproc usage ... not understanding
On 8/3/11 4:11 PM, Linda Walsh wrote:
> I've searched for coproc usage examples on the web, but there aren't
> many, and the few I found, while working for their test case, didn't
> fully replicate my setup, so it didn't work.
>
> Ideas?
>
> What am I doing wrong?

You're probably running into grep (and sort, and sed) buffering its output. I haven't been able to figure out a way past that.

Chet

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
Re: Another new 4.0 feature? functions can't return '1', (()) can't eval to 0?
Chet Ramey wrote:
> On 8/8/11 11:43 PM, Linda Walsh wrote:
>> I have a function that returns true/false.
>>
>> during development, (and sometimes thereafter depending on the script, I
>> run with -eu, to make sure the script stops as soon as there is a
>> problem (well, to 'try' to make sure, many are caught.
>>
>> But there are two instances that cause an error exit that seem pretty
>> unuseful and I don't remember them breaking this way before.
>
> The change to make (( honor the `errexit' option came in with bash-4.1,
> part of the cleanup after the Posix changes to the specification of the
> behavior of `set -e'. Most of the other changes in this area came in
> with bash-4.0.

I thought (()) was a bash extension? If so, why shoe-horn it into a 20-year-old spec**? It's often used for doing calculations -- that's why it was added... it can only be used where a command can be used, so having it die whenever it evals to 0 just doesn't make sense. If there's an error in the calculation, like division by zero, sure, but just because I come up with a result of 0 -- it's a far stretch to think that would be an error.

    ((sec=1,min=60*sec,hour=60*min,day=24*hour,year=365*day))
    ((testval=year-365*day))

...and the script dies due to that being considered an error? Did you get bitten by some POSIX virus?

> Posix changed set -e to cause the shell to exit when any command fails,
> not just when simple commands fail, as in versions of the standard up
> to and including Posix.1-2008.

(()) isn't a command, it's a calculation (how's that for a different justification?)

> There are the usual exceptions (commands following if, commands preceding
> && and ||, and so on). This was changed for better alignment with
> historical versions of the shell and to reconcile differences between
> implementations.
>
>> 2) a function returning a false value -- Tried putting the ((expr)) in
>> an if:
>>
>> if ((expr)); then return 0; else return 1;
>>
>> As soon as it sees the return 1, it exits, -- as I returned 'false'
>> (error).

It was followed by an &&, has that changed too? i.e. must it be followed by an '||' (so the entire expression comes out as 'true')?

> This should have always been the case -- a function is a simple command,
> so its returning a non-zero exit status should cause the shell to exit.
> This was true even before Posix changed.

No wonder I'm going crazy... a bunch of changes went in that really hurt functionality.

** -- wait, I thought posix was dead ages ago... there are updates? 2008? is that the latest? #*(@#$()@#)!@)*&% The whole reason for the standard was so programs wouldn't keep breaking... and now they change the standards... wonderful... seems like that eliminates the justification for having a standard -- other than to force everyone to rewrite and update to a new standard... hmmm
Re: Coproc usage ... not understanding
Chet Ramey wrote:
>> What am I doing wrong?
>
> You're probably running into grep (and sort, and sed) buffering its
> output. I haven't been able to figure out a way past that.
>
> Chet

I did think of that... but I thought when the foreground process closes 'input', then all of the chained utils should see 'eof', and should then flush their output... at least that was my belief in how they "should" be working... (sigh)...
Re: Coproc usage ... not understanding
Linda Walsh wrote:
> I did think of that... but I thought when the foreground process closes
> 'input', then all of the chained utils should see 'eof', and should then
> flush their output... at least that was my belief in how they "should" be
> working... (sigh)...

Um... that got me to thinking...

When you spawn off the coproc, and the parent has the I/O handles to the child in COPROCNAME[0/1], does the child have a copy of that as well? I.e. does it retain an 'unclosed' copy of the writable end of its input pipe? (And the same for output, though that's not as important... i.e. it could also read from some handle it owns to read its own output... like that would be useful...) But the first case is a cause for concern -- if the child doesn't close the writable end of the pipe to its input, then even if the parent closes its handles, the child won't ever see "EOF", as it has its own writable handle to its input stream. ??
Re: Bug, or someone wanna explain to me why this is a POSIX feature?
Chet Ramey wrote:
> It's not attempting command completion, since it doesn't think it's in
> a context where a command name is expected (the completion line parsing
> is pretty ad-hoc -- it doesn't use the shell parser). It's attempting
> readline's default filename completion. I'll have to see whether it can
> be taught that this is a context where a command name is valid, even
> though it's the second line of a multi-line construct.

Well, as it *is* a place where one can type in commands, you should slap it up the side of the code and put it in its place! ;-)

> If this is really a big deal, you can temporarily turn off command line
> editing while cutting and pasting text into the shell. That way you
> won't be surprised.

set +o vi? I don't remember this being a problem before. I've done it before... as I'm writing code, sometimes I want to take a snippet and test it, so I cut/paste... It hasn't been until recently that I noticed this problem.
Re: Coproc usage ... not understanding
Chet Ramey wrote:
> Linda Walsh wrote:
>> Ideas?
>
> You're probably running into grep (and sort, and sed) buffering its
> output. I haven't been able to figure out a way past that.

This may be a good point to mention this reference:

http://www.pixelbeat.org/programming/stdio_buffering/

And the 'stdbuf' command that came out of it:

http://www.gnu.org/software/coreutils/manual/html_node/stdbuf-invocation.html#stdbuf-invocation

Bob
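A small sketch of combining stdbuf with a coproc (illustrative only; it assumes GNU coreutils' stdbuf is installed, and stdbuf only helps programs that use ordinary stdio buffering):

    coproc FILTER { stdbuf -oL grep pattern; }     # line-buffer grep's output
    echo "a line with pattern in it" >&"${FILTER[1]}"
    IFS= read -r line <&"${FILTER[0]}"             # arrives immediately instead of sitting in a 4K buffer
    printf 'got: %s\n' "$line"

GNU grep also has its own --line-buffered option, which achieves the same effect without stdbuf.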
Re: Coproc usage ... not understanding
Bob Proulx wrote:
> Chet Ramey wrote:
>> Linda Walsh wrote:
>>> Ideas?
>>
>> You're probably running into grep (and sort, and sed) buffering its
>> output. I haven't been able to figure out a way past that.
>
> This may be a good point to mention this reference:
>
> http://www.pixelbeat.org/programming/stdio_buffering/
>
> And the 'stdbuf' command that came out of it:
>
> http://www.gnu.org/software/coreutils/manual/html_node/stdbuf-invocation.html#stdbuf-invocation
>
> Bob

Interesting -- it modifies the buffering on the command it invokes? Does it only work with GNU programs? I.e. how would they know not to buffer their I/O? I'll have to try it and see if it affects the problem, but given the size of the I/O buffers (4-8K?) I figured that the coproc would have held all of the output (it was 3 lines out of a bunch) until I closed its stdin. I don't think stdbuf will help if the child has its own input pipe open... if it does, I can try closing it, but I just expected that to be done automatically as with pipes... but maybe that was a step too far.

Thanks for the pointer!
Re: Coproc usage ... not understanding
On 8/9/11 8:19 PM, Linda Walsh wrote:
> Linda Walsh wrote:
>> I did think of that... but I thought when the foreground
>> process closes 'input', then all of the chained utils should see 'eof', and
>> should then flush their output... at least that was my belief in how they
>> "should" be working... (sigh)...
> ---
> Um... that got me to thinking...
>
> When you spawn off the coproc, and the parent has the I/O handles
> to the child in COPROCNAME[0/1], does the child have a copy of that as well?

The parent and child retain the appropriate ends of the pipe and close the others. The parent's copies are set close-on-exec. Maybe that's the problem.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
Re: Coproc usage ... not understanding
Chet Ramey wrote:
> On 8/9/11 8:19 PM, Linda Walsh wrote:
>> Linda Walsh wrote:
>>> I did think of that... but I thought when the foreground
>>> process closes 'input', then all of the chained utils should see 'eof',
>>> and should then flush their output... at least that was my belief in how
>>> they "should" be working... (sigh)...
>> ---
>> Um... that got me to thinking...
>>
>> When you spawn off the coproc, and the parent has the I/O handles
>> to the child in COPROCNAME[0/1], does the child have a copy of that as well?
>
> The parent and child retain the appropriate ends of the pipe and close
> the others. The parent's copies are set close-on-exec. Maybe that's the
> problem.

In the parent, I used

    exec "${parselast[1]}>&-"

Shouldn't that have closed that last write handle to the child? (Just before that, I did a "0<&${parselast[0]} cat &", which I now realize would have duped the handle I was trying to close, but if you set close-on-exec, it shouldn't have been a problem... puzzlement!)
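One syntax detail worth flagging here, separate from whatever else was going on in the original script: with the whole word quoted, exec sees an ordinary argument rather than a redirection, so no descriptor is closed. A sketch with a stand-in coproc:

    coproc parselast { cat; }            # stand-in for the real pipeline
    # exec "${parselast[1]}>&-"          # does NOT close anything: the quoted word is passed
                                         # to exec as the name of a command to run
    eval "exec ${parselast[1]}>&-"       # works: expand the fd number first, then parse the redirection
    # or, in bash 4.1 and later:
    #   fd=${parselast[1]}; exec {fd}>&-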
Re: Another new 4.0 feature? functions can't return '1', (()) can't eval to 0?
On 8/9/11 8:10 PM, Linda Walsh wrote:
>> The change to make (( honor the `errexit' option came in with bash-4.1,
>> part of the cleanup after the Posix changes to the specification of the
>> behavior of `set -e'. Most of the other changes in this area came in
>> with bash-4.0.
>
> I thought (()) was a bash extension?

It is. That's not a reason to make `set -e' apply non-uniformly.

> If so, why shoe-horn it into a 20y/o spec**? It's often used for
> doing calculations -- that's why it was added... it can only be used
> where a command can be used, so having it die whenever it evals to 0 -- just
> doesn't make sense. If there's an error in the calculation, like division
> by zero, sure, but just because I come up with a result of 0, it's a far
> stretch to think that would be an error.

There are ways around this, if it's an issue.

> Did you get bitten by some POSIX virus?

That didn't end up being nearly as funny as you probably thought it was going to be.

>> Posix changed set -e to cause the shell to exit when any command fails,
>> not just when simple commands fail, as in versions of the standard up
>> to and including Posix.1-2008.
>
> (()) isn't a command, it's a calculation (how's that for different
> justification?)

Creative, but it's a command.

>>> 2) a function returning a false value -- Tried putting the ((expr)) in
>>> an if:
>>>
>>> if ((expr)); then return 0; else return 1;
>>>
>>> As soon as it sees the return 1, it exits, -- as I returned 'false'
>>> (error).
>
> It was followed by an &&, has that changed too?

It's impossible to say what the problem, if any, might have been. For instance, the following script displays `after':

    set -e

    func()
    {
        if (( 0 )); then return 0; else return 1; fi
    }

    func && echo a
    echo after

> ** -- wait, I thought posix was dead ages ago... there are updates? 2008?
> is that the latest? #*(@#$()@#)!@)*&% the whole reason for the standard
> was so programs wouldn't keep breaking... and now they change the
> standards...

Maybe you should read it; you'll see that work is continuing.

http://pubs.opengroup.org/onlinepubs/9699919799/nfindex.html

We're about to ballot on Technical Corrigendum 1 (TC1) to the 2008 version.

Chet

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
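And the complementary case, for contrast (same function, no guard): under set -e the unguarded call terminates the script, which matches the behavior being reported.

    set -e

    func()
    {
        if (( 0 )); then return 0; else return 1; fi
    }

    func            # returns 1 with no && or || guard: the shell exits here
    echo after      # never printed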
Re: Another new 4.0 feature? functions can't return '1', (()) can't eval to 0?
Chet Ramey wrote:
> On 8/9/11 8:10 PM, Linda Walsh wrote:
>>> The change to make (( honor the `errexit' option came in with bash-4.1,
>>> part of the cleanup after the Posix changes to the specification of the
>>> behavior of `set -e'. Most of the other changes in this area came in
>>> with bash-4.0.
>>
>> I thought (()) was a bash extension?
>
> It is. That's not a reason to make `set -e' apply non-uniformly.

But its purpose was to do calculations, no? If you were doing calculations with an 'error checker' turned on, would you feel it was a good feature if every time your calculator displayed '0', it quit?

There is a reason to make it apply non-uniformly -- (()) really isn't a 'command', in that it's not a standard 'command word' (if/then/while), nor is it something that would be handled by an external program. It is something specifically for calculating numeric values, and one wouldn't want it to crash a program that is detecting *error* conditions, since '0' is just as valid an integer as any other. If they want it to 'die', have them put 'test' in front of it, then 'test' can throw the error. But I'd have to think killing your script whenever you calc '0' can't be a desirable feature.

> There are ways around this, if it's an issue.

Yes... rewriting bunches of scripts that use math... or things like (concocted case...), but you couldn't use the value of that statement in an assignment, as it would get the 1:

    (((tmp=calc),1))...
    originalvals[${originalvals[*]}:-0]=$tmp
    if ((tmp<0)); then xxx

>> Did you get bitten by some POSIX virus?
>
> That didn't end up being nearly as funny as you probably thought it was
> going to be.

Oh... *gulp*... um... I hope it's nothing serious (at least not in a bad way)... Working Group Member?... Oh dear... I call conflict of interest ;-) (well, maybe not, but... hard to see bash well represented... with that many other members, you could easily be 'borg'ized... (and next you're going to tell me that's futile?))...

>>> Posix changed set -e to cause the shell to exit when any command fails,
>>> not just when simple commands fail, as in versions of the standard up
>>> to and including Posix.1-2008.
>>
>> (()) isn't a command, it's a calculation (how's that for different
>> justification?)
>
> Creative, but it's a command.

I don't regard a 'calculation' that returns 0 as an 'error', though. That's **subjective**, and maybe I speak for no one other than myself, but it is not a universal interpretation. It is a type of 'statement', but I don't see how it can rightfully be called a command, as it doesn't 'command' anything. It's not starting programs, it's governing program flow -- it merely does calculations on variables. Now you may be referring to how it is handled semantically by the parser or such, but that's an internal design matter, not how it looks externally.

>>> 2) a function returning a false value -- Tried putting the ((expr)) in
>>> an if:
>>>
>>> if ((expr)); then return 0; else return 1;
>>>
>>> As soon as it sees the return 1, it exits, -- as I returned 'false'
>>> (error).
>>
>> It was followed by an &&, has that changed too?
>
> It's impossible to say what the problem, if any, might have been. For
> instance, the following script displays `after':
>
>     set -e
>
>     func()
>     {
>         if (( 0 )); then return 0; else return 1; fi
>     }
>
>     func && echo a
>     echo after

That can be justified, fortunately, as && is your success case; then 'after' is only the fail case where you can do error handling. I had this:

    # simple function to turn off tracing of some routines... as during devel, I
    # was wanting to see things by default and turn off things by option...
    function DebugPush_helper { debug $1 || set +x ; }

It would cause my script to exit every time something wasn't debug'ed... (great, I turn off debug, and everything fails!)... Of course one can work around it:

    function DebugPush_helper { if ! debug $1 ; then set +x ; fi ; }

But again, 'surprise', it wasn't a simple command at the top level of the script, but in a function!... It took me forever to figure out that returning non-0 from a function was 'bad' and should kill the script by default... (even when not called 'bare' at top level).

> Maybe you should read it; you'll see that work is continuing.
>
> http://pubs.opengroup.org/onlinepubs/9699919799/nfindex.html

So I see... great... more incompats coming up, I expect... (only the pessimist side of me... part thinks it's cool... I argue w/myself all the time; side effect of growing up w/lawyer as father, and mother w/masters in speech & communications... iii)...

-l