Re: cd multiple levels up?
On Sun, Jun 13, 2010 at 06:46:56AM -0500, Peng Yu wrote:
> Hello,
>
> I frequently need to cd multiple levels up. For example,
>
>     cd ../..
>     cd ../../../../
>
> It would be convenient to type something like "cd 2" or "cd 4". Is
> there a command for this?

You could write a function:

    cdup() {
        local i
        for ((i=1; i<=$1; i++)); do
            cd ..
        done
    }
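A possible variation (just a sketch; the name cdup and the argument handling
are arbitrary) is to build the ../../... string first and call cd only once,
so that cd - afterwards still jumps back to the directory you started from:

    cdup() {
        local n=${1:-1} path=
        while (( n-- > 0 )); do
            path+=../
        done
        cd "$path"
    }

    # Usage: cdup 3   is equivalent to   cd ../../../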
Re: simple script to alert if internet not working?
On Sun, Jun 13, 2010 at 06:53:48PM -0700, fuzzylogic25 wrote:
> Problem is ping keeps getting data requests nonstop from the looks of it.
> So how would I go about doing this?

man ping

If you're on most GNU/Linux systems (e.g. Debian), look for the -c switch.
If you're on HP-UX 10.20, look for the -n switch. If you're on anything
else, look for the word ``count'' or any English synonym and cross your
fingers.
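For what it's worth, a minimal sketch of the kind of script the original
poster seems to want, assuming an iputils/GNU ping (which has -c for the
packet count and -W for a per-reply timeout) and using 8.8.8.8 purely as an
example host:

    #!/bin/bash
    # Send a few pings; if none come back, assume the connection is down
    # and print an alert. Adjust the host and the alert action to taste.
    host=8.8.8.8
    if ! ping -c 3 -W 2 "$host" > /dev/null 2>&1; then
        echo "Internet appears to be down ($(date))" >&2
    fi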
Re: new features to GNU Bash
Hello,

I suppose I have found a new feature for Bash. If a user needs to rename a
file and the file is in directory /home/user/a/b/c/d/e/file, the user has to
write the command

    mv /home/user/a/b/c/d/e/file /home/user/a/b/c/d/e/fileB

This command contains the directory written two times. If Bash remembered
the directory, it would be possible to retrieve it from memory the second
time it is needed, so the user would not need to retype the same long
directory name. There should be a key combination to retrieve the directory
from memory onto the command line.

I also have three other feature propositions.

a) In Bash scripts (and more generally in any programming language), there
could be a feature which, when the user presses F1 on a source code line,
explains in plain English what that line does. That would be useful for
those who are learning a new programming language and need to know what a
piece of source code does; hard-to-find typos would also be revealed this
way.

b) There could be a file-system-level feature which, like lsattr, tells more
about files. It would store a file description, which could be very long,
more than 256 characters. The description could be read e.g. by the file
command.

c) File attributes could be extended to help classify huge numbers of files
in large disk partitions. Finding files would be easier if the find command
could be told to search for files with e.g. attribute z set. Z could mean
anything (e.g. script files which user A created between 1.1.2000 and
2.2.2002) and could be defined by the user for any purpose. The
single-letter attribute name (like z) and the attribute description would be
stored separately, so disk space would not be wasted. There could be a tool
for listing which new file attributes exist and their descriptions.

I hope you forward these feature propositions to others.

Regards,
Mika M.

---
Richard Stallman wrote [at Mon, 14 Jun 2010 03:13:48 -0400]:

> a) if user is renaming a file in directory A and working directory is
> different than A, (mv /path/file /path/renamed_file) and directory path
> contains many directories,

We don't call that a "path" -- we call it a directory name. In GNU we use
the term "path" only for a list of directories to be searched.

It was hard for me to understand the feature you have in mind. I suggest
you describe it in a more concrete way, and send the suggestions directly
to bug-b...@gnu.org.
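Much of the renaming example can already be handled with existing shell
features. Besides brace expansion (see Andreas Schwab's reply below), a
small helper function is another option; a sketch, where the name rn is
arbitrary:

    # Rename a file given its full name and a new basename, without
    # retyping the directory part.
    rn() {
        mv -- "$1" "$(dirname -- "$1")/$2"
    }

    # Usage:
    #   rn /home/user/a/b/c/d/e/file fileB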
How to convert symbolic links pointing outside to a file?
Hi,

I only want to convert symbolic links that point outside the directory to be
archived into regular files. But I still want to keep symbolic links that
point inside as symbolic links. Is there an option or a workaround to do so
in tar?

--
Regards,
Peng
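There doesn't appear to be a single tar option for this, but one could find
the offending links first and deal with them before archiving. A rough
sketch, assuming GNU readlink -f and taking the directory to archive as the
first argument:

    #!/bin/bash
    # List symlinks under the given directory whose targets resolve
    # outside it; these are the ones that would need to be replaced
    # by copies of their targets before running tar.
    dir=$(readlink -f "$1")
    find "$dir" -type l | while IFS= read -r link; do
        target=$(readlink -f "$link")
        case $target in
            "$dir"/*) ;;                            # points inside: keep it
            *) echo "outside: $link -> $target" ;;
        esac
    done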
Re: new features to GNU Bash
mika.p.maki...@webinfo.fi writes:

> user needs to write command mv /home/user/a/b/c/d/e/file
> /home/user/a/b/c/d/e/fileB.

$ mv /home/user/a/b/c/d/e/{file,fileB}

Andreas.

--
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
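The braces are expanded by the shell before mv runs, so the long directory
name is typed only once. If in doubt about what a brace expression expands
to, echo can be used as a harmless stand-in for mv:

    $ echo /home/user/a/b/c/d/e/{file,fileB}
    /home/user/a/b/c/d/e/file /home/user/a/b/c/d/e/fileB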
Re: How to convert symbolic links pointing outside to a file?
On 06/14/2010 08:08 AM, Peng Yu wrote:
> Hi,
>
> I only want to convert symbolic links that point outside the directory
> to be archived to a file. But I still want to keep symbolic links
> point inside as symbolic links. Is there an option or a walkaround to
> do so in tar?

Perhaps, but asking on the bash list isn't the way to find out about tar.

--
Eric Blake   ebl...@redhat.com   +1-801-349-2682
Libvirt virtualization library http://libvirt.org
Re: String replacements leak small amounts of memory each time
It would seem Debian Squeeze uses that option as default. Without it, I get
a whole ton of warnings, and errors about "free", "malloc" and "realloc"
being defined multiple times.

Have you tried to reproduce the problem outside of Valgrind? Just running
the examples and looking at the memory usage? I've tried on two different
machines now, with bash 4.1.5 and 3.2.25, and it happens on both, though it
does seem to happen a lot faster on 4.1.5. Other people in #bash have
reproduced it too.

By the way, it seems one of my slashes was removed at some point. The
testcases need to be run in the / directory, or the command has to be:

    while read line; do test=${line#\ }; done < <(ls -lR /)

...in order to generate enough input to reproduce the issue.

On 14/06/10 04:52, Chet Ramey wrote:
> On 6/13/10 5:33 PM, Øyvind Hvidsten wrote:
>> It could be logical leaks, or whatever is the correct English term for
>> them. Memory that's used, and kept track of, but not used again, and
>> not freed until the program shuts down. The memory usage is constantly
>> increasing. I have a process using 3 gigs now, and it just runs one of
>> those testcases (on a lot more data).
>
> You could try using the system malloc instead of the one that comes with
> bash. Configure --without-bash-malloc and see if that changes the
> allocation behavior.
>
> Chet
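For those trying to reproduce this, the growth can be observed without
Valgrind at all; a quick sketch, assuming a Linux-style ps that accepts
-o rss= (resident set size in kilobytes):

    ps -o rss= -p $$     # note the value before the test
    while read line; do test=${line#\ }; done < <(ls -lR /)
    ps -o rss= -p $$     # compare after the test has finished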
Re: String replacements leak small amounts of memory each time
On 6/14/10 5:25 PM, Øyvind Hvidsten wrote:
> It would seem Debian Squeeze uses that option as default.
> Without it, I get a whole ton of warnings, and errors about "free",
> "malloc" and "realloc" being defines multiple times.
>
> Have you tried to reproduce the problem outside of Valgrind? Just
> running the examples and looking at the memory usage? I've tried on two
> different machines now, with bash 4.1.5 and 3.2.25, and it happens on
> both, though it does seem to happen a lot faster on 4.1.5. Other people
> in #bash have reproduced it too.
>
> By the way it seems one of my slashes were removed at some point. The
> testcases need to be run in the / directory, or the command has to be:
> while read line; do test=${line#\ }; done < <(ls -lR /)
> ...in order to generate enough input to reproduce the issue.

There's no evidence that bash frees a lot of memory when the program ends
(valgrind would report that, too), nor that it makes a lot of allocations
that are not quickly followed by frees. You may have hit on an allocation
pattern that the system malloc handles badly, or it might be the case that
there is a memory leak that I can't detect with the tools I have.

Chet
--
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
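For anyone who wants to test both hypotheses, a sketch of the workflow
discussed in this thread: rebuild bash against the system malloc and then
run the test case under valgrind (the configure switch is the one Chet
mentioned; the valgrind flags are standard leak-checking options):

    # Build a bash that uses the system malloc instead of the bundled one
    ./configure --without-bash-malloc && make

    # Run the test case under valgrind and see what, if anything, is
    # reported as definitely lost versus still reachable at exit
    valgrind --leak-check=full --show-reachable=yes \
        ./bash -c 'while read line; do test=${line#\ }; done < <(ls -lR /)'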