Typo in HISTORY EXPANSION section

2011-12-01 Thread lhunath
Configuration Information [Automatically generated, do not change]:
Machine: i386
OS: darwin11.2.0
Compiler: /Developer/usr/bin/clang
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i386' 
-DCONF_OSTYPE='darwin11.2.0' -DCONF_MACHTYPE='i386-apple-darwin11.2.0' 
-DCONF_VENDOR='apple' -DLOCALEDIR='/opt/local/share/locale' -DPACKAGE='bash' 
-DSHELL -DHAVE_CONFIG_H -DMACOSX   -I.  -I. -I./include -I./lib  
-I/opt/local/include -pipe -O2 -arch x86_64
uname output: Darwin Myst.local 11.2.0 Darwin Kernel Version 11.2.0: Tue Aug  9 
20:54:00 PDT 2011; root:xnu-1699.24.8~1/RELEASE_X86_64 x86_64
Machine Type: i386-apple-darwin11.2.0

Bash Version: 4.2
Patch Level: 10
Release Status: release

Description:
   !?string[?]
  Refer to the most recent command preceding the current postition 
in the history list containing string.

Fix:
   !?string[?]
  Refer to the most recent command preceding the current position 
in the history list containing string.
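
For reference, the (corrected) designator behaves like this in an
interactive session:

$ echo hello world
hello world
$ !?world?
echo hello world
hello world

Here !?world? expands to the most recent command containing the string
"world"; bash prints the expanded line and then runs it.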




Re: Severe memleak in sequence expressions?

2011-12-01 Thread Marc Schiffbauer
* Chet Ramey wrote on 01.12.11 at 02:54:
> That's probably the result of the power-of-two allocation policy in the
> bash malloc.  When this came up before, I wrote:
> 
> ==
> That's not a memory leak.  Malloc implementations need not release
> memory back to the kernel; the bash malloc (and others) do so only
> under limited circumstances.  Memory obtained from the kernel using
> mmap or sbrk and kept in a cache by a malloc implementation doesn't
> constitute a leak.  A leak is memory for which there is no longer a
> handle, by the application or by malloc itself.
> ==
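
The behavior is easy to observe from an interactive shell; the RSS
figures below are illustrative and will vary by system and malloc:

$ ps -o rss= -p $$
3124
$ echo {0..1000000} >/dev/null
$ ps -o rss= -p $$
81432     # the words are freed, but malloc keeps the pages cached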

Thanks for the explanation, Chet

-Marc
-- 
8AAC 5F46 83B4 DB70 8317  3723 296C 6CCA 35A6 4134



Re: Severe memleak in sequence expressions?

2011-12-01 Thread Marc Schiffbauer
* Bob Proulx wrote on 01.12.11 at 05:34:
> Marc Schiffbauer wrote:
> > Greg Wooledge wrote:
> > > Marc Schiffbauer wrote:
> > > > echo {0..10000000}>/dev/null
> > > > 
> > > > This makes my system start to swap, as bash will use several GiB of
> > > > memory.
> > >
> > > In my opinion, no.  You're asking bash to generate a list of words from 0
> > > to 10000000 all at once.  It faithfully attempts to do so.
> > 
> > Yeah, OK, but it will not free the memory it allocated later on (see
> > other mail)
> 

Hi Bob,

[...]
> In total, generating all of the arguments for {0..10000000} consumes
> at least 78,888,899 bytes, or 75 megabytes of memory(!), if I did all
> of the math right.  Each order of magnitude added grows the amount of
> required memory by an *order of magnitude*.  This should not in any
> way be surprising.  In order to generate 100,000,000,000,000,000
> arguments it might consume 7.8e7 * 1e10 equals 7.8e17 bytes, ignoring
> the smaller second-order effects.  That is a lot of petabytes of
> memory!  And it is terribly inefficient.  You would never really want
> to do it this way.  You wouldn't want to burn that much memory all at
> once.  Instead you would want to make a for-loop to iterate over the
> sequence, such as the "for ((i=1; i<=10000000; i++)); do" construct
> that Greg suggested.  That is a much more efficient way to loop over
> that many items.  And it will execute much faster.  Although a loop
> that large will still take a long time to complete.
> 
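
Bob's byte count checks out: each n-digit number needs n+1 bytes (its
digits plus a terminating NUL), so for {0..10000000} the total is

$ echo $((10*2 + 90*3 + 900*4 + 9000*5 + 90000*6 + 900000*7 + 9000000*8 + 1*9))
78888899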

I was hit by that by accident. Normally I use regular for-loops
instead, so I was a bit surprised when my machine stopped
responding ;-)

Thinking about it again, though, it is more or less obvious.

> Put yourself in a shell author's position.  What would you think of
> this situation?  Trying to generate an unreasonably large number of
> program arguments is, well, unreasonable.  I think this is clearly an
> abuse of the feature.  You can't expect any program to be able to
> generate and use that much memory.

ACK

> And as for whether a program should return unused memory back to the
> operating system, for better or worse very few programs actually do it.
> It isn't simple.  It requires more accounting to keep track of memory
> in order to know what can be returned.  It adds to the complexity of
> the code and complexity tends to create bugs.  I would rather have a
> simple and bug-free program than one that is full of features but also
> full of bugs.  Especially in the shell, where bugs are really bad.
> Especially in a case like this where that large memory footprint was
> only due to the unreasonably large argument list it was asked to
> create.  Using a more efficient language construct avoids the memory
> growth, which is undesirable no matter what, and once that memory
> growth is avoided then there isn't a need to return the memory it
> isn't using to the system either.
> 
> If you want bash to be reduced to a smaller size, try having it
> exec itself:
> 
>   $ exec bash
> 
> That is my 2 cents worth plus a little more for free. :-)
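
The exec trick can be seen working with the same RSS check as before
(illustrative numbers; the PID does not change because exec replaces
the running image in place):

$ echo {0..1000000} >/dev/null
$ ps -o rss= -p $$
81432
$ exec bash
$ ps -o rss= -p $$
3208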


Thank you for the explanation. 

I will not consider this a bug anymore ;-)

-Marc
-- 
8AAC 5F46 83B4 DB70 8317  3723 296C 6CCA 35A6 4134



How to protect > and interpret it later on? (w/o using eval)

2011-12-01 Thread Peng Yu
Hi,

~$ cat ../execute.sh
#!/usr/bin/env bash

echo "$@"
"$@"

$  ../execute.sh  ls >/tmp/tmp.txt
$ cat /tmp/tmp.txt #I don't want "ls" to be in the file
ls
main.sh

'>' will not work unless eval is used in execute.sh.

$ ../execute.sh  ls '>' /tmp/tmp.txt
ls > /tmp/tmp.txt
ls: cannot access >: No such file or directory
/tmp/tmp.txt

How can I make execute.sh protect > and interpret it later on w/o using eval?

-- 
Regards,
Peng



Re: How to protect > and interpret it later on? (w/o using eval)

2011-12-01 Thread Pierre Gaston
On Fri, Dec 2, 2011 at 8:24 AM, Peng Yu  wrote:
> Hi,
>
> ~$ cat ../execute.sh
> #!/usr/bin/env bash
>
> echo "$@"
> "$@"
>
> $  ../execute.sh  ls >/tmp/tmp.txt
> $ cat /tmp/tmp.txt #I don't want "ls" to be in the file
> ls
> main.sh
>
> '>' will not work unless eval is used in execute.sh.
>
> $ ../execute.sh  ls '>' /tmp/tmp.txt
> ls > /tmp/tmp.txt
> ls: cannot access >: No such file or directory
> /tmp/tmp.txt
>
> How can I make execute.sh protect > and interpret it later on w/o using eval?
>

This really belongs on the new help-bash@gnu.org mailing list:
* https://lists.gnu.org/mailman/listinfo/help-bash

The simplest option is to redirect the output to standard error or the terminal:
echo "$@" >&2  # note that using set -x will give you this for free
echo "$@" > /dev/tty

Another possibility is to pass the file name as an argument instead:
file=$1
shift
echo "$@"
exec > "$file"
"$@"