On 3/14/12 2:14 PM, Somchai Smythe wrote:
> Hello,
>
> I am reporting a problem with performance, not correctness.
>
> While preparing some examples for a course lecture where I code the
> same algorithm in many languages to compare languages, I ran some code
> and while it was reasonably quick
Hi,
1. bash version: GNU bash, version 4.2.10(1)-release (x86_64-pc-linux-gnu)
2. test script:
#!/bin/bash
remote_ssh_account="depesz@localhost"
directory_to_tar=pgdata
exec nice tar cf - "$directory_to_tar" | \
tee >(
    md5sum - | \
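The script is cut off above; for reference, here is a self-contained sketch of the same tee-plus-process-substitution pattern it uses (the paths and filenames below are illustrative placeholders, not the original report's):

```shell
#!/bin/bash
# Sketch: stream a tar archive while computing its md5 checksum in a
# single pass, using tee and a process substitution. All paths below
# are placeholders.
src_dir=$(mktemp -d)
echo example > "$src_dir/file"

archive=/tmp/archive.tar
checksum=/tmp/archive.md5

tar cf - -C "$src_dir" file | \
    tee >(md5sum - > "$checksum") | \
    cat > "$archive"

# Note: the >( ) process runs asynchronously relative to the pipeline,
# so a script should allow it to finish before reading $checksum.
sleep 1
cat "$checksum"
```

Because tee duplicates the byte stream, the checksum written to `$checksum` is the md5 of exactly the bytes that land in `$archive`.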
Richard Neill wrote:
> I don't know for certain if this is a bug per se, but I think
> "compgen -W" is much slower than it "should" be in the case of a
> large (1+) number of options.
I don't think this is a bug but just simply a misunderstanding of how
much memory must be allocated in order t
Hello,
I am reporting a problem with performance, not correctness.
While preparing some examples for a course lecture where I code the
same algorithm in many languages to compare languages, I ran some code
and while it was reasonably quick with ksh, it would just apparently
hang at 100% CPU in bash
If I increase the upper number by a factor of 10, to 50, these times
become, 436 s (yes, really, 7 minutes!) and 0.20 s respectively. This
suggests that the algorithm used by compgen is O(n^2), whereas the
algorithm used by grep is O(1).
I meant: grep is O(n).
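A measurement along these lines can be reproduced as follows (the list size here is a small illustrative stand-in, not the count used above):

```shell
#!/bin/bash
# Illustrative benchmark: filter the same word list with compgen -W
# and with grep. The list size is arbitrary; scale it up to see
# compgen's running time grow much faster than grep's.
words=$(seq 1 50000)

time compgen -W "$words" 1794 > /dev/null
time printf '%s\n' $words | grep '^1794' > /dev/null
```

Note that `compgen` is a bash builtin, so the script must be run with bash; both commands produce the same matches (every word beginning with `1794`), which is what makes the timing comparison meaningful.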
Dear All,
I don't know for certain if this is a bug per se, but I think
"compgen -W" is much slower than it "should" be in the case of a large
(1+) number of options.
For example (on a fast i7 2700 CPU), I measure:
compgen -W "`seq 1 5`" 1794    # 3.83 s
compgen -W "`seq 1 5 |