Dear All,
I don't know for certain if this is a bug per se, but I think
"compgen -W" is much slower than it "should" be when the number of
options is large (10000+).
For example (on a fast i7 2700 CPU), I measure:
  compgen -W "`seq 1 50000`" 1794               # 3.83 s
  compgen -W "`seq 1 50000 | grep 1794`"        # 0.019 s
In these examples, I'm using `seq` as a trivial way to generate some
data, and picking 1794 as a totally arbitrary match.
In the first example, compgen does the filtering, whereas in the
second I obtain the same result much faster by letting grep do it.
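For reference, here is a minimal sketch that reproduces both timings in
one loop (plain bash, nothing else assumed; the word argument is also
passed in the grep variant so that both forms print the same matches):

  for n in 50000 500000; do
      words=$(seq 1 "$n")
      echo "n=$n, compgen does the filtering:"
      time compgen -W "$words" 1794 > /dev/null
      echo "n=$n, grep pre-filters:"
      time compgen -W "$(grep 1794 <<< "$words")" 1794 > /dev/null
  done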
If I increase the upper number by a factor of 10, to 500000, these times
become 436 s (yes, really, over 7 minutes!) and 0.20 s respectively. So a
10x larger input costs compgen roughly 100x more time, which suggests that
the algorithm used by compgen is O(n^2), whereas the grep version grows by
only about 10x, i.e. linearly, O(n).
For a real-world example, see:
https://bugs.mageia.org/show_bug.cgi?id=373#c8
in which we are using completion for package management.
In that case, the number of candidate options is 43031.
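In case it is useful, one workaround at the completion-script level is to
pre-filter the word list before handing it to compgen, so that compgen only
ever sees a few candidates. The following is only a sketch: _pkg_complete,
all_package_names and "somepkgtool" are made-up names, not the actual
Mageia completion code.

  _pkg_complete() {
      local cur=${COMP_WORDS[COMP_CWORD]}
      # all_package_names is assumed to print one package name per line;
      # grep -F narrows the 43031 names down before compgen sees them.
      # (With an empty word the full list still gets through, so this only
      # helps once the user has typed something.)
      COMPREPLY=( $(compgen -W "$(all_package_names | grep -F -- "$cur")" -- "$cur") )
  }
  complete -F _pkg_complete somepkgtool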
I hope this is helpful.
Richard