How about: Don't sort the list, or consider "lazy sorting" only the portion of the list that's going to be displayed. I'd suggest using an incremental Quicksort, which can yield a sorted sublist in almost linear time. (I started working on this for my zcomp module until I realised the list was already sorted.)
Maybe change COMPREPLY into an associative array, where the *keys* are the choices to be displayed. Or more generally, use some kind of hash table rather than sorting to remove duplicates. That would replace about N×log(N) calls to strcoll() with a trivial number of calls to strcmp().

Consider being able to attach a generator to an array, to create entries on demand. (If it's a subshell or coprocess, put it in its own pgrp, so you can nuke it with SIGPIPE when you don't need any more values.)

Use progressive rendering so that (a) you see something immediately, and (b) it doesn't hold up activity.

-Martin

On Thu, 11 Jan 2024 at 04:53, Dale R. Worley <wor...@alum.mit.edu> wrote:
> Eric Wong <normalper...@yhbt.net> writes:
> > Hi, I noticed bash struggles with gigantic completion lists
> > (100k items of ~70 chars each)
>
> A priori, it isn't surprising.  But the question becomes "What
> algorithmic improvement to bash would make this work faster?" and then
> "Who will write this code?"
>
> Dale