On Tue, Apr 08, 2025 at 08:05:21PM +0200, Mark Liam Brown via Cygwin wrote:
> Greetings!
> 
> Are there tuning variables to improve ls, ls -l, find ., find . -ls
> performance for very large dirs?
> 
> If we have a SMB dir with 60000+ entries a simple ls -l can take MANY
> minutes (22+mins), while cmd.exe dir just floods the terminal with
> results immediately.

`man ls`

This might help a little bit.  `ls -l` has to stat() every entry, and
over SMB that can mean a network round trip per file; plain `ls` also
sorts the whole listing before printing anything.  So:

`ls -1f`  # -1 for single-column output, -f to disable sorting (implies -a)
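A quick way to see the behavior, using a throwaway local directory as a
stand-in for the SMB share (the 1000-file count is just an arbitrary
test size, not anything from the original report):

```shell
# Scratch directory standing in for the big folder (hypothetical size)
tmpdir=$(mktemp -d)
for i in $(seq 1 1000); do : > "$tmpdir/f$i"; done

# -f implies -a, so . and .. appear in the output; filter them out.
# No per-file stat(), no sorting -- just readdir() and print.
count=$(ls -1f "$tmpdir" | grep -vc '^\.\.\?$')
echo "$count entries"

rm -rf "$tmpdir"
```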


Of course, please also ask why there are 60000 files in one directory
rather than sharded or stored some other way.  E.g. for log files,
create an archive/ subfolder and rotate older, less-frequently accessed
files into it, so that the main folder has (many) fewer entries.
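A sketch of that rotation, with a temporary directory and a hypothetical
30-day cutoff standing in for your real paths and retention policy:

```shell
dir=$(mktemp -d)                        # stand-in for the big folder
: > "$dir/current.log"
: > "$dir/old.log"
touch -d '40 days ago' "$dir/old.log"   # simulate an aged-out file

mkdir -p "$dir/archive"
# Move anything not modified in the last 30 days into archive/
find "$dir" -maxdepth 1 -type f -mtime +30 \
     -exec mv -t "$dir/archive/" {} +
```

Run from cron (or a scheduled task) so the main folder stays small.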

For a folder that size, `ls` and `ls -l` are less appropriate tools.

A web server could serve a single index.html with the directory listing,
and the directory listing could be updated when the folder changes, or
on a periodic basis.
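One hedged way to generate such an index.html (filenames and paths here
are made up; the point is to pay the listing cost once, not once per
reader):

```shell
dir=$(mktemp -d)                 # stand-in for the shared folder
: > "$dir/a.txt"
: > "$dir/b.txt"

{
  echo '<html><body><ul>'
  # ls -1f is the cheap unsorted listing; sort once while generating
  ls -1f "$dir" | grep -v '^\.\.\?$' | sort |
    sed 's|.*|<li><a href="&">&</a></li>|'
  echo '</ul></body></html>'
} > "$dir/index.html"
```

Note the redirect creates index.html before ls runs, so the index lists
itself; write to a temp file outside the folder and mv it in if that
matters.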

Various solutions depend on context and usage of the folder and its
contents.

Whether or not cmd.exe handles such a large directory better is
immaterial to the assessment: 60000+ entries in a single folder points
to a lack of better folder organization, file management, and access
methods.
Cheers, Glenn

-- 
Problem reports:      https://cygwin.com/problems.html
FAQ:                  https://cygwin.com/faq/
Documentation:        https://cygwin.com/docs.html
Unsubscribe info:     https://cygwin.com/ml/#unsubscribe-simple