Depends on the filesystem, and on how you interpret the question (how
many files can you put in a directory and still have reasonable
performance, vs. how many files can you put in a directory, absolutely,
if you're prepared to wait 2 years after running 'ls').
The filesystem is one matter, and the shell environment is another.
While ext2/3 and ReiserFS will happily hold more than a
million files in a directory (or anywhere, really),
many shells like bash and csh won't handle them
cleanly.

You'll find that wildcard expansion can't cope with all those
names: try 'ls *.txt' in a directory that holds more than
20 thousand .txt files. It gave me an error in Bash a few months
ago, because the expanded argument list is too long to hand to ls.
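For illustration, the failure and a common workaround look roughly like
this (the exact point where it breaks depends on your kernel's
argument-size limit and on how long the names are):

    $ ls *.txt
    bash: /bin/ls: Argument list too long

    # Let find build the list instead of having the shell expand the glob:
    $ find . -maxdepth 1 -name '*.txt' | wc -l

    # Or hand the names to ls in manageable batches via xargs:
    $ find . -maxdepth 1 -name '*.txt' -print0 | xargs -0 ls -l

Since xargs splits the names into batches that fit under the kernel
limit, this works no matter how many files are in there.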

In short, shells don't like very long lists of
files.

Keep the list short. By the way, wasn't that what the
concept of 'directories' was for? :]

Nayas Lyun


