On Thu, Jun 20, 2002, Mark Janssen ([EMAIL PROTECTED]) wrote:
> On Thu, 2002-06-20 at 10:32, Q. Gong wrote:
> > A general question: what's the maximum number of files located in one
> > directory?
> 
> As many as you want... but accessing them will get slower when more
> files are placed in a directory.

The hard limits are going to be inodes (a fixed number per filesystem) and
probably some struct limits within the filesystem itself, the latter of
which are likely to be quite large.
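
(A quick way to see the inode limit on your own filesystems -- the mount
point below is just an example -- is `df -i`, which reports total, used,
and free inodes per filesystem:

    $ df -i /home

...so you can tell how far you are from running out.)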

I've got a largeish directory here:

    $ time \ls -h | wc -l
    124657

    real        0m8.239s
    user        0m3.250s
    sys         0m0.250s

...running reiserfs.  Actually, reading the directory isn't the hard
part, it's adding entries.  I'd strongly recommend you keep ext2fs
directories to a few thousand entries.  Take a look at how squid creates
its directories for storing stuff.  You can experiment with this yourself
using a trivial shell script; a sketch follows.
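
Something along these lines (the path, batch size, and file-name scheme
are all arbitrary) creates files in batches and times each batch, so you
can watch the per-batch time climb as the directory fills:

    #!/bin/bash
    # Create files in batches and time each batch; on a filesystem with
    # linear directory scans (ext2) the per-batch time grows noticeably.
    DIR=${1:-/tmp/bigdir}   # target directory (default is just an example)
    BATCH=1000              # files created per batch
    BATCHES=50              # number of batches to create
    mkdir -p "$DIR"
    n=0
    b=0
    while [ "$b" -lt "$BATCHES" ]; do
        end=$((n + BATCH - 1))
        # time the creation of this batch of files
        time seq "$n" "$end" | sed "s|^|$DIR/file.|" | xargs touch
        n=$((n + BATCH))
        b=$((b + 1))
    done

Run it on an ext2 partition, then on reiserfs, and compare how the batch
times drift as the count goes up.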

> I don't know the exact numbers offhand, but files (directory entries)
> are placed in i-nodes.  There is a limited amount of space in these
> i-nodes; when it fills up, there will be pointers to new i-nodes that
> contain more files for this directory.
> 
> The more files... the slower accessing them gets...

For ext2/3, yes.  The directory is a linear list, which must be scanned
on every lookup.  For reiserfs, no.  The directory is a hash, and
operations are effectively constant-time regardless of size, apart from
the time to load the hash.
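
If you're stuck on ext2 with a huge number of files, the squid approach
mentioned above -- hashing each name into a small, fixed tree of
subdirectories -- keeps every individual directory short.  A rough sketch
(the two-level layout and the md5sum-based hashing are just one way to do
it; the script name is made up):

    #!/bin/bash
    # placefile.sh: map a file name into a two-level subdirectory tree so
    # no single directory ever holds more than a slice of the files.
    ROOT=${1:?usage: placefile.sh root-dir file-name}
    NAME=${2:?usage: placefile.sh root-dir file-name}
    # first three hex digits of the md5 of the name: one digit for the
    # first level, two for the second (16 x 256 directories)
    HASH=$(echo -n "$NAME" | md5sum | cut -c1-3)
    DIR="$ROOT/$(echo "$HASH" | cut -c1)/$(echo "$HASH" | cut -c2-3)"
    mkdir -p "$DIR"
    echo "$DIR/$NAME"

Use whatever it prints as the path to create the file under, e.g.
touch "$(./placefile.sh /var/cache/store somefile.dat)"; a later lookup
just recomputes the same hash from the name.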


Peace.

-- 
Karsten M. Self <kmself@ix.netcom.com>        http://kmself.home.netcom.com/
 What Part of "Gestalt" don't you understand?
   We freed Dmitry!        Boycott Adobe!         Repeal the DMCA!
     http://www.freesklyarov.org
