On Wed, 20 Mar 2002, Ezra Nugroho wrote:

> Guys,
> 
> I know this issue has been discussed a long time ago, but maybe things
> have changed, so I am going to ask again.
> 
> What is the best journaling file system out there? XFS, ReiserFS, or ext3?
> What are their highlights?
> 
> I have been very happy with ext3 because of its simplicity and because
> it is supported in standard kernels.
> Backward compatibility with ext2 is great. A friend at work insists on
> using XFS for most of our servers.
> 
> If I want to build a server for thousands of small files (below 500k) and
> I want speed, what file system do you recommend?

 ReiserFS is pretty quick for random lookup of files, especially very
small ones, but what you're talking about (half a MiB) isn't what I'd
call small, really.  Things like "find" are fast, but tarring up large
sets of data tends to be quite slow (apparently because the file
contents tend to end up some distance away from the "stat" data).  So
it very much depends on how you want to use the data.  It's a champ for
some purposes (I use it on / and /usr, and have had no trouble at all
with it).  Deleting files is really fast (this is what impressed me no
end when I first tried it out -- rm -rf linux-$OLDVERSION was fast).
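 If you're curious, a rough sketch along these lines (the paths are
just examples; point it at something like an unpacked kernel tree on a
reiserfs partition) will show the pattern:

    # /mnt/test is an example reiserfs mount holding a kernel tree
    cd /mnt/test
    time find linux -type f > /dev/null      # stat-heavy: fast
    time tar cf - linux > /dev/null          # data-heavy: slower
    time rm -rf linux                        # deletes: fast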

 However, there is one notable caveat: reiserfs journalling only
protects metadata, i.e. the filesystem integrity; you can occasionally
find garbage in the tail end of files that were open and being written
at the time of a crash.  This only happens when there has been
insufficient time for cached data to be written out; it should never
happen on either a "quiescent" machine or one which is mainly reading
data.
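 If an application really can't afford to lose what it just wrote, it
can flush the cached data itself rather than rely on the journal; a
minimal sketch (the myapp paths and the write-then-rename pattern are
purely illustrative):

    # write new state to a temp file, flush dirty buffers to disk,
    # then rename into place so a crash leaves the old or the new
    # copy, never a half-written one (paths made up for the example)
    echo "important state" > /var/lib/myapp/state.tmp
    sync
    mv /var/lib/myapp/state.tmp /var/lib/myapp/state

(In a program you'd fsync() just the one file instead of syncing
everything, but the idea is the same.)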

 If you need data integrity, then ext3 in "ordered data" mode is your
man.  I'd also note that a number of benchmarks in the real world have
shown ext3 to be a healthy performer.  Google should turn you up some
references.
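 For what it's worth, "ordered" is the default data mode for ext3, and
you can convert an existing ext2 filesystem in place (the device name
here is only an example):

    # add a journal to an existing ext2 filesystem, making it ext3
    tune2fs -j /dev/hda1
    # mount it with the ordered data mode spelled out explicitly
    mount -t ext3 -o data=ordered /dev/hda1 /mnt/data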

 Caveats for ext3 are mainly the same as for ext2: poor performance if
you put thousands of files in a directory and then delete them in the
same order ;o)
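 That one is easy to reproduce if you want to see it for yourself; a
quick sketch (10000 files and /mnt/test are arbitrary choices):

    # create thousands of files in one directory, then delete them in
    # the same (creation) order -- the linear directory scans add up
    mkdir /mnt/test/many && cd /mnt/test/many
    time sh -c 'seq 1 10000 | xargs touch'
    time sh -c 'seq 1 10000 | xargs rm'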

 If it's important, do some tests.  Build a couple of test installs on
an old machine, start some operations typical of the expected workload
and see how quickly you get your data back.  For typical applications
(e.g. a web server) you won't see any real difference unless you have
gigabit ethernet and are transferring 90MB a second through it, so it
should come down to what your requirements are.
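 A tiny harness along these lines (the mount points and the sample
tarball are placeholders for whatever matches your workload) is
usually enough to settle it:

    # run the same canned workload against each candidate filesystem;
    # workload.tar is assumed to unpack into a "workload" directory
    for fs in /mnt/ext3 /mnt/reiserfs; do
        echo "=== $fs ==="
        time tar xf /tmp/workload.tar -C $fs    # write a data set
        time tar cf - $fs/workload > /dev/null  # read it back
        time rm -rf $fs/workload                # delete it
    done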



