In message from Matt Lawrence <[EMAIL PROTECTED]> (Mon, 4 Aug 2008
19:35:47 -0500 (CDT)):
On Mon, 4 Aug 2008, Joe Landman wrote:
I haven't seen or heard anyone claim xfs 'routinely locks up their
system'.
I won't comment on your friend's "sharpness". I will point out that
several very large data stores/large cluster sites use xfs. By
definition, no large data store can be built with ext3 (16 TB limit
with patches, 8 TB in practice), so if your sharp friend is advising
you to do this ...
He currently works for a phone company, so the amount of data is
quite large, but the usage pattern is probably quite different. As
for skill level, I would rate him much higher as a sysadmin than any
of the folks I work with.
I have worked w/xfs for HPC since 1995: I used xfs on SGI SMP servers
under IRIX, and then on Linux/x86 clusters. I never had any hang-ups
because of xfs.
But xfs is optimized for work w/large files; when you work w/a lot of
relatively small files, xfs isn't the best choice.
The question of fragmentation itself is more interesting. On our xfs
filesystems we have a set of small files (first of all, input data) in
addition to large (usually temporary) files, so fragmentation may well
be present.
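For what it's worth, the xfs utilities can at least measure fragmentation before you worry about fixing it. A rough sketch (the file and device paths are placeholders, and both commands need root and a real XFS filesystem):

```shell
# Per-file view: list the extents of one file. An extent count that is
# large relative to the file's size suggests fragmentation.
xfs_bmap -v /scratch/input/data.dat

# Filesystem-wide view: xfs_db's "frag" command (run read-only with -r)
# reports actual vs. ideal extent counts and a fragmentation factor.
xfs_db -r -c frag /dev/sdb1
```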
xfs has a rich set of utilities, and in fact this includes a
defragmenter: the IRIX tool xfs_fsr has been ported to Linux (I don't
know what the effect of an xfsdump/xfsrestore cycle would be). But
which other modern Linux filesystems have defragmentation
possibilities?
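One answer for xfs itself is xfs_fsr, the online reorganizer ported from IRIX (on Linux it has historically shipped in the xfsdump package rather than xfsprogs, which is easy to miss). A sketch of typical use, with placeholder paths:

```shell
# Defragment one badly fragmented file in place, verbosely:
xfs_fsr -v /scratch/tmp/bigfile.dat

# Or let it reorganize all mounted XFS filesystems for up to
# 7200 seconds (its traditional cron-style invocation):
xfs_fsr -t 7200
```

It works online, on mounted filesystems, by rewriting fragmented files into fewer extents.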
Mikhail Kuzminsky
Computer Assistance to Chemical Research Center
Zelinsky Institute of Organic Chemistry
Moscow
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf