If I/O on a large scale is an issue I'd suggest xfs, tweaked to your
application of course. One big advantage over other filesystems is that
there is a tool for online defragmentation, and that can be handy if
you are concerned with sustained transfer rates. Not that fragmentation
is much of a problem to begin with: extent-based allocation puts it
leaps and bounds ahead of ext2/3 in that department. Processor load is
a bit higher than, say, jfs, but overall the overhead is very low and
with some tweaking you'll be approaching raw I/O speeds. More on the
different tweaking possibilities can be found in the mkfs.xfs man page.
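
A minimal sketch of the kind of thing I mean - the device, mount point
and numbers below are made up for illustration, so check the
mkfs.xfs(8), mount(8) and xfs_fsr(8) man pages before copying anything:

    # larger log and more allocation groups for parallel I/O (values are guesses)
    mkfs.xfs -l size=128m -d agcount=16 /dev/sdb1
    mount -t xfs -o noatime,logbufs=8 /dev/sdb1 /mnt/data
    # xfs_fsr is the online defragmenter mentioned above; it works on a
    # mounted filesystem, so it can go into cron
    xfs_fsr -v /mnt/data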

A couple of things to be aware of are the aggressive caching and the
zeroing of inconsistent areas after a log replay (data that was still
in flight at crash time can come back as zeroes).
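
If that worries you - say, for the guests' disk images - write barriers
plus an explicit sync at the critical points are the usual answer. A
rough example, again with made-up names (barriers should already be the
default on recent kernels, as far as I know):

    # keep the write barrier enabled and flush before touching an image
    mount -t xfs -o barrier,noatime /dev/sdb1 /srv/xen
    sync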

Personally, I'd go out of my way to avoid reiserfs in any
incarnation...

On Thu, 2007-06-28 at 11:45 +0200, banym tuxaner wrote:
> Now I've arrived at 60 MB/sec using ext2 as the filesystem.
> That's better.
> 
> ext3 has too much overhead ...
> 
> The system is used to virtualise 4-5 machines with Xen, so I/O is a
> big problem.
> 
> 2007/6/28, Hemmann, Volker Armin
> <[EMAIL PROTECTED]>:
>         On Thursday, 28 June 2007, banym tuxaner wrote:
>         > Hi,
>         >
>         > I want to raise the performance on an MCP55 chip.
>         >
>         > I got really bad performance numbers when testing the
>         > hard disks with bonnie++. It can't be right that a SATA2
>         > controller and disks only manage 36 MB/s. Does anyone know
>         > about problems or some configuration tricks?
>         >
>         
>         What is wrong with 36 MB/sec?
>         
>         100 MB or 150 MB/sec are peak numbers only reached when the
>         disk cache is hit.
>         
>         And when you have fragmented data, even something like
>         5 MB/sec sustained is hard to achieve.
