Another thing I've found with the MegaRAID (or maybe this is an NFS
thing?) is that large-scale (100Mbit, full-duplex) hits on the NFS
server tend to lock up the NFS server (which has the MegaRAID in it).
Typically, this includes not being able to access the non-RAID root,
var, and usr partitions.
> Mike> The Mylex controllers seem to have a small edge in performance,
> Mike> which may be due to them doing cache-line-sized I/Os (usually
> Mike> only 8k) in that case.
>
> Maybe so, but they also don't seem to support the LVD-enabled versions
> of the Mylex cards.
Who is "they" here? We c
> "Mike" == Mike Smith <[EMAIL PROTECTED]> writes:
Mike> Try enabling DirectIO and WriteBack if you haven't already.
Mike> AMI's RAID5 implementation seems to suffer from rewriting the
Mike> entire stripe when you do sub-stripe-sized writes, but I'm not
Mike> sure about that yet.
Already done.
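
For reference, a rough sketch (in Python) of the arithmetic behind Mike's
point about sub-stripe-sized writes. The 64k stripe unit and the 8-drive,
7-data-disk layout are assumed from numbers quoted elsewhere in this thread,
raid5_io_bytes is just an illustrative helper, and the two cost models are
textbook read-modify-write versus the whole-stripe rewrite Mike suspects ---
nothing here is confirmed AMI firmware behaviour.

# Rough model of how many bytes a RAID-5 write moves on the bus, assuming a
# 64k stripe unit and 8 drives (7 data units + 1 parity unit per stripe).
STRIPE_UNIT = 64 * 1024
DATA_DISKS = 7
FULL_STRIPE = STRIPE_UNIT * DATA_DISKS   # 448k of data per stripe

def raid5_io_bytes(write_size, rewrite_whole_stripe=False):
    """Approximate bytes transferred for one logical write (illustrative only)."""
    if write_size >= FULL_STRIPE:
        # Full-stripe write: parity can be computed from the new data alone.
        stripes = write_size // FULL_STRIPE
        return write_size + stripes * STRIPE_UNIT
    if rewrite_whole_stripe:
        # The behaviour Mike suspects: read the rest of the stripe, then
        # write every data unit plus the parity unit back out.
        return (FULL_STRIPE - write_size) + FULL_STRIPE + STRIPE_UNIT
    # Classic read-modify-write: read old data and old parity,
    # write new data and new parity.
    return 4 * write_size

for size in (8 * 1024, 64 * 1024, FULL_STRIPE):
    rmw = raid5_io_bytes(size)
    full = raid5_io_bytes(size, rewrite_whole_stripe=True)
    print(f"{size // 1024:3d}k write: ~{rmw // 1024}k (read-modify-write), "
          f"~{full // 1024}k (whole-stripe rewrite)")

Either way, writes smaller than a full stripe cost several times their own
size, which is why the caching options matter so much here.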
> > "Mike" == Mike Smith <[EMAIL PROTECTED]> writes:
>
> >> The AMI MegaRAID 1400 delivers between 16.5 and 19 M/s (the 19M/s
> >> value is somewhat contrived --- using 8 bonnies in parallel and
> >> then summing their results --- which is not 100% valid)... but the
> >> MegaRAID appears to be stable.
> "Brad" == Brad Knowles <[EMAIL PROTECTED]> writes:
Brad> At 10:02 AM -0500 1999/12/17, David Gilbert wrote:
>> Well... it's RAID-5 across the same 8 drives with all 8 drives on
>> one LVD chain (same configuration as the other test). I have tried
>> the 128k stripe, but it was slower than the default 64k stripe.
At 10:02 AM -0500 1999/12/17, David Gilbert wrote:
> Well... it's RAID-5 across the same 8 drives with all 8 drives on one
> LVD chain (same configuration as the other test). I have tried the
> 128k stripe, but it was slower than the default 64k stripe.
One of the lessons I learned f
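
A geometry-only sketch (Python again) of why the larger stripe unit can hurt
on this kind of array. It assumes an 8-drive RAID-5 set with one parity unit
per stripe and aligned requests, and spindles_touched is a made-up helper for
illustration; it says nothing about what the MegaRAID firmware actually does.

# Compare 64k vs 128k stripe units on an 8-drive RAID-5 set
# (7 data units + 1 parity unit per stripe).
DISKS = 8
DATA_DISKS = DISKS - 1

def spindles_touched(request_kb, stripe_unit_kb):
    """How many data drives an aligned, contiguous request lands on."""
    units = -(-request_kb // stripe_unit_kb)   # ceiling division
    return min(units, DATA_DISKS)

for unit in (64, 128):
    full_stripe = unit * DATA_DISKS
    print(f"stripe unit {unit}k: full stripe = {full_stripe}k "
          f"(smaller writes need a parity read-modify-write)")
    for req in (64, 128, 256, 448):
        print(f"  {req}k request spreads over "
          f"{spindles_touched(req, unit)} drives")

Doubling the stripe unit doubles the amount of data needed to fill a stripe,
so more of a given workload falls into the expensive sub-stripe case, which
is consistent with the 128k stripe coming out slower.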
> "Mike" == Mike Smith <[EMAIL PROTECTED]> writes:
>> The AMI MegaRAID 1400 delivers between 16.5 and 19 M/s (the 19M/s
>> value is somewhat contrived --- using 8 bonnies in parallel and
>> then summing their results --- which is not 100% valid)... but the
>> MegaRAID appears to be stable.
> I have the Enterprise 1400 Megaraid adapter with (currently 16M) on
> it. I have tested the various modes of operation (different raid
> levels and striping) and find it to be working well. My LVD array
> consists of 8 18G Quantum IVs.
>
> Now... using vinum and either the 2940U2W (Adaptec LVD) or the TekRAM
> (NCR) LVD (using the sym0 device) gives 30 to 35 M/s under RAID-5.
> "Brad" == Brad Knowles <[EMAIL PROTECTED]> writes:
Brad> It sounds like the second RAID-5 bug listed on the page I
Brad> mentioned:
>>> 28 September 1999: We have seen hangs when performing heavy I/O to
>>> RAID-5 plexes. The symptoms are that processes hang waiting on
>>> vrlock and flswai
At 11:48 AM -0500 1999/12/16, David Gilbert wrote:
> It's a really long thread. I'm not going to repeat it here.
> Basically, under "enough" load, vinum trashes the kernel stack in such
> a way that debugging is very tough.
It sounds like the second RAID-5 bug listed on the page I mentioned.
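
If anyone wants to check for that symptom, here is a small Python sketch that
looks for processes sleeping on the wait channels quoted from that page
("vrlock" and "flswai", as they appear in the quoted report). It assumes a
ps(1) that accepts -o with the pid/wchan/comm keywords; adjust the invocation
if your system's ps differs --- this is only one quick way to look, not a
diagnosis tool.

# List processes whose wait channel matches the ones quoted above.
import subprocess

SUSPECT_WCHANS = {"vrlock", "flswai"}

out = subprocess.run(
    ["ps", "-ax", "-o", "pid,wchan,comm"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines()[1:]:
    fields = line.split(None, 2)
    if len(fields) == 3 and fields[1] in SUSPECT_WCHANS:
        print(f"pid {fields[0]} ({fields[2]}) waiting on {fields[1]}")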
> "Brad" == Brad Knowles <[EMAIL PROTECTED]> writes:
>> This is impressive and subject to the bug that I mentioned in
>> -STABLE which still hasn't been found.
Brad> Which one is this?
It's a really long thread. I'm not going to repeat it here.
Basically, under "enough" load, vinum trashes the kernel stack in such
a way that debugging is very tough.
At 10:52 AM -0500 1999/12/16, David Gilbert wrote:
> Now... using vinum and either the 2940U2W (Adaptec LVD) or the TekRAM
> (NCR) LVD (using the sym0 device) gives 30 to 35 M/s under RAID-5.
That's really interesting, because there are at least two or
three outstanding bugs in the vinum RAID-5 implementation.