Mike Bird put forth on 4/28/2010 5:48 PM:
> On Wed April 28 2010 15:10:32 Stan Hoeppner wrote:
>> Mike Bird put forth on 4/28/2010 1:48 PM:
>>> I've designed commercial database managers and OLTP systems.
>>
>> Are you saying you've put production OLTP databases on N-way software RAID
>> 1 sets?
> 
> No.  I've used N-way RAID-1 for general servers - mail, web, samba, etc.
> 
> Nevertheless N-way RAID-1 would be a reasonable basis for a small OLTP
> database as the overwhelming majority of OLTP disk transfers are reads.

You seem to possess knowledge of these things that is 180 degrees opposite of
fact.  OLTP, or online transaction processing, is typified by retail or web
point of sale transactions or call logging by telcos.  OLTP databases are
typically much more write heavy than read heavy.  OLAP, or online analytical
processing, is exclusively reads, made up entirely of search queries.
Why/how would you think OLTP is mostly reads?

> You had claimed that "on a loaded system, such as a transactional database
> server or busy ftp upload server, such a RAID setup will bring the system to
> its knees in short order as the CPU overhead for each 'real' disk I/O is now
> increased 4x and the physical I/O bandwidth is increased 4x".

> Your claim is irrelevant as neither CPU utilisation nor I/O bandwidth are
> of concern in such systems.  They are seek-bound.

Yep, you're right.  That must be why one finds so many production OLTP, ftp
upload, mail, etc, servers running N-way software RAID 1.  Almost no one
does it, for exactly the reasons I've stated.  The overhead is too great and
RAID 10 gives almost the same level of fault tolerance with much better
performance.

>> Given the way most database engines do locking, you'll get zero additional
>> seek benefit on reads, and you'll take a 4x hit on writes. I don't know
>> how you could possibly argue otherwise.
> 
> Linux can overlap seeks on multiple spindles, as can most operating
> systems of the last fifty years.

Of course it can, and it even performs I/Os in parallel on multicore or SMP
systems, in addition to overlapped I/O.  You're still missing the point that
a 4-disk RAID 1 set has to perform 4x the physical writes for every logical
write, which reduces write performance vs a single disk and increases the
write bandwidth consumed by a factor of 4.

Thus, on a loaded multi-user server, a 4-way RAID 1 actually decreases your
overall write throughput compared to a single disk.  In other words, if a
single-disk server can't handle the I/O load, running a 4-way RAID 1 will
make the situation worse.  Whereas with RAID 10 you should get almost double
the write speed of a single disk due to the striping, even though the total
number of physical writes is the same as with 4-way RAID 1.
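
The arithmetic above can be put into a back-of-envelope sketch.  All the
numbers here are illustrative assumptions (not measurements): each spindle
is assumed to sustain DISK_IOPS random writes/sec, and the loaded host is
assumed to be able to issue only HOST_BUDGET physical writes/sec through
the software RAID layer.

```python
# Back-of-envelope model of the write-amplification argument in this
# thread.  All constants are illustrative assumptions, not benchmarks.

DISK_IOPS = 150      # assumed random-write IOPS per spindle
HOST_BUDGET = 400    # assumed physical writes/sec a loaded host (CPU + bus)
                     # can issue via software RAID

def logical_write_iops(mirror_sets, copies):
    """Effective logical write IOPS for a software RAID layout.

    mirror_sets: independent stripe members (logical writes spread across them)
    copies:      physical writes issued per logical write (mirror copies)
    """
    spindle_limit = mirror_sets * DISK_IOPS  # mirror copies proceed in parallel
    host_limit = HOST_BUDGET / copies        # host must issue every copy itself
    return min(spindle_limit, host_limit)

print(logical_write_iops(1, 1))  # single disk              -> 150
print(logical_write_iops(1, 4))  # 4-way RAID 1 (4 disks)   -> 100.0 (host-bound)
print(logical_write_iops(2, 2))  # RAID 10 (4 disks)        -> 200.0 (host-bound)
```

Under these assumed numbers the 4-way mirror ends up host-bound below a
single disk, while RAID 10 on the same four disks comes out well ahead of
it, which is the shape of the trade-off being argued here.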

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4bd8e616.5070...@hardwarefreak.com
