The approach I've tested here uses a Linux server with a ReiserFS
partition replicated between two nodes using DRBD.  Heartbeat runs on
both systems and promotes the slave node to master if the master fails.
This scenario is strictly failover: only one node has write access to the
mirrored partition at any one time.  Using a journaling file system
removes the need to run fsck on the slave before it mounts the mirrored
partition read/write.  This has been running on a Cyrus mail server (our
main server uses hardware RAID 5, though), and so far I have seen no
problems, corruption, etc.  Speed with ReiserFS is very good as far as I
can tell, even though it is supposed to carry a bit more overhead.
Using rsync works as well, but doing a proper resync when the original
master comes back online is difficult and tricky; DRBD and Heartbeat take
care of that here.  A rough sketch of the DRBD/Heartbeat configuration
follows, and a sketch of the rsync variant is at the bottom of this
message.
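
For anyone who wants to try this, the Heartbeat side comes down to one
resource line plus a DRBD resource definition.  Treat the following as a
rough sketch only: the hostnames, addresses and partitions are made up,
and the exact syntax and script names (datadisk vs. drbddisk, the name of
the Cyrus init script, etc.) depend on which DRBD and Heartbeat versions
you run, so check the docs for yours.

  # /etc/ha.d/haresources -- identical on both nodes.  node1 is the
  # preferred master; on failover Heartbeat takes over the service IP,
  # makes the DRBD device primary, mounts it and starts Cyrus.
  node1 IPaddr::192.168.1.50 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/spool/imap::reiserfs cyrus

  # /etc/drbd.conf -- written in the style of later DRBD releases;
  # older versions use different device names and syntax.
  resource drbd0 {
    protocol C;                        # synchronous replication
    on node1 {
      device  /dev/drbd0;
      disk    /dev/hda5;
      address 10.0.0.1:7788;
    }
    on node2 {
      device  /dev/drbd0;
      disk    /dev/hda5;
      address 10.0.0.2:7788;
    }
  }

Heartbeat starts the resources left to right on whichever node is active
and stops them in reverse order when it gives them up.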

Alain
----- Original Message -----
From: "Andrew K Bressen" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, March 14, 2001 5:48 PM
Subject: Re: Replicated mail server ...


>
>
> >Does anyone know of any tools available that will allow someone to do
> >"distributed" or "replicated" mail servers?
>
> The short answer is "yes, but nothing really good".
>
> I did an extensive search on this a year or so ago and came
> up with the following conclusions.
>
> Mechanism:
> (1) Use Lotus Notes.
>     Pros:
>           replication works mostly transparently.
>     Cons:
>           their imap implementation, at least as of 5.x, sucks.
>             (VP of mktg tells us a year after they tried to sell it to
>             us "thank god you didn't buy our product. you'd have
>             been f*cked!").
>           proprietary commercial sw ($); limited platform support.
>           need lotus expertise to manage.
>
>
> (2) Use Innosoft PMDF on an OpenVMS cluster.
>     Pros:
>           rock-solid VMS reliability.
>           excellent IMAP implementation and an awesome MTA to boot.
>     Cons:
>           product might be discontinued in a year or three since Sun
>             bought Innosoft.
>           need VMS expertise to manage, and need VMS hardware
>             (compaq alphas).
>             In particular, configuring such that a cluster
>             transition won't lock up the shared disk takes
>             some thought. VMS market now stable, but almost certainly not
>             growing, making long-term support issues questionable.
>
> (3) Various commercial high-availability systems, such as those
>     offered by qualix, veritas and others. (mostly targeted for solaris).
>     Pros:
>           sort of works.
>     Cons:
>           not true shared-mailstore access, much more of a warm-backup
>              level solution; you can't hit the same mailstore at the
>              same time from two different machines; you can only have
>              one machine grab and fsck the mailstore if the other machine
>              crashes. in general this approach is a kludge with some
>              gotchas to watch out for, and takes good sysadmins to
>              manage workably.
>
> (4) Home-rolled high-availability.
>     In theory, one could use rsync or unison
>      http://www.cis.upenn.edu/~bcpierce/unison/
>     or some linux distributed filesystem
>     to replicate a mailstore back and forth between two machines
>     with similar cyrus or uwash imap configurations.
>     if one machine crashes, you start the server processes on the other
>     and reverse the direction of replication before bringing the
>     first server back up.
>     Pros:
>           can use free sw and commodity hw.
>           sort of works.
>     Cons:
>           can get yourself into deep trouble with no help line to call.
>           still isn't really a shared mailstore; about as bad a kludge
>             as commercial high-availability for unix.
>
>     Note: someone just posted to the list about a successful use
>           of this approach using networked raid under linux in which
>           both servers get live access. progress marches on.
>
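
Re (4): the rsync variant comes down to something like the following.
Again, only a sketch: the paths, hostnames and schedule are made up, and
the hard part is the cutover, not the copy.

  # on the live master, e.g. from cron, push the mailstore to the standby:
  rsync -a --delete /var/spool/imap/ standby:/var/spool/imap/

  # on failover: stop that job, start the IMAP services on the standby,
  # and make sure replication now runs in the opposite direction before
  # the old master serves mail again, or stale copies will overwrite
  # newer mail:
  rsync -a --delete /var/spool/imap/ oldmaster:/var/spool/imap/

With Cyrus, the databases outside the spool (the config directory with
the mailboxes list and seen state) need the same treatment, which is part
of what makes a proper resync so tricky.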
