Haim Dimermanas wrote on Wed, Dec 12, 2001 at 09:50:44AM -0500:
[...]
* 
* >  A. Each node has direct access to its storage via SCSI/Fiberchannel bus.
* 
*  Perfect but very expensive, especially if you go with fiberchannel.
[...]

Yeah, but you can do it the cheap way with SCSI, active terminators and
capable controllers.  You can also do it the not-so-speedy but more redundant
way with completely separate disk storage per node, kept in sync with rsync
(*sigh*) or - finally working - InterMezzo.
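
If you go the rsync route, a periodic one-way copy from the active node to the
standby is basically all there is to it; a rough sketch (paths and the standby
hostname are just examples, /var/spool/imap being the usual Cyrus spool):

  # run from cron on the active node, pushing to the standby
  rsync -a --delete /var/spool/imap/ standby:/var/spool/imap/
  rsync -a --delete /var/lib/imap/   standby:/var/lib/imap/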

  
* 
* > 
* >  B. Each node has a backup system with the same access (the storage is
* >     sort of a SAN and administered by the kimberlite clustering software).
* >     The backup system will access the devices only in failover situations
* >     and will then replace the original system.
* 
*  Could you elaborate on that a bit?

See the heartbeat package for Linux or the Kimberlite software.  The technology
is best described in short form here:

  http://www.linuxjournal.com/article.php?sid=4344

The point is to

 a) make sure the data stays in sync and does not get corrupted by parallel
    access

 b) have the services, and the IPs they bind to, ready on the node that is
    "up".

Kimberlite does a) via shared disk access: both nodes can mount the data when
coming up, and parallel access is avoided by having the nodes talk to each
other on so-called quorum partitions (partitions with no FS on them, accessed
read-only by one node and writeable by the other, and vice versa).  This
"talk" minimises the chance of a situation where both nodes are "up" and mess
up the disks by mounting the same areas writeable.
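
To picture that "talk": each node periodically writes a status block with a
timestamp into its own slot on the raw quorum partition and reads the peer's
slot; only when the peer's timestamp goes stale does it consider taking over.
A very rough sketch in Python (this is not Kimberlite's real on-disk format;
the device name and layout are made up):

  import os, struct, time

  QUORUM_DEV = "/dev/sdc1"   # hypothetical raw quorum partition
  SLOT_SIZE  = 512           # one block per node
  MY_SLOT, PEER_SLOT = 0, 1  # swap these on the other node

  def write_my_status(fd):
      # stamp "I am alive at time X" into my own slot
      os.lseek(fd, MY_SLOT * SLOT_SIZE, os.SEEK_SET)
      os.write(fd, struct.pack("d", time.time()).ljust(SLOT_SIZE, b"\0"))

  def peer_is_alive(fd, timeout=30):
      # read the peer's slot and check how old its stamp is
      os.lseek(fd, PEER_SLOT * SLOT_SIZE, os.SEEK_SET)
      (stamp,) = struct.unpack("d", os.read(fd, SLOT_SIZE)[:8])
      return time.time() - stamp < timeout

  fd = os.open(QUORUM_DEV, os.O_RDWR | os.O_SYNC)
  while True:
      write_my_status(fd)
      if not peer_is_alive(fd):
          pass   # peer looks dead: now it is safe to take over its mounts/IPs
      time.sleep(5)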

b) is done with heartbeat UDP packets to see whether the other node is still
running, and with ARP spoofing to make the IPs and services visible on the
network.
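
With the heartbeat package this boils down to two small config files; roughly
like this (node names, interface and the "cyrus" resource script are just
placeholders for whatever starts your services):

  # /etc/ha.d/ha.cf -- how the two nodes watch each other
  keepalive 2              # seconds between heartbeat packets
  deadtime 30              # declare the peer dead after 30s of silence
  udpport 694
  bcast eth0               # send the heartbeats as UDP broadcasts on eth0
  node imap1.domain.com
  node imap2.domain.com

  # /etc/ha.d/haresources -- who normally owns which IP and service
  imap1.domain.com 192.168.1.10 cyrus

When imap1 stops answering, imap2 takes over the 192.168.1.10 address (that is
where the ARP spoofing happens) and starts the cyrus resource itself.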

* 
* >  C. Each node therefore has its own user space.  As the Cyrus IMAP software
* >     cannot tell the other nodes which user lives where, we used a separate
* >     Linux cluster at the front to proxy every incoming user request to its
* >     respective Cyrus IMAP server.  We used perdition as the proxy and an
* >     LDAP server as the main user base.  Every user in the LDAP is marked
* >     with the node he belongs to.
* 
*  Is the Linux cluster that you use at the front to proxy every incoming
* connection redundant? How did you set it up? Did you use some High
* Availability patches?

No.  We just set up two systems with perdition (a POP3/IMAP4 proxy) and Postfix
for incoming and outgoing mail.  Whenever a user connects to the POP3/IMAP4
ports, perdition looks him up in the LDAP and redirects the connection to the
Cyrus IMAP server where his mailbox is physically stored.
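
Being "marked" in the LDAP is just an extra attribute on the user's entry;
something along these lines (attribute names and the DN depend on your schema,
and perdition's LDAP query is configurable to match it):

  dn: uid=jdoe,ou=people,dc=domain,dc=com
  uid: jdoe
  mail: jdoe@domain.com
  mailHost: imap2.domain.com

Here the (example) mailHost attribute tells perdition that jdoe's mailbox lives
on imap2, so that is where the POP3/IMAP4 connection gets proxied to.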

Access to the front can generally be shaped/balanced with one of these
mechanisms:

 a) Round-Robin-DNS for mail.domain.com

 b) Layer-4-switch (e.g. L5 or LVS)

 c) (for SMTP only) setting up multiple MX records in DNS

where b) is much more expensive but usually offers finer-grained control over
network load.  So the front servers are not a hardwired cluster like the
backend Cyrus IMAP servers.  They just relay connections and hold temporary
data.  If you get more connections, you add a server and integrate it into
your layer-4-switch or RR-DNS logic.  There is just no need for the clustering
overhead that the Cyrus IMAP backend servers require to achieve HA.
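
For a) and c) the DNS side is simply several records for the same name; in
BIND zone file terms something like this (names and addresses made up):

  ; two front servers handed out round-robin for POP3/IMAP4
  mail    IN A   192.0.2.10
  mail    IN A   192.0.2.11

  ; for SMTP only: multiple MX records with the same preference
  @       IN MX  10 mail1.domain.com.
  @       IN MX  10 mail2.domain.com.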


Regards,

- Birger
