Amos wrote:
Ben Carter wrote:
When we get a chance, we're going to talk to Derrick about getting
some cluster support into the std. code.
That would be most impressive. I wonder how much Ken's work with 2.3
would fit in with this?
My code in 2.3 uses the Murder code to keep local copies of mailb
Ben Carter wrote:
When we get a chance, we're going to talk to Derrick about getting some
cluster support into the std. code.
That would be most impressive. I wonder how much Ken's work with 2.3
would fit in with this?
---
Cyrus Home Page: http://asg.web.cmu.edu/cyrus
Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
zorg wrote:
Hi,
could you give me some more explanation of what "the stage./ files used
during LMTP delivery have unique filenames" means?
So if I understand what you are saying: if the stage./ files used during
LMTP delivery are the same for all the nodes of the cluster sharing the
same SAN, then there
Amos wrote:
So y'all are doing active/active? What version of Cyrus?
Yes. We're running 2.1.17.
Thanks,
Dave
Hi,
could you give me some more explanation of what "the stage./ files used
during LMTP delivery have unique filenames" means?
So if I understand what you are saying: if the stage./ files used during
LMTP delivery are the same for all the nodes of the cluster sharing the
same SAN, then there won't be an
Hi,
OK, the solution with a SAN seems good, but has anyone tried this with
Linux Virtual Server (LVS)?
Dave McMurtrie wrote:
zorg wrote:
Hi
I've seen a lot of discussion on the list about availability, but none of
it seems to give a complete answer.
I have been asked to build a high-availability setup for 5000 users
Ben Carter wrote:
Actually, the important code change for any active/active cluster
configuration is to make sure the stage./ files used during LMTP
delivery have unique filenames across the cluster.
There are some other setup differences related to this same issue such
as symlinking /var/imap/
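Ben's point about unique stage./ filenames can be illustrated with a short sketch (Python here for brevity; Cyrus itself is written in C, and the filename format below is an assumption for illustration, not Cyrus's actual scheme). Embedding the node's hostname in the name guarantees that two cluster members delivering into the same shared spool can never pick the same stage file:

```python
import os
import socket
import time

def stage_filename(counter: int) -> str:
    """Build a delivery stage filename that cannot collide across
    cluster nodes sharing one filesystem: the hostname component
    differs per node, and pid + counter differ per process/message.
    (Illustrative format only, not Cyrus's real naming scheme.)"""
    return "%d.%d.%d.%s" % (
        int(time.time()),       # coarse timestamp
        os.getpid(),            # unique per delivery process on a node
        counter,                # unique per message within a process
        socket.gethostname(),   # unique per cluster node
    )

# Two deliveries in the same process differ via the counter; the same
# pid/counter on another node would still differ via the hostname.
a = stage_filename(1)
b = stage_filename(2)
print(a != b)
```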
Dave McMurtrie wrote:
Amos wrote:
What sort of changes did you have to make?
We just had to change map_refresh() to call mmap() with MAP_PRIVATE
instead of MAP_SHARED. Since mmap() is being called with PROT_READ
anyway, this doesn't affect the operation of the application, since the
mapped region can never be updated.
Amos wrote:
What sort of changes did you have to make?
We just had to change map_refresh() to call mmap() with MAP_PRIVATE
instead of MAP_SHARED. Since mmap() is being called with PROT_READ
anyway, this doesn't affect the operation of the application since the
mapped region can never be updated.
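The change Dave describes can be sketched as follows (using Python's mmap module for brevity; the real change is in Cyrus's C map_refresh(), which this does not reproduce). A PROT_READ mapping created with MAP_PRIVATE returns exactly the bytes a MAP_SHARED mapping would, because copy-on-write semantics only come into play on writes:

```python
import mmap
import os
import tempfile

# Create a small file standing in for a Cyrus database file.
fd, path = tempfile.mkstemp()
os.write(fd, b"cyrus mailboxes data")
os.close(fd)

with open(path, "rb") as f:
    # MAP_PRIVATE + PROT_READ: a copy-on-write mapping that is never
    # written, so from the application's point of view it behaves
    # identically to a read-only MAP_SHARED mapping.
    m = mmap.mmap(f.fileno(), 0, flags=mmap.MAP_PRIVATE, prot=mmap.PROT_READ)
    data = m[:]
    m.close()

os.unlink(path)
print(data == b"cyrus mailboxes data")
```

The practical difference is that the kernel never has to write dirty pages of a private mapping back through the (cluster) filesystem, which is why this sidestepped the Veritas CFS issue described in the thread.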
We're doing this. We have a 4-node Veritas cluster with all imap data
residing on a SAN. Overall it's working quite well. We had to make
some very minor cyrus code changes so it'd get along well with Veritas'
cluster filesystem. This setup gives us high availability and scalability.
What sort of changes did you have to make?
zorg wrote:
Hi
I've seen a lot of discussion on the list about availability, but none of
it seems to give a complete answer.
I have been asked to build a high-availability setup for 5000 users.
I was wondering what the best solution actually is.
Using Murder, I don't really understand if it can help me. It's
On Thu, 8 Jul 2004, Ken Murchison wrote:
> It's not unheard of; in fact it's been done for Cyrus before. I was paid
> a rather large sum by a semiconductor company to implement the
> altnamespace feature, and Fastmail.fm has contracted me for several
> features, most recently almost all of the new
Kevin Baker wrote:
Fair enough ;)
So what would it cost to have this feature implemented?
Specifically adding the application level redundancy patch
that was submitted.
I think it is certainly worth discussion if nothing else
to see if it is something we, people interested, might
collectively be able to pay for.
Fair enough ;)
So what would it cost to have this feature implemented?
Specifically adding the application level redundancy patch
that was submitted.
I think it is certainly worth discussion if nothing else
to see if it is something we, people interested, might
collectively be able to pay for.
On Tue, 6 Jul 2004, Kevin Baker wrote:
How would we indicate our interest to the development
team? How are updates and future development project
priorities decided?
Several methods...
Supplied patches often get a high priority (though not in this case, since
we have a patch that is very complicated
On Tue, 6 Jul 2004, Kevin Baker wrote:
> The cyrus/replication would be amazing. Application level
> replication seems to be the best option if the setup is
> straightforward.
>
> How would we indicate our interest to the development
> team? How are updates and future development project
> priorities decided?
The cyrus/replication would be amazing. Application level
replication seems to be the best option if the setup is
straightforward.
How would we indicate our interest to the development
team? How are updates and future development project
priorities decided?
Kevin
> Hi,
>
> Etienne Goyer wrot
Hi,
Etienne Goyer wrote:
Regarding IMAP replication, I have not found much but the work of
David Carter at
http://www-uxsup.csx.cam.ac.uk/~dpc22/cyrus/replication.html seems
interesting. As far as I can tell, source to this implementation and
current status are not available. Does somebody on the list use this
solution or a similar one and could comment on the practicality of it?
Lee wrote:
Has anyone used GFS with cyrus?
Not GFS specifically, but Sun's QFS is being used. I'm certain that
SGI's CXFS would also work (although I haven't tested it).
Could one theoretically create a
redundant, loadbalancing cluster using two boxes, GFS and a SAN?
Yes, see my earlier post
John C. Amodeo wrote:
I would be very interested to know if anyone is using a Cyrus cluster
connected to a SAN using a cluster file system. Would Cyrus die if two
servers were trying to access the same mailstore / db files?
As long as the filesystem provides correct locking and memory mapping,
there shouldn't be a problem.
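The locking that answer depends on is ordinary fcntl byte-range locking. A quick probe for whether a given filesystem supports it might look like this (a sketch assuming a POSIX system; the helper name is made up, not a Cyrus utility):

```python
import fcntl
import os
import tempfile

def supports_fcntl_locking(directory: str) -> bool:
    """Try to take and release an exclusive fcntl lock on a scratch
    file in `directory`; returns False if the filesystem rejects the
    call, as some network/cluster filesystems do."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX)  # exclusive byte-range lock
        fcntl.lockf(fd, fcntl.LOCK_UN)  # release it again
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.unlink(path)

print(supports_fcntl_locking(tempfile.gettempdir()))
```

Running this on a candidate mailstore mount is a cheap first sanity check before pointing two Cyrus nodes at the same filesystem; it does not, of course, prove that locks are coherent *between* nodes.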
Kevin P. Fleming wrote:
Etienne Goyer wrote:
On a similar note, RedHat have apparently bought Sistina, and GPLed
GFS. This is great news for HA under Linux, IMHO. I will be testing
it soon.
Well, on their site it is listed as "open source", but it is not on
sources.redhat.com (where LVM2 and device-mapper landed when they bought
Sistina).
Norman Zhang wrote:
I think you can get it here, http://sources.redhat.com/cluster/gfs/
Yes, thanks. When I looked at the "sources" page I was looking for GFS
directly, not a "cluster" subproject. This page appears to have
everything needed to use GFS.
I would be very interested to know if anyone is using a Cyrus cluster
connected to a SAN using a cluster file system. Would Cyrus die if two
servers were trying to access the same mailstore / db files?
Norman Zhang wrote:
On a similar note, RedHat have apparently bought Sistina, and GPLed
GFS. This is great news for HA under Linux, IMHO. I will be testing
it soon.
On a similar note, RedHat have apparently bought Sistina, and GPLed
GFS. This is great news for HA under Linux, IMHO. I will be testing
it soon.
Well, on their site it is listed as "open source", but it is not on
sources.redhat.com (where LVM2 and device-mapper landed when they bought
Sistina).
Has anyone used GFS with cyrus? Could one theoretically create a
redundant, loadbalancing cluster using two boxes, GFS and a SAN?
Lee
On Jun 28, 2004, at 9:43 AM, Etienne Goyer wrote:
Ben Carter wrote:
Etienne Goyer wrote:
Tore Anderson's words of wisdom were:
There's a third option, which is the one I prefer the most: shared
block device.
Etienne Goyer wrote:
On a similar note, RedHat have apparently bought Sistina, and GPLed GFS.
This is great news for HA under Linux, IMHO. I will be testing it soon.
Well, on their site it is listed as "open source", but it is not on
sources.redhat.com (where LVM2 and device-mapper landed when they bought
Sistina).
Ben Carter wrote:
Etienne Goyer wrote:
Tore Anderson's words of wisdom were:
There's a third option, which is the one I prefer the most: shared
block device.
Well, I did not consider that option since the SAN becomes a single
point-of-failure, and that is a big no-no according to the
specifications I have at the moment.
Thanks...
I'm familiar with what it is... I'm not familiar with how
to setup application level replication with Cyrus.
MySQL/LDAP NP...
I've looked through the docs/archives and haven't found
anything... Murder seems more focused on partitioning.
--On Wednesday, June 23, 2004 11:48 -0700 Kevin Baker
<[EMAIL PROTECTED]> wrote:
David,
This is exactly what I had in mind. Could you maybe give a
quick overview of how you have the replication and
failover setup; specifically "application level
replication vs block"
application level means exact
On Wed, 23 Jun 2004, Kevin Baker wrote:
Is it something like this:
- Server A
- active accounts 1-100
- replicate accounts 101-200 from Server B
- Server B
- active accounts 101-200
- replicate accounts 1-100 from Server A
If B goes down, A takes over the accounts it had
replicated from B.
Yes,
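Kevin's scheme above can be expressed as a tiny routing table (an illustrative sketch only; the names and layout are made up, not anything from David Carter's actual implementation): each account range has a primary and a replica, and lookup falls over to the replica when the primary is marked down.

```python
# Each entry: (account range, primary server, replica server).
PAIRS = [
    (range(1, 101), "A", "B"),    # accounts 1-100 live on A, replicated to B
    (range(101, 201), "B", "A"),  # accounts 101-200 live on B, replicated to A
]

def server_for(account, down=frozenset()):
    """Route an account to its primary server, or fail over to the
    replica holding the replicated copy when the primary is down."""
    for accounts, primary, replica in PAIRS:
        if account in accounts:
            return replica if primary in down else primary
    raise KeyError(account)

print(server_for(42))           # 42 is in 1-100, so A when everything is up
print(server_for(42, {"A"}))    # B takes over, serving its replicated copy
```

The symmetry is the point: each box is primary for half the accounts and replica for the other half, so in normal operation the load is split, and a single failure doubles the load on the survivor rather than losing service.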
David,
This is exactly what I had in mind. Could you maybe give a
quick overview of how you have the replication and
failover setup; specifically "application level
replication vs block"
While the idea of a standby server that uses block level
replication seems very great, if possible I'd like to
On Tue, 22 Jun 2004, Etienne Goyer wrote:
Does somebody on the list use this solution or a similar one and could
comment on the practicality of it? Perhaps Mr. Carter (if you read the
list) could give us a status update on his particular project?
There's really not a whole lot to say.
We've b
On Tue, 22 Jun 2004 18:02:29 -0500
Jim Levie <[EMAIL PROTECTED]> wrote:
> For a mail system, a Murder with multiple Front End systems and a bunch
> of Back End systems is more resilient. Yes, if a Back End fails some
> users will be without mail for a bit. That can be as short as a few
> minutes i
On Tue, 2004-06-22 at 15:54, Jure Pečar wrote:
> It _can_ break in a spectacular way ... easily.
>
Yep...
> Our batch of disks turned out to have some fubar firmware, which caused them
> to randomly fall out of the array under a specific load. That problem went
> undetected during the testing phase.
Kevin Baker wrote:
While we are discussing network storage for HA.
Would there be a problem with NAS over Gigabit?
If you mean a remote filesystem mounted with NFS, yes, there will be
problems. You could use a NAS box with courier-imap, since courier
doesn't need to lock files, but with Cyrus that won't work.
On Tue, 22 Jun 2004 18:52:09 +0200
Tore Anderson <[EMAIL PROTECTED]> wrote:
> There's a third option, which is the one I prefer the most: shared
> block device. Connect your two servers to a SAN, and store all of
> Cyrus' data on one LUN, which both servers have access to. Then, set
> your
While we are discussing network storage for HA.
Would there be a problem with NAS over Gigabit?
> * Etienne Goyer
>
> > Well, I did not consider that option since the SAN becomes a single
> > point-of-failure, and that is a big no-no according to the
> > specifications I have at the moment.
* Etienne Goyer
> Well, I did not consider that option since the SAN becomes a single
> point-of-failure, and that is a big no-no according to the
> specifications I have at the moment.
>
> If it would have been possible, it would have been my first choice
> though.
Most decent storage equ
On Tue, 22 Jun 2004, Etienne Goyer wrote:
> Tore Anderson's words of wisdom were:
> > There's a third option, which is the one I prefer the most: shared
> > block device.
>
> Well, I did not consider that option since the SAN becomes a single
> point-of-failure, and that is a big no-no according to the
> specifications I have at the moment.
Tore Anderson's words of wisdom were:
There's a third option, which is the one I prefer the most: shared
block device.
Well, I did not consider that option since the SAN becomes a single
point-of-failure, and that is a big no-no according to the
specifications I have at the moment.
If it would have been possible, it would have been my first choice though.
Hi,
-- Etienne Goyer <[EMAIL PROTECTED]> is rumored to have mumbled
on Tuesday, 22 June 2004 12:09 -0400 regarding High availability ...
again:
From what I can see, I would have two possibilities for making a hot-spare
Cyrus IMAP daemon: replication, or a cluster filesystem/block device
(drbd
* Etienne Goyer
> I have been asked to consider how to build a high-availability
> Cyrus installation. This is a small installation (~200 accounts ...
> peanuts), so scalability is not really a concern. In this regard, a
> Murder is not really appropriate.
>
> The platform would be Linux.