On 02 Aug 2006, at 03:24, Daniel Eckl wrote:
Well, as far as I know, the mailboxes.db and other databases are
only opened and modified by the master process.
That's not the case.
:; grep -lw mboxlist_open *.[ch]
arbitron.c
chk_cyrus.c
ctl_cyrusdb.c
ctl_mboxlist.c
cyr_expire.c
cyrdump.c
fud.c
i
Well, as far as I know, the mailboxes.db and other databases are only
opened and modified by the master process. But I'm not sure here.
But as your assumption sounds correct, and because this seems to work
with a cluster (and I fully believe you here, no question), your
assumption regarding the D
On Tue, 1 Aug 2006, Daniel Eckl wrote:
Well, I don't have cluster knowledge, and so of course I simply believe you
that a good cluster system will never have file locking problems.
I already stated this below!
But how will the cluster affect application level database locking? That was
my pri
Well, I don't have cluster knowledge, and so of course I simply believe
you that a good cluster system will never have file locking problems.
I already stated this below!
But how will the cluster affect application level database locking? That
was my primary question and you didn't name this at
Daniel Eckl wrote:
Hi Scott!
Your statements cannot be correct, for logical reasons.
While you are fully right at the file-locking level, Cyrus depends
heavily on critical database accesses that require application-level
database locking.
As only one master process can lock the database, a secon
Hi Scott!
Your statements cannot be correct, for logical reasons.
While you are fully right at the file-locking level, Cyrus depends
heavily on critical database accesses that require application-level
database locking.
As only one master process can lock the database, a second one either
cannot
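The point about exclusive locks can be shown concretely. Cyrus itself uses its own fcntl()/flock() locking internally, so this is only a sketch of the general mechanism with util-linux flock(1), not of Cyrus's actual code: while one process holds an exclusive lock, a second process's attempt is refused regardless of how well the underlying (cluster) filesystem behaves.

```shell
#!/bin/sh
# Sketch only: shows that a second process cannot take an exclusive
# lock while another process holds it.
lockfile=$(mktemp)

# Process 1: grab an exclusive lock and hold it briefly.
flock -x "$lockfile" sleep 2 &
sleep 1   # let process 1 acquire the lock first

# Process 2: a non-blocking attempt on the same file is refused.
if flock -n -x "$lockfile" true; then
    result="lock acquired"
else
    result="lock busy"
fi
echo "$result"   # prints: lock busy

wait
rm -f "$lockfile"
```

Whether a cluster filesystem propagates such locks correctly between nodes is exactly the question under discussion; the sketch only demonstrates the single-node semantics.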
Kinda surprising, but it DOES have something to do with Cyrus. Caspur
did their case study on cluster filesystems with their e-mail environment.
It used Cyrus IMAP and some kind of SMTP (I think it was Postfix or
Their paper talks about Maildir. If you connect to mailbox.caspur.it:993
you'll s
On Mon, 2006-07-31 at 15:40 -0700, Andrew Morgan wrote:
> On Mon, 31 Jul 2006, Wil Cooley wrote:
>
> > How big is your journal? I have instructions for determining the size
> > here, because it's non-obvious:
> >
> > http://nakedape.cc/wiki/PlatformNotes_2fLinuxNotes
> >
> > (BTW, you can drop th
On Mon, 31 Jul 2006, Wil Cooley wrote:
Well, 32MB is small for a write-heavy filesystem. But if you're not
seeing any problems with kjournald stalling while it flushes, then it
might not be worth the trouble of re-creating the journal as a larger
size. It's unlikely to hurt anything, but I wou
On Mon, 31 Jul 2006, Wil Cooley wrote:
How big is your journal? I have instructions for determining the size
here, because it's non-obvious:
http://nakedape.cc/wiki/PlatformNotes_2fLinuxNotes
(BTW, you can drop the 'defaults' from the entry in your fstab;
'defaults' exists to fill the column
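For anyone who wants to check the journal size without the wiki: on ext3 the journal normally lives in reserved inode 8, so its size can be read with debugfs, and newer e2fsprogs report it via dumpe2fs as well. A sketch against a scratch image file (sizes are placeholders; assumes e2fsprogs is installed):

```shell
#!/bin/sh
# Build a small scratch ext3 image so no real device is needed.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=64 2>/dev/null
mke2fs -q -j -F "$img"    # -j creates an ext3 journal

# The journal lives in reserved inode 8; its Size field is the journal size.
journal_stat=$(debugfs -R "stat <8>" "$img" 2>/dev/null | grep -w Size)
echo "$journal_stat"

# Newer dumpe2fs versions also report the journal in the superblock header.
dumpe2fs -h "$img" 2>/dev/null | grep -i journal

rm -f "$img"
```

On a real system you would point debugfs/dumpe2fs at the block device itself (read-only) rather than an image file.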
Okay, okay, I just can't *NOT* say something here :)
First, I disagree with all the statements below. Cyrus CAN run in an
Active/Active mode, you CAN have multiple servers reading and writing to
the same files, and clustering IS a good way to achieve HA/DR/BC in a
Cyrus environment. Why do I sa
On Fri, 2006-07-28 at 15:33 -0700, Andrew Morgan wrote:
> On Fri, 28 Jul 2006, Rich Graves wrote:
>
> > My question: So is *anyone* here happy with Cyrus on ext3? We're a small
> > site, only 3200 users, 246GB mail. I'd really rather not try anything more
> > exotic for supportability reasons, b
At 4:18 PM -0400 7/28/06, John Madden wrote:
> Sorry, please bear with my ignorance, I'm not very informed about NFS,
> but what's wrong with locking against a real block device?
NFS is a file sharing protocol that doesn't provide full locking
semantics the way block devices do.
Has Cyrus be
At 11:49 PM +0200 7/28/06, Pascal Gienger wrote:
In the Apple case we need to distinguish the Apple Xsan hard-disk
chassis from the Xsan software. The Xsan software seems to give you a
special filesystem for SAN use (at least I read this on their webpage).
Let me dissect this a bit.
The Xserve RA
Michael--
One of the major problems you'd run into is /var/lib/imap, the config
directory. It contains, among other things, a Berkeley DB of
information about the mail store. GFS, Lustre, and other cluster
filesystems do file-level locking; in order to properly read and write
to the BDB backend,
Pascal Gienger wrote:
David Korpiewski <[EMAIL PROTECTED]> wrote:
I spent about 6 months fighting with Apple XSAN and Apple OSX mail to
try to create a redundant cyrus mail cluster. First of all, don't try
it, it is a waste of time. Apple states that mail on an XSAN is not
supported.
The re
Hi Michael!
As already said in this thread: Cyrus cannot share its spool.
No two Cyrus instances can use the same spool, databases, and lock files.
For load balancing you can use a murder setup and for HA you can use
replication.
Best,
Daniel
Michael Menge wrote:
> Hi,
>
> Quoting Pascal Gienger
Hi,
Quoting Pascal Gienger <[EMAIL PROTECTED]>:
I would NEVER suggest mounting the cyrus mail spool via NFS; locking is
important, and for these crucial things I like to have a real block
device with a real filesystem, so SANs are OK to me.
Does anyone use Lustre as the Cyrus mail spool? Would
Andrew Morgan wrote:
> On Fri, 28 Jul 2006, Rich Graves wrote:
>
>> My question: So is *anyone* here happy with Cyrus on ext3? We're a
>> small site, only 3200 users, 246GB mail. I'd really rather not try
>> anything more exotic for supportability reasons, but I'm getting
>> worried that our plann
-- Rich Graves <[EMAIL PROTECTED]> is rumored to have mumbled on 28
July 2006 15:52:17 -0500 regarding Re: High availability email server...:
My question: So is *anyone* here happy with Cyrus on ext3?
Yes. We use it on a SAN with a 800 GB partition for /var/spool/imap.
--
Sebastian Ha
On Fri, 28 Jul 2006, Rich Graves wrote:
My question: So is *anyone* here happy with Cyrus on ext3? We're a small
site, only 3200 users, 246GB mail. I'd really rather not try anything more
exotic for supportability reasons, but I'm getting worried that our planned
move from Solaris 9/VxFS to RH
Fabio Corazza wrote:
Rich Graves wrote:
Clustered filesystems don't make any sense for Cyrus, since the
application itself doesn't allow simultaneous read/write. Just use a
normal journaling filesystem and fail over by mounting the FS on the
backup server. Consider replication such as DRBD or pr
"David S. Madole" <[EMAIL PROTECTED]> wrote:
That's just not true as a general statement. SAN is a broad term that
applies to much more than just farming out block devices. Some of the
more sophisticated SANs are filesystem-based, not block-based. This
allows them to implement more advanced func
Rich Graves wrote:
> Clustered filesystems don't make any sense for Cyrus, since the
> application itself doesn't allow simultaneous read/write. Just use a
> normal journaling filesystem and fail over by mounting the FS on the
> backup server. Consider replication such as DRBD or proprietary SAN
>
Clustered filesystems don't make any sense for Cyrus, since the
application itself doesn't allow simultaneous read/write. Just use a
normal journaling filesystem and fail over by mounting the FS on the
backup server. Consider replication such as DRBD or proprietary SAN
replication if you feel y
On Jul 28, 2006, at 1:40 PM, Pascal Gienger wrote:
So if Apple says that Xsan does not handle many files they admit
that their HFS+ file system is crap for many small files.
This is completely untrue. Xsan, although branded by Apple, is not
completely an Apple product. ADIC makes StorNext
> Sorry, please bear with my ignorance, I'm not very informed about NFS,
> but what's wrong with locking against a real block device?
NFS is a file sharing protocol that doesn't provide full locking
semantics the way block devices do.
> There are file systems like GFS that have been written for t
Pascal Gienger wrote:
> There are techniques to handle these situations - for xfs (as an
> example) consider having *MUCH* RAM in your machine and always mount it
> with logbufs=8.
Is XFS so RAM intensive?
> I would NEVER suggest mounting the cyrus mail spool via NFS, locking is
> important and f
> From: Pascal Gienger
>
> STOP!
> The capability to handle small files efficiently is related to the
> filesystem carrying the files and NOT to the physical and logical
> storage media (block device) under it.
>
> A SAN is a network where physical and logical block devices are shared
> between
> The capability to handle small files efficiently is related to the
> filesystem carrying the files and NOT to the physical and logical storage
> media (block device) under it.
Not necessarily true. All sorts of factors about the SAN, such as block
size, stripe size, RAID level(s), number of s
David Korpiewski <[EMAIL PROTECTED]> wrote:
I spent about 6 months fighting with Apple XSAN and Apple OSX mail to try
to create a redundant cyrus mail cluster. First of all, don't try it, it
is a waste of time. Apple states that mail on an XSAN is not supported.
The reason is that it simply wo
> Subject: Re: High availability email server...
>
> Chad--
>
> We've put /var/lib/imap and /var/spool/imap on a SAN and have
> two machines -- one active, and one hot backup. If the
> active server fails, the other mounts the storage and takes
> over. This is not yet in
Chris St. Pierre wrote:
We've put /var/lib/imap and /var/spool/imap on a SAN and have two
machines -- one active, and one hot backup. If the active server
fails, the other mounts the storage and takes over. This is not yet
Also consider /var/spool/{mqueue,clientmqueue,postfix}.
Depending on
David Korpiewski wrote:
I spent about 6 months fighting with Apple XSAN and Apple OSX mail to
try to create a redundant cyrus mail cluster. First of all, don't try
it, it is a waste of time. Apple states that mail on an XSAN is not
supported. The reason is that it simply won't run. The Xsa
I spent about 6 months fighting with Apple XSAN and Apple OSX mail to
try to create a redundant cyrus mail cluster. First of all, don't try
it, it is a waste of time. Apple states that mail on an XSAN is not
supported. The reason is that it simply won't run. The Xsan can't
handle the large
Chad--
We've put /var/lib/imap and /var/spool/imap on a SAN and have two
machines -- one active, and one hot backup. If the active server
fails, the other mounts the storage and takes over. This is not yet
in production, but it's a pretty simple setup and can be done without
running any bleeding
Chad A. Prey wrote:
OK...I'm searching for strategies to have a "realtime" email backup in
the event of backend failure. We've been running cyrus-imap for about a
year and a half with incredible success. Our failures have all been due
to using junky storage.
One idea is to have a continuous rsyn
OK...I'm searching for strategies to have a "realtime" email backup in
the event of backend failure. We've been running cyrus-imap for about a
year and a half with incredible success. Our failures have all been due
to using junky storage.
One idea is to have a continuous rsync of the cyrus /var/sp