Interesting, I have some folders with ~100'000 messages and Cyrus handles
them very nicely. Did you say you have a problem with 10'000 messages in a
mailbox?
And I dealt with a user the other day who had 280,000 messages in their inbox.
The filesystem and Cyrus were fine, though our web interface was a bit slow.
On Tue, Jan 30, 2007 at 02:27:24PM -0800, Tom Samplonius wrote:
> It could be filesystem and kernel related. ext2/3 has historically
> not been good at this, but is better now. ufs is actually pretty
> good, but mainly because BSD kernels cache so much vnode data, it
> does not matter if
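For anyone who wants to gauge this on their own spool, a quick timing sketch like the one below (Python purely for illustration; the default path is an example, not anyone's real spool) gives a rough feel for how a filesystem handles a directory with a lot of files:

#!/usr/bin/env python3
# Rough timing of a full directory scan, e.g. a big Cyrus spool folder.
# The default path below is only an example; pass any directory with many
# files as the first argument.
import os
import sys
import time

def time_scan(path):
    start = time.monotonic()
    count = 0
    with os.scandir(path) as entries:
        for _ in entries:
            count += 1
    return count, time.monotonic() - start

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/var/spool/imap/user/test"
    n, secs = time_scan(target)
    print(f"{n} entries in {target}: {secs:.2f}s")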
On Tue, Jan 30, 2007 at 11:22:55AM -0500, Joel Nimety wrote:
> Janne Peltonen wrote:
> > On Tue, Jan 30, 2007 at 11:01:03AM +0100, Simon Matter wrote:
> >>> This might be a problem, since we have some users that really have 1
> >>> messages in their INBOX. Although it seems that Cyrus itself ca
- "Janne Peltonen" <[EMAIL PROTECTED]> wrote:
> On Tue, Jan 30, 2007 at 11:01:03AM +0100, Simon Matter wrote:
> > > This might be a problem, since we have some users that really have 1
> > > messages in their INBOX. Although it seems that Cyrus itself cannot cope
> > > with this either
On Mon, 29 Jan 2007, Tom Samplonius wrote:
- "Bron Gondwana" <[EMAIL PROTECTED]> wrote:
On Fri, Jan 26, 2007 at 12:20:15PM -0800, Tom Samplonius wrote:
* the system monitoring scripts do a 'du -s' on the sync directory every
2 minutes and store the value in a database so our status com
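A minimal sketch of that kind of monitor, assuming a cron-style run and an SQLite store (both are illustrative choices, not the setup described above):

#!/usr/bin/env python3
# Record the size of the sync directory, roughly what "du -s every two
# minutes into a database" amounts to. The paths and the SQLite schema are
# assumptions for the example, not the poster's actual setup.
import sqlite3
import subprocess
import time

SYNC_DIR = "/var/imap/sync"            # assumed location of the sync directory
DB_PATH = "/var/lib/sync-monitor.db"   # assumed location of the stats database

def sync_dir_kbytes(path):
    # "du -sk" prints "<kbytes>\t<path>"; take the first field.
    out = subprocess.run(["du", "-sk", path], capture_output=True,
                         text=True, check=True).stdout
    return int(out.split()[0])

def record(db, kbytes):
    db.execute("CREATE TABLE IF NOT EXISTS sync_size (ts INTEGER, kbytes INTEGER)")
    db.execute("INSERT INTO sync_size VALUES (?, ?)", (int(time.time()), kbytes))
    db.commit()

if __name__ == "__main__":
    with sqlite3.connect(DB_PATH) as db:
        record(db, sync_dir_kbytes(SYNC_DIR))

Run from cron every couple of minutes, the sync_size table makes it easy to see whether the replica is keeping up or the sync directory is growing without bound.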
On Tue, Jan 30, 2007 at 11:33:20AM +0200, Janne Peltonen wrote:
> > And GFS 6.1 (current version) has some issues with large directories:
> >
> > https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=214239
>
> This might be a problem, since we have some users that really have 1
> messages i
Janne Peltonen wrote:
> On Tue, Jan 30, 2007 at 11:01:03AM +0100, Simon Matter wrote:
>>> This might be a problem, since we have some users that really have 1
>>> messages in their INBOX. Although it seems that Cyrus itself cannot cope
>>> with this either... in our current, non-clustering se
On Tue, Jan 30, 2007 at 11:01:03AM +0100, Simon Matter wrote:
> > This might be a problem, since we have some users that really have 1
> > messages in their INBOX. Although it seems that Cyrus itself cannot cope
> > with this either... in our current, non-clustering setup. But then, it's
> > an
> On Mon, Jan 29, 2007 at 04:14:33PM -0800, Tom Samplonius wrote:
>>
>> - "Simon Matter" <[EMAIL PROTECTED]> wrote:
>> >
>> > Believe it or not, it works and has been confirmed by several people on
>> > the list using different shared filesystems (VeritasFS and Tru64 come to
>> > mind).
On Sat, Jan 27, 2007 at 04:03:18PM +1100, Bron Gondwana wrote:
> On Fri, Jan 26, 2007 at 12:20:15PM -0800, Tom Samplonius wrote:
> > - Wesley Craig <[EMAIL PROTECTED]> wrote:
> [...]
> > > For your situation, Janne, you might want to explore sharing the sync
> > > directory. sync_client and s
On Mon, Jan 29, 2007 at 04:14:33PM -0800, Tom Samplonius wrote:
>
> - "Simon Matter" <[EMAIL PROTECTED]> wrote:
> >
> > Believe it or not, it works and has been confirmed by several people on
> > the list using different shared filesystems (VeritasFS and Tru64 come to
> > mind). In one thing
On Mon, Jan 29, 2007 at 04:04:36PM -0800, Tom Samplonius wrote:
> sync_client prints errors from time to time, but most seem harmless. It
> certainly does not print anything like "Exiting..." when it decides to quit.
> I don't really know which log lines are bad and which aren't. What do you conside
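As a very rough triage aid, one generic way to see what sync_client is logging is to tally syslog lines by keyword; the log path and keyword list below are assumptions, and the actual messages vary between Cyrus versions:

#!/usr/bin/env python3
# Tally syslog lines mentioning sync_client by a few generic severity
# keywords. The log path and keywords are assumptions; real messages differ
# between Cyrus versions.
import collections

LOG = "/var/log/maillog"                  # assumed syslog destination
KEYWORDS = ("fatal", "error", "fail", "retry")

def tally(path):
    counts = collections.Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            if "sync_client" not in line:
                continue
            low = line.lower()
            for kw in KEYWORDS:
                if kw in low:
                    counts[kw] += 1
                    break
            else:
                counts["other"] += 1
    return counts

if __name__ == "__main__":
    for kw, n in tally(LOG).most_common():
        print(f"{kw:8s} {n}")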
- "Simon Matter" <[EMAIL PROTECTED]> wrote:
>
> Believe it or not, it works and has been confirmed by several people on
> the list using different shared filesystems (VeritasFS and Tru64 come to
> mind). In one thing you are right: it doesn't work with BerkeleyDB.
> Just switch all your BDB'
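For reference, the usual route is cvt_cyrusdb with the server stopped. A sketch of what the conversion loop might look like follows, but the config directory and the list of BDB-backed databases are assumptions, so check cvt_cyrusdb(8) for your version before running anything like this:

#!/usr/bin/env python3
# Sketch of converting Cyrus databases from BerkeleyDB to skiplist with
# cvt_cyrusdb. The config directory and the database list are assumptions;
# stop the server and verify cvt_cyrusdb(8) for your version before doing
# this for real.
import os
import subprocess

CONFIGDIR = "/var/lib/imap"                      # assumed Cyrus configdirectory
DATABASES = ["mailboxes.db", "annotations.db"]   # assumed BDB-backed databases

for name in DATABASES:
    old = os.path.join(CONFIGDIR, name)
    new = old + ".skiplist"
    # cvt_cyrusdb <old file> <old format> <new file> <new format>
    subprocess.run(["cvt_cyrusdb", old, "berkeley", new, "skiplist"], check=True)
    print(f"converted {old} -> {new}; move it into place and update imapd.conf")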
- "Bron Gondwana" <[EMAIL PROTECTED]> wrote:
> On Fri, Jan 26, 2007 at 12:20:15PM -0800, Tom Samplonius wrote:
> > - Wesley Craig <[EMAIL PROTECTED]> wrote:
> > > Close. imapd, pop3d, lmtpd, and other processes write to the log.
> > > The log is read by sync_client. This merely tell
On Fri, Jan 26, 2007 at 12:20:15PM -0800, Tom Samplonius wrote:
> - Wesley Craig <[EMAIL PROTECTED]> wrote:
> > Close. imapd, pop3d, lmtpd, and other processes write to the log.
> > The log is read by sync_client. This merely tells sync_client what
> > (probably) has changed. sync_clien
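The shape of that flow is easy to sketch (the record format and path below are invented for the example; Cyrus's real sync log format is different): writers append a hint per changed mailbox, and a sync_client-like reader periodically rotates the log away and works through it.

#!/usr/bin/env python3
# Toy illustration of the log-driven flow described above: imapd/pop3d/lmtpd
# style writers append a hint per (probably) changed mailbox, and a
# sync_client style reader rotates the log and replays it. The record format
# and path are invented for the example.
import os

SYNC_LOG = "/tmp/sync-log-example"   # invented path for the example

def note_change(mailbox):
    # Writer side: append a one-line hint that a mailbox (probably) changed.
    with open(SYNC_LOG, "a") as log:
        log.write(f"MAILBOX {mailbox}\n")

def run_once():
    # Reader side: rotate the log away, then replicate everything listed in it.
    if not os.path.exists(SYNC_LOG):
        return
    work = SYNC_LOG + ".run"
    os.rename(SYNC_LOG, work)        # new hints keep going to a fresh log
    with open(work) as log:
        mailboxes = {line.split(None, 1)[1].strip() for line in log if line.strip()}
    for mbox in sorted(mailboxes):
        print(f"would replicate {mbox} to the replica")   # stand-in for real work
    os.remove(work)

if __name__ == "__main__":
    note_change("user.janne")
    note_change("user.janne")        # duplicate hints collapse into one unit of work
    run_once()                       # a real reader would loop with a delay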
>
> - Wesley Craig <[EMAIL PROTECTED]> wrote:
>> On Jan 26, 2007, at 3:07 AM, Tom Samplonius wrote:
>> > - Janne Peltonen <[EMAIL PROTECTED]> wrote:
>> >> As a part of our clustering Cyrus system, we are considering using
>> >> replication to prevent a catastrophe in case the volume used by
- Wesley Craig <[EMAIL PROTECTED]> wrote:
> On Jan 26, 2007, at 3:07 AM, Tom Samplonius wrote:
> > - Janne Peltonen <[EMAIL PROTECTED]> wrote:
> >> As a part of our clustering Cyrus system, we are considering using
> >> replication to prevent a catastrophe in case the volume used by the
>
On Jan 26, 2007, at 3:07 AM, Tom Samplonius wrote:
- Janne Peltonen <[EMAIL PROTECTED]> wrote:
As a part of our clustering Cyrus system, we are considering using
replication to prevent a catastrophe in case the volume used by the
cluster gets corrupted. (We'll have n nodes each accessing the
On Fri, Jan 26, 2007 at 12:07:49AM -0800, Tom Samplonius wrote:
>
> - Janne Peltonen <[EMAIL PROTECTED]> wrote:
> > Hi!
> >
> > As a part of our clustering Cyrus system, we are considering using
> > replication to prevent a catastrophe in case the volume used by the
> > cluster gets corrupted
- Janne Peltonen <[EMAIL PROTECTED]> wrote:
> Hi!
>
> As a part of our clustering Cyrus system, we are considering using
> replication to prevent a catastrophe in case the volume used by the
> cluster gets corrupted. (We'll have n nodes each accessing the same GFS,
> and yes, it can be done,
Hi!
As a part of our clustering Cyrus system, we are considering using
replication to prevent a catastrophe in case the volume used by the
cluster gets corrupted. (We'll have n nodes each accessing the same GFS,
and yes, it can be done, see previous threads on the subject.)
Now the workings of th