On Thu, 2010-05-13 at 05:50 +1000, Bron Gondwana wrote:
> I've got no problem with adding this upstream by the way.
That would be great :)
Andre
Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyr
Hello
I've been keeping a list of configuration changes to set up Cyrus in an
active-active two-node cluster (I'm using DRBD and OCFS2 for that).
Here's the list of things I have so far:
* Don't use BerkeleyDB (set all databases which default to berkeley to
skiplist);
* Under configdirectory,
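For the first item, the overrides can be collected in imapd.conf. This is a sketch only; the exact set of *_db options varies with the Cyrus version (these names are from the 2.3 era), so check the imapd.conf man page for your release:

```
# imapd.conf fragment (illustrative): force skiplist everywhere a
# database would otherwise default to berkeley
annotation_db: skiplist
duplicate_db: skiplist
mboxlist_db: skiplist
ptscache_db: skiplist
seenstate_db: skiplist
subscription_db: skiplist
tlscache_db: skiplist
```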
On Tue, 2010-05-11 at 09:12 -0700, Vincent Fox wrote:
> We symlink our proc and log directories out to
> TMPFS directories local to each server. There is
> little sense to keeping proc & logs on disk IMO.
Do you know if the log directory is used even when none of the databases
is set to berkeley?
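The symlink trick Vincent describes can be sketched as below. In production CONFDIR would be the shared configdirectory (e.g. /var/lib/imap) and TMPFS a node-local tmpfs mount such as /dev/shm/cyrus; the mktemp defaults here only keep the sketch self-contained:

```shell
# Relocate the Cyrus proc/ and log/ directories to node-local tmpfs.
CONFDIR=${CONFDIR:-$(mktemp -d)}   # in production: the configdirectory
TMPFS=${TMPFS:-$(mktemp -d)}       # in production: a tmpfs mount

mkdir -p "$TMPFS/proc" "$TMPFS/log" "$CONFDIR"
for d in proc log; do
  rm -rf "$CONFDIR/$d"               # drop the on-disk directory
  ln -s "$TMPFS/$d" "$CONFDIR/$d"    # point Cyrus at node-local storage
done
```

Each node gets its own proc/ and log/, which also avoids two cluster nodes fighting over the same lock files on the shared filesystem.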
On Tue, 2010-05-11 at 09:16 -0400, Ciro Iriarte wrote:
> By the way, did you test your setup under considerable load? I was
> really interested in this kind of solution; looking at various docs
> and mail posts, the main concern seems to be file locking
I've tested DRBD and OCFS2 under heavy load,
On Tue, 2010-05-11 at 08:57 -0400, Ciro Iriarte wrote:
> LVS is quite simple to set up these days; there's no need to patch the
> kernel with any mainstream distribution...
Yeah, that was my second option, as I already use it for other services.
I was just wondering if I could come up with something
Hello
I'm setting up a two-machine cyrus cluster using OCFS2 over DRBD. The
setup is working fine, and I'm now considering the load balancing
options I have.
I believe the simplest option would be to simply rely on DNS load
balancing. However, for this to work, I need to consider what happens
whe
Hello.
This is a new release of Ruby/ManageSieve, adding TLS support.
Ruby/ManageSieve is a pure-ruby library for the MANAGESIEVE
protocol. It also includes the command-line ``sievectl'' utility for
managing Sieve scripts.
Please check the Ruby/ManageSieve homepage for documentation and
downloads:
Asking differently, is it worth putting the spool metadata on a separate
partition, considering that the spool itself is already spread among
several partitions, and thus I/O should be split too?
Has anyone tried both alternatives?
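For reference, Cyrus 2.3+ can do this split natively via metapartitions, without a separate mount per spool partition's metadata. A hypothetical imapd.conf fragment (paths are placeholders):

```
# Keep per-mailbox metadata on its own partition, apart from the spool
partition-default: /var/spool/imap
metapartition-default: /var/spool/imapmeta
metapartition_files: header index cache expunge squat
```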
Thanks,
Andre
On Thu, 2008-01-24 at 10:21 -0200, Andre Nathan
Hello
I'm planning a new Cyrus install, for which I'll use a multiple-partition
scheme. The idea so far is to have a partition for /
and /var/lib/imap (call it the metadata partition), and multiple
partitions for the spool data. The goal is to improve I/O by keeping
the /var/lib/imap data on a
On Tue, 2007-09-04 at 10:19 +0100, David Carter wrote:
> I also gather that the xfs repair tools need an extraordinary amount of
> memory to run on large file systems:
>
>http://oss.sgi.com/archives/xfs/2005-08/msg00045.html
I had this problem recently. Newer versions of xfsprogs fix it:
ht
On Tue, 2007-03-06 at 08:57 -0500, John Madden wrote:
> Are you connecting to lmtpd over TCP or something? I haven't seen this
> behavior with Postfix and a UNIX socket, at least. But still, I'd
> rather have Postfix defer the connection than have huge IO wait
> queues. ...If nothing else, think
On Tue, 2007-03-06 at 13:11 +0100, [EMAIL PROTECTED] wrote:
> You should always limit your MTA(s) (Postfix) LMTP clients to match the
> max number at your LMTP Server (Cyrus). Be sure to use a separate
> transport for lmtp and use the lmtp_connection_cache and maybe raise
> the max_use value. Wi
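The Postfix side of that advice can be sketched as follows. The transport name, socket path, and numbers are placeholders, not recommendations; the parameter names are from the Postfix 2.x documentation:

```
# master.cf: a dedicated LMTP transport for Cyrus
cyrus-lmtp unix - - n - - lmtp

# main.cf
mailbox_transport = cyrus-lmtp:unix:/var/imap/socket/lmtp
# cap concurrent deliveries to roughly the number of lmtpd processes
cyrus-lmtp_destination_concurrency_limit = 20
# reuse LMTP connections instead of reconnecting per message
lmtp_connection_cache_on_demand = yes
# let each delivery agent process handle more deliveries before exiting
max_use = 500
```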
On Tue, 2007-03-06 at 15:00 +1100, Rob Mueller wrote:
> Yep, there's obviously a 2 sided limit here.
>
> Too few lmtpds and postfix won't be able to deliver incoming mail fast
> enough, and thus the mail queue on the postfix side will build up.
I'll try some config tweaking here... I guess it'll
On Tue, 2007-03-06 at 09:13 +1100, Rob Mueller wrote:
> I've never seen over 100%, and it doesn't seem to make sense, so I'm
> guessing it's a bogus value.
Yeah, I talked to the Coraid guys and they told me iostat reports
incorrect values for AoE.
> > avg-cpu: %user %nice %system %iowait %i
On Sat, 2007-03-03 at 14:23 +1100, Rob Mueller wrote:
> %util - Percentage of CPU time during which I/O requests were issued to the
> device (bandwidth utilization for the device). Device saturation occurs when
> this value is close to 100%.
Can values way above 100% be trusted? If so, it's pret
On Sat, 2007-03-03 at 13:14 +0100, Simon Matter wrote:
> 1) Try to put different kind of data (spool, meta databases) on
> independent storage (which means independent paths, disks, SAN
> controllers). For the small things like cyrus databases, putting them on
> separate local attached SCSI/SATA di
On Fri, 2007-03-02 at 17:07 -0800, Andrew Morgan wrote:
> I doubt this is the problem you are seeing. It sounds to me like you may
> be hitting the limits of your storage system. I would be fascinated to
> see the output of the command "iostat -x sdb 5" (where 'sdb' is the device
> you have cy
On Fri, 2007-03-02 at 12:59 -0800, Andrew Morgan wrote:
> You'll lose the ability to effectively use vacation/out-of-office
> messages. The vacation system uses deliver.db to determine which person
> has already received the vacation message, to prevent multiple messages
> from being sent.
But
On Fri, 2007-03-02 at 09:21 -0300, Andre Nathan wrote:
> Is there a cyrus version where specifying "duplicatesuppression: 0"
> actually prevents the duplicate check and thus the database locking
> issues? I could upgrade to a newer version too, no problem.
I did that, and
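For the record, the knob in question is a one-line imapd.conf setting, though as this thread shows it does not stop all deliver.db traffic (sieve keeps its own duplicate tracking there):

```
# imapd.conf: disable the duplicate-delivery check
duplicatesuppression: 0
```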
On Thu, 2007-03-01 at 16:41 -0600, Blake Hudson wrote:
> > Actually it's still happening :( I read in the archives that the only
> > way to actually avoid the locking in deliver.db is to also disable
> > sieve, is that true?
So, I'm willing to recompile cyrus and comment out the calls to
duplicate_check
On Thu, 2007-03-01 at 16:41 -0600, Blake Hudson wrote:
> What format is your deliver.db file?
>
> I've had success changing mine to skiplist on more heavily utilized servers.
Skiplist too. I've moved away from berkeleydb long ago after all the
issues I had...
Andre
On Thu, 2007-03-01 at 14:00 -0300, Andre Nathan wrote:
> Well, it's funny, I still see the "duplicate_check" lines in the log,
> but watching the processes with strace I can't see the blocking
> behaviour anymore. I still do have heavy contention on some users'
On Thu, 2007-03-01 at 11:19 -0500, John Madden wrote:
> FWIW, I turned off duplicatesuppression to no avail -- lmtpd still locks
> and writes to /var/imap/deliver.db. ...So are you sure it's really
> turned off?
Well, it's funny, I still see the "duplicate_check" lines in the log,
but watching th
On Thu, 2007-03-01 at 10:38 -0300, Andre Nathan wrote:
> After postfix sends the ".", it takes lmtpd more than 5 minutes to send
> the "250 2.1.5 Ok" back (everything else on the lmtp conversation
> happens in the same second). Is this the time when lmtpd writes the
On Wed, 2007-02-28 at 17:26 -0300, Andre Nathan wrote:
> The last field in the "delays" field shows that the time out occurred
> after 600s trying to send the message to cyrus. Even when a timeout does
> not occur, the time for the message to be sent is around 100-300s.
A more
On Wed, 2007-02-28 at 16:22 -0600, Blake Hudson wrote:
> I would suggest starting by reviewing your memory usage (esp. swap) with
> top and disk usage with "iostat -x 3" (part of the sysstat package)
>
> More than likely you are running into problems in one of these areas.
Thanks, I'll have a loo
On Thu, 2007-03-01 at 09:25 +0100, [EMAIL PROTECTED] wrote:
> A few thousand lmtpd would be *way* too much because they all use the
> same I/O bottleneck (if you don't have partitions on different I/O
> paths). For a single I/O path i would recommend not more than some 10
> .. 20 concurrent lmtp
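On the Cyrus side, that ceiling is set with maxchild in cyrus.conf. A hypothetical SERVICES entry (socket path and numbers are placeholders; the Postfix concurrency limit should be set to match):

```
SERVICES {
  # other services elided
  lmtpunix cmd="lmtpd" listen="/var/imap/socket/lmtp" prefork=1 maxchild=20
}
```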
Hello
We have a cyrus server that runs under heavy load, and a few days ago it
started to show a behaviour where its lmtpd processes take a long time
to deliver messages sent from postfix. Below is an example of a postfix
log message, with the email address removed:
Feb 28 17:14:11 mta13 postfix/
Hello.
This is a new release of Ruby/ManageSieve with changes that have been on
CVS HEAD for some time now.
Ruby/ManageSieve is a pure-ruby library for the MANAGESIEVE
protocol. It also includes the command-line ``sievectl'' utility for
managing Sieve scripts.
Please check the Ruby/ManageSieve homepage
Hi Chris
On Mon, 2006-07-17 at 11:13 -0500, Chris St. Pierre wrote:
> Does anyone know of any documentation for Cyrus::SIEVE::managesieve?
> The man page is decidedly sparse. Reading through the source of
> sieveshell gives some idea, but not enough to start writing my own
> code. Thanks!
If yo
Hello
We are moving our current mail infrastructure to a storage system which
will store all of our mailboxes. Currently we have around 10 servers, so
their contents will have to be merged on the storage.
My current plan is to dump mailboxes.db from each server and then
regenerate it at the stor
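The dump/regenerate cycle described above would use ctl_mboxlist; a command sketch (run as the cyrus user, filenames illustrative, not runnable outside a Cyrus host):

```
# On each of the ~10 source servers: dump mailboxes.db to a text file
ctl_mboxlist -d > mailboxes-$(hostname).txt

# On the storage-backed server: load the concatenated dumps
cat mailboxes-*.txt | ctl_mboxlist -u
```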
On Mon, 24 Jan 2005, Marcel Karras wrote:
> As already mentioned: all works well except pop3d. It is reproducible,
> but I can't figure out why. How do you define debug_command to get gdb into
> the game? Just setting it to "/usr/bin/gdb" won't be useful to me
> because the only output I'm getting is:
Hello
On Fri, 22 Oct 2004, Andre Nathan wrote:
> This behaviour makes it impossible for mail accounting systems to deal
> with those sieve messages, because they would have no information about
> which user is doing the redirection or auto-response on the MTA logs.
Is there any
On Mon, 25 Oct 2004, Matt Bernstein wrote:
> That may be true, but a better solution might be either to encode this
> information somewhere else that the MTA can log (such as the Message-ID),
> or for Sieve to log it separately.
OK.. Do you think a patch similar to the one in
http://oss.dig
Hello
I sent this to the -devel list a few days ago, but didn't get
feedback...
Currently, sieve sends redirection messages without touching the "MAIL
FROM" envelope address. For vacation messages, the address is always set
as the null sender ("<>").
This behaviour makes it impossible for mail a
Hello
I'm setting up sieve vacation, and while it is working, the vacation
messages are sent using the null sender on the envelope "mail from".
I'm using cyrus 2.1.15. Below are the test script and the relevant
postfix logs:
if header :contains "from" "<[EMAIL PROTECTED]>" {
vacation :addresses
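A complete version of that kind of script, for context (addresses are placeholders):

```
require ["vacation"];

if header :contains "from" "example.com" {
    vacation :days 7
             :addresses ["user@example.com"]
             "I'm away from my email at the moment.";
}
```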
Hello
I'm setting up sieve, but our setup will probably make things a bit more
difficult:
Our SMTP servers (Postfix) only allow authenticated users to send
messages, and check whether the sender login matches the "MAIL FROM"
line of the SMTP protocol. The SMTP servers and IMAP/POP3 servers are
n
saying that the xfs_freeze
command works fine for them, so I wonder if there's any step I could be
missing.
Best regards and thanks in advance,
Andre Nathan