Hi,
Thanks to all who responded to my situation below. Here is what I think happened.


Sometime late Saturday evening a server started having soft failures on a memory card. As a result of the failures, syslog messages were generated that were both stored locally and forwarded on to our central syslog server. The central syslog server then decides what to do with those messages based on the type and severity of each message.
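The forwarding itself is nothing exotic, just ordinary syslog.conf selectors on each host. A simplified sketch (from memory, not the actual config; "loghost" is a placeholder, and Solaris wants tabs between the selector and the action):

# keep a local copy
*.err        /var/adm/messages
# relay anything of err severity or worse to the central syslog server
*.err        @loghost

The step that mails selected messages out to the sysadmins happens on the central server and is a separate piece, not shown here.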
In this particular case it forwarded them to 4 sysadmins. The broken server was generating messages at a rate of 5-10 per second. As it turns out, all 4 of us are spread among the new Postoffices. I believe the servers did the best they could, but eventually a combination of the backlog feeding our mailboxes (the system will only deliver one message at a time per mailbox, but will deliver into multiple mailboxes at the same time) and our quotas filling up left a large number of processes running on each of the new Postoffices. Eventually the server thought it was out of space, and may actually have been out of space; that didn't show up in the system logs, so I cannot say for sure. As a result of the space shortage, the mailboxes database became corrupted on 2 servers and the master daemon on another froze up.
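For anyone who ends up in the same state: with the master stopped, the stock Cyrus tools are where I would start when repairing the databases. Roughly (paths are from memory and will differ per install; move the corrupt mailboxes.db aside before the undump):

ctl_cyrusdb -r                              # run database recovery on the db environment
ctl_mboxlist -d > /var/tmp/mboxlist.dump    # dump the mailboxes list to flat text
ctl_mboxlist -u < /var/tmp/mboxlist.dump    # rebuild mailboxes.db from the dump

I'm not claiming that is a complete recovery procedure, just the first thing to try.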
The high rate of mail ran for over 24 hours before anything had a problem.
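For scale, if the 5-10 per second rate held for the whole day, the arithmetic works out to:

echo '5 * 60 * 60 * 24' | bc      # 432000
echo '10 * 60 * 60 * 24' | bc     # 864000

i.e. somewhere around 430,000-860,000 messages into each of our mailboxes over those 24 hours, which is consistent with the "over 300,000 messages" figure in my original note below.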
Jim



Hi,
I have an interesting problem. Over the weekend our syslog forwarder went berserk, generating over 300,000 messages to about 6 people. This morning our three new Cyrus systems went belly up (yes, that is a technical term); actually, the master daemon seemed to eventually freeze up. The only real error messages I can find are these:
Mar 10 00:18:16 postoffice8 lmtpd[27393]: [ID 729713 local6.error] DBERROR: opening /opt/cyrus/mailboxes.db: Not enough space
Mar 10 08:04:46 postoffice8 pop3d[2183]: [ID 729713 local6.error] DBERROR: opening /opt/cyrus/mailboxes.db: Not enough space
Mar 10 08:12:58 postoffice8 imapd[2489]: [ID 729713 local6.error] DBERROR: opening /opt/cyrus/mailboxes.db: Not enough space
Mar 10 08:14:05 postoffice8 imapd[2731]: [ID 729713 local6.error] DBERROR: opening /opt/cyrus/mailboxes.db: Not enough space
Mar 10 08:27:59 postoffice8 imapd[3951]: [ID 729713 local6.error] DBERROR: opening /opt/cyrus/mailboxes.db: Not enough space


Now, I've been running older versions of Cyrus (1.5.19) for years at 300,000 messages a day with no trouble. I don't believe space is really an issue; here is a df -k from one of the systems.

Filesystem            kbytes    used   avail capacity  Mounted on
/dev/md/dsk/d0       1984564  904568 1020460    47%    /
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
mnttab                     0       0       0     0%    /etc/mnttab
/dev/md/dsk/d1        962573  255248  649571    29%    /var
swap                 28642528      32 28642496     1%    /var/run
swap                 28655440   12944 28642496     1%    /tmp
/dev/md/dsk/d4       5040814    8134 4982272     1%    /users
/dev/md/dsk/d3       5040814  452439 4537967    10%    /opt
/dev/vx/dsk/po8_dg01/logvol01
                     5160542  115891 4993046     3%    /logs
/dev/vx/dsk/po8_dg01/mqueuevol01
                     10321884    4986 10213680     1%    /mqueue
/dev/vx/dsk/po8_dg01/cyrus_data_vol01
                     41287586  126222 40748489     1%    /opt/cyrus
/dev/vx/dsk/po8_dg01/sendmailvol01
                     41287586  603402 40271309     2%    /opt/sendmail_vol
/dev/vx/dsk/po8_dg01/cyrus_app_vol01
                     41287586  147526 40727185     1%    /opt/cyrus_vol
/dev/vx/dsk/po8_dg01/spoolvol01
                     103218991  679107 101507695     1%    /var/spool/mail
swap                 28642640     144 28642496     1%    /opt/cyrus/proc
/dev/vx/dsk/po8_dg01/appvol01
                     20643785   54397 20382951     1%    /applications
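One thing I have not completely ruled out is swap, since /var/run, /tmp, and /opt/cyrus/proc above are all swap-backed. The quick checks on Solaris are:

swap -s     # summary of allocated/reserved/available swap
swap -l     # per-device swap listing

The swap lines in the df above show plenty available now, but that doesn't tell me what things looked like overnight when the errors started.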


This is all with Cyrus 2.1.11 on a V880 with 32GB of memory, running Solaris 8 and Sendmail 8.12.8. Anyone seen this before? Thanks.
Jim



