> A 500-user company can easily acquire an email archive of 2-5TB. I don't
> care how much the IO load of that archive server increases, but I'd like
> to reduce disk space utilisation. If the customer can stick to 2TB of
It would be interesting to measure the amount of duplication that is going
> How difficult or easy would it be to modify Cyrus to strip all
> attachments from emails and store them separately in files? In the
> message file, replace the attachment with a special tag which will point
> to the attachment file. Whenever the message is fetched for any reason,
> the original
Also check out the Fastmail auditlog patches.
http://cyrus.brong.fastmail.fm/
Rob
----- Original Message -----
From: "Dr. Tilo Levante"
To:
Sent: Friday, May 28, 2010 7:59 PM
Subject: Log all actions
>
> Cyrus Home Page: http://cyrusimap.web.cmu.edu/
> Cyrus Wiki/FAQ: http://cyrusimap.
>> For low bandwidth connections this could be useful but I don't know if
>> that's a typical case nowadays. Together with the IMAP IDLE command it
>> should be fine for mobile devices...
>>
>> [1] http://tools.ietf.org/html/rfc4978
>
> I thought that this was supported in 2.3.16.
It's definitely
>> > It's quite likely, that these mailboxes will grow to 50 or even
>> > more than
>> > 1M mails per mailbox.
>> >
>> > Does anybody have experience with such big mailboxes?
>>
>> Is the I/O cost of adding a message O(n), n being the number of
>> msgs
>> already in the mailbox, o
> it seems that running squatter nightly on all mailboxes takes too long for
> us. I'm thinking of splitting the mailboxes over different nights or
> doing
> the job over the weekend.
Are you using the new incremental mode David Carter added?
-i Incremental updates where squat indexes alre
> Client A: upload message to Inbox, gets UID 100
> At the same time, Client B: upload message to Inbox, gets UID 100
>
> You can't have two messages with the same UID.
>
> There's 3 solutions I can see:
>
> 1. MySQL solves this by having interleaving IDs on separate servers (eg.
> auto-incremen
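That MySQL-style scheme in (1) is easy to sketch. The class and numbers below are purely illustrative (nothing like this exists in cyrus), and it glosses over IMAP's requirement that UIDs ascend in the order messages were received:

```python
# Hypothetical sketch of MySQL-style interleaved IDs: each of k servers
# only allocates UIDs congruent to its own offset modulo k, so two
# masters can assign UIDs concurrently without ever colliding.
class UidAllocator:
    def __init__(self, offset, increment):
        self.next = offset
        self.increment = increment

    def allocate(self):
        uid = self.next
        self.next += self.increment
        return uid

a = UidAllocator(offset=1, increment=2)  # server A hands out 1, 3, 5, ...
b = UidAllocator(offset=2, increment=2)  # server B hands out 2, 4, 6, ...
print([a.allocate(), b.allocate(), a.allocate(), b.allocate()])  # [1, 2, 3, 4]
```

The collision problem goes away, but clients that assume UIDs reflect arrival order can still get confused when the two streams interleave.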
> i'm very surprised that there is no real official position from the
> cyrus-imap dev team against using cyrus in active/active cluster mode
I can't comment, but I guess they're busy.
> For several years now the messaging service has become very important, and
> clustering is the right way
> Well - it's theoretically possible. But I don't know anyone who's done
> it, and it has the potential to get ugly if you're delivering to the
> same mailboxes at each end. There's nothing I can see that would
> actually stop it working.
I think Bron failed to put sufficiently large warning si
> - cyrus-imapd-2.3.7 (from RHEL5/CentOS-5) with some minor patches in the
> popd (UUID format and an enhancement to the authentication - both
> shouldn't have any impact on the storage part)
As I'm sure others will mention, this is a quite old cyrus now with many
known bugs.
You should defini
> Also, does anyone know what this means for searches on material that has
> changed since the last squatter run? I have assumed, and hope, that the
> search procedure is something like this:
> search in the squatter index
> remove results referring to deleted items
> do an unindexed search on ite
> We also wanted to use -U 1 so we could be sure changes to imapd.conf
> would be used more quickly since there wouldn't be imapd procs hanging
> around with old settings.
FYI, another way of doing this without forcing -U 1 is to touch the imapd
executable file.
The cyrus master notices if the
Check out the "auditlog" patches we have here:
http://cyrus.brong.fastmail.fm/
It syslogs every message/folder create/delete in a fairly regular format, eg
things like:
auditlog: create sessionid=<...> mailbox=<...> uniqueid=<...>
auditlog: delete sessionid=<...> mailbox=<...> uniqueid=<...>
au
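If you want to consume those syslog lines programmatically, a minimal sketch is below. It assumes only the `key=<value>` shape visible in the lines quoted above; the real patched output may carry more fields:

```python
import re

# Example line following the auditlog format quoted above;
# the session/mailbox/uniqueid values here are made up.
line = "auditlog: create sessionid=<abc123> mailbox=<user.fri> uniqueid=<deadbeef>"

m = re.match(r"auditlog: (?P<action>\S+)\s+(?P<rest>.*)", line)
fields = dict(re.findall(r"(\w+)=<([^>]*)>", m.group("rest")))
print(m.group("action"), fields)
# create {'sessionid': 'abc123', 'mailbox': 'user.fri', 'uniqueid': 'deadbeef'}
```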
> I think the problem with apple mail client not displaying shared folders
> in the subscription list is well known ...
>
> My question is: can something be done on the cyrus side to work around its
> problems? Google seems to have implemented something ...
Use the "altnamespace" option.
You c
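For reference, the option goes in imapd.conf; a minimal fragment (check imapd.conf(5) for the exact syntax of your cyrus version):

```
# /etc/imapd.conf -- present shared and other-user folders outside the
# INBOX.* hierarchy, which some clients cope with much better
altnamespace: yes
```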
>> - IMAP protocol extensions (most needed thing would be to "idle" on
>> every folders, not just inbox)
>
> Yeah, good luck with that one. It's a pretty major "protocol extension",
> and everything's very folder centric. It would be a rather large SMOP
> (small matter of programming) for this.
>> I am blocked with websieve.pl vacation/out-of-office because when users
>> are entering accents the script fails with an error...
>>
>> Does anyone have a suggestion on how to make accents work?
>
> Yes, but I haven't committed it to CVS yet. I'm working on full
> UTF8 support in sieve scripts
> Our plan is to throw 12-16GB at it, with the purpose of vastly increasing
> the FS buffer cache (and decreasing I/O). Or, will that just be a waste
> of RAM?
>
> Some indications are that, yes, it does improve performance notably:
>
> http://blog.fastmail.fm/2007/09/21/reiserfs-bugs-32-bit-vs-6
> I have inherited a cyrus / postfix server, which is generally
> well-behaved.
> However, there is an intermittent error when delivering messages to a
> certain
> mailbox. It is infrequent, and generally the message is delivered on the
> user's next attempt in his MUA. The log is something l
> I would like to add a *lot* more storage so that we can increase our email
> quotas (currently 200MB per user). It seems like the proper way to scale
> up is to split the Cyrus metadata off and use some large SATA drives for
> the message files. I was considering adding a shelf of 1TB SATA dri
> Since I upgraded to 2.3.13 (Invoca RPM rev 4) I've been running into a
> mysterious replication bug. In some circumstances, creating a user with a
> three
> letter long username causes the sync master process to choke, on either
> signal
> 11 or 6. Like this:
Interestingly, we just encountered
>> We've found that splitting the data up into more volumes + more cyrus
>> instances seems to help as well because it seems to reduce overall
>> contention points in the kernel + software (eg filesystem locks spread
>> across multiple mounts, db locks are spread across multiple dbs, etc)
>
> Make
> Ext4, I never tried. Nor reiser3. I may have to, we will build a brand
> new
> Cyrus spool (small, just 5K users) next month, and the XFS unlink
> [lack of] performance worries me.
From what I can tell, all filesystems seem to have relatively poor unlink
performance and unlinks often cause
> Running multiple cyrus instances with different dbs ? How do we do that.
> I have seen the ultimate io-contention point is the mailboxes.db file.
> And that has to be single.
> Do you mean dividing the users to different cyrus instances. That is a
> maintenance issue IMHO.
As Bron said, yes it
> Now see, I've had almost exactly the opposite experience. Reiserfs seemed
> to
> start out well and work consistently until the filesystem reached a
> certain
> size (around 160GB, ~30m files) at which point backing it up would start
> to
> take too long and at around 180GB would take nearly
> There are /lots/ of (comparative) tests done: The most recent I could
> find with a quick Google is here:
>
> http://www.phoronix.com/scan.php?page=article&item=ext4_benchmarks
Almost every filesystem benchmark I've ever seen is effectively useless for
comparing what's best for a cyrus mail se
> I'd probably use imtest to connect, get the PID of the server process
> that I'm connected to, and then attach to that process with ktrace
> (or whatever) with timestamps enabled. Then I'd select the mailbox
> -- this is assuming that mutt is only issuing a select when it says
> "Selecting INBO
> We have a moderately sized Cyrus installation with 2 TB of storage
> and a few thousand simultaneous IMAP sessions. When one of the
> backup processes is running during the day, there's a noticable
> slowdown in IMAP client performance. When I start my `mutt' mail
> reader, it pauses for sever
We're pretty sure this is related to the timezone being represented as a
short name, as opposed to numeric format (+0400, etc.). The IMAP spec is
vague on whether or not this format should be accepted. I believe that this has
to do with the way the function from the C Library converts the
> Not out of my head, but we probably need to add a configuration
> setting in Ingo to switch between the old and new behavior. Please add a
> request on http://bugs.horde.org.
Might be worth trying to make it an imapd.conf option for cyrus as well to
choose between the old or new behaviour to
> Is this a bad joke or am I missing something? Sieve scripts of most
> non-English-speakers are intentionally broken due to a BC breaking
> change in a bugfix release version?
I'm afraid you're not missing anything. It bit us as well :(
Fortunately for us, there weren't that many people with no
> The mail is delivered to the INBOX (user.fri)
> The delivery is tried to user.fri.t&-APY-ster, NOT user.fri.t&APY-ster
>
> Any hints where the error occurs?
Yes.
Sieve scripts are in utf-8. You now have to use the true utf-8 name of the
folder in your sieve script, and it's automatically conv
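For anyone wanting to check the mapping themselves, here is a rough modified-UTF-7 encoder — a simplified sketch of RFC 3501 section 5.1.3, not production code, and it skips some corner cases:

```python
import base64

def imap_utf7_encode(s: str) -> str:
    # Minimal modified-UTF-7 encoder for illustration: runs of non-ASCII
    # characters become "&<base64 of UTF-16BE>-" (with "," for "/"),
    # and a literal "&" is escaped as "&-".
    out, buf = [], []

    def flush():
        if buf:
            b64 = base64.b64encode("".join(buf).encode("utf-16-be")).decode()
            out.append("&" + b64.rstrip("=").replace("/", ",") + "-")
            buf.clear()

    for ch in s:
        if 0x20 <= ord(ch) <= 0x7E:
            flush()
            out.append("&-" if ch == "&" else ch)
        else:
            buf.append(ch)
    flush()
    return "".join(out)

print(imap_utf7_encode("töster"))  # t&APY-ster
```

So the on-disk folder name for "töster" really is `t&APY-ster`; the extra `-` in `t&-APY-ster` is the escaping of a literal `&`, which is why the delivery in the quoted log goes to the wrong place.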
> Is it possible to run several cyrus imap instances (with different
> cyrus.conf
> and imapd.conf files) on the same server?
>
> I will like to have all related files for imap server A in one directory
> (/imapA) and all the related files for imap server B in another directory
> (/imapB). Is it
> I have a problem with a high memory consumption using cyrus-imap 2.2 on
> FreeBSD 7.0.
> Each imapd process consumes 30MB of RAM! I believe that this is not
> normal.
> Is there a way to adjust this?
This is probably fine.
cyrus mmaps lots of files, which often makes the process look b
> Moving the /var/spool/imap directories, and /var/lib/user/{}.seen
> files to the new server and reconstructing works fine except that all the
> mail shows up as "not read" on the new server.
The seen state is keyed on the mailbox "uniqueid", so if that changes, the
seen state becomes invalid.
> To get a tiny statistic we are going through all mailboxes and use
> GETANNOTATION to retrieve possible annotations, which is a time consuming
> process. GETANNOTATION does not like wildcards like LIST.
Yes it does.
Bah, seems the draft is up to -13 now, and they've actually changed the IMAP
> On current production platform the frontends use swap massively but the
> impact is far less than on the new platform.
It's not so much how much swap is actually used, but how much is being paged
in or paged out at any time. If there are memory pages not being used at
all, the OS will swap th
>> Usually we have around 700k simultaneous pop connections on the real
>> servers; now with perdition we have 3000+ connections
>
> Er, I didn't write that. That was Ram, the OP. Different person, different
> institution, similar issues. We don't offer POP here.
Ah oops, got a bit mixed up there.
OK, so let's get some more details here.
> We've got about 12,000 users, many of whom spend most of the day in
> lectures, and tend to read their email at lunch time. Some staff with
> large
> mail folders tend to stay logged on all day.
> Currently we have four OSX 10.4 servers with 6GB of RAM e
> opposed to 1.5 - 1.9 MB per process. I've even seen cyrus processes with
> up
> to 30MB.
cyrus uses mmap a lot. The processes probably aren't actually really
growing, they just look bigger because a user has a folder selected and the
cyrus.index file has been mmaped into the process space. If
> Squirrelmail on localhost: very fast, in the blink of an eye,
> likewise with Outlook Express (perish the thought).
>
> So I guess it's Thunderbird. That sucks. I guess I'll take this to a
> Thunderbird forum. Thanks for pointing me in the right direction. Some
> days after troubleshooting every
> With your patch applied on my 2.3.11 installation, I now get a number
> of these messages in my logs:
>
> Mar 28 12:57:15 petole imap[25561]: skiplist: /home/cyrus/user/n/niko.seen
> is already open 1 time, returning object
> Mar 28 12:57:18 petole imap[25561]: skiplist: /home/cyrus/user/n/niko
> but all attempts to simulate
> a client load-pattern are devilishly difficult to get right.
I can attest to this as well.
I created an "imapstresstest" tool a few years back to attempt to stress our
cyrus installs. It attempts to emulate all the main actions of a running
IMAP server like lots
Does Cyrus-imapd take advantage of dual and/or quad core processors? We are
looking at upgrading our server to either 2x Dual core Xeon's or 1 x Quad core
Xeon processor. Does Cyrus have the ability to take advantage of this?
Since it uses a multi-process model, yes it does.
However that's no
> Okay, so you confirm my impression that message content and envelope seem
> to be treated the same way now (for example with postfix). Then, wouldn't
> it make sense to do the same with Cyrus? Could parseaddr() be relaxed to
> do this?
It sounds like the right approach, but there's probably quit
>> Can you turn off the 8BITMIME extension that postfix advertises? It seems
>> from the RFC (http://tools.ietf.org/html/rfc4952), if 8BITMIME isn't
>
> That seems to be the right logic but I can't find the proper way to do it
> with postfix.
>
> Anyway, I'm not sure about the whole 8BITMIME thing
> Yes, but that will block possibly valid mail. Of course I don't accept
> mail with non-ASCII RCPT TO addresses because Cyrus doesn't allow it, but
> I should accept non-ASCII MAIL FROM addresses if they are valid. But Cyrus
> also refuses them. That's the real problem.
Can you turn off the 8BIT
> There is some system (on freshmeat?) that has a special folder in IMAP
> for storing calendar events. The program uses the IMAP defined protocol
> though.
http://www.kolab.org/documentation.html
It's a reasonably interesting idea. Basically use a sort of folder=table and
email=row database map
> I do have one ZFS machine, and I don't use it to anywhere near its
> capabilities - it's just backups.
ZFS really did raise the bar on file systems by a big jump, and it's created
a new level of expectation.
For ages we lived with non-journaled file systems and then when we went to
journaled
> So, the problem has nothing to do with IMAP, and everything to do with
> message handling before delivery to the mailbox.
If I've assimilated everything right, I think the summary of the problem is:
Outlook handles some email messages specially (the example Joon has used is
iTIP emails). To a
> This is where I think the actual user count may really influence this
> behavior. On our system, during heavy times, we can see writes to the
> mailboxes file separated by no more than 5-10 seconds.
>
> If you're constantly freezing all cyrus processes for the duration of
> those writes, and
>> About 30% of all I/O is to mailboxes.db, most of which is read. I
> Solaris 10 does this in my case. Via dtrace you'll see that open() on the
> mailboxes.db and read-calls do not exceed microsecond ranges. mailboxes.db
> is not the problem here. It is entirely cached and rarely written
> (
> About 30% of all I/O is to mailboxes.db, most of which is read. I
> haven't personally deployed a split-meta configuration, but I
> understand the meta files are similarly heavy I/O concentrators.
That sounds odd.
Given the size and "hotness" of mailboxes.db, and in most cases the size of
ma
>> I didn't test but doesn't a symlink work?
>
> Yes, it does (just tried it on a development system).
Definitely, we use it on all our machines.
lrwxrwxrwx 1 root root 23 Oct 26 10:50 proc ->
/tmpfs/imapproc-slot101
Rob
> Not a bug, but a feature :) Outlook makes a clear distinction between
> storage and transport. In IMAP this gets a bit blurred as the INBOX is
> also
> the mechanism for receiving new mail. Using the POP3 moves the mail from
> the
> IMAP4 INBOX to the Outlook Inbox. This is handled by both Kont
> So you will have a choice of 3 commercial Outlook plug-ins,
What are the 3 commercial Outlook plugins? Obviously the Toltec one, but
which others?
I've actually always liked the idea of what toltec + bynari were doing. It's
basically using the IMAP server as a database where folders = tables
> However observationally we find that under high email usage that
> above 10K users on a Cyrus instance things get really bad. Like last
> week we had a T2000 at about 10,500 users and loads of 5+ and it
> was bogging down. We moved 1K users off bringing it down to
> 9,500 and loads dropped to
> squatter would really benefit from incremental updates. At the moment a
> single new message in a mailbox containing 20k messages causes it to read
> in all the existing messages in order to regenerate the index.
We spoke to Ken about this ages back, and even offered to pay for the work
to make
could someone whip up a small test that could be used to check different
operating systems (and filesystems) for this concurrency problem?
Not a bad idea. I was able to throw something together in about half an hour
with perl. See attached. It requires the Benchmark, Time::HiRes and
Sys::Mmap
Thanks for your post, it's always interesting to hear other peoples stories.
> 1st STEP: Perdition mail-proxies
> in a load-balanced pool and 2 can handle the load most days. Initially we
If you have a chance, definitely think about changing from perdition to
nginx. There's slightly more work
There's no really easy way, but this patch might help. Basically it parses
the last Received: header to set the internal date.
http://cyrus.brong.fastmail.fm/#cyrus-receivedtime-2.3.8.diff
The problem is if you apply it, you then have to reconstruct all your
mailboxes to rebuild the cyrus.index
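I haven't read the patch's C, but the idea is easy to sketch in Python; the hostnames and message below are entirely made up:

```python
import email
from email.utils import parsedate_to_datetime

# A made-up message with a single Received header, folded over three lines.
raw = (
    b"Received: from mx1.example.com (mx1.example.com [192.0.2.1])\r\n"
    b" by imap.example.com with ESMTP id ABC123;\r\n"
    b" Fri, 28 May 2010 19:59:00 +0200\r\n"
    b"Subject: test\r\n"
    b"\r\n"
    b"body\r\n"
)
msg = email.message_from_bytes(raw)

# The delivery timestamp is whatever follows the last ";" in the header.
received = msg["Received"]
stamp = parsedate_to_datetime(received.rsplit(";", 1)[1].strip())
print(stamp.isoformat())  # 2010-05-28T19:59:00+02:00
```

The patch would use a date recovered like this as the message's internal date instead of the spool file's mtime, which is what reconstruct otherwise falls back to.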
> Yesterday I checked my own Cyrus servers to see if I was running out of
> lowmem, and it sure looked like it. Lowmem had only a couple MB free, and
> I had 2GB of free memory that was not being used for cache.
>
> I checked again today and everything seems to be fine - 150MB of lowmem
> free
>> Are you comparing an "old" reiserfs partition with a "new" ext3 one where
>> you've just copied the email over to? If so, that's not a fair
>> comparison.
>
> No, a newly created partitions in both cases. Fragmented partitions are
> slower still of course.
That's strange. What mount options
> I think what truly scares me about reiser is those rather regular
> posts to various mailing lists I'm on saying "my reiser fs went poof
> and lost all my data, what should I do?"
I've commented on this before. I believe it's absolutely hardware related
rather than reiserfs related.
http://ww
> The iostat and sar data disagrees with it being an I/O issue.
>
> 16 gigs of RAM with about 4-6 of it being used for Cyrus
> leaves plenty for ZFS caching. Our hardware seemed more than
> adequate to anyone we described it to.
>
> Yes beyond that it's anyone guess.
If it wasn't IO limit relate
> A data point regarding reiserfs/ext3:
>
> We are in the process of moving from reiserfs to ext3 (with dir_index).
>
> ext3 seems to do substantially better than reiserfs for us, especially for
> read heavy loads (squatter runs at least twice as fast as it used do).
Are you comparing an "old" re
> I suppose that 8 SATA disks for the data and four 15k SAS disks for the
> metadata would be a good mix.
Yes. As I mentioned, our iostat data shows that meta-data is MUCH hotter
than email spool data.
---
Checking iostat, a rough estimate shows meta data get 2 x the rkB/s and 3 x
the wkB/s vs
> Anyhow, just wondering if we the lone rangers on this particular
> edge of the envelope. We alleviated the problem short-term by
> recycling some V240 class systems with arrays into Cyrus boxes
> with about 3,500 users each, and brought our 2 big Cyrus units
> down to 13K-14K users each which s
> Your other option, of course, is to implement this in the client.
I had thought of doing this using the sieve-test utility provided with
cyrus, which lets you specify a message + sieve script and prints to stdout
the actions that would have been performed.
The only issue is on large folders h
> A kludgy solution... what about fetchmail to pull the INBOX contents
> and resubmit the messages to deliver?
Be careful of duplicate delivery suppression!
I had a very brief talk with Ken about this ages back, about creating a
non-standard extension to IMAP that would allow you to "run" a sie
> http://cyrus.brong.fastmail.fm/patches/cyrus-statuscache-2.3.8.diff
>
> This cuts back on meta IO traffic considerably for repeated SELECT
> calls on mailboxes which are unchanged. We are in a similar boat,
> and it makes a huge difference (also for some other clients)
That should be "cuts back
> It would be a way to keep a second offline replica for backing up to a
> tape archive, which is what I plan to do. I realize it's slower than the
> standard rolling replication but the archive is mainly for "ooops I
> deleted that mail" kind of scenario. In fact, I'm testing the setup right
>
> Could you do it then with sync_client -S -u instead of -r to one of the
> relicas?
-u User mode. Remaining arguments are list of users who should
be replicated.
Which user though? You could do all users, but that takes time.
The point is that cyrus programs (lmtpd, imapd, etc) lo
> Disagree. I am replicating one server to two and have been doing it for
> quite
> a while with cyrus 2.3.
Can you explain how you're doing this? If you're just running multiple
sync_clients with different config files that point to different replica
servers, then what you've got is broken be
Hi
> If I understand this patch correctly, it doesn't solve the larger problem
> that I'm interested in: is the data on my replica the same as the data on
> my primary, or more to the point, are the two data sets converging? ...
> But I'm really interested in something that can run out of
Just some thoughts
1. Simon is right about RAID-5 being unreliable. Either use RAID-6 or
replication to a completely separate system to ensure you have enough
redundancy
2. Look at having separate RAID-1 for OS AND cyrus meta data, and
RAID-5+replication or RAID-6 for the data spool
From our
> Would it be possible to use replication for this?
>
> Set up a replica of the first server, copy while both servers online.
> Then take first server offline and change IPs/servernames/whatever.
Yes, it's definitely possible, but potentially quite a bit of work.
We actually have some modules th
> So, as I understand it, when reconstructing mailboxes, internal dates
> will be lost. It this intentional, or did I miss something in the
> docs/manual ?
This is why we created this patch.
http://cyrus.brong.fastmail.fm/#cyrus-receivedtime-2.3.8
Rob
> I suspect that the problem is with mailbox renames, which are not atomic
> and can take some time to complete with very large mailboxes.
I think there's some other issues as well. For instance we still see
skiplist seen state databases get corrupted every now and then. It seems
certain corrup
> does the IMAP spec specify how large a UUID can be?
UUIDs aren't part of the IMAP spec. It's an addition to cyrus to help
replication. By default, there's no way to access UUIDs via IMAP at all
since they're not part of the IMAP spec.
The UUID size chosen was done by David Carter when he impl
> I run it directly, outside of master. That way when it crashes, it
> can be easily restarted. I have a script that checks that it's
> running, that the log file isn't too big, and that there are no log-
> PID files that are too old. If anything like that happens, it pages
> someone.
Ditto, we
> I don't have something to consume make_md5 data, yet, either. My
> plan is to note the difference between the replica and the primary.
> On a subsequent run, if those differences aren't gone, then they
> would be included in a report.
Rather than make_md5, check the MD5 UUIDs patch below. Using
You perhaps think we are adding Perdition to the mix, and assuming we
have a single box that might get overloaded. No, we've had Perdition
running on a load-balanced pool of 4 Linux boxes for about a year and a
half. This was our abstraction to hide the 10+ UW-IMAP servers from the
user pop
I manage a single-instance server running v2.2.12 with a reasonably
large number of mailboxes, using "mboxlist_db: flat":
Why are you using "flat"? The flat db implementation is pretty basic/kludgy
and not designed for large database files. I bet it's doing something stupid
like re-reading t
The provided Cyrus tool "make_md5" is for validating replication. It
would, for instance, have found the recently discussed bug in sync_server
that caused random files to be overwritten in the event that sync_server
reused a stale staging file. It would probably be cool if there were
doc
With hindsight I should probably have defined message UUIDs to be the full
MD5 hash: 128 bits isn't that much worse than 96 bits per message. What is
the CPU overhead like for calculating MD5 sums for everything on the fly?
That would have been nice, and from an integrity point of view as well,
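As a rough answer to the CPU-overhead question, hashlib makes it trivial to measure MD5 throughput on your own hardware (the payload size and loop count here are arbitrary, and the numbers will obviously vary by machine):

```python
import hashlib
import time

payload = b"x" * (64 * 1024)  # pretend 64KB message body
n = 500

t0 = time.perf_counter()
for _ in range(n):
    hashlib.md5(payload).digest()
elapsed = time.perf_counter() - t0

mb_per_s = n * len(payload) / elapsed / 1e6
print(f"~{mb_per_s:.0f} MB/s")
```

On anything modern this comes out at hundreds of MB/s per core, i.e. hashing every message on the fly is cheap next to the disk IO of writing it.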
Is it safe? - we calculated that with one billion messages you have a one
in 1 billion chance of a birthday collision (two random messages with
the same UUID). They then have to get in the same MAILBOXES collection
to sync_client to affect each other anyway.
Isn't the case: UUIDs span al
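The usual birthday approximation backs this up: with 96-bit UUIDs and a billion messages the collision probability is comfortably under one in a billion, so the quoted figure looks conservative.

```python
# Back-of-envelope birthday bound: P(collision) ~ n^2 / 2^(bits+1)
n = 1e9     # one billion messages
bits = 96   # UUID width
p = n * n / 2 ** (bits + 1)
print(f"P(collision) ~ {p:.2e}")  # ~ 6.31e-12, far below 1e-9
```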
When sorting by recipient or sender, I am trying to control whether it
sorts by the name or by the email address. Is this possible?
It's defined by the IMAP sort extension.
http://tools.ietf.org/html/draft-ietf-imapext-sort-19
FROM
[IMAP] addr-mailbox of the first
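So the FROM sort key is the addr-mailbox (the local part of the address), not the display name. A quick illustration with made-up addresses:

```python
from email.utils import parseaddr

froms = ['"Zoe Adams" <azoe@example.com>', '"Adam Zimmer" <zadam@example.com>']

# SORT's FROM key compares the addr-mailbox ("azoe" vs "zadam"),
# so Zoe sorts first even though her display name starts with Z.
by_mailbox = sorted(froms, key=lambda h: parseaddr(h)[1].split("@")[0].lower())
print(by_mailbox[0])  # "Zoe Adams" <azoe@example.com>
```

A display-name sort would give the opposite order, which is exactly the behaviour the client can't switch to: the key is fixed by the extension, not configurable server-side.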
I had to reduce the default value of
"lmtp_destination_concurrency_limit" in postfix to 10 (the default is
20), and change the value of "queue_run_delay" on some servers to avoid
having them all run their queues at the same time, because that ends up
causing the lmtpd process limit to be reached.
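For reference, those knobs live in postfix's main.cf; a fragment along the lines described above (the values are what the poster used, not general recommendations):

```
# main.cf
lmtp_destination_concurrency_limit = 10   # down from the default of 20
queue_run_delay = 500s                    # stagger per server so queues don't all run at once
```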
Can values way above 100% be trusted? If so, it's pretty bad (this is
from a situation where there are 200 lmtp processes, which is the
current limit I set):
I've never seen over 100%, and it doesn't seem to make sense, so I'm
guessing it's a bogus value.
avg-cpu: %user %nice %system %i
After running iostat on my cyrus partition (both config and mail spool
are
kept on a SAN), I'm wondering if I should separate them out as well.
This
sounds related to the new metapartition and metapartition_files options
that were added in v2.3.x.
Does anyone have any recommendations or guidanc
see the output of the command "iostat -x sdb 5" (where 'sdb' is the device
you have cyrus on) on your system. Even if you aren't saturating your
Gigabit Ethernet link to the ATA-over-Ethernet storage, you may be
exceeding the number I/O operations per second.
I thought ATA-over-Ethernet had h
You can view the load average and process counts for these servers at:
https://secure.onid.oregonstate.edu/cacti/graph_view.php?action=tree&tree_id=4
OT, but I found this interesting... I wonder when school starts and ends and
when holidays are :)
https://secure.onid.oregonstate.edu/c
As of RFC 2045, Content-Type syntax should be:
content := "Content-Type" ":" type "/" subtype *(";" parameter)
Shouldn't cyrus still interpret this as text/html, despite the illegal "
boundary..." line following Content-Type ?
I've noticed this too, and while it clearly is broken with respect t
So how can I help find this bug? I had a look at bugzilla, but found
nothing related.
I think this bug is actually in the regex library, not in cyrus. I do wish
cyrus would use PCRE if it was available though, PCRE seems much more robust
than the GNU posix regcomp/regexec stuff. Hmmm, I ju
I agree that storage and replication are orthogonal issues. However, if a
lump of storage is no longer a single point of failure then you don't have
to invest (or gamble) quite as much to make that storage perfect.
Yes, that old maxim that each extra 9 in 99.99... reliability costs 10 times
mo
I agree of course about avoiding SPOFs, but I do like a multi-tiered
approach, I mean multiple lines of defense. I use SAN for its speed,
reliability, and ease of administration, but naturally I replicate
everything on the SAN and have "true" backups as well.
So you have multiple SAN's? Or y
Fastmail don't use a SAN; as I understand it they use external RAID arrays.
There are many ways to lose your data, one of these being filesystem
error, others being software bugs and human error. Block-level replication
(typically used in SANs) is very fast and uses few resources but doesn't
protect f
Interesting, I have some folders with ~100'000 messages and cyrus handles
them very nicely. Did you say you have a problem with 10'000 messages in a
mailbox?
And I dealt with a user the other day with 280,000 in their inbox.
Filesystem and cyrus were fine, though our web interface was a bit slow.
so I wonder, will skiplist be a better choice? obviously running
quota(8) will be a very cheap operation, but I'm worried about
contention on the quota database during delivery etc. (these users are
for the most part not actively using the system -- they get less than
one message per day each(
I already mailed the author of imapsync about that some days ago but
without having an idea how to fix it. I also checked Mail::IMAPClient and
found that it lacked the functionality needed here but it seems
Mail::IMAPClient isn't maintained anymore (at least there is no
development for long time n