Re: LARGE single-system Cyrus installs?

2007-11-08 Thread Vincent Fox
To close the loop since I started this thread: We still haven't finished up the contract to get Sun out here to get to the REAL bottom of the problem. However, observationally we find that under high email usage, above 10K users per Cyrus instance, things get really bad. Like last week we ha

Re: How many people to admin a Cyrus system?

2007-11-08 Thread Vincent Fox
Gary Mills wrote: > How many > and what sort of people does it take to maintain a system such as > this? I need a good argument for hiring a replacement for me. > > At a minimum you want 1 qualified person and someone cross-trained as a backup, so that person can reasonably enough have vacatio

Re: LARGE single-system Cyrus installs?

2007-11-08 Thread Vincent Fox
Bron Gondwana wrote: > Also virtual interfaces means you can move an instance without having > to tell anyone else about it (but it sounds like you're going with an > "all eggs in one basket" approach anyway) > > No, not "all eggs in one basket", but better usage of resources. It seems silly

Could admin PLEASE unsubscribe the goldengatelanguage.com user?

2007-11-08 Thread Vincent Fox
Every time I send email to the list now I get a bounce from "Gateway". Could some kindly admin please remove the account causing this bounce? Header snippet: >Received: from mail.goldengatelanguage.com ([4.79.216.102]) > by mx12.ucdavis.edu (8.13.7/8.13.1/it-defang-5.4.0) with ESMTP id lA92U7

Re: LARGE single-system Cyrus installs?

2007-11-09 Thread Vincent Fox
Eric Luyten wrote: > Another thought : if your original problem is related to a locking issue > of shared resources, visible upon imapd process termination, the rate of > writing new messages to the spool does not need to be a directly contri- > buting factor. > Were you experiencing the load probl

Re: LARGE single-system Cyrus installs?

2007-11-09 Thread Vincent Fox
Jure Pečar wrote: > > I'm still on linux and was thinking a lot about trying out solaris 10, but > stories like yours will make me think again about that ... > > We are, I think, an edge case; plenty of people run Cyrus on Solaris with no problems. To me ZFS alone is enough reason to go with Solaris

Re: LARGE single-system Cyrus installs?

2007-11-09 Thread Vincent Fox
Jure Pečar wrote: > In my expirience the "brick wall" you describe is what happens when disks > reach a certain point of random IO that they cannot keep up with. > The problem with a technical audience is that everyone thinks they have a workaround or probable fix you haven't already thought

Re: Just in case it is of general interest: ZFS mirroring was the culprit in our case

2007-11-13 Thread Vincent Fox
Can you expand on this, like a LOT? I recall a while ago you brought up some performance issues and said you had found hacks for them. Were those issues actually unresolved or are you talking about something else? I don't see any recent posts by you about problems with your Cyrus install. I'm

Re: LARGE single-system Cyrus installs?

2007-11-14 Thread Vincent Fox
Michael Bacon wrote: > > Solid state disk for the partition with the mailboxes database. > > This thing is amazing. We've got one of the gizmos with a battery > backup and a RAID array of Winchester disks that it writes off to if > it loses power, but the latency levels on this thing are > non-

Re: LARGE single-system Cyrus installs?

2007-11-14 Thread Vincent Fox
This thought has occurred to me: ZFS prefers reads over writes in its scheduling. I think you can see where I'm going with this. My WAG is something related to Pascal's, namely latency. What if my write requests to mailboxes.db or deliver.db start getting stacked up, due to the favoritism sho

Re: LARGE single-system Cyrus installs?

2007-11-15 Thread Vincent Fox
> > /etc/system: > > set zfs:zfs_nocacheflush=1 Yep already doing that, under Solaris 10u4. Have dual array controllers in active-active mode. Write-back cache is enabled. Just poking in the 3510FC menu shows cache is ~50% utilized so it does appear to be doing some work. Cyrus Home Pag
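The tunable mentioned above can be made persistent in /etc/system; a sketch, only appropriate when the array cache is battery-backed (as with the dual-controller 3510FC in write-back mode described here):

```shell
# /etc/system -- tell ZFS not to issue cache-flush commands to the array.
# Safe only when the array has battery-backed write-back cache, as above;
# otherwise a power loss can lose acknowledged writes.
set zfs:zfs_nocacheflush=1
```

A reboot is required for /etc/system changes to take effect.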

Re: Setting the location of proc independent of configdir

2007-11-15 Thread Vincent Fox
I've been running this in production: mkdir /var/imap-proc chown cyrusd /var/imap-proc ln -s /var/imap-proc /var/cyrus/imap/proc Set up a vfstab entry for /var/imap-proc as TMPFS, and that's about all there is to it. But yeah it would be an improvement to see it configurable. Ian G Batten wrote:
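The steps above can be sketched end to end; the cyrusd user matches the text, while the 64m size cap is an assumption:

```shell
# Point Cyrus's proc/ directory at a dedicated tmpfs (per the steps above)
mkdir /var/imap-proc
chown cyrusd /var/imap-proc
ln -s /var/imap-proc /var/cyrus/imap/proc

# /etc/vfstab entry mounting it as a size-limited tmpfs
# (the 64m cap is an assumption; pick a limit that suits your load)
# device   device  mount           FS     fsck  mount   mount
# tomount  tofsck  point           type   pass  atboot  options
swap       -       /var/imap-proc  tmpfs  -     yes     size=64m
```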

Re: LARGE single-system Cyrus installs?

2007-11-19 Thread Vincent Fox
Bron Gondwana wrote: > Lucky we run reiserfs then, I guess... > > I suppose this is inappropriate topic-drift, but I wouldn't be too sanguine about Reiser. Considering the driving force behind it is in a murder trial last I heard, I sure hope the good bits of that filesystem get turned over to

Re: lmtp timed out while sending MAIL FROM

2007-11-27 Thread Vincent Fox
This of course sounds familiar to some experiences we had recently with Cyrus 2.3.8 on Solaris 10 backends pretty heavily loaded with number of users. If you search the mailing list archives you'll find several threads about our problems here at UC Davis. However, the others in the thread have m

TMPFS for socket and log directories?

2007-11-29 Thread Vincent Fox
We had sym-linked imap/proc directory to a size-limited TMPFS a while back. Now I'm thinking to do the same for imap/socket and imap/log Any reasons against? Other candidates? Cyrus Home Page: http://cyrusimap.web.cmu.edu/ Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki List Archives/In

skiplist_unsafe?

2007-11-30 Thread Vincent Fox
How "unsafe" is setting in imapd.conf skiplist_unsafe: 1 Our /var/cyrus/imap filesystem is on a ZFS mirror set on arrays with dual controllers so OS and/or hardware corruption is remote. The application can scramble it but that can happen whether we have sync or not eh? Anything I am missing?
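For reference, the setting in question is a one-line imapd.conf entry, shown here as a sketch:

```shell
# imapd.conf -- skip the fsync on skiplist writes (mailboxes.db etc.);
# the crash-safety trade-off is exactly the one weighed above
skiplist_unsafe: 1
```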

Cyrus on Solaris at universities?

2007-12-12 Thread Vincent Fox
Just wondering what other universities are running Cyrus on Solaris? We know of: CMU UCSB

Re: Cyrus on Solaris at universities?

2007-12-13 Thread Vincent Fox
Thanks folks, I have collated some responses below to make it easier to read: Cyrus on Solaris at Universities: Carnegie-Mellon University University of California Davis 50K accounts (active), multiple T2000 failover clusters ZFS storage to arrays attached through SAN Univ California San

Re: Cyrus on Solaris at universities?

2007-12-13 Thread Vincent Fox
Bron Gondwana wrote: (Linux comments) 'Twas not my intent to start a this-OS versus that-OS comparison. Valid though that is, it's a different thread. Like most sites, we have various OS in operation here, it just happens that the Cyrus backends are Solaris. The test project here started with s

Solaris ZFS & Cyrus : BugID 6535160

2007-12-13 Thread Vincent Fox
I have a candidate for our running into performance issues with Cyrus under load. I think it could be in the ZFS filesystem layer. Our man Omen has come back from DTrace class and started running it, and while it's early days yet, there seem to be some fdsync calls with long times associated with them

Endgame: Cyrus big install at UC Davis

2008-02-19 Thread Vincent Fox
So for those of you who recall back that far. UC Davis switched to Cyrus and as soon as fall quarter started and students started hitting our servers hard, they collapsed. Load would go up to what SEEMED to be (for a 32-core T2000) a moderate value of 5+ and then performance would fall off a c

Re: Endgame: Cyrus big install at UC Davis

2008-02-25 Thread Vincent Fox
Several people have asked for the IDR number from Sun that gave us the performance optimizations we needed. Management says the agreement for the IDR we got prohibits this. However this forum thread from BEFORE the agreement should point you in the right direction: http://opensolaris.org/jive/th

Re: Endgame: Cyrus big install at UC Davis

2008-02-28 Thread Vincent Fox
Pascal Gienger wrote: > For x86 the patch will have id 127729-07. For SPARC it will have > the major number 127728. > My esteemed colleague Nick Dugan ran some tests against this patch and it appears to give the desired results. I expect during our next monthly maintenance cy

Re: Miserable performance of cyrus-imapd 2.3.9 -- seems to be locking issues

2008-02-28 Thread Vincent Fox
Jeff Fookson wrote: > is unusably slow. Here are the specifics: > You are mighty short on the SPECIFICS of your setup. Expect a slew of questions to elicit this information.

Re: Miserable performance of cyrus-imapd 2.3.9 -- seems to be locking issues

2008-02-28 Thread Vincent Fox
Gah my first thought was, a 3-disk RAID5? Is this 1998 or 2008? Disk is cheap. RAID-1 or RAID-10.

Re: Miserable performance of cyrus-imapd 2.3.9 -- seems to be lockingissues

2008-03-05 Thread Vincent Fox
David Lang wrote: > > raid 6 allows you to loose any two disks and keep going. > > This is turning into a RAID discussion. The original poster was doing a RAID-5 across 3 disks, and has stopped commenting but it's probably because that's all the hardware he could scrounge. I am a staunch memb

Re: Miserable performance of cyrus-imapd 2.3.9 -- seems to be lockingissues

2008-03-05 Thread Vincent Fox
Jeff Fookson wrote: > We are planning to run the mirrors > off a 4-port 3ware RAID card even though we're not overly fond of > 3ware (we have a fair amount of experience > with RAID5 arrays on 3ware cards on our research machines where they > perform adequately but > not more). We are hoping the

Re: suggestion need to design an email system.

2008-09-17 Thread Vincent Fox
Wesley Craig wrote: >> Maildir and cyrus both suffer from the same >> disadvantages (huge needs in terms of inodes etc.), With ZFS, inodes are among the many stone-age worries you leave behind. ;-)

Re: suggestion need to design an email system.

2008-09-18 Thread Vincent Fox
David Lang wrote: > and gaining some new worries along the way. while some are convinced that ZFS > is > the best thing ever others see it as trading a set of known problems for a > set > of unknown problems (plus it severly limits what OS you can run, which can > bring > it's own set of prob

Re: choosing a file system

2008-12-30 Thread Vincent Fox
We run Solaris 10 on our Cyrus mail-store backends. The mail is stored in a ZFS pool. The ZFS pool are composed of 4 SAN volumes in RAID-10. The active and failover server of each backend pair have "fiber multipath" enabled so their dual connections to the SAN switch ensure that if an HBA or SAN
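A layout like the one described (four SAN LUNs arranged as two ZFS mirror pairs, each pair spanning the two arrays) might be created as follows; the pool name and c#t#d# device names are placeholders:

```shell
# Two mirror vdevs, striped -- ZFS's equivalent of RAID-10.
# Each mirror pairs a LUN from array A with one from array B,
# so a whole-array failure still leaves every mirror intact.
zpool create cyrus \
  mirror c2t0d0 c3t0d0 \
  mirror c2t1d0 c3t1d0
zfs create cyrus/mail
```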

Re: choosing a file system

2009-01-08 Thread Vincent Fox
(Summary of filesystem discussion) You left out ZFS. Sometimes Linux admins remind me of Windows admins. I have adminned a half-dozen UNIX variants professionally but keep running into admins who only do ONE and for whom every problem is solved with "how can I do this with one OS only?" I admin

Re: choosing a file system

2009-01-08 Thread Vincent Fox
Bron Gondwana wrote: > BUT - if someone is asking "what's the best filesystem to use > on Linux" and gets told ZFS, and by the way you should switch > operating systems and ditch all the rest of your custom setup/ > experience then you're as bad as a Linux weenie saying "just > use Cyrus on Linux"

Re: Security risk of POP3 & IMAP protocols

2009-02-13 Thread Vincent Fox
David Lang wrote: > > the flip side of the complience issue is that it's a LOT easier to control > retention policies (including backups) on a central server than on > everybody's > individual desktops/laptops. > > as for the concerns about laxer data security in other juristictions, that's > s

Re: POP3 locking. Is there a way to reduce this "issue"?

2009-05-18 Thread Vincent Fox
Jose Perez wrote: > Some people could just say "don't use POP3 anymore, use IMAP" right? > YES! > Ok, I'd say the same as a sysadmin but you know exactly that this > isn't always possible is some organizations for others reasons not > technical. > > What I would like to know is: > > Are there

Re: Need advice on building a Cyrus IMAP cluster

2009-08-12 Thread Vincent Fox
Michael Sims wrote: Quick question on this. If I setup an active/passive cluster and put the mail spool AND all of the application data on a SAN that both nodes have access to (not simultaneously, of course), doesn't that bypass the need for using "mupdate_config: replicated"? Thanks... Th

Re: What's the message's UID in the message storage of cyrus imapd.

2009-08-17 Thread Vincent Fox
The UID is defined in the RFC for IMAP as needing to be a number unique within that mailbox. That is all it is. You can dump message files into an account and as long as they are "#." and you index them they will be visible to IMAP clients. The UIDL returned by POP is a related/derived number. We actua

Re: Implement Cyrus IMAPD in High Load Enviromment

2009-09-28 Thread Vincent Fox
Lucas Zinato Carraro wrote: - Exist a recommended size to a Backend server ( Ex: 1 Tb )? Hardware-wise your setup is probably overkill. Nothing wrong with that. Sizing of filesystems IMO should be based on your tolerance for long fsck during a disaster. I run ZFS which has none of that and d

Re: Implement Cyrus IMAPD in High Load Enviromment

2009-09-28 Thread Vincent Fox
Bron Gondwana wrote: I assume you mean 500 gigs! We're switching from 300 to 500 on new filesystems because we have one business customer that's over 150Gb now and we want to keep all their users on the one partition for folder sharing. We don't do any murder though. Oops yes. I meant 5

Re: Implement Cyrus IMAPD in High Load Enviromment

2009-09-29 Thread Vincent Fox
Bron Gondwana wrote: It's an interesting one. For real reliability, I want to have multiple replication target supported cleanly. So the issues for me with Cyrus replication: 1) Is it working? Is the replica actually up to date and more importantly what if I switch to it and there is some

Re: Implement Cyrus IMAPD in High Load Enviromment

2009-09-29 Thread Vincent Fox
Simon Matter wrote: What I'm really wondering, what filesystem disasters have others seen? How many times was it fsck only, how many times was it really broken. I'm not talking about laptop and desktop users but about production systems in a production environment with production class hardware a

Re: TLS fails on imaps port

2010-01-25 Thread Vincent Fox
Bob Dye wrote: > > But it does seem odd that it supports STARTTLS on 143 but not 993. This is not odd, this is working as specified. TLS is enabling encryption on a connection that has started without it. There's a cogent argument that 993 should be deprecated as the vestige of "stunnel days" tha

Re: Backup strategy for large mailbox stores

2010-02-15 Thread Vincent Fox
I suppose replication and snapshots are out of the question for you? We run ZFS so snapshots are atomic and nearly instant. Thus we keep 14 days of daily snaps in our production pool for recovery purposes. In our setup the total of all the snaps is about a 50% overhead on the production data whic
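A rotation like the one described (14 daily snapshots kept in the production pool) could be driven by a nightly cron job along these lines; this is a hypothetical sketch, not the actual UC Davis script:

```shell
#!/bin/sh
# Nightly ZFS snapshot rotation: take today's snap, keep the newest 14.
FS=cyrus/mail
zfs snapshot "$FS@daily-$(date +%Y%m%d)"

# List daily snapshots oldest-first and destroy all but the last 14
# (head -n -14 is GNU coreutils; stock Solaris head needs a workaround)
zfs list -H -t snapshot -o name -s creation | \
  grep "^$FS@daily-" | head -n -14 | \
  while read snap; do zfs destroy "$snap"; done
```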

Re: Backup strategy for large mailbox stores

2010-02-15 Thread Vincent Fox
John Madden wrote: > That still leaves full backups as a big issue (they take days to run) > and NetBackup has a solution for that: You run one full backup and store > it on disk somewhere and from then on, fulls are called "synthetic > fulls," where the incrementals are applied periodically in

Re: Backup strategy for large mailbox stores

2010-02-15 Thread Vincent Fox
Forgot to mention we are running inline compression on our ZFS pools. With "fast" LZJB compression on the filesystems for metadata etc. still a savings of ~2.0. The inboxes are all in /var/cyrus/mail which is set for gzip-6 compression savings of ~ 1.7. Backups run faster so it's win-win. Combin
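The split described above (fast LZJB as the default, heavier gzip on the inbox filesystem) corresponds to per-dataset ZFS properties; a sketch with placeholder dataset names:

```shell
# Fast lzjb everywhere by default (child filesystems inherit it) ...
zfs set compression=lzjb cyrus
# ... but heavier gzip level 6 on the mailbox data
zfs set compression=gzip-6 cyrus/mail
# Check the achieved ratios
zfs get compressratio cyrus cyrus/mail
```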

Re: Backup strategy for large mailbox stores

2010-02-15 Thread Vincent Fox
John Madden wrote: > > We did quite a bit with snapshots (LVM) to when we were experimenting > with block-level backups but there's a performance problem there -- we > were saturating GbE. Snapshot doesn't really buy you anything in > terms of getting the data to tape. > > We run the tape backu

Re: Backup strategy for large mailbox stores

2010-02-15 Thread Vincent Fox
John Madden wrote: > Out of curiousity, how good is zfs with full fs scans when running in > the 100-million file count range? What do you see in terms of > aggregate MB/s throughput? > I'm not sure what you mean by "full fs scan" precisely, and haven't tested anything very large. Since t

Re: Backup strategy for large mailbox stores

2010-02-16 Thread Vincent Fox
Michael Bacon wrote: > For those of you doing ZFS, what do you use to back up the data after a zfs > snapshot? We're currently on UFS, and would love to go to ZFS, but haven't > figured out how to replace ufsdump in our backup strategy. > There are other commercial backup solutions however we

Re: Backup strategy for large mailbox stores

2010-02-16 Thread Vincent Fox
Andrew Morgan wrote: > Is there really a significant downside to performing backups on a hot > cyrus mailstore? Should I care if Suzie's INBOX was backed up at 3am > and Sally's INBOX was backed up at 4am? > > Vincent, on a slightly related note, what is your server and SAN > hardware? > I dunn

Re: Backup strategy for large mailbox stores

2010-02-16 Thread Vincent Fox
Clement Hermann (nodens) wrote: > The snapshot approach (we use ext3 and lvm, soon ext4) is promising, as > a simple tar is faster than using the full backup suite on a filesystem > with a lot of small files (atempo here). But you need the spare space > locally, or you need to do it over the net

Re: Backup strategy for large mailbox stores

2010-02-19 Thread Vincent Fox
Eric Luyten wrote: > Our Z pool was 83% full ... > Deleted the December snapshots, which brought that figure down to 74% > > Performance came right back :-) > Running close to full you will eventually run into fragmentation issues. Fortunately you can grow the size of a pool while it's hot, g

Re: DNS load balancing

2010-05-11 Thread Vincent Fox
On 5/11/2010 6:40 AM, Andre Nathan wrote: > What I still haven't figured out is how to keep the proc directory and > the locks in the socket directory local to the cluster nodes. For the > socket names there are configuration options, so I could just choose > different names for each cluster node.

Re: DNS load balancing

2010-05-11 Thread Vincent Fox
On 5/11/2010 5:35 AM, Andre Nathan wrote: > Hello > > I'm setting up a two-machine cyrus cluster using OCFS2 over DRBD. The > setup is working fine, and I'm now considering the load balancing > options I have. > > I believe the simplest option would be to simply rely on DNS load > balancing. Howeve

Re: DNS load balancing

2010-05-26 Thread Vincent Fox
On 5/26/2010 8:06 AM, Blake Hudson wrote: > I wish it were that straightforward. After performing several > switchovers where DNS A records were repointed, many clients (days > later) continue trying to access the old servers. TTL on the DNS records > are set appropriately short, this is simply a c

hardware recommendations for MURDER?

2006-07-20 Thread Vincent Fox
So it looks like we are migrating our University to a murder setup. We are about 50,000+ users, with currently a number of mailstores (UWash) and users directly addressing them. We are looking towards adding new hardware for an MUPDATE box and some frontends. Anyone have recommendations on the

Re: hardware recommendations for MURDER?

2006-07-20 Thread Vincent Fox
My personal leaning is towards Sun hardware with RHEL4 but I wanted to get some fresh opinions. Thought this topic worth a rehash since 2004 data is useful but not current enough IMO. >(sun just announce a 3u dual proc 16G ram box with > 24TB > of disk space for ~$70k for example) I admit a fe

Re: hardware recommendations for MURDER?

2006-07-20 Thread Vincent Fox
> Just curious, and not to start any religious wars, but if you're > going to go so far as buying the Sun hardware (which is quite good), > what's keeping you from running Solaris 10 x86? Woah there! Several replies so far seem to think I made up my mind. I used the word "leaning" only to in

Solaris compile fails cyrus-sasl 2.1.22

2006-07-26 Thread Vincent Fox
So I cannot get cyrus-sasl to compile on Solaris. Any tips? I have attempted to look at the cyrus-sasl mailing list archives but it won't let me in. Perhaps only CMU people can look at them? I posted to cyrus-sasl this same query, but no response. Perhaps that is only for developers? Anyhow...

Re: Solaris compile fails cyrus-sasl 2.1.22

2006-07-26 Thread Vincent Fox
Ian Logan wrote: You might try installing OpenSSL, I'm pretty sure thats what I did when I ran into the problem you described. I think (not sure) that cyrus-sasl will then use the MD5 digest routines from OpenSSL. Ian Ooops! Retracting last reply. I have OpenSSL 0.9.7j in the standard locati

Re: Solaris compile fails cyrus-sasl 2.1.22

2006-07-26 Thread Vincent Fox
Ian Logan wrote: You might try installing OpenSSL, I'm pretty sure thats what I did when I ran into the problem you described. I think (not sure) that cyrus-sasl will then use the MD5 digest routines from OpenSSL. Ian Thanks Ian, but I had OpenSSL 0.9.7j already compiled into the usual locat

Re: Solaris compile fails cyrus-sasl 2.1.22

2006-07-26 Thread Vincent Fox
From poking at config.log, looks like the OpenSSL detection fails because socket is not linked in. I did an export CFLAGS=-lsocket and export LDFLAGS=-lsocket At this point compile seems to go okay through digestmd5.lo, but fails later on auth_getpwent.c I found a patch posted for that that
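A tidier form of the same workaround: configure's OpenSSL link test fails on Solaris because the socket routines live in separate libraries, so supply them up front. The OpenSSL prefix here is an assumption:

```shell
# Solaris keeps socket calls in libsocket/libnsl, which configure's
# OpenSSL detection doesn't link by default -- force them in
LDFLAGS="-lsocket -lnsl" ./configure --with-openssl=/usr/local/ssl
make
```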

duplicate suppression in murder

2006-07-27 Thread Vincent Fox
Can a murder setup also do duplicate suppression across the backends? I am in the process of setting up a first murder, and this question came up in a meeting. We know you can do duplicate suppression on a single server that info is easily found. I have googled and scanned the archives didn't se

Re: duplicate suppression in murder

2006-07-28 Thread Vincent Fox
I really don't have a strong enough grasp of Cyrus terminology at this point. Sorry about that, perhaps duplicate suppression is not the right term. What I meant was you send a message to 30 inboxes at once, as I understood it Cyrus only stores one copy and references the other 29. At least with

Re: duplicate suppression in murder

2006-07-28 Thread Vincent Fox
Ooops, found the correct term I was looking for. "Single instance store" is what I was intending to ask about.

SSL certs on proxy pool?

2006-08-01 Thread Vincent Fox
Wondering how people deal with SSL certs with multiple frontends? Do you put wildcard certs on the proxies and leave the SSL processing on each unit? Do you use an SSL-aware load-balancer and let it hold a cert for the published hostname and do the heavy lifting? If there's some 3rd way, I'm in

cyrus UIDL - POP double download problem

2006-08-08 Thread Vincent Fox
So when moving mailboxes from UWash to Cyrus, the obvious solution seemed to be use imapsync. I just did a test-run of it moving my own mailbox from UW to Cyrus. It looks like the UIDL changes in the process, so any POP client is going to do double-downloads of INBOX contents. Is there any way

ZFS compression?

2007-04-24 Thread Vincent Fox
Has anyone attempted using ZFS compression on mail spools? This is a new Cyrus 2.3.8 mail-store, and we are using dual fiber HBA to SAN switches and Sun 3510FC units on the backend, multipath and active-active on the dual-controllers. RAID-5 LUNs on each array are ZFS mirrored between arrays. Y

Re: ZFS for Cyrus IMAP storage

2007-05-05 Thread Vincent Fox
I originally brought up the ZFS question. We seem to have arrived at a similar solution after much experimentation. Meaning using ZFS for the things it does well already, and leveraging proven hardware to fill in the weak spots. I have a pair of Sun 3510FC arrays we have exported 2 RAID5 L

Re: ZFS for Cyrus IMAP storage

2007-05-05 Thread Vincent Fox
Gary Mills wrote: There was a question earlier regarding ZFS for Cyrus IMAP storage. We recently converted to that filesystem here. I'm extremely pleased with it. Our server has about 30,000 users with over 200,000 mailboxes. It peaks at about 1900 IMAP sessions. It currently has 1 TB of sto

Re: ZFS for Cyrus IMAP storage

2007-05-05 Thread Vincent Fox
Rudy Gevaert wrote: Are you going to do this with "1" perdition server? Make sure you have compiled perdition with /dev/urandom, or an other sort of non blocking entropy providing device :) You perhaps think we are adding Perdition to the mix, and assuming we have a single box that might get

Cyrus & ZFS performance

2007-07-03 Thread Vincent Fox
I just thought to report back to the list, that ZFS is working out well. I asked a while ago about other's opinions, and got all thumbs up. We deployed a setup with Sun T2000 running Solaris 10u3 (11/06) and a pair of Sun 3510FC arrays mirroring in a hybrid HA RAID 5+1+0 setup that I can describe

Re: Cyrus & ZFS performance

2007-07-04 Thread Vincent Fox
Dale Ghent wrote: > Sorry for the double reply, but by the way, what sort of compression ratio are you seeing on your ZFS filesystems? {cyrus1:vf5:136} zfs get compressratio cyrus/mail NAME PROPERTY VALUE SOURCE cyrus/mail compressratio 1.26x

Re: Cyrus & ZFS performance

2007-07-04 Thread Vincent Fox
Dale Ghent wrote: > each with a zpool comprising of a mirror between two se3511s on our > SAN... Sun recommends against the 3511 in most literature I read, saying that the SATA drives are slower and not going to handle as much IOPS loading. But they are working out okay for you? Perhaps it's

Re: Cyrus & ZFS performance

2007-07-05 Thread Vincent Fox
here with most students being home, not so much daily class-related chitchat. If this 3511 setup is yielding very similar numbers we may switch array products during our next buildout. Dale Ghent wrote: > On Jul 4, 2007, at 12:59 PM, Vincent Fox wrote: > >> Sun recommends against the

Re: RAID type suggestion

2007-07-11 Thread Vincent Fox
Suggestions: 1) More spindles is best, so more small disks is better than fewer large ones 2) RAID-10 is best performance 3) Run bonnie++ on your proposed setups and benchmark performance using filesize typical for a mail message (16K typical) 4) Read this website before trusting user data to RAI
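Point 3 above might look like this with bonnie++; the directory, user, and exact counts are assumptions (-n takes files×1024:max-size:min-size:dirs, so this pins file sizes at 16 KB, and -s 0 skips the large-file pass):

```shell
# Small-file benchmark at typical mail-message size (16 KB)
bonnie++ -d /var/spool/benchtest -u cyrus -s 0 -n 64:16384:16384:16
```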

Re: User Bulletin implementable with Cyrus

2007-07-26 Thread Vincent Fox
Hello Rick, For IMAP users (and clients that support it) there is a message-of-the-day function. For example, create a one-line bulletin in /var/cyrus/imap/msg/motd Next time a Thunderbird user connects, they should all see the message. No way to do this for POP users that I know of. I asked a question rece
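Creating the bulletin is a one-liner; the wording is just an example:

```shell
# One-line MOTD shown to supporting IMAP clients, at the path given above
echo "Mail maintenance tonight 22:00-24:00" > /var/cyrus/imap/msg/motd
```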

Re: storing mail across several cyrus partitions

2007-09-04 Thread Vincent Fox
> >> May somebody recommend reliable/safe filesystem that support resizing? >> I'm afraid to use anything except ext3 in production enviroment... >> We use Solaris and I have to say ZFS is quite amazing. Never have to run fsck ever again! The checksum feature ensures that when you read the

Re: storing mail across several cyrus partitions

2007-09-04 Thread Vincent Fox
>> May somebody recommend reliable/safe filesystem that support resizing? >> I'm afraid to use anything except ext3 in production enviroment... >> Forgot to mention for ZFS, you can grow the pool by adding disks to it. We run a pool full of mirror pairs so we'd add more pairs. Quite easy an
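Growing the pool as described is a single hot operation; device names are placeholders:

```shell
# Add another mirror pair to the live pool -- capacity grows immediately,
# with no unmount, resize step, or fsck involved
zpool add cyrus mirror c2t2d0 c3t2d0
```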

Need Cyrus consulting support (will pay)

2007-09-12 Thread Vincent Fox
HELP! We need Cyrus consulting technical assistance on our Cyrus 2.3.8 system. The University of California Davis has 2 servers with about 29K users on one system and 23K on the other. The past few days we have seen our load go through the roof, timeouts to users, lot of problems. We have poked

Cyrus 2.3.8 thanks for help

2007-09-12 Thread Vincent Fox
Several other universities called and offered assistance. Thanks! We also had several people call with offers to help with consulting on it, but mostly not at the same level of Cyrus hardware/software/config we are running. We appreciate the effort though, we really do. Lots of folks see

RE: Cyrus IMAP 2.3.9 on Solaris 10 with ZFS and SAN

2007-09-20 Thread Vincent Fox
Pascal, How many accounts did you have per mail-store? Thanks!

Cyrus 2.3.9: Errors compiling outside source directory

2007-09-22 Thread Vincent Fox
So our protocol is we have a central repository in AFS for source code. Then we write a script called BUILD.ksh that does all the right stuff to compile into a /tmp/(package)-obj directory on each build platform. This is not working for me yet on Cyrus 2.3.9 as the configure utility as-is doesn't

Re: Cyrus 2.3.9: Errors compiling outside source directory

2007-09-22 Thread Vincent Fox
Bron Gondwana wrote: > Do you have the 'makedepend' utility on your system? I found > that if it wasn't there, "make depend" would not get all the > dependencies in place. > In Solaris, there is a makedepend in /usr/openwin/bin. Hence my including that directory in PATH at top of script. I su

Re: Cyrus 2.3.9: Errors compiling outside source directory

2007-09-22 Thread Vincent Fox
Ah, found my problem. The 2nd problem with glob_t not being found, was caused by my fix for the first problem. To rewind a bit, my first trial failed here: ### Done building chartables. gcc -c -I.. -I/ucd/include -I/ucd/src/cyrus-imapd/cyrus-imapd-2.3.9/cyrus-imapd-2.3.9/et -I/ucd/include -I/

Re: Cyrus 2.3.9: Errors compiling outside source directory

2007-09-23 Thread Vincent Fox
It's undoubtedly not the right way, but I patched lib/Makefile.in. The basic problem in lib I think is twofold: 1) chartable.c being dynamically built during make, so it's not there when the makedepend is run 2) cyrusdb_quotalegacy.c including glob.h, which has a name collision with a similar Cyr

Re: Cyrus 2.3.9: Errors compiling outside source directory

2007-09-23 Thread Vincent Fox
After hours of struggle, I am abandoning this and switching to simply having my script unpack the tarball into /tmp and compile it there. Every time I found some fix for one piece there was another one that needed fixing. Like in the sieve dir there is the xversion.sh that creates an xversion.h.

Solaris Zones with Cyrus

2007-09-24 Thread Vincent Fox
Has anyone worked with Solaris Zones to run Cyrus?

Re: Cyrus IMAP 2.3.9 on Solaris 10 with ZFS and SAN

2007-09-24 Thread Vincent Fox
Pascal Gienger wrote: > > [2] in /etc/system: set zfs:zfs_nocacheflush=1 >on a live system using mdb -kw: zfs_nocacheflush/W0t1 By the way I tried this on a fully patched Solaris 10u3 system and get this notice during boot: sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' modu

LARGE single-system Cyrus installs?

2007-10-04 Thread Vincent Fox
Wondering if anyone out there is running a LARGE Cyrus user-base on a single or a couple of systems? Let me define large: 25K-30K (or more) users per system High email activity, say 2+ million emails a day We have talked to UCSB, which is running 30K users on a single Sun V490 system. However

Re: LARGE single-system Cyrus installs?

2007-10-04 Thread Vincent Fox
do our anti-spam/anti-virus on other systems before delivering to the 5 mailbox systems. I'm guessing you don't have that type of setup? Jim > Vincent Fox wrote: >> Wondering if anyone out there is running a LARGE Cyrus >> user-base on a single or a couple

Re: LARGE single-system Cyrus installs?

2007-10-04 Thread Vincent Fox
Xue, Jack C wrote: > At Marshall University, we have 30K users (200M quota) on Cyrus. We use > a Murder aggregation setup which consists of 2 frontend nodes, 2 backend > nodes Interesting, but this is approximately 15K users per backend. Which is where we are now after 30K users per backend were

Re: LARGE single-system Cyrus installs?

2007-10-05 Thread Vincent Fox
The iostat and sar data disagree with it being an I/O issue. 16 GB of RAM, with about 4-6 GB of it used by Cyrus, leaves plenty for ZFS caching. Our hardware seemed more than adequate to anyone we described it to. Beyond that, it's anyone's guess.

Re: LARGE single-system Cyrus installs?

2007-10-06 Thread Vincent Fox
Rob Mueller wrote: > We are in the process of moving from reiserfs to ext3 (with dir_index). ZFS with mirrors across 2 separate storage devices means never having to say you're sorry. I sleep very well at night.

LMTP AUTH security exposure?

2007-10-09 Thread Vincent Fox
So I want to do LMTP between an MX pool and Cyrus backends. The common way I read about doing this is with a shared LMTP account from the MX pool to the backends. So it becomes a postman sort of account, with the password in plaintext in various places and of course transiting the network that way. Is t

Re: LMTP AUTH security exposure?

2007-10-10 Thread Vincent Fox
Ken Murchison wrote: > You can set service-specific options, such as "lmtp_allowplaintext: > yes". The service-specific prefix must match a service name in > cyrus.conf. That seems a more than sufficient solution, thanks! We set allowplaintext: no lmtp_allowplaintext: yes It works like a cha
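The resulting imapd.conf fragment would look roughly like this (assuming the LMTP service is named "lmtp" in cyrus.conf, since the service-specific prefix must match that name):

```
# imapd.conf: refuse plaintext authentication globally...
allowplaintext: no

# ...but allow it for the lmtp service only (the "lmtp_" prefix
# must match the service name defined in cyrus.conf)
lmtp_allowplaintext: yes
```

This keeps user-facing IMAP/POP logins off plaintext while still permitting the shared postman-style account on the LMTP port.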

UC Davis Cyrus Incident September 2007

2007-10-16 Thread Vincent Fox
So here's the story of the UC Davis (no, not Berkeley) Cyrus conversion. We have about 60K active accounts, and another 10K that are forwards, etc. 10 UWash servers that were struggling to keep up with a load that by 2006 was running around 2 million incoming emails a day, before spam dumpage, et

squatter running longer than 24 hours

2007-10-21 Thread Vincent Fox
I have seen squatter run more than 24 hours. This is on a large mail filesystem. I've seen it start up a second one while the first is still running. Should I: 1) Forget about squatter 2) Remove from cyrus.conf, run from cron every other day 3) Find some option to cyrus.conf for same effect as
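Option 2 above might be sketched like this (paths, flags, and the schedule are illustrative assumptions, not taken from any particular install):

```
# cyrus.conf: comment out the daily squatter entry in EVENTS, e.g.
# EVENTS {
#     ...
#     # squatter  cmd="squatter -r user" at=0230
# }

# crontab for the cyrus user: run every other day at 02:30 instead
30 2 */2 * * /usr/cyrus/bin/squatter -r user
```

Moving it to cron also makes it easy to wrap the command in a lockfile check so a second squatter never starts while the first is still running.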

Re: Reducing ZFS blocksize to improve Cyrus write performance ?

2010-08-09 Thread Vincent Fox
For what Cyrus is doing on Solaris with ZFS, the recordsize seems nearly negligible. With all the caching in the way, and how ZFS orders transactions, it's about the last tunable I'd worry about. Here's what works well for us; add this to /etc/system: * Turn off ZFS cache flushing set zfs:

Re: Reducing ZFS blocksize to improve Cyrus write performance ?

2010-08-09 Thread Vincent Fox
On Mon, 2010-08-09 at 17:22 +0200, Eric Luyten wrote: > Folks, did you consider, measure and/or carry out a change of the default 128 KB blocksize? To answer your question more directly than in my last post: we did some testing with Bonnie++ prior to deployment, and changing recordsize didn't
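For anyone wanting to repeat that kind of test: recordsize is a per-dataset ZFS property and only affects files written after the change (the dataset name here is made up):

```
# Drop the recordsize from the 128K default to 8K on a test dataset
zfs set recordsize=8K tank/mailtest

# Confirm the current value
zfs get recordsize tank/mailtest

# Then benchmark it, e.g. with Bonnie++ as an unprivileged user
bonnie++ -d /tank/mailtest -u cyrus
```

Because existing files keep their old block layout, each recordsize under test needs a freshly populated dataset for the comparison to mean anything.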

Re: Reducing ZFS blocksize to improve Cyrus write performance ?

2010-08-26 Thread Vincent Fox
So to my mind, the downside of disabling ZFS cache flushes is: data on disk may not be as current in the unlikely event of a power outage. In point of fact, MOST filesystems do not operate in journalled data mode anyhow, and most people just don't realize this. The default for Linux EXT filesystems
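To make the ext3 point concrete: the journalling mode is chosen at mount time, and the default is ordered mode (only metadata is journalled; file data is written out before its metadata commits), not full data journalling. A hypothetical fstab line:

```
# /etc/fstab: data=ordered is the ext3 default, spelled out here;
# full data journalling would be data=journal, at a write cost
/dev/sdb1  /var/spool/imap  ext3  defaults,data=ordered  0 2
```

So a mail spool on a default ext3 mount already accepts roughly the same "recent data may be lost on power failure" trade-off being discussed for ZFS with cache flushing disabled.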

Re: competition

2010-09-20 Thread Vincent Fox
On 09/20/2010 06:59 AM, Marc Patermann wrote: > But where does Cyrus IMAPd stand today? > It may be Murder/Aggregator - but how do you win people over when, on > first contact, just needing a simple IMAP server, they are pointed to > another product, which they then stay with? Umm, what? We run
