On Thu, May 10, 2007 at 09:49:01AM -0400, Nik Conwell wrote:
>
> On Jan 12, 2007, at 10:43 PM, Rob Mueller wrote:
>
> >Yep, this means we need quite a bit more software to manage the
> >setup, but now that it's done, it's quite nice and works well. For
> >maintenance, we can safely fail all m
On Jan 12, 2007, at 10:43 PM, Rob Mueller wrote:
Yep, this means we need quite a bit more software to manage the
setup, but now that it's done, it's quite nice and works well. For
maintenance, we can safely fail all masters off a server in a few
minutes, about 10-30 seconds a store. Then w
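For anyone wondering what per-store failover like that might look like in practice, here is a purely illustrative Python sketch. The helper functions are hypothetical stand-ins for site-specific machinery (database updates, proxy reconfiguration); this is not FastMail's actual tooling.

# Illustrative only: fail every master store off one host, one store at a time.
# stores_on(), promote_replica() and point_clients_at() are hypothetical helpers,
# standing in for whatever a real setup uses to flip master/replica roles and
# repoint its frontends/proxies.

def fail_over_host(host, stores_on, promote_replica, point_clients_at):
    for store in stores_on(host):                    # each store has a master and a replica
        promote_replica(store.replica)               # replica takes over as master
        point_clients_at(store.name, store.replica)  # frontends follow the move
        # roughly 10-30 seconds per store, per the figures quoted above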
I agree that storage and replication are orthogonal issues. However, if a
lump of storage is no longer a single point of failure then you don't have
to invest (or gamble) quite as much to make that storage perfect.
Yes, that old maxim that each extra 9 in 99.99... reliability costs 10 times
mo
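A rough way to see why each extra nine gets so expensive: the downtime budget shrinks by a factor of ten at every step. A quick back-of-the-envelope calculation (my own numbers, nothing from the thread):

# Downtime per year allowed at each availability level ("nines").
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(2, 6):                            # 99% .. 99.999%
    unavailability = 10 ** -nines
    downtime = unavailability * MINUTES_PER_YEAR
    print(f"{1 - unavailability:.3%} uptime -> about {downtime:.0f} minutes of downtime a year")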
So you have multiple SANs? Or your SAN is still a potential SPOF?
Multiple SANs.
Nice if you can afford it :)
Very ;)
Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
On Mon, 12 Feb 2007, urgrue wrote:
A SAN really has nothing to do with replication. You have your data
somewhere (local or external disks, local/external RAID, NAS, SAN, etc.), and
you've got your various replication options (file-level, block-level, via
client, via server, etc.).
I agree that storage
I agree of course about avoiding SPOFs, but I do like a multi-tiered
approach, meaning multiple lines of defense. I use a SAN for its speed,
reliability, and ease of administration, but naturally I replicate
everything on the SAN and have "true" backups as well.
So you have multiple SANs? Or y
On Mon, 12 Feb 2007, urgrue wrote:
If it's using block-level replication, how does it offer instant recovery
from filesystem corruption? Does it track every block written to disk, and
can it thus roll back to effectively what was on disk at a particular instant
in time, so you then just remount the
If it's using block-level replication, how does it offer instant
recovery from filesystem corruption? Does it track every block written
to disk, and can it thus roll back to effectively what was on disk at a
particular instant in time, so you then just remount the filesystem
and the replay of the
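For what it's worth, the mechanism being asked about boils down to journalling every block write together with the data it overwrote, so the image can be wound back to any earlier instant. A toy Python sketch of that idea (not any particular SAN product's implementation):

# Toy model of point-in-time rollback for a block store: record (time, offset,
# previous contents) for every write, then undo writes newest-first to get back
# to an earlier instant. Illustrative only; not how any specific product works.

import time

class JournalledBlocks:
    def __init__(self):
        self.blocks = {}       # offset -> current block contents
        self.journal = []      # (timestamp, offset, previous contents or None)

    def write(self, offset, data):
        self.journal.append((time.time(), offset, self.blocks.get(offset)))
        self.blocks[offset] = data

    def rollback_to(self, timestamp):
        while self.journal and self.journal[-1][0] > timestamp:
            _, offset, previous = self.journal.pop()
            if previous is None:
                self.blocks.pop(offset, None)
            else:
                self.blocks[offset] = previous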
Fastmail doesn't use a SAN; as I understand it, they use external RAID arrays.
There are many ways to lose your data, one of these being filesystem
error, others being software bugs and human error. Block-level replication
(typically used in SANs) is very fast and uses few resources but doesn't
protect f
On 2/12/07 11:01 AM, David Carter wrote:
>
> I would be surprised if NFS worked given that it is only an approximation
> to a "real" Unix filesystem. Cyrus really hammers the filesystem.
NFS does not work with Cyrus. Been there, done that, didn't like
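The "hammering" is largely locking plus mmap()ed index and database files updated in place, which is exactly what NFS's client-side caching and weaker locking semantics handle badly. A minimal sketch of that access pattern (illustrative only, not Cyrus's actual code):

# Minimal sketch of the lock-then-mmap-then-update-in-place pattern that
# Cyrus-style daemons depend on. Illustrative only; not Cyrus source. Over NFS,
# client caching and mmap/locking semantics make this pattern unreliable.

import fcntl
import mmap

def bump_record(path, offset=0):
    with open(path, "r+b") as f:                # file must already exist and be non-empty
        fcntl.lockf(f, fcntl.LOCK_EX)           # advisory lock
        with mmap.mmap(f.fileno(), 0) as m:     # map the whole file
            m[offset] = (m[offset] + 1) % 256   # update a record in place
            m.flush()
        fcntl.lockf(f, fcntl.LOCK_UN)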
David Carter wrote:
Why do you need NFS?
The whole point of a SAN is distributed access to storage after all :).
A SAN distributes the disk, not the filesystem. I presume in this case he's
not using the SAN for its multiple-client-access features but just
because it's fast/reliable.
Some o
On Mon, 12 Feb 2007, Marten Lehmann wrote:
because NFS is the only standard network file protocol. I don't want to
load a proprietary driver into the kernel to access a SAN device.
Fair enough, although NFS is likely to be really rather slow compared to a
block device which just happens to be
Hello,
Why do you need NFS?
because NFS is the only standard network file protocol. I don't want to
load a proprietary driver into the kernel to access a SAN device.
The whole point of a SAN is distributed access to storage after all :).
So what's the point? SANs usually have redundant
On Mon, 12 Feb 2007, Marten Lehmann wrote:
what do you think about moving the mailspool to a central SAN storage
shared via NFS and having several blades to manage the mmapped files
like seen state, quota etc.?
Why do you need NFS?
The whole point of a SAN is distributed access to storage af
On 2/12/07 5:41 AM, Marten Lehmann wrote:
> Hello,
>
> what do you think about moving the mailspool to a central SAN storage
> shared via NFS and having several blades to manage the mmapped files
> like seen state, quota etc.? So still only one server
Hello,
what do you think about moving the mailspool to a central SAN storage
shared via NFS and having several blades to manage the mmapped files
like seen state, quota etc.? So still only one server is responsible for
a certain set of mailboxes, but these SAN boxes have nice backup and
redun
Thanks, that was interesting reading.
Is there any specific reason you didn't opt for a cluster filesystem?
Internal knowledge mostly. We were very familiar with the performance and
overall usage implications of local filesystems on locally attached
SATA-to-SCSI RAID boxes that we've been u
Thanks, that was interesting reading.
Is there any specific reason you didn't opt for a cluster filesystem?
Rob Mueller wrote:
>
>
>> May I ask how you are doing the actual replication, technically
>> speaking? shared fs, drbd, something over imap?
>
> We're using the replication engine in cyrus
May I ask how you are doing the actual replication, technically
speaking? shared fs, drbd, something over imap?
We're using the replication engine in cyrus 2.3
http://blog.fastmail.fm/?p=576
Rob
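For reference, the 2.3 replication engine is driven by sync_server on the replica and sync_client on the master (normally "sync_client -r" in rolling mode, fed by the sync log). A hedged sketch of a one-shot, per-user sync driven from Python; the replica hostname and user names are invented, and this assumes the stock sync_client options (-S for the replica host, -u for user mode):

# Hedged sketch: one-shot replication of a few users to a replica using Cyrus
# 2.3's sync_client. Host and user names are made up; production setups normally
# run "sync_client -r" (rolling mode) from cyrus.conf rather than one-shot runs.

import subprocess

REPLICA = "replica1.example.com"     # hypothetical replica host

def sync_users(users):
    for user in users:
        subprocess.run(["sync_client", "-S", REPLICA, "-u", user], check=True)

sync_users(["alice", "bob"])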
May I ask how you are doing the actual replication, technically
speaking? shared fs, drbd, something over imap?
Rob Mueller wrote:
>> as fastmail.fm seems to be a very big setup of cyrus nodes, I would be
>> interested to know how you organized load balancing and managing disk
>> space.
>>
>> Did
as fastmail.fm seems to be a very big setup of cyrus nodes, I would be
interested to know how you organized load balancing and managing disk
space.
Did you set up servers for a maximum of, let's say, 1000 mailboxes and then
use a new server? Or do you use a murder installation so you can move
Hello,
as fastmail.fm seems to be a very big setup of cyrus nodes, I would be
interested to know how you organized load balancing and managing disk space.
Did you set up servers for a maximum of, let's say, 1000 mailboxes and then
use a new server? Or do you use a murder installation so you c
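A murder (or any proxy layer in front of the backends) is what makes the second approach workable: new mailboxes can simply be placed on whichever backend store has the most room, and moved later if needed. A small sketch of that placement idea (backend names and usage figures invented, not FastMail's setup):

# Sketch of "place new mailboxes wherever there is room", which a murder or
# proxy layer makes transparent to clients. The backends and usage numbers are
# invented; a real setup would pull disk usage from monitoring and then create
# the mailbox on the chosen backend.

backends = {                       # backend store -> fraction of disk currently used
    "store1.example.com": 0.81,
    "store2.example.com": 0.64,
    "store3.example.com": 0.72,
}

def pick_backend_for_new_mailbox():
    return min(backends, key=backends.get)   # least-full backend

print(pick_backend_for_new_mailbox())        # -> store2.example.com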