On Tue, 2009-09-29 at 14:50 +0200, Simon Matter wrote:
[...]
> The interesting point is that the discussion started as a ZFS vs.
> $ANY_OTHER_FS thing but it quickly turned out that the filesystem is only
> one part of the picture. If your storage fails on the block level I doubt
> the filesystem ma
On Wed, Sep 30, 2009 at 01:01:28AM -0400, Brian Awood wrote:
> On Tuesday 29 September 2009 @ 18:41, Bron Gondwana wrote:
> >
> > Possibly the secret is that we use IPAddr2 from linux-ha to force
> > ARP flushes, and we transfer the primary IP address between
> > machines, so nothing else needs to
On Tuesday 29 September 2009 @ 18:41, Bron Gondwana wrote:
>
> Possibly the secret is that we use IPAddr2 from linux-ha to force
> ARP flushes, and we transfer the primary IP address between
> machines, so nothing else needs to know - we just shut down one end
> and bring up the other with the IP
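For anyone unfamiliar with the mechanics, the takeover really is just two steps, which the IPaddr2 resource agent wraps up with monitoring and cleanup: bring the floating address up on the new node, then send unsolicited (gratuitous) ARP so switches and neighbours refresh their caches immediately. A minimal Python sketch of those two steps, with a made-up interface name and address (in practice you would let Pacemaker/heartbeat drive IPaddr2 rather than script this by hand):

    # Sketch only: what an IP takeover boils down to on Linux, assuming
    # iproute2 and iputils-arping are installed. Interface and address
    # below are hypothetical.
    import subprocess

    SERVICE_IP = "10.1.1.100"   # hypothetical floating service address
    PREFIX = "24"
    NIC = "eth0"                # hypothetical interface for the service IP

    def take_over_ip():
        # Bring the floating address up as a secondary address on this node.
        subprocess.check_call(
            ["ip", "addr", "add", "%s/%s" % (SERVICE_IP, PREFIX), "dev", NIC])
        # Send unsolicited (gratuitous) ARP so neighbours update their caches
        # right away instead of waiting for stale entries to expire.
        subprocess.check_call(
            ["arping", "-U", "-c", "3", "-I", NIC, SERVICE_IP])

    def release_ip():
        # Drop the address again when handing the service back.
        subprocess.check_call(
            ["ip", "addr", "del", "%s/%s" % (SERVICE_IP, PREFIX), "dev", NIC])

    if __name__ == "__main__":
        take_over_ip()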
On Tue, Sep 29, 2009 at 09:19:13AM -0400, Brian Awood wrote:
>
>
> On Tuesday 29 September 2009 @ 06:59, Bernd Petrovitsch wrote:
> > On Mon, 2009-09-28 at 15:33 -0700, Vincent Fox wrote:
> > [...]
> >
> > > Really I've looked at fsck too many times in my life and
> > > don't ever want to again.
With my ever-growing experience with these things, I'm tending to
think that application-level HA solutions are a much more robust way
of dealing with the potential failure modes of hardware or software.
While this doesn't mean you shouldn't buy reasonably robust hardware
(not the cheapest
Simon Matter wrote:
What I'm really wondering is: what filesystem disasters have others seen? How
many times was it fsck only, and how many times was it really broken? I'm not
talking about laptop and desktop users but about production systems in a
production environment with production class hardware a
Bron Gondwana wrote:
It's an interesting one. For real reliability, I want to
have multiple replication targets supported cleanly.
So the issues for me with Cyrus replication:
1) Is it working? Is the replica actually up to date
and, more importantly, what if I switch to it and there
is some
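One concrete way to answer "is the replica actually up to date" is to compare IMAP STATUS output for a sample of mailboxes on both ends. The sketch below uses nothing Cyrus-specific, just imaplib from Python; the hostnames, credentials and mailbox list are placeholders, and a real check would walk the full mailbox list and alert on mismatches rather than just printing them:

    # Sketch: spot-check replication by comparing STATUS (MESSAGES, UIDNEXT,
    # UIDVALIDITY) on master and replica. All names below are hypothetical.
    import imaplib

    MASTER = "imap-master.example.com"
    REPLICA = "imap-replica.example.com"
    USER, PASSWORD = "cyrus-admin", "secret"
    MAILBOXES = ["user/alice", "user/bob"]   # sample of mailboxes to check

    def status(host, mailbox):
        # Return the raw STATUS response for one mailbox on one server.
        conn = imaplib.IMAP4_SSL(host)
        try:
            conn.login(USER, PASSWORD)
            typ, data = conn.status(mailbox, "(MESSAGES UIDNEXT UIDVALIDITY)")
            return data[0]
        finally:
            conn.logout()

    for mbox in MAILBOXES:
        m, r = status(MASTER, mbox), status(REPLICA, mbox)
        state = "ok " if m == r else "LAG"
        print("%s %s\n  master : %s\n  replica: %s" % (state, mbox, m, r))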
On Tuesday 29 September 2009 @ 06:59, Bernd Petrovitsch wrote:
> On Mon, 2009-09-28 at 15:33 -0700, Vincent Fox wrote:
> [...]
>
> > Really I've looked at fsck too many times in my life and
> > don't ever want to again. Anyone who tells me "oh yes but
>
> Especially not in the >100GB area.
We h
> On Tue, Sep 29, 2009 at 09:45:53AM +0200, Simon Matter wrote:
>> What I'm really wondering is: what filesystem disasters have others seen?
>> How many times was it fsck only, and how many times was it really broken?
>> I'm not talking about laptop and desktop users but about production systems in
On Tue, Sep 29, 2009 at 12:59:39PM +0200, Bernd Petrovitsch wrote:
> Does anyone have scripts/tools to - at least - simulate 1000s of
> (semi-realistic) parallel IMAP clients on a big setup?
Yeah, I've got one. I need to tidy it up a bit more though, and
they're a bit less realistic than I'd like.
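For anyone wanting to roll their own in the meantime, the general shape of such a tool fits in a page of Python with imaplib and threads. This is not the script referred to above, just an illustration; the host, credentials and client count are placeholders, and to be even semi-realistic the per-client loop would need a richer command mix (body FETCHes, APPENDs, IDLE) and several driver machines once you push into the thousands of connections:

    # Toy IMAP load generator: each thread logs in, selects INBOX, and loops
    # doing the sort of polling a real client does. Names are hypothetical.
    import imaplib
    import random
    import threading
    import time

    HOST = "imap.example.com"    # hypothetical test backend
    USERS = [("testuser%04d" % i, "password") for i in range(200)]
    DURATION = 300               # seconds each simulated client stays connected

    def client(user, password):
        conn = imaplib.IMAP4(HOST)
        conn.login(user, password)
        typ, data = conn.select("INBOX")
        deadline = time.time() + DURATION
        while time.time() < deadline:
            conn.noop()                        # keepalive, like an idle client
            conn.search(None, "UNSEEN")        # poll for new mail
            if int(data[0]) > 0:
                conn.fetch("1:*", "(FLAGS)")   # re-sync flags
            time.sleep(random.uniform(5, 30))  # think time between polls
        conn.logout()

    threads = [threading.Thread(target=client, args=u) for u in USERS]
    for t in threads:
        t.start()
        time.sleep(0.05)   # ramp up gradually instead of a thundering herd
    for t in threads:
        t.join()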
On Mon, 2009-09-28 at 15:33 -0700, Vincent Fox wrote:
[...]
> Really I've looked at fsck too many times in my life and
> don't ever want to again. Anyone who tells me "oh yes but
Especially not in the >100GB area.
[...]
> The antiquated filesystems that 99% of admins tolerate and
> work with every
On Tue, Sep 29, 2009 at 09:45:53AM +0200, Simon Matter wrote:
> What I'm really wondering is: what filesystem disasters have others seen? How
> many times was it fsck only, and how many times was it really broken? I'm not
> talking about laptop and desktop users but about production systems in a
> product
> Bron Gondwana wrote:
>> I assume you mean 500 gigs! We're switching from 300 to 500 on new
>> filesystems because we have one business customer that's over 150Gb
>> now and we want to keep all their users on the one partition for
>> folder sharing. We don't do any murder though.
>>
>>
> Oops ye
On Mon, Sep 28, 2009 at 03:33:44PM -0700, Vincent Fox wrote:
> Bron Gondwana wrote:
> >I assume you mean 500 gigs! We're switching from 300 to 500 on new
> >filesystems because we have one business customer that's over
> >150Gb now and we want to keep all their users on the one partition
> >for
>
Bron Gondwana wrote:
I assume you mean 500 gigs! We're switching from 300 to 500 on new
filesystems because we have one business customer that's over 150Gb
now and we want to keep all their users on the one partition for
folder sharing. We don't do any murder though.
Oops yes. I meant 5
On Mon, Sep 28, 2009 at 08:59:43AM -0700, Vincent Fox wrote:
> Lucas Zinato Carraro wrote:
> >
> >- Is there a recommended size for a backend server (e.g. 1 TB)?
> >
> Hardware-wise your setup is probably overkill.
> Nothing wrong with that.
Yeah, that's a fair few machines! Nice to have space for i
2009/9/28 Lucas Zinato Carraro:
> Hi, I am deploying cyrus-imapd in my organization:
>
> My organization needs to have:
>
> - 75000 mailboxes
> - 1 simultaneous connections (IMAP)
> - 3000 mailboxes with 1 GB and 65000 with 200 MB
> - 12 TB for spool ( EMC Clarion Storage )
> - 15 servers wit
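A quick back-of-the-envelope from those figures: the two quota classes cover 68000 of the 75000 mailboxes, and if every one of them sat exactly at quota the spool would already need more than 12 TB, so the sizing clearly counts on average usage staying well below quota (as it normally does). Assuming decimal units (1 GB = 1000 MB, 1 TB = 1000 GB):

    # Worst case, with every mailbox exactly at quota (unrealistic on purpose).
    big   = 3000 * 1000          # 3000 mailboxes x 1 GB, expressed in MB
    small = 65000 * 200          # 65000 mailboxes x 200 MB, in MB
    total_tb = (big + small) / 1000.0 / 1000.0
    print("full-quota spool usage: %.1f TB vs. a 12 TB spool" % total_tb)
    # -> 16.0 TB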
Lucas Zinato Carraro wrote:
- Is there a recommended size for a backend server (e.g. 1 TB)?
Hardware-wise your setup is probably overkill.
Nothing wrong with that.
Sizing of filesystems IMO should be based on your
tolerance for long fsck during a disaster. I run ZFS which
has none of that and d