On Sun, 02 May 2010, Mike Bird wrote:
> On Sun May 2 2010 13:24:30 Alexander Samad wrote:
> > My system used to become close to unusable on the 1st Sunday of the month
> > when mdadm did its resync. I had to write my own script so it did not do
> > multiple at the same time, turn off the hung process timer and set cpufreq
> > to performance.
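The slowdown Alexander describes can usually be tamed without a custom script: the md driver exposes sysctls that cap the bandwidth a resync or check may use. A minimal sketch, assuming the standard /proc interface (the 20000 KB/s figure is only an example):

    # Throttle background resync/check so the box stays usable.
    # speed_limit_min is the rate md keeps even under I/O load;
    # speed_limit_max is the ceiling used when the disks are otherwise idle.
    echo 1000  > /proc/sys/dev/raid/speed_limit_min    # KB/s per device
    echo 20000 > /proc/sys/dev/raid/speed_limit_max    # KB/s per device

The same keys (dev.raid.speed_limit_min and dev.raid.speed_limit_max) can go in /etc/sysctl.conf to survive a reboot.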
Hi, Hugo:
On Tuesday 04 May 2010 20:25:52 Hugo Vanwoerkom wrote:
> martin f krafft wrote:
> > also sprach Hugo Vanwoerkom [2010.05.04.1808 +0200]:
> >> I forget your specifics, but you do RAID *and* backup regularly to an
> >> external lvm2?
> >
> > RAID is not a backup solution, it's an availability measure.
Eduardo M KALINOWSKI wrote:
On Tue, 04 May 2010, Hugo Vanwoerkom wrote:
martin f krafft wrote:
RAID is not a backup solution, it's an availability measure.
But as data availability goes up by using RAID doesn't the need for
backing up that same data go down? Or is this just semantics?
RAID does not prevent against you deleting or corrupting the data itself; for that you still need backups.
On Tue, 2010-05-04 at 14:50 -0500, Ron Johnson wrote:
> On 05/04/2010 11:08 AM, Hugo Vanwoerkom wrote:
> [snip]
> >
> > I forget your specifics, but you do RAID *and* backup regularly to an
> > external lvm2?
> >
>
> No, no RAID for me *at home*. But at work I manage databases on all
> sorts of
On 05/04/2010 11:08 AM, Hugo Vanwoerkom wrote:
[snip]
I forget your specifics, but you do RAID *and* backup regularly to an
external lvm2?
No, no RAID for me *at home*. But at work I manage databases on all
sorts of (to use a quaint old phrase) super-minicomputers, and if
they ever needed
On Tue, 04 May 2010, Hugo Vanwoerkom wrote:
martin f krafft wrote:
RAID is not a backup solution, it's an availability measure.
But as data availability goes up by using RAID doesn't the need for
backing up that same data go down? Or is this just semantics?
RAID does not prevent against you deleting or corrupting the data itself; for that you still need backups.
martin f krafft wrote:
also sprach Hugo Vanwoerkom [2010.05.04.1808 +0200]:
I forget your specifics, but you do RAID *and* backup regularly to an
external lvm2?
RAID is not a backup solution, it's an availability measure.
But as data availability goes up by using RAID doesn't the need for
backing up that same data go down? Or is this just semantics?
also sprach Hugo Vanwoerkom [2010.05.04.1808 +0200]:
> I forget your specifics, but you do RAID *and* backup regularly to an
> external lvm2?
RAID is not a backup solution, it's an availability measure.
Ron Johnson wrote:
On 05/03/2010 03:45 AM, martin f krafft wrote:
also sprach Ron Johnson [2010.05.03.1039 +0200]:
Is that Q21?
http://git.debian.org/?p=pkg-mdadm/mdadm.git;a=blob_plain;f=debian/FAQ;hb=HEAD
Yes.
2. You were asked upon mdadm installation whether you wanted it, and
you chose to accept yes.
On 05/03/2010 08:04 PM, Sam Leon wrote:
Ron Johnson wrote:
On 05/02/2010 03:24 PM, Alexander Samad wrote:
[snip]
My system used to become close to unusable on the 1st Sunday of the
month when mdadm did its resync,
That sounds... wrong, on a jillion levels.
I would rather the array fail on a monthly resync than have it fail on a
rebuild after a real disk failure.
Ron Johnson wrote:
On 05/02/2010 03:24 PM, Alexander Samad wrote:
[snip]
My system used to become close to unusable on the 1st Sunday of the
month when mdadm did its resync,
That sounds... wrong, on a jillion levels.
I would rather the array fail on a monthly resync than have it fail on a
rebuild after a real disk failure.
On 05/03/2010 03:45 AM, martin f krafft wrote:
also sprach Ron Johnson [2010.05.03.1039 +0200]:
Is that Q21?
http://git.debian.org/?p=pkg-mdadm/mdadm.git;a=blob_plain;f=debian/FAQ;hb=HEAD
Yes.
2. You were asked upon mdadm installation whether you wanted it, and
you chose to accept yes.
also sprach Ron Johnson [2010.05.03.1039 +0200]:
> Is that Q21?
>
> http://git.debian.org/?p=pkg-mdadm/mdadm.git;a=blob_plain;f=debian/FAQ;hb=HEAD
Yes.
> >2. You were asked upon mdadm installation whether you wanted it, and
> >you chose to accept yes. dpkg-reconfigure mdadm if you don't
> >
On 05/03/2010 01:21 AM, martin f krafft wrote:
also sprach Ron Johnson [2010.05.02.2300 +0200]:
My system used to become close to unusable on the 1st Sunday of
the month when mdadm did its resync,
That sounds... wrong, on a jillion levels.
It sounds (and is) wrong in exactly two ways:
1. The operation is not a resync, check the FAQ
also sprach Ron Johnson [2010.05.02.2300 +0200]:
> >My system used to become close to unusable on the 1st Sunday of
> >the month when mdadm did its resync,
>
> That sounds... wrong, on a jillion levels.
It sounds (and is) wrong in exactly two ways:
1. The operation is not a resync, check the FAQ
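For reference, the monthly job is a read-only redundancy check driven by Debian's checkarray script, not a rebuild. Assuming a stock Debian mdadm package of that era (contents abridged; verify on your own system), the relevant pieces look roughly like this:

    # The schedule is a cron fragment shipped by the package, approximately:
    cat /etc/cron.d/mdadm
    #   57 0 * * 0 root ... /usr/share/mdadm/checkarray --cron --all --idle --quiet

    # Whether it runs at all is controlled here (or via dpkg-reconfigure mdadm):
    grep AUTOCHECK /etc/default/mdadm
    #   AUTOCHECK=true

    # Run or watch a check by hand:
    /usr/share/mdadm/checkarray --all
    cat /proc/mdstat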
On Sun, 2010-05-02 at 19:19 -0700, Mike Bird wrote:
> On Sun May 2 2010 13:24:30 Alexander Samad wrote:
> > My system used to become close to unusable on the 1st Sunday of the month
> > when mdadm did its resync. I had to write my own script so it did not do
> > multiple at the same time, turn off the hung process timer and set cpufreq
> > to performance.
On Sun May 2 2010 13:24:30 Alexander Samad wrote:
> My system used to become close to unusable on the 1st Sunday of the month
> when mdadm did its resync. I had to write my own script so it did not do
> multiple at the same time, turn off the hung process timer and set cpufreq
> to performance.
A l
On Sun, 2010-05-02 at 16:00 -0500, Ron Johnson wrote:
> On 05/02/2010 03:24 PM, Alexander Samad wrote:
> [snip]
> >
> > My system used to become close to unusable on the 1st Sunday of the month
> > when mdadm did its resync,
>
> That sounds... wrong, on a jillion levels.
depends
a...@max:~$
On 05/02/2010 03:24 PM, Alexander Samad wrote:
[snip]
My system used to become close to unusable on the 1st Sunday of the month when
mdadm did its resync,
That sounds... wrong, on a jillion levels.
On Mon, May 3, 2010 at 6:02 AM, Boyd Stephen Smith Jr. wrote:
> On Sunday 02 May 2010 06:00:38 Stan Hoeppner wrote:
[snip]
>
> Speeds on my md-RAID devices were comparable to speeds with my Areca HW RAID
> controller (16-port, PCI-X/SATA, battery powered 128MB cache). Number of
> drives varied fr
On Sunday 02 May 2010 06:00:38 Stan Hoeppner wrote:
> Good hardware RAID cards are really nice and give you some features you
> can't really get with md raid such as true "just yank the drive tray out"
> hot swap capability. I've not tried it, but I've read that md raid doesn't
> like it when you
On Friday 30 April 2010 19:10:52 Mark Allums wrote:
> or even btrfs for the data directories.
While I am beginning to experiment with btrfs, I wouldn't yet use it for data
you care about.
/boot, not until/if grub2 gets support for it. Even then, boot is generally
small and not often used, so y
Disclaimer: I'm partial to XFS
Tim Clewlow put forth on 5/1/2010 2:44 AM:
> My reticence to use ext4 / xfs has been due to long cache before
> write times being claimed as dangerous in the event of kernel lockup
> / power outage.
This is a problem with the Linux buffer cache implementation, not something
specific to ext4 or XFS.
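The "long cache before write" window is largely governed by the kernel's writeback sysctls rather than by the filesystem itself. As a rough illustration (standard vm knobs; the values are examples, not recommendations):

    # How long dirty pages may sit in the page cache before writeback.
    cat /proc/sys/vm/dirty_expire_centisecs     # e.g. 3000 = 30 seconds
    cat /proc/sys/vm/dirty_background_ratio     # % of RAM before background flush
    cat /proc/sys/vm/dirty_ratio                # % of RAM before writers block

    # Smaller values shrink the data-at-risk window after a crash or power
    # cut, at some cost in write batching:
    echo 500 > /proc/sys/vm/dirty_expire_centisecs
    echo 5   > /proc/sys/vm/dirty_background_ratio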
> On 4/30/2010 6:39 PM, Ron Johnson wrote:
>> On 04/26/2010 09:29 AM, Tim Clewlow wrote:
>>> Hi there,
>>>
>>> I'm getting ready to build a RAID 6 with 4 x 2TB drives to start,
>>
>> Since two of the drives (yes, I know the parity is striped across all
>> the drives, but "two drives" is still the effect) are used by striping,
>> RAID 6 with 4 drives doesn't seem worth it.
On 04/30/2010 07:10 PM, Mark Allums wrote:
[snip]
Someone pointed out what I have come to regard as the best solution, and
that is to make /boot and / (root) and the usual suspects ext3 for
safety, and use ext4 or XFS or even btrfs for the data directories.
That's what I do. / & /home are ext
On 4/30/2010 6:39 PM, Ron Johnson wrote:
On 04/26/2010 09:29 AM, Tim Clewlow wrote:
Hi there,
I'm getting ready to build a RAID 6 with 4 x 2TB drives to start,
Since two of the drives (yes, I know the parity is striped across all
the drives, but "two drives" is still the effect) are used by striping,
RAID 6 with 4 drives doesn't seem worth it.
On 04/26/2010 09:29 AM, Tim Clewlow wrote:
Hi there,
I'm getting ready to build a RAID 6 with 4 x 2TB drives to start,
Since two of the drives (yes, I know the parity is striped across
all the drives, but "two drives" is still the effect) are used by
striping, RAID 6 with 4 drives doesn't seem worth it.
On Mon, Apr 26, 2010 at 04:44:32PM -0500, Boyd Stephen Smith Jr. wrote:
> On Monday 26 April 2010 09:29:28 Tim Clewlow wrote:
> > I'm getting ready to build a RAID 6 with 4 x 2TB drives to start,
> > but the intention is to add more drives as storage requirements
> > increase.
>
> Since you seem fine with RAID 6, I'll assume you are also fine with RAID 5.
On Wednesday 28 April 2010 20:51:18 Stan Hoeppner wrote:
> Mike Bird put forth on 4/28/2010 5:48 PM:
> > On Wed April 28 2010 15:10:32 Stan Hoeppner wrote:
> >> Given the way most database engines do locking, you'll get zero
> >> additional seek benefit on reads, and you'll take a 4x hit on writes.
On Wed April 28 2010 18:51:18 Stan Hoeppner wrote:
> You seem to posses knowledge of these things that is 180 degrees opposite
> of fact. OLTP, or online transaction processing, is typified by retail or
> web point of sale transactions or call logging by telcos. OLTP databases
> are typically muc
Mike Bird put forth on 4/28/2010 5:48 PM:
> On Wed April 28 2010 15:10:32 Stan Hoeppner wrote:
>> Mike Bird put forth on 4/28/2010 1:48 PM:
>>> I've designed commercial database managers and OLTP systems.
>>
>> Are you saying you've put production OLTP databases on N-way software RAID
>> 1 sets?
>
On Wed April 28 2010 15:10:32 Stan Hoeppner wrote:
> Mike Bird put forth on 4/28/2010 1:48 PM:
> > I've designed commercial database managers and OLTP systems.
>
> Are you saying you've put production OLTP databases on N-way software RAID
> 1 sets?
No. I've used N-way RAID-1 for general servers -
Mike Bird put forth on 4/28/2010 1:48 PM:
> On Wed April 28 2010 01:44:37 Stan Hoeppner wrote:
>> On a sufficiently fast system that is not loaded, the user will likely see
>> no performance degradation, especially given Linux' buffered I/O
>> architecture. However, on a loaded system, such as a transactional
>> database server or busy ftp upload server,
On 04/26/2010 04:33 PM, Mike Bird wrote:
> On Mon April 26 2010 14:44:32 Boyd Stephen Smith Jr. wrote:
>> the chance of a double failure in a 5 (or less) drive array is minuscule.
>
> A flaky controller knocking one drive out of an array and then
> breaking another before you've rebuilt can really ruin your day.
On Wed April 28 2010 01:44:37 Stan Hoeppner wrote:
> On a sufficiently fast system that is not loaded, the user will likely see
> no performance degradation, especially given Linux' buffered I/O
> architecture. However, on a loaded system, such as a transactional
> database server or busy ftp upload server,
Stan,
We are on the same wavelength, I do the same thing myself. (Except that
I go ahead and mirror swap.) I love RAID 10.
MAA
On 4/28/2010 5:18 AM, Stan Hoeppner wrote:
Mark Allums put forth on 4/27/2010 10:31 PM:
For DIY, always pair those drives. Consider RAID 10, RAID 50, RAID 60,
etc.
Mark Allums put forth on 4/27/2010 10:31 PM:
> For DIY, always pair those drives. Consider RAID 10, RAID 50, RAID 60,
> etc. Alas, that doubles the number of drives, and intensely decreases
> the MTBF, which is the whole outcome you want to avoid.
This is my preferred mdadm 4 drive setup for a
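Stan's exact layout is cut off in the preview above; purely as an illustration of a 4-drive md RAID 10 along those lines (device names are placeholders):

    # Hypothetical 4-drive RAID 10: two mirrored pairs striped together.
    # The far-2 layout (-p f2) gives near-RAID-0 sequential reads at a
    # small cost in write locality.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 -p f2 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    cat /proc/mdstat      # watch the initial sync
    mkfs.ext3 /dev/md0    # then create the filesystem as usual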
Mike Bird put forth on 4/26/2010 3:04 PM:
> On Mon April 26 2010 12:29:43 Stan Hoeppner wrote:
>> Mark Allums put forth on 4/26/2010 12:51 PM:
>>> Put four drives in a RAID 1, you can suffer a loss of three drives.
>>
>> And you'll suffer pretty abysmal write performance as well.
>
> Write performance of RAID-1 is approximately as good as a simple drive.
On 4/27/2010 9:56 PM, Mark Allums wrote:
On 4/26/2010 1:37 PM, Mike Bird wrote:
On Mon April 26 2010 10:51:38 Mark Allums wrote:
RAID 6 (and 5) perform well when less than approximately 1/3 full.
After that, even reads suffer.
Mark,
I've been using various kinds of RAID for many many years and
was not aware of that. Do you have a link to an explanation?
On 4/26/2010 11:11 PM, Tim Clewlow wrote:
I don't know what your requirements / levels of paranoia are, but RAID 5 is
probably better than RAID 6 until you are up to 6 or 7 drives; the chance of a
double failure in a 5 (or less) drive array is minuscule.
.
I currently have 3 TB of data with another 1TB on its way fairly
soon,
On 4/26/2010 2:29 PM, Stan Hoeppner wrote:
Mark Allums put forth on 4/26/2010 12:51 PM:
Put four drives in a RAID 1, you can suffer a loss of three drives.
And you'll suffer pretty abysmal write performance as well.
Also keep in mind that some software RAID implementations allow more than
two drives in RAID 1, most often called a "mirror".
On 4/26/2010 1:37 PM, Mike Bird wrote:
On Mon April 26 2010 10:51:38 Mark Allums wrote:
RAID 6 (and 5) perform well when less than approximately 1/3 full.
After that, even reads suffer.
Mark,
I've been using various kinds of RAID for many many years and
was not aware of that. Do you have a link to an explanation?
Hi
I recently (last week) migrated from 10 x 1TB to an adaptec 51645 and 5
x 2TB drives.
my experience: I can't get grub2 and the adaptec to work, so I am
booting from a SSD I had.
I carved up the 5x2TB into 32G (mirror 1e - mirror stripe + parity) -
to boot from and mirrored against my ssd. the r
> I don't know what your requirements / levels of paranoia are, but RAID 5 is
> probably better than RAID 6 until you are up to 6 or 7 drives; the chance of a
> double failure in a 5 (or less) drive array is minuscule.
>
.
I currently have 3 TB of data with another 1TB on its way fairly
soon,
On Mon April 26 2010 14:44:32 Boyd Stephen Smith Jr. wrote:
> the chance of a double failure in a 5 (or less) drive array is minuscule.
A flaky controller knocking one drive out of an array and then
breaking another before you're rebuilt can really ruin your day.
Rebuild is generally the period of greatest stress on the remaining drives.
On Monday 26 April 2010 09:29:28 Tim Clewlow wrote:
> I'm getting ready to build a RAID 6 with 4 x 2TB drives to start,
> but the intention is to add more drives as storage requirements
> increase.
Since you seem fine with RAID 6, I'll assume you are also fine with RAID 5.
I don't know what your requirements / levels of paranoia are, but RAID 5 is
probably better than RAID 6 until you are up to 6 or 7 drives; the chance of a
double failure in a 5 (or less) drive array is minuscule.
On Mon April 26 2010 12:29:43 Stan Hoeppner wrote:
> Mark Allums put forth on 4/26/2010 12:51 PM:
> > Put four drives in a RAID 1, you can suffer a loss of three drives.
>
> And you'll suffer pretty abysmal write performance as well.
Write performance of RAID-1 is approximately as good as a simple drive.
Mark Allums put forth on 4/26/2010 12:51 PM:
> Put four drives in a RAID 1, you can suffer a loss of three drives.
And you'll suffer pretty abysmal write performance as well.
Also keep in mind that some software RAID implementations allow more than
two drives in RAID 1, most often called a "mirror".
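Purely for illustration of such an N-way mirror (mdadm accepts any member count; device names are placeholders):

    # Hypothetical 3-way RAID 1: every member holds a full copy, so any
    # two of the three drives can fail without losing data.
    mdadm --create /dev/md1 --level=1 --raid-devices=3 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1

    # Writes hit all members, so write throughput is roughly that of a
    # single drive; reads can be spread across members.
    cat /proc/mdstat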
On Mon April 26 2010 10:51:38 Mark Allums wrote:
> RAID 6 (and 5) perform well when less than approximately 1/3 full.
> After that, even reads suffer.
Mark,
I've been using various kinds of RAID for many many years and
was not aware of that. Do you have a link to an explanation?
Thanks,
--Mike
On 4/26/2010 11:57 AM, Tim Clewlow wrote:
I'm afraid that opinions of RAID vary widely on this list (no
surprise)
but you may be interested to note that we agree (a consensus) that
software-RAID 6 is an unfortunate choice.
.
Is this for performance reasons or potential data loss? I can live
with slow writes, reads should not be affected.
> I'm afraid that opinions of RAID vary widely on this list (no
> surprise)
> but you may be interested to note that we agree (a consensus) that
> software-RAID 6 is an unfortunate choice.
>
.
Is this for performance reasons or potential data loss? I can live
with slow writes, reads should not be affected.
On 4/26/2010 10:28 AM, Tim Clewlow wrote:
Ok, I found the answer to my second question - it fails the entire
disk. So the first question remains.
I just figured that out---and I see you have too.
The difference between what we would like it to do, and what it actually
does can be frustrating.
On 4/26/2010 9:29 AM, Tim Clewlow wrote:
Hi there,
I'm getting ready to build a RAID 6 with 4 x 2TB drives to start,
but the intention is to add more drives as storage requirements
increase.
My research/googling suggests ext3 supports 16TB volumes if block
size is 4096 bytes, but some sites suggest the 32 bit arch means it
is restricted
Ok, I found the answer to my second question - it fails the entire
disk. So the first question remains.
Does ext3 (and relevant utilities, particularly resize2fs and
e2fsck) on 32 bit i386 arch support 16TB volumes?
Regards, Tim.
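For what it's worth, the 16TB figure follows from ext3's 32-bit block numbers: with 4096-byte blocks at most 2^32 blocks can be addressed. A quick sanity check (bash arithmetic, nothing Debian-specific):

    # 2^32 blocks * 4096 bytes/block = 2^44 bytes = 16 TiB
    echo $(( (2**32 * 4096) / 1024**4 ))    # prints 16

    # Block size actually in use on an existing filesystem:
    tune2fs -l /dev/md0 | grep -i 'block size'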
Hi there,
I'm getting ready to build a RAID 6 with 4 x 2TB drives to start,
but the intention is to add more drives as storage requirements
increase.
My research/googling suggests ext3 supports 16TB volumes if block
size is 4096 bytes, but some sites suggest the 32 bit arch means it
is restricted
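On the plan to add drives later: with md this is normally done by adding the new disk and reshaping, then growing the filesystem. A rough sketch only (RAID 6 reshape support depends on the kernel and mdadm versions of the day; /dev/md0 and /dev/sdf1 are placeholders):

    # Add the new disk, then reshape the array to include it.
    mdadm --add /dev/md0 /dev/sdf1
    mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-reshape.bak

    # After the reshape completes, grow the filesystem to match.
    cat /proc/mdstat
    resize2fs /dev/md0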