Pascal Hambourg wrote:
> Bob Proulx wrote:
> > Do you have any articles or blogs or postings you have written that
> > would summarize raid alternatives? I would enjoy reading whatever you
> > have written on the subject. Or if you recommended other references.
>
> There is no need to write a
Bob Proulx wrote:
>
> Pascal Hambourg wrote:
>> Bob Proulx wrote:
>>> I favor RAID6's extra redundancy for more safety but I
>>> still use RAID1 too.
>> RAID 1 can provide as much or more redundancy than RAID 6.
>> RAID 1 on 3 disks provides as much redundancy as RAID 6.
>> RAID 1 on 4 disks
Hello Pascal,
Pascal Hambourg wrote:
> Bob Proulx wrote:
> > I favor RAID6's extra redundancy for more safety but I
> > still use RAID1 too.
>
> RAID 1 can provide as much or more redundancy than RAID 6.
> RAID 1 on 3 disks provides as much redundancy as RAID 6.
> RAID 1 on 4 disks provides mo
Bob Proulx wrote:
>
> I favor RAID6's extra redundancy for more safety but I
> still use RAID1 too.
RAID 1 can provide as much or more redundancy than RAID 6.
RAID 1 on 3 disks provides as much redundancy as RAID 6.
RAID 1 on 4 disks provides more redundancy than RAID 6 (but half the
usable space).
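To make that comparison concrete, mdadm builds such n-way mirrors directly; a minimal sketch with made-up device names (not taken from the thread):

  # three-way mirror: any two of the three members may fail, matching RAID 6's tolerance
  mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
  # four-way mirror: survives any three failures, but only one disk's worth of capacity is usable
  mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2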
Gary Dale wrote:
> Mart van de Wege wrote:
> > The problem is not that RAID5 does not provide resilience against a
> > single disk failure. The problem is that with modern disk capacities,
> > the chances of *another* disk failing while the array is rebuilding have
Gary Dale wrote:
> On 05/12/14 03:35 PM, Pascal Hambourg wrote:
>>
> You can think of the RAID algorithms as parity checks. A mirror is even
> parity.
This point of view is a bit of a stretch, but I can understand it and won't argue.
> While the disks are not physically assigned to be data or
> pari
On 05/12/14 03:35 PM, Pascal Hambourg wrote:
Hello,
Some mistakes in what you wrote.
Gary Dale wrote:
RAID 1 and RAID 5 are both immune to single disk
failures in their most common configurations (1 or more data disks with
1 parity disk). RAID 10 is also immune to single disk failure but us
On 12/05/2014 03:35 PM, Pascal Hambourg wrote:
Linux can use a special RAID 10 mode (mirror+stripe) with two or three
disks.
with 6 disks, RAID 6 will give you double the capacity of 4 disks
or get you immunity to 3 disks failing.
RAID 6 can survive 2 disk failures regardless of the number of
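The "special RAID 10 mode" referred to is the md raid10 personality, which unlike a classic nested RAID 1+0 does not need four or more disks; a sketch, with device names assumed:

  # md raid10 on just two members; with the default near=2 layout this behaves like a mirror
  mdadm --create /dev/md0 --level=10 --raid-devices=2 /dev/sda1 /dev/sdb1
  # the same personality also accepts an odd number of members, e.g. three
  mdadm --create /dev/md1 --level=10 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2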
Hello,
Some mistakes in what you wrote.
Gary Dale wrote:
>
> RAID 1 and RAID 5 are both immune to single disk
> failures in their most common configurations (1 or more data disks with
> 1 parity disk). RAID 10 is also immune to single disk failure but uses
> half the disks for parity.
RAID
On 05/12/14 05:01 AM, Mart van de Wege wrote:
Gary Dale writes:
On 04/12/14 12:51 PM, Dan Ritter wrote:
On Thu, Dec 04, 2014 at 02:13:59PM +0100, mad wrote:
Hi!
I wanted to create a RAID5 with lvm. The basic setup is something like
lvcreate --type raid5 -i 2 -L 1G -n my_lv my_vg
which
Gary Dale writes:
> On 04/12/14 12:51 PM, Dan Ritter wrote:
>> On Thu, Dec 04, 2014 at 02:13:59PM +0100, mad wrote:
>>> Hi!
>>>
>>> I wanted to create a RAID5 with lvm. The basic setup is something like
>>>
>>> lvcreate --type raid5 -i 2 -L
On 04/12/14 12:51 PM, Dan Ritter wrote:
On Thu, Dec 04, 2014 at 02:13:59PM +0100, mad wrote:
Hi!
I wanted to create a RAID5 with lvm. The basic setup is something like
lvcreate --type raid5 -i 2 -L 1G -n my_lv my_vg
which would mean 3 physical drives would be used in this RAID5. But can
I
On Thu, Dec 04, 2014 at 02:13:59PM +0100, mad wrote:
> Hi!
>
> I wanted to create a RAID5 with lvm. The basic setup is something like
>
> lvcreate --type raid5 -i 2 -L 1G -n my_lv my_vg
>
> which would mean 3 physical drives would be used in this RAID5. But can
> I s
Hi!
I wanted to create a RAID5 with lvm. The basic setup is something like
lvcreate --type raid5 -i 2 -L 1G -n my_lv my_vg
which would mean 3 physical drives would be used in this RAID5. But can
I specify that one drive is missing as it is possible with mdadm?
TIA
mad
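For comparison, the mdadm feature mad is referring to looks like this (device names are placeholders, not from the original post):

  # create a 3-device RAID5 with one slot deliberately left empty; the array starts degraded
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 missing
  # add the missing member later and let it resync
  mdadm --add /dev/md0 /dev/sdc1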
On Wed, 18 Apr 2012 15:03:04 +0200, Daniel Koch wrote:
>- Zero all the superblocks on all the disks. (mdadm --zero-superblock
>/dev/sd{b..d})
>- Recreate the array with the "--assume-clean" option. (mdadm
>--create --verbose /dev/md0 --auto=yes --assume-clean --level=5
>--raid-
- Zero all the superblocks on all the disks. (mdadm --zero-superblock
/dev/sd{b..d})
- Recreate the array with the "--assume-clean" option. (mdadm --create
--verbose /dev/md0 --auto=yes --assume-clean --level=5 --raid-devices=3
/dev/sdb /dev/sdc /dev/sdd)
- Mark it possibly dirty
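Condensed, the recipe quoted above comes down to the two commands below (the parameters are the poster's own; note that --assume-clean only makes sense if the array is recreated with exactly the same level, device order, chunk size and metadata version as before, since no resync is performed):

  # wipe the stale superblocks on every member
  mdadm --zero-superblock /dev/sd{b..d}
  # recreate the array in place without touching or resyncing the data
  mdadm --create --verbose /dev/md0 --auto=yes --assume-clean \
        --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd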
Hi all
I have a home server with a raid5.
After switching to a new case my graphics card died and I replaced it
with an old but very powerful card just to try things.
Unfortunately the PSU wasn't strong enough and there were some crashes.
I could fix the problems of the other disks but
Hey there,
I want to grow my md-raid5, which has 4 disks of 1.5TB each. I have a disk here
which is 2.0TB, and I need to know if it is safe to do that.
I know that if I use a 2.0TB disk it will appear as 1.5TB.
Has anybody done that before? Anything to keep an eye on?
Thank you for your help
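The short answer is that md caps every member at the size of the smallest one, so the extra 0.5TB simply stays unused, which is what the poster already suspects. A sketch of growing the array with the new disk, device name invented:

  # add the 2.0TB disk and reshape from 4 to 5 members; only 1.5TB of the new disk is used
  mdadm --add /dev/md0 /dev/sde1
  mdadm --grow /dev/md0 --raid-devices=5
  # afterwards, enlarge whatever sits on top of /dev/md0 (filesystem or LVM PV)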
On Tue, Nov 01, 2011 at 12:45:33PM +0100, Denny Schierz wrote:
> With RAID5 it's 22TB, which seems to be better ...
>
> But what is best?
>
> Creating only one big MD seems like a bad idea, I think.
>
> On FreeBSD, with ZFS and 2 x raidz, I get roughly 21TB.
>
> an
you will have a 20TB RAID 5 array, or you can build a 22TB RAID 5 array
without a spare drive.
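The difference between the two layouts is just whether a hot spare is reserved at creation time; a generic sketch with an invented disk count and device names (the poster's actual drives are not shown in the excerpt):

  # seven disks, one kept aside as a hot spare
  mdadm --create /dev/md0 --level=5 --raid-devices=6 --spare-devices=1 /dev/sd[b-h]1
  # all seven disks active: one disk's worth more usable capacity, but no spare
  mdadm --create /dev/md0 --level=5 --raid-devices=7 /dev/sd[b-h]1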
md8 : active raid5 sdm1[6] sdn1[5](S) sdl1[3] sdk1[2] sdj1[1] sdi1[0]
      7814051840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
      [>....................]  recovery = 4.1%
Only 8TB ...
md8 : active raid5 sdm1[6] sdn1[5](S) sdl1[3] sdk1[2] sdj1[1] sdi1[0]
      7814051840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
      [>....................]  recovery = 4.1% (80504380/1953512960) finish=330.3min speed=
Hello there,
On Sat, Dec 04, 2010 at 01:56:36PM +0100, Reiner Buehl wrote:
> I would like to create another RAID5 array consisting of 4 2TB
> disks. When creating the array with
>
> /sbin/mdadm --create --verbose /dev/md5 --level=5
> --raid-devices=4 /dev/sdm1 /dev/sdn1 /dev
Hi all,
on my Debian Lenny system (with latest updates), I have the following
problem:
I would like to create another RAID5 array consisting of 4 2TB disks.
When creating the array with
/sbin/mdadm --create --verbose /dev/md5 --level=5 --raid-devices=4
/dev/sdm1 /dev/sdn1 /dev/sdo1
cool name put forth on 6/15/2010 9:47 AM:
> Mike Bird-2 wrote:
>>
>> Do you really want to use hardware RAID? Note that if your hardware RAID
>> controller dies, you're going to need a compatible replacement or you'll
>> lose all of your data.
> utter bullshit. Is it true that not EVERY replace
On Tuesday 15 June 2010 09:47:11 cool name wrote:
> Mike Bird-2 wrote:
> > Do you really want to use hardware RAID? Note that if your hardware RAID
> > controller dies, you're going to need a compatible replacement or you'll
> > lose all of your data.
>
> utter bullshit. Is it true that not EVERY
controller will do the
job, but most out there will; RAID is standardized, you know. Only if you
use a proprietary RAID version are you screwed; anything else will most
likely be fine.
Hi
just picked up an Adaptec 51645 - having some problems with the GUI (can't
log in), and if I try to boot off the Adaptec LUN it crashed GRUB with a
"magic failed" error.
A
On Sat, Apr 17, 2010 at 11:26 PM, Camaleón wrote:
> On Sat, 17 Apr 2010 07:57:51 -0500, Stan Hoeppner wrote:
>
> > Camaleón put forth on
On Sat, 17 Apr 2010 07:57:51 -0500, Stan Hoeppner wrote:
> Camaleón put forth on 4/17/2010 3:12 AM:
>> On Sat, 17 Apr 2010 07:24:20 +0200, Israel Garcia wrote:
>>
>>> On Sat, Apr 17, 2010 at 2:12 AM, Stan Hoeppner wrote:
>>
What PCIe RAID card are you using?
>>> Adaptec AAC-RAID card inside
Camaleón put forth on 4/17/2010 3:12 AM:
> On Sat, 17 Apr 2010 07:24:20 +0200, Israel Garcia wrote:
>
>> On Sat, Apr 17, 2010 at 2:12 AM, Stan Hoeppner wrote:
>
>>> What PCIe RAID card are you using?
>> Adaptec AAC-RAID card inside a supermicro server.
Which Adaptec model, specifically? Some of
On Sat, 17 Apr 2010 07:24:20 +0200, Israel Garcia wrote:
> On Sat, Apr 17, 2010 at 2:12 AM, Stan Hoeppner wrote:
>> What PCIe RAID card are you using?
> Adaptec AAC-RAID card inside a supermicro server.
I wish you the best (similar setup here and bad experience with adaptec
raid cards) :-(
As
On Sat, Apr 17, 2010 at 2:12 AM, Stan Hoeppner wrote:
> Israel Garcia put forth on 4/16/2010 11:11 AM:
>> Hi, maybe OT but I'm trying to install Debian Lenny on a RAID5 with
>> 4TB. The OS sees one big sda with 4TB; I can partition /boot, / and swap,
>> but it only recognizes
Israel Garcia put forth on 4/16/2010 11:11 AM:
> Hi, maybe OT but I'm trying to install Debian Lenny on a RAID5 with
> 4TB. The OS sees one big sda with 4TB; I can partition /boot, / and swap,
> but it only recognizes 78GB instead of the 4TB available. Is this because
> the partition is boot
On Sat, 17 Apr 2010 03:34:36 +1000, Tim Clewlow wrote:
>> Hi, maybe OT but I'm trying to install Debian Lenny on a RAID5 with
>> 4TB. The OS sees one big sda with 4TB; I can partition /boot, / and swap,
>> but it only recognizes 78GB instead of the 4TB available. Is this because
> Hi, maybe OT but I'm trying to install Debian Lenny on a RAID5 with
> 4TB. The OS sees one big sda with 4TB; I can partition /boot, / and swap,
> but it only recognizes 78GB instead of the 4TB available. Is this
> because
> the partition is bootable? Can I install the Debian OS on this hdd
Hi, maybe OT but I'm trying to install Debian Lenny on a RAID5 with
4TB. The OS sees one big sda with 4TB; I can partition /boot, / and swap,
but it only recognizes 78GB instead of the 4TB available. Is this because
the partition is bootable? Can I install the Debian OS on this hdd with
these 3 partitions
also sprach Henrique de Moraes Holschuh [2009.04.30.1925
+0200]:
> Where can I read about that, before I freak out needlessly? :-)
In the ANNOUNCE files and the upstream changelog, both of which are
not included in the Debian package as a result of some weird chain of
events.
Try
http://git.de
On Thu, 30 Apr 2009, martin f krafft wrote:
> also sprach Henrique de Moraes Holschuh [2009.04.30.1615
> +0200]:
> > 1.0 superblocks are widely used. Please don't do that. Either
> > implement support for both, or use mdadm (which knows both).
> >
> > This kind of stuff really should not be do
On Thu, 30 Apr 2009, Boyd Stephen Smith Jr. wrote:
> He who codes, decides. Either put forth the effort to
> design/write/review/test/apply the patch or don't be surprised if your
> preferences are not highly weighted in the resulting code.
Will lvm upstream take something that makes lvm align
In <20090430141527.gc28...@khazad-dum.debian.net>, Henrique de Moraes Holschuh
wrote:
>On Wed, 29 Apr 2009, Boyd Stephen Smith Jr. wrote:
>> In <20090429192819.gb1...@khazad-dum.debian.net>, Henrique de Moraes
>> Holschuh wrote:
>> >On Wed, 29 Apr 2009, martin f krafft wrote:
>> >> One should thus
also sprach Henrique de Moraes Holschuh [2009.04.30.1615
+0200]:
> 1.0 superblocks are widely used. Please don't do that. Either
> implement support for both, or use mdadm (which knows both).
>
> This kind of stuff really should not be done halfway, it can
> surprise someone into a data-loss sce
On Wed, 29 Apr 2009, Boyd Stephen Smith Jr. wrote:
> In <20090429192819.gb1...@khazad-dum.debian.net>, Henrique de Moraes
> Holschuh wrote:
> >On Wed, 29 Apr 2009, martin f krafft wrote:
> >> also sprach Henrique de Moraes Holschuh [2009.04.29.1522
> +0200]:
> >> > As always, you MUST forbid lvm
In <20090429192819.gb1...@khazad-dum.debian.net>, Henrique de Moraes
Holschuh wrote:
>On Wed, 29 Apr 2009, martin f krafft wrote:
>> also sprach Henrique de Moraes Holschuh [2009.04.29.1522
+0200]:
>> > As always, you MUST forbid lvm from ever touching md component
>> > devices even if md is offli
On Wed, 29 Apr 2009, martin f krafft wrote:
> also sprach Henrique de Moraes Holschuh [2009.04.29.1522
> +0200]:
> > As always, you MUST forbid lvm from ever touching md component
> > devices even if md is offline, and that includes whatever crap is
> > inside initrds...
>
> One should thus fix LV
also sprach martin f krafft [2009.04.29.1847 +0200]:
> Absolutely. I've put Neil Brown, upstream mdadm on Bcc so he can
> pitch in if this is something he'd implement or accept patches for.
On second thought, there *is* the sysfs interface, but I don't think
it exposes md-specific information unl
also sprach Boyd Stephen Smith Jr. [2009.04.29.1808
+0200]:
> I'm down with LVM running something like:
> mdadm --has-superblock /dev/block/device
> for devices that have a PV header and refusing to automatically treat them
> as PVs if it returns success, as long as it doesn't affect md-on-LVM.
In <20090429141142.ga19...@piper.oerlikon.madduck.net>, martin f krafft
wrote:
>also sprach Boyd Stephen Smith Jr. [2009.04.29.1557
+0200]:
>> >One should thus fix LVM to be a bit more careful...
>>
>> LVM allows you to strictly limit what devices it scans for PV headers.
>
>That's not enough; L
also sprach Boyd Stephen Smith Jr. [2009.04.29.1557
+0200]:
> >One should thus fix LVM to be a bit more careful...
>
> LVM allows you to strictly limit what devices it scans for PV headers.
That's not enough; LVM knows that md exists, and LVM-on-md is about
99.8% of the sane use-cases, so L
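The setting being referred to is the filter option in /etc/lvm/lvm.conf, which restricts which block devices LVM scans for PV headers; a sketch only, assuming a machine where every PV lives on an md array:

  # /etc/lvm/lvm.conf (excerpt)
  devices {
      # scan md arrays for PV headers, ignore everything else (including raw md components)
      filter = [ "a|^/dev/md|", "r|.*|" ]
  }

On Debian the early-boot copy matters too, so the initramfs has to be rebuilt (update-initramfs -u) after changing it, which is exactly the initrd concern raised in this thread.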
In <20090429134916.gb17...@piper.oerlikon.madduck.net>, martin f krafft wrote:
>also sprach Henrique de Moraes Holschuh [2009.04.29.1522
+0200]:
>> As always, you MUST forbid lvm from ever touching md component
>> devices even if md is offline, and that includes whatever crap is
>> inside initrds..
also sprach Henrique de Moraes Holschuh [2009.04.29.1522
+0200]:
> As always, you MUST forbid lvm from ever touching md component
> devices even if md is offline, and that includes whatever crap is
> inside initrds...
One should thus fix LVM to be a bit more careful...
--
.''`. martin f. kraf
On Tue, 21 Apr 2009, Alex Samad wrote:
> > Learned my lesson though - no real reason to have root on lvm - it's now
> > on 3-disk RAID 1.
>
> always thought this, KISS
Exactly. I have servers with 4, sometimes 6-disk RAID1 root partitions,
because of KISS: all disks in the raid set should be
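The layout being described, an n-way RAID1 for the root filesystem, is easy to reproduce; a sketch with hypothetical devices (the grub-install loop is an assumption about how one would make each member bootable, not something stated in the message):

  # root mirrored across every disk in the box
  mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
  # put the boot loader on each member so any surviving disk can still boot
  for d in /dev/sda /dev/sdb /dev/sdc; do grub-install "$d"; done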
Alex Samad wrote:
On Mon, Apr 20, 2009 at 08:03:38PM -0400, Miles Fidelman wrote:
I just got badly bit by this. I had root on lvm on md (RAID 1). After
one of the component drives died, lvm came back up on top of the other
component drive - during boot from initrd - making it impossible
On Mon, Apr 20, 2009 at 08:03:38PM -0400, Miles Fidelman wrote:
> Alex Samad wrote:
>> On Mon, Apr 20, 2009 at 08:26:21PM +0100, Seri wrote:
>>
>>> Hoping somebody might be able to provide me with some pointers that
>>> may just help me recover a lot of data, a home system with no backups
>>> bu
Alex Samad wrote:
On Mon, Apr 20, 2009 at 08:26:21PM +0100, Seri wrote:
Hoping somebody might be able to provide me with some pointers that
may just help me recover a lot of data, a home system with no backups
but a lot of photos, yes I know the admin rule, backup backup backup,
but I ran out
On Mon, Apr 20, 2009 at 08:26:21PM +0100, Seri wrote:
> Hoping somebody might be able to provide me with some pointers that
> may just help me recover a lot of data, a home system with no backups
> but a lot of photos, yes I know the admin rule, backup backup backup,
> but I ran out of backup space
md0
devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde)
The next morning when I woke up mdadm had worked its magic and I had a
bigger RAID5 set, I use whole drives so I don't partition them first,
I'm hoping this isn't my first mistake.
I also run LVM2 on top of my RAID5 set, so I then issued
On Fri, 16 Jan 2009 04:44:11 +1100, "Alex Samad"
said:
> On Wed, Jan 14, 2009 at 07:45:24PM -0800, whollyg...@letterboxes.org
> wrote:
> >
> > I wonder if that would have helped with the larger drives. Too late:)
> > The smaller drives shouldn't have been bad. All I did to them was fail
>
>
On Wed, Jan 14, 2009 at 07:45:24PM -0800, whollyg...@letterboxes.org wrote:
> On Tue, 13 Jan 2009 15:07:37 +1100, "Alex Samad"
> said:
> > On Mon, Jan 12, 2009 at 07:46:08PM -0800, whollyg...@letterboxes.org
> > wrote:
> > >
[snip]
>
> I wonder if that would have helped with the larger drives.
On Tue, 13 Jan 2009 15:07:37 +1100, "Alex Samad"
said:
> On Mon, Jan 12, 2009 at 07:46:08PM -0800, whollyg...@letterboxes.org
> wrote:
> >
> > On Fri, 09 Jan 2009 10:45:56 +, "John Robinson"
> > said:
> > > On 09/01/2009 02:41, whollyg...@letterboxes.org wrote:
>
> [snip]
>
> >
> > But, t
On Mon, Jan 12, 2009 at 07:46:08PM -0800, whollyg...@letterboxes.org wrote:
>
> On Fri, 09 Jan 2009 10:45:56 +, "John Robinson"
> said:
> > On 09/01/2009 02:41, whollyg...@letterboxes.org wrote:
[snip]
>
> But, this has all become moot anyway. When I put the original, smaller
> drives bac
On Fri, 09 Jan 2009 10:45:56 +, "John Robinson"
said:
> On 09/01/2009 02:41, whollyg...@letterboxes.org wrote:
> > But anyway, I don't think that is going to matter. The issue I am trying to
> > solve is how to de-activate the bitmap. It was suggested on the linux-raid
> > list tha
On 09/01/2009 02:41, whollyg...@letterboxes.org wrote:
But anyway, I don't think that is going to matter. The issue I am trying to
solve is how to de-activate the bitmap. It was suggested on the linux-raid
list that my problem may have been caused by running the grow op on an active
bitmap an
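As for the concrete question of switching the write-intent bitmap off and back on around a grow, mdadm exposes that through --grow as well (a sketch; whether doing so would have avoided the poster's problem is exactly what the thread is debating):

  mdadm --grow /dev/md0 --bitmap=none      # remove the internal write-intent bitmap
  # ... run the reshape / grow operation ...
  mdadm --grow /dev/md0 --bitmap=internal  # put the bitmap back afterwards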
> > mdadm -S /dev/md/0
> >
>
> [snip]
>
> >
> > Hope you can help,
>
> Hi
>
> I have grown raid5 arrays either by disk number or disk size, I have
> only ever used --grow and never used the -z option
>
> I would re-copy the info over fro
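The two kinds of grow mentioned here map onto two different --grow invocations (array and device names are placeholders):

  # grow by disk number: add a member, then reshape onto it
  mdadm --add /dev/md0 /dev/sde1
  mdadm --grow /dev/md0 --raid-devices=4
  # grow by disk size, once every member has been replaced by a bigger disk
  mdadm --grow /dev/md0 --size=max         # -z is the short form of --size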
which seems to disassemble
> the array. Maybe this is because I've only tried it on the degraded
> array this problem has left with. At any rate, after
>
> mdadm -S /dev/md/0
>
[snip]
>
> Hope you can help,
Hi
I have grown raid5 arrays either by disk number or
On Tue, 6 Jan 2009 09:17:46 +1100, "Neil Brown" said:
> On Monday January 5, jpis...@lucidpixels.com wrote:
> > cc linux-raid
> >
> > On Mon, 5 Jan 2009, whollyg...@letterboxes.org wrote:
> >
> > >
[snip]
> > > The RAID reassembled fine at each boot as the drives
> > > were replaced one by o
It seems to me I couldn't do anything with
the array until it was activated.
Hmm, just noticed something else that seems weird. There seem
to be 10 and 11 place holders (3 drives each) in the "Array Slot"
field below which is respectively 4 and 5 more places than there
are dri
On Monday January 5, jpis...@lucidpixels.com wrote:
> cc linux-raid
>
> On Mon, 5 Jan 2009, whollyg...@letterboxes.org wrote:
>
> > I think growing my RAID array after replacing all the
> > drives with bigger ones has somehow hosed the array.
> >
> > The system is Etch with a stock 2.6.18 kernel
cc linux-raid
On Mon, 5 Jan 2009, whollyg...@letterboxes.org wrote:
I think growing my RAID array after replacing all the
drives with bigger ones has somehow hosed the array.
The system is Etch with a stock 2.6.18 kernel and
mdadm v. 2.5.6, running on an Athlon 1700 box.
The array is 6 disk (5
I think growing my RAID array after replacing all the
drives with bigger ones has somehow hosed the array.
The system is Etch with a stock 2.6.18 kernel and
mdadm v. 2.5.6, running on an Athlon 1700 box.
The array is 6 disk (5 active, one spare) RAID 5
that has been humming along quite nicely f
rote:
>> On Sat, Sep 13, 2008 at 11:39:47PM -0400, Michael Habashy wrote:
>>> the current state of my system -- it is up and running, but I do not
>>> believe it will be for long:
>>>
>>> [EMAIL PROTECTED]:~# cat /proc/mdstat
>>> Personalities : [raid1] [raid6]
/mdstat
>> Personalities : [raid1] [raid6] [raid5] [raid4]
>> md1 : active raid5 sda5[3](F) sdb5[4](F) sdd5[2] sdc5[1]
>> 781417344 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
>>
>> md2 : active raid1 sda6[0]
>> 9767424 blocks [2/1] [
On Sat, Sep 13, 2008 at 11:39:47PM -0400, Michael Habashy wrote:
> the current state of my system -- it is up and running, but I do not
> believe it will be for long:
>
> [EMAIL PROTECTED]:~# cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md1 : active raid5 sda5[3](F)
On Mon, Jun 23, 2008 at 11:30:32PM -0400, Matt Gracie wrote:
>
> Alex Samad wrote:
> > On Sun, Jun 22, 2008 at 12:28:03PM -0400, Matt Gracie wrote:
> >
> >> [snip]
[snip]
> mogwai:~# uname -a
> Linux mogwai 2.6.25-2-686 #1 SMP Tue May 27 15:38:35
failure.
>
>> sounds like you have done all the things I would have done.
>
>> can you post a cat /proc/mdstat
mogwai:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[0] sdc1[2] sda1[1]
732587712 blocks level 5, 64k chunk, algorith
On Sun, Jun 22, 2008 at 12:28:03PM -0400, Matt Gracie wrote:
>
[snip]
>
> The problem is that when I tried, using "mdadm /dev/md0 --add
> /dev/sdd1", the rebuild would kick off and then fail after a short time,
> marking all four drives as fault
For quite a while now, I've been running a software RAID5 using external
firewire disks on a Debian Unstable system. The exact architecture was
four 250 GB disks on one firewire controller, for a total of 750 GB of
usable space.
Last night, w
On Mon, May 05, 2008 at 12:59:52AM +0200, Dexter Filmore wrote:
> On Friday, 2 May 2008 at 22:47:07, Alex Samad wrote:
> > On Fri, May 02, 2008 at 02:45:04PM +0200, Dexter Filmore wrote:
> > > So here's the story:
> >
> > [snip]
>
> >
> > another thing you can try is entering busybox during the in
On Friday, 2 May 2008 at 22:47:07, Alex Samad wrote:
> On Fri, May 02, 2008 at 02:45:04PM +0200, Dexter Filmore wrote:
> > So here's the story:
>
> [snip]
>
> > Now: what's going on here? both onboard 3114 and pci 3114 controllers are
> > handled by the same kernel module, so either initrd sees all
On Sunday, 4 May 2008 at 14:49:19, martin f krafft wrote:
> also sprach Dexter Filmore <[EMAIL PROTECTED]> [2008.05.03.1723 +0100]:
> > initrd has its own log...?
>
> No, it just prints to the console.
>
> I suggest you add break=bottom to the kernel command line (and
> remove the raid=noautodetect
also sprach Dexter Filmore <[EMAIL PROTECTED]> [2008.05.03.1723 +0100]:
> initrd has its own log...?
No, it just prints to the console.
I suggest you add break=bottom to the kernel command line (and
remove the raid=noautodetect, which you don't need) and then reboot,
inspect the console output an
On Fri, May 02, 2008 at 02:45:04PM +0200, Dexter Filmore wrote:
> So here's the story:
[snip]
> Now: what's going on here? both onboard 3114 and pci 3114 controllers are
> handled by the same kernel module, so either initrd sees all or none.
> Why would it not wanna see the 5th disk from initrd,
also sprach Dexter Filmore <[EMAIL PROTECTED]> [2008.05.02.1345 +0100]:
> Two resyncs later I decided to reconf mdadm to *not* start from
> the initrd and not auto-assemble at boot time. I then assembled
> the array manually and tadaa, all fine, array works and is synced.
>
> Now: what's going on
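On Debian of that era, "reconf mdadm to not start from the initrd" was typically done through the package's debconf prompts, followed by assembling by hand; a sketch, with the exact prompt wording treated as an assumption about that package version:

  dpkg-reconfigure mdadm     # answer 'none' when asked which arrays to start from the initramfs
  update-initramfs -u        # regenerate the initramfs so the change takes effect at boot
  mdadm --assemble --scan    # assemble the arrays manually (or from a late boot script)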
So here's the story:
Software raid5 on debian etch with 2.6.22 kernel from backports.
Hardware: Asus K8N-E Deluxe, nForce3/250Gb chipset.
Has:
2 sATA ports from the nF3 (sata_nv)
4 sATA ports from an onboard Silicon Image 3114 (sata_sil)
4 sATA ports from an PCI controller, Silicon Image
On 2008-01-17 02:15:55, Scott Gifford wrote:
> Also, some hardware RAID systems require the system to be offline to
> do a rebuild, which is less than ideal.
Never had such Hardware-Raids...
Thanks, Greetings and nice Day
Michelle Konzack
Tamay Dogan Network
Debian GNU/Linux Consult
You know, after reading all the howtos and such, I created a RAID 5
array with just three commands
(after creating the partitions with fdisk)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/
sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext3 /dev/md0
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.co
[EMAIL PROTECTED] writes:
[...]
> The motherboard I'm using is an Intel D945GNT. It has an Intel Matrix
> driver that will let me do RAID 5 in the BIOS. Then, Linux should see
> one big whopping device. That sounds like the easiest solution to me.
>
> Option two is to use linux software RAID.
On Wed, Jan 16, 2008 at 09:28:30AM -0800, David Brodbeck wrote:
> On Jan 15, 2008, at 6:14 PM, Gregory Seidman wrote:
> >On Tue, Jan 15, 2008 at 03:40:15PM -0800, [EMAIL PROTECTED] wrote:
> >>On Jan 15, 9:10 am, Gregory Seidman >>[EMAIL PROTECTED]> wrote:
> >>I have an existing setup that uses fo
On Jan 15, 2008, at 6:14 PM, Gregory Seidman wrote:
On Tue, Jan 15, 2008 at 03:40:15PM -0800, [EMAIL PROTECTED] wrote:
On Jan 15, 9:10 am, Gregory Seidman wrote:
anything that
kills your motherboard (short circuit in the memory, CPU
overheating, etc.)
also takes out your RAID controller. T
On Tue, Jan 15, 2008 at 09:14:26PM -0500, Gregory Seidman wrote:
> On Tue, Jan 15, 2008 at 03:40:15PM -0800, [EMAIL PROTECTED] wrote:
> > On Jan 15, 9:10 am, Gregory Seidman > [EMAIL PROTECTED]> wrote:
> > > anything that
> > > kills your motherboard (short circuit in the memory, CPU overheating,
On Tue, Jan 15, 2008 at 03:40:15PM -0800, [EMAIL PROTECTED] wrote:
> On Jan 15, 9:10 am, Gregory Seidman [EMAIL PROTECTED]> wrote:
> > anything that
> > kills your motherboard (short circuit in the memory, CPU overheating, etc.)
> > also takes out your RAID controller. To be able to access your d
On Jan 15, 9:10 am, Gregory Seidman wrote:
> anything that
> kills your motherboard (short circuit in the memory, CPU overheating, etc.)
> also takes out your RAID controller. To be able to access your data you'll
> need the same RAID controller
doh! I hadn't thought of that. Thanks. Software
On Tue, Jan 15, 2008 at 06:24:52AM -0800, [EMAIL PROTECTED] wrote:
> I'm looking for general advice/tips/admonitions of doom. I'm getting
> ready to build a file server and I want to have some data redundancy.
> I've ordered four 350G SATA drives and I plan to put them into some
> kind of RAID 5 c
re failure out of the equation.
Anyway, that's my justification for using software RAID.
Incidentally, you might consider RAID10 instead of RAID5, depending on how
much disk space you are willing to trade for reliability. RAID5 with four
350GB disks gives you 1050GB of space and the abi
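(For the arithmetic behind that comparison: RAID5 over n disks of size s yields (n-1) x s, so four 350GB disks give 3 x 350GB = 1050GB and tolerate any single failure, while RAID10 over the same four disks yields 2 x 350GB = 700GB and tolerates one failure per mirror pair, i.e. sometimes two failures if they land in different pairs.)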
I'm looking for general advice/tips/admonitions of doom. I'm getting
ready to build a file server and I want to have some data redundancy.
I've ordered four 350G SATA drives and I plan to put them into some
kind of RAID 5 configuration. The boot disk will be a separate IDE
drive.
The motherboard
id1.
> ^^ do you mean two?
yep
> >
> > I could remove one of the 200G drives - run the raid5 in degraded mode and
> > create a raid1 in degraded mode.
> >
> > I also use lvm2, so moving most of it should be easy ?
> >
> > I don't have any media t
On Mon, Oct 15, 2007 at 01:43:42PM +1000, Alex Samad wrote:
>
> I have a system with 3 x 200G setup as a raid 5 config, I have just purchased
> to 750g drives with the thought of using raid1.
^^ do you mean two?
>
> I could remove one of the 200G drive - run the raid5 in de
Hi
I have a system with 3 x 200G setup as a raid 5 config, I have just purchased
to 750g drives with the thought of using raid1.
I could remove one of the 200G drives - run the raid5 in degraded mode and
create a raid1 in degraded mode.
I also use lvm2, so moving most of it should be easy ?
I
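The plan sketched in this message translates into roughly the following sequence (volume group and device names are invented, the step of pulling one 200G member to free a port is omitted, and both arrays run degraded for a while, so a backup first is the sane precaution):

  # build a degraded RAID1 on one of the new 750G drives
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 missing
  pvcreate /dev/md1
  vgextend vg0 /dev/md1
  pvmove /dev/md0 /dev/md1         # migrate the LVM extents off the old RAID5
  vgreduce vg0 /dev/md0
  mdadm --stop /dev/md0            # retire the old array
  mdadm /dev/md1 --add /dev/sde1   # add the second 750G drive and let the mirror sync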
also sprach Hal Vaughan <[EMAIL PROTECTED]> [2007.08.21.0003 +0200]:
>Any suggestions or warnings from others so I can make sure this doesn't
>happen again are appreciated. Remember, the two drives I've already
>removed that mdadm had said were bad have tested out as fine. I suspect
>
On Monday 20 August 2007, martin f krafft wrote:
> also sprach Hal Vaughan <[EMAIL PROTECTED]> [2007.08.20.2114 +0200]:
> > It did on the first failure. Then another failed and I turned the
> > machine off. When I got 2 more drives, I put them in and it
> > rebuilt the array using 3 of the drives
also sprach Hal Vaughan <[EMAIL PROTECTED]> [2007.08.20.2114 +0200]:
> It did on the first failure. Then another failed and I turned the
> machine off. When I got 2 more drives, I put them in and it rebuilt
> the array using 3 of the drives with one as a spare. Then when it
> failed this time
On Monday 20 August 2007, martin f krafft wrote:
> also sprach Hal Vaughan <[EMAIL PROTECTED]> [2007.08.20.2022
+0200]:
> > In this case, I had 4 drives, so if one failed, then the spare
> > should have been added but that hadn't happened.
>
> I thought your original email said it did resync the s