On Sunday, July 18, 2021 09:37:53 AM David wrote:
> On Sun, 18 Jul 2021 at 21:08, wrote:
> > Interesting -- not surprising, makes sense, but something (for me, at
> > least) to keep in mind -- probably not a good idea to run on an old
> > drive that hasn't been backed up.
>
> Sorry if my language
On 7/18/21 2:29 PM, Urs Thuermann wrote:
David Christensen writes:
You should consider upgrading to Debian 10 -- more people run that and
you will get better support.
It's on my TODO list. As well as upgrading the very old hardware.
Currently, it's a Gigabyte P35-DS3L with an Intel Core2Duo
David Christensen writes:
> You should consider upgrading to Debian 10 -- more people run that and
> you will get better support.
It's on my TODO list. As well as upgrading the very old hardware.
Currently, it's a Gigabyte P35-DS3L with an Intel Core2Duo E8400 CPU
and 8 GB RAM. It's only my pr
On 7/18/21 2:16 AM, Reco wrote:
Hi.
On Sat, Jul 17, 2021 at 02:03:15PM -0700, David Christensen wrote:
But much more noticeable is the difference of data reads of the two
disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
from /dev/sdb compared to /dev/sda. Trying to f
On 2021-07-18 14:37, David wrote:
On Sun, 18 Jul 2021 at 21:08, wrote:
On Saturday, July 17, 2021 09:30:56 PM David wrote:
> The 'smartctl' manpage explains how to run and abort self-tests.
> It also says that a running test can degrade the performance of the drive.
Interesting -- not surp
On Sun, 18 Jul 2021 at 21:08, wrote:
> On Saturday, July 17, 2021 09:30:56 PM David wrote:
> > The 'smartctl' manpage explains how to run and abort self-tests.
> > It also says that a running test can degrade the performance of the drive.
> Interesting -- not surprising, makes sense, but somethi
On 7/17/21 6:30 PM, David wrote:
On Sun, 18 Jul 2021 at 07:03, David Christensen
wrote:
On 7/17/21 5:34 AM, Urs Thuermann wrote:
On my server running Debian stretch,
the storage setup is as follows:
Two identical SATA disks with 1 partition on each drive spanning the
whole drive, i.e. /dev/s
On Saturday, July 17, 2021 09:30:56 PM David wrote:
> The 'smartctl' manpage explains how to run and abort self-tests.
> It also says that a running test can degrade the performance of the drive.
Interesting -- not surprising, makes sense, but something (for me, at least)
to keep in mind -- proba
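For reference, the smartctl invocations the manpage describes look roughly like this (the device name is an example; self-tests need root and, as noted above, can degrade drive performance while running):

```shell
# Start a short offline self-test (use -t long for the extended test):
smartctl -t short /dev/sda

# Abort a self-test that is already running:
smartctl -X /dev/sda

# Review the results in the self-test log afterwards:
smartctl -l selftest /dev/sda
```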
Hi.
On Sat, Jul 17, 2021 at 02:03:15PM -0700, David Christensen wrote:
> > But much more noticeable is the difference of data reads of the two
> > disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
> > from /dev/sdb compared to /dev/sda. Trying to figure out the reason
>
On Sun, 18 Jul 2021 at 07:03, David Christensen
wrote:
> On 7/17/21 5:34 AM, Urs Thuermann wrote:
> > On my server running Debian stretch,
> > the storage setup is as follows:
> > Two identical SATA disks with 1 partition on each drive spanning the
> > whole drive, i.e. /dev/sda1 and /dev/sdb1.
On 7/17/21 5:34 AM, Urs Thuermann wrote:
On my server running Debian stretch,
You should consider upgrading to Debian 10 -- more people run that and
you will get better support.
I migrated to FreeBSD.
the storage setup is as follows:
Two identical SATA disks with 1 partition on each dri
Hi Urs,
Your plan to change the SATA cable seems wise - your various error
rates are higher than I have normally seen.
Also worth bearing in mind that Linux MD RAID 1 will satisfy all
read IO for a given operation from one device in the mirror. If
you have processes that do occasional big reads t
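That read-balancing behaviour shows up directly in the per-device counters. A minimal sketch, using a hypothetical /proc/diskstats excerpt for illustration (field 6 is sectors read; on a live system replace the here-doc with the real file):

```shell
# Hypothetical /proc/diskstats lines; sdb shows twice the reads of sda,
# as in the thread above.
cat > /tmp/diskstats.txt <<'EOF'
   8       0 sda 1066773 4306 115343422 11121 0 0 0 0 0 9564 11121
   8      16 sdb 2133546 8612 230686844 22242 0 0 0 0 0 9564 22242
EOF
# Print device name and sectors read, converted to KiB (512-byte sectors):
awk '{ printf "%s %d KiB read\n", $3, $6 / 2 }' /tmp/diskstats.txt > /tmp/reads.txt
cat /tmp/reads.txt
```

On a running box, `awk '/sd[ab] / { print $3, $6 }' /proc/diskstats` gives the live numbers.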
On 7/17/21 08:34, Urs Thuermann wrote:
Here, the noticeable lines are IMHO
Raw_Read_Error_Rate (208245592 vs. 117642848)
Command_Timeout (8 14 17 vs. 0 0 0)
UDMA_CRC_Error_Count(11058 vs. 29)
Do these numbers indicate a serious problem with my /dev/sda drive?
And i
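A quick way to compare attributes like these across two drives is to pull just the name and RAW_VALUE columns out of `smartctl -A` output. The attribute table below is a hypothetical sample (values taken from the thread) purely for illustration:

```shell
# Hypothetical excerpt of `smartctl -A /dev/sda` output:
cat > /tmp/sda_attrs.txt <<'EOF'
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   083   064   044    Pre-fail  Always       -       208245592
188 Command_Timeout         0x0032   100   098   000    Old_age   Always       -       8
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       11058
EOF
# Skip the header row; print attribute name and raw value for comparison:
awk 'NR > 1 { print $2, $NF }' /tmp/sda_attrs.txt > /tmp/sda_summary.txt
cat /tmp/sda_summary.txt
```

Running the same one-liner against both drives and diffing the two summaries makes divergence like the UDMA_CRC_Error_Count above (a classic cable symptom) easy to spot.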
I'm going to echo your final thought there: Replace the SATA cables with 2
NEW ones of the same model. Then see how it goes, meaning rerun the tests
you just ran. If possible, try to make the geometries of the cables as
similar as you can: roughly same (short?) lengths, roughly as straight and
cong
On 2021-01-24 21:23, mick crane wrote:
On 2021-01-24 20:10, David Christensen wrote:
Please tell us why you must put the OS and the backup images on the
same RAID mirror of two HDD's, and why you cannot add one (or two?)
more devices for the OS.
I think I'll go with the first and last sugges
Thanks Andy and Linux-Fan, for the detailed reply.
Andy Smith writes:
Hi Pankaj,
Not wishing to put words in Linux-Fan's mouth, but my own views
are…
On Mon, Jan 25, 2021 at 11:04:09AM +0530, Pankaj Jangid wrote:
> Linux-Fan writes:
>
> > * OS data bitrot is not covered, but OS single HDD failure is.
> > I achieve this by having OS and Swap
mick crane wrote:
> I think I'll go with the first and last suggestion to just have 2 disks
> in raid1.
> It seems that properly you'd want 2 disks in raid for the OS, 2 at least
> for the pool and maybe 1 for the cache.
> Don't have anything big enough I could put 5 disks in.
> I could probably g
Hi Pankaj,
Not wishing to put words in Linux-Fan's mouth, but my own views
are…
On Mon, Jan 25, 2021 at 11:04:09AM +0530, Pankaj Jangid wrote:
> Linux-Fan writes:
>
> > * OS data bitrot is not covered, but OS single HDD failure is.
> > I achieve this by having OS and Swap on MDADM RAID 1
> >
On Du, 24 ian 21, 23:21:38, Linux-Fan wrote:
> mick crane writes:
>
> > On 2021-01-24 17:37, Andrei POPESCU wrote:
>
> [...]
>
> > > If you want to combine Linux RAID and ZFS on just two drives you could
> > > partition the drives (e.g. two partitions on each drive), use the first
> > > partitio
Linux-Fan writes:
> * OS data bitrot is not covered, but OS single HDD failure is.
> I achieve this by having OS and Swap on MDADM RAID 1
> i.e. mirrored but without ZFS.
I am still learning.
1. By "by having OS and Swap on MDADM", did you mean the /boot partition
and swap.
2. Why did y
On 2021-01-24 20:10, David Christensen wrote:
On 2021-01-24 03:36, mick crane wrote:
Let's say I have one PC and 2 unpartitioned disks.
Please tell us why you must put the OS and the backup images on the
same RAID mirror of two HDD's, and why you cannot add one (or two?)
more devices for the
mick crane writes:
On 2021-01-24 17:37, Andrei POPESCU wrote:
[...]
If you want to combine Linux RAID and ZFS on just two drives you could
partition the drives (e.g. two partitions on each drive), use the first
partition on each drive for Linux RAID, install Debian (others will have
to confi
On Du, 24 ian 21, 17:50:06, Andy Smith wrote:
>
> Once it's up and running you can then go and create a second
> partition that spans the rest of each disk, and then when you are
> ready to create your zfs pool:
>
> > "zpool create tank mirror disk1 disk2"
>
> # zpool create tank mirror /dev/dis
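Spelled out with stable device paths rather than sdX names, which is the usual recommendation for pools, the command looks roughly like this (the by-id names below are placeholders; list the real ones with `ls -l /dev/disk/by-id/`):

```shell
# Create a mirrored pool from persistent device identifiers so the pool
# survives sdX renumbering (names are hypothetical examples):
zpool create tank mirror \
    /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL1 \
    /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL2
```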
On 2021-01-24 03:36, mick crane wrote:
Let's say I have one PC and 2 unpartitioned disks.
Please tell us why you must put the OS and the backup images on the same
RAID mirror of two HDD's, and why you cannot add one (or two?) more
devices for the OS.
David
Andy Smith writes:
>...
>So personally I would just do the install of Debian with both disks
>inside the machine, manual partitioning, create a single partition
>big enough for your OS on the first disk and then another one the
>same on the second disk. Mark them as RAID members, set them to
>RAID
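For anyone doing the same thing outside the installer, the command-line equivalent is roughly the following sketch (partition names are examples, and the commands are destructive to whatever is on them):

```shell
# Build the RAID 1 mirror from the two matching partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0

# Record the array so it assembles at boot, then refresh the initramfs:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```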
On 2021-01-24 17:37, Andrei POPESCU wrote:
On Du, 24 ian 21, 11:36:09, mick crane wrote:
I know I'm a bit thick about these things, what I'm blocked about is
where is the OS.
Let's say I have one PC and 2 unpartitioned disks.
Put one disk in PC and install Debian on it.
Ok
Install headers
Hi Mick,
On Sun, Jan 24, 2021 at 11:36:09AM +, mick crane wrote:
> I know I'm a bit thick about these things, what I'm blocked about is where
> is the OS.
Wherever you installed it.
> Let's say I have one PC and 2 unpartitioned disks.
> Put one disk in PC and install Debian on it.
I think y
On Du, 24 ian 21, 11:36:09, mick crane wrote:
>
> I know I'm a bit thick about these things, what I'm blocked about is where
> is the OS.
> Let's say I have one PC and 2 unpartitioned disks.
> Put one disk in PC and install Debian on it.
Ok
> Install headers and ZFS-utils.
> I put other disk in
On 2021-01-23 22:01, David Christensen wrote:
On 2021-01-23 07:01, mick crane wrote:
On 2021-01-23 12:20, Andrei POPESCU wrote:
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so
what's
On 2021-01-23 07:01, mick crane wrote:
On 2021-01-23 12:20, Andrei POPESCU wrote:
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so
what's
scattered about is on the running disks and th
mick crane writes:
On 2021-01-23 17:11, Linux-Fan wrote:
mick crane writes:
[...]
Please note that "root on ZFS" is possible but quite complicated:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian
%20Buster%20Root%20on%20ZFS.html
For my current system I actually use
On 2021-01-23 17:11, Linux-Fan wrote:
mick crane writes:
On 2021-01-23 12:20, Andrei POPESCU wrote:
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so
what's
scattered about is on the
mick crane writes:
On 2021-01-23 12:20, Andrei POPESCU wrote:
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and this new/old one is ju
On 2021-01-23 12:20, Andrei POPESCU wrote:
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so
what's
scattered about is on the running disks and this new/old one is just
backup
for them
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
> hello,
> I want to tidy things up as suggested.
> Have one old PC that I'll put 2 disks in and tidy everything up so what's
> scattered about is on the running disks and this new/old one is just backup
> for them.
> Can I assume that Debian installer
On 2021-01-22 15:10, David Christensen wrote:
A key issue with storage is bit rot.
I should have said "bit rot protection".
David
On 2021-01-22 14:26, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so
what's scattered about is on the running disks and this new/old one is
just backup for them.
Can I assume that Debian installer in some expert
mick crane writes:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and this new/old one is just backup
for them.
Can I assume that Debian installer in some expert mode will sort ou
On 10/26/2020 7:55 AM, Bill wrote:
Hi folks,
So we're setting up a small server with a pair of 1 TB hard disks
sectioned into 5x100GB Raid 1 partition pairs for data, with 400GB+
reserved for future uses on each disk.
Oh, also, why are you leaving so much unused space on the drives? On
This might be better handled on linux-r...@vger.kernel.org
On 10/26/2020 10:35 AM, Dan Ritter wrote:
Bill wrote:
So we're setting up a small server with a pair of 1 TB hard disks sectioned
into 5x100GB Raid 1 partition pairs for data, with 400GB+ reserved for
future uses on each disk.
On 10/26/20 4:55 AM, Bill wrote:
> lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5].
> blkid reveals that sda[1-5] and sdb[1-5] are still listed as
> TYPE="linux_raid_member".
>
> So first of all I'd like to be able to diagnose what's going on. What
> commands should I use for
Hi folks,
So we're setting up a small server with a pair of 1 TB hard
disks sectioned into 5x100GB Raid 1 partition pairs for data, with
400GB+ reserved for future uses on each disk. I'm not sure what
happened, we had the five pairs of disk partitions setup properly
through the installer without
Bill wrote:
> So we're setting up a small server with a pair of 1 TB hard disks sectioned
> into 5x100GB Raid 1 partition pairs for data, with 400GB+ reserved for
> future uses on each disk.
That's weird, but I expect you have a reason for it.
> I'm not sure what happened, we had the five pairs
Thank you! I will try this procedure this week.
Tim
On 9/18/2015 5:04 PM, linuxthefish wrote:
Tim,
From what I remember it's best to set it up when you're installing the
system, then you can install the bootloader to /boot in RAID 1.
https://blog.sleeplessbeastie.eu/2013/10/04/how-to-configure-
Tim,
From what I remember it's best to set it up when you're installing the
system, then you can install the bootloader to /boot in RAID 1.
https://blog.sleeplessbeastie.eu/2013/10/04/how-to-configure-software-raid1-during-installation-process/
is what I followed.
Thanks,
Edmund
On 18 September 2
Hello,
Gregory Seidman a écrit :
> I have two eSATA drives in a RAID 1, and smartd has started reporting
> errors on it:
>
> Device: /dev/sdb [SAT], 9 Currently unreadable (pending) sectors
> Device: /dev/sdb [SAT], 9 Offline uncorrectable sectors
>
> The first message, on June 11, w
On Wed, Dec 19, 2012 at 6:07 AM, Bob Proulx wrote:
> Note that after a power cycle even if the RAID 1 array needs to be
> sync'd between the mirrored disks that the system will still boot okay
> and will operate normally. I have no idea what other systems do but
> you can boot the system, log in
Bob Proulx wrote:
> The Linux software raid also has the capability to use a block bitmap
> to speed up resync after a crash because then it tracks which blocks
> are dirty.
> See the documentation on this mdadm command to configure an internal
> bitmap to speed up a re-sync after an event such
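The write-intent bitmap Bob mentions can be added to an existing array at any time; a sketch (the array name is an example):

```shell
# Add an internal write-intent bitmap so a post-crash resync only has to
# touch regions marked dirty, instead of the whole mirror:
mdadm --grow --bitmap=internal /dev/md0

# Confirm it took effect (look for "Intent Bitmap : Internal"):
mdadm --detail /dev/md0
```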
yudi v wrote:
> I am looking at using Debian software RAID mirroring and would like
> to know how it handles system crashes and disk failures.
It handles it quite well.
> My only experience with software RAID 1 is with windows 7 inbuilt
> option. Whenever the system does not shut down cleanly, up
On Thu, 22 Dec 2011 11:00:10 +1000, yudi v wrote:
>> I'd go for hardware RAID as long as there is a true and real hardware
>> RAID controller behind with a battery backup et al (in brief, a *good*
>> RAID controller, not the motherboard's one which are usually nothing
>> but fakeraid and a pile of
>
> Might be better to put LUKS on top of LVM instead of vice versa? Not sure.
>
>
By having LVM over LUKS, I will only have one container to unlock and from
what I understand Debian cannot unlock several LUKS containers at start-up
unlike Fedora.
My laptop currently has LVM over LUKS and works
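The LVM-over-LUKS layering described here looks roughly like this on top of an MD mirror (device, container, and volume names are placeholders; sizes are examples):

```shell
# Encrypt the RAID 1 device once, then build all of LVM inside the
# single unlocked container:
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptroot

pvcreate /dev/mapper/cryptroot
vgcreate vg0 /dev/mapper/cryptroot
lvcreate -L 8G -n swap vg0
lvcreate -l 100%FREE -n root vg0
```

One passphrase at boot unlocks the container, and every logical volume, including swap, inherits the encryption.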
On Wed, Dec 21, 2011 at 17:00, yudi v wrote:
>
>> I'd go for hardware RAID as long as there is a true and real hardware
>> RAID controller behind with a battery backup et al (in brief, a *good*
>> RAID controller, not the motherboard's one which are usually nothing but
>> fakeraid and a pile of un
> I'd go for hardware RAID as long as there is a true and real hardware
> RAID controller behind with a battery backup et al (in brief, a *good*
> RAID controller, not the motherboard's one which are usually nothing but
> fakeraid and a pile of unforeseen problems).
>
> Otherwise I would use softwa
yudi v wrote:
> > You would need a second compatible hardware raid controller to use
> > in order to extract the data from the drives. The hardware raid
> > controllers I have used have not allowed me to access the data
> > without a compatible raid controller.
>
> If it's in RAID 1, I was under
On Wed, 21 Dec 2011 16:11:32 +1000, yudi v wrote:
> Will be installing a new system and would like to have the following
> set-up:
>
> RAID 1 > LUKS > LVM
>
> Should I use the RAID controller on the motherboard (not sure how
> reliable it will be) or use software RAID?
I'd go for hardware RAID
> You would need
> a second compatible hardware raid controller to use in order to
> extract the data from the drives. The hardware raid controllers I
> have used have not allowed me to access the data without a compatible
> raid controller.
If it's in RAID 1, I was under the impression that I w
yudi v wrote:
> Should I use the RAID controller on the motherboard (not sure how reliable
> it will be) or use software RAID?
The problem with the hardware raid on the motherboard is what do you
do if the motherboard fails? You will at that time have good data on
your disks but probably no way t
On Dec 20, 2007, at 5:52 AM, Daniel Dickinson wrote:
So no, likely there is nothing wrong your raid configuration. I'd
suggest
scsi drives and, better yet, hardware scsi raid if you can afford
them, but
with standard ide components there's not much to be done. hdparm
_might_
allow you to
On Wednesday 19 December 2007 07:50, S Scharf wrote:
> I am running a Debian 3.1 (Sarge) server with Raid 1 mirroring on the disk
> drive.
>
> Recently, one of the disks failed. The system sent root a proper e-mail
> notification of the failure. Unfortunately,
> the system seemed to continue to try
On Dec 19, 2007 12:12 PM, David Brodbeck <[EMAIL PROTECTED]> wrote:
>
> On Dec 19, 2007, at 4:50 AM, S Scharf wrote:
>
> > I am running a Debian 3.1 (Sarge) server with Raid 1 mirroring on
> > the disk drive.
> >
> > Recently, one of the disks failed. The system sent root a proper e-
> > mail noti
On Dec 19, 2007, at 4:50 AM, S Scharf wrote:
I am running a Debian 3.1 (Sarge) server with Raid 1 mirroring on
the disk drive.
Recently, one of the disks failed. The system sent root a proper e-
mail notification of the failure. Unfortunately,
the system seemed to continue to try to use the
On Jul 18, 2007, at 4:27 PM, Alex Samad wrote:
I don't have to worry about dying controller cards and
incompatibilities and I
am in the early stages of growing my raid5 from 200G drives to 500G
drives live
Just as a point of interest, my experience with 3ware cards is that
compatibility i
On Thu, Jul 19, 2007 at 08:19:27AM +0200, martin f krafft wrote:
> also sprach Andrew Sackville-West <[EMAIL PROTECTED]> [2007.07.19.0126 +0200]:
> > > is it true that with software raid 1 that i cannot have the boot drive
> > > protected?
> >
> > no. raid 1 boots just fine. I do it on my server
also sprach Andrew Sackville-West <[EMAIL PROTECTED]> [2007.07.19.0126 +0200]:
> > is it true that with software raid 1 that i cannot have the boot drive
> > protected?
>
> no. raid 1 boots just fine. I do it on my server at home with a raid1
> /boot partition and a raid5 / partition.
that's not
On Wed, Jul 18, 2007 at 04:47:23PM -0700, Mike Bird wrote:
> On Wednesday 18 July 2007 16:33, Andrew Sackville-West wrote:
> > note the root line (hd0,0). that could just as easily be (hd1,0) or
> > (hd2,0) or (hd3,0). I know I tested it when I built the array, but now
> > you've got me worried tha
On Wed, Jul 18, 2007 at 04:23:38PM -0700, Mike Bird wrote:
> On Wednesday 18 July 2007 15:50, besonen wrote:
> > is it true that with software raid 1 that i cannot have the boot drive
> > protected?
>
> No. However, getting the mirror of the boot drive to be bootable
> under grub in readiness for
On Thu, Jul 19, 2007 at 12:50:45AM +0200, besonen wrote:
>
> i'm interested in learning what i need to know to stop using (or reduce my
> dependence on) 3ware raid cards for mirroring.
>
> i would like to reduce my dependence on proprietary hardware.
>
>
> some questions to get started:
>
> is it
On Wednesday 18 July 2007 16:33, Andrew Sackville-West wrote:
> note the root line (hd0,0). that could just as easily be (hd1,0) or
> (hd2,0) or (hd3,0). I know I tested it when I built the array, but now
> you've got me worried that maybe it doesn't work. ack.
>
> my pertinent array looks like thi
On Wed, Jul 18, 2007 at 04:23:38PM -0700, Mike Bird wrote:
> On Wednesday 18 July 2007 15:50, besonen wrote:
> > is it true that with software raid 1 that i cannot have the boot drive
> > protected?
>
> No. However, getting the mirror of the boot drive to be bootable
> under grub in readiness for
On Thu, Jul 19, 2007 at 12:50:45AM +0200, besonen wrote:
>
> i'm interested in learning what i need to know to stop using (or reduce my
> dependence on) 3ware raid cards for mirroring.
>
> i would like to reduce my dependence on proprietary hardware.
good plan...
>
>
> some questions to get star
On Wednesday 18 July 2007 15:50, besonen wrote:
> is it true that with software raid 1 that i cannot have the boot drive
> protected?
No. However, getting the mirror of the boot drive to be bootable
under grub in readiness for when the boot drive fails does not seem
to me to be quite as easy/reli
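For what it's worth, with today's GRUB 2 the usual approach is simply to install the boot loader on both mirror members (legacy GRUB, current when this thread was written, needed the manual device-map juggling discussed here):

```shell
# Install GRUB to the MBR of both disks so the machine still boots
# if the first drive dies:
grub-install /dev/sda
grub-install /dev/sdb
update-grub
```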
hello Jacek,
Check out the howto link for a raid1 install from sarge installer.
http://nepotismia.com/debian/raidinstall/
It may help you get the raid 1 up and running
Regards
peter colton
On Monday 29 May 2006 16:15, jacek wrote:
> Hi al
jacek wrote:
I rebuilt; /boot has: initrd.img-2.6.16 (created by mkinitrd),
vmlinuz-2.6.16, and everything is set fine in grub (menu.lst file); it
detects the drives during boot but then somehow wants to create the raid
using the old names...
did you try rebuilding a new kernel with raid & dis
I rebuilt; /boot has: initrd.img-2.6.16 (created by mkinitrd), vmlinuz-2.6.16, and everything is set fine in grub (menu.lst file); it detects the drives during boot but then somehow wants to create the raid using the old names...
On 5/29/06, Laurent CARON <[EMAIL PROTECTED]> wrote:
jacek wrote:> Hi
jacek wrote:
Hi all,
I set up raid 1 while installing the system with kernel 2.4.27-2, then I
used mdadm to create the mdadm.conf file. My hard drives were detected as
/dev/hdc[1-9] /dev/hde[1-9] because the kernel was missing the proper
drivers, so then I compiled a new kernel, 2.6.16, and the drives changed
Quoting Paul Dwerryhouse <[EMAIL PROTECTED]>:
On Fri, Jun 17, 2005 at 11:33:40AM +0300, Nicos Chrysanthou wrote:
I am trying to install Sarge using two identical 80GB drives for
Raid 1. [...] While trying to create further partitions on the RAID
partition I always get a message that t
Paul Dwerryhouse wrote:
> I don't know if this can help with your specific problem, but I've
> written a detailed Sarge RAID install guide here:
>
> http://nepotismia.com/debian/raidinstall/
This is a well-written tutorial, but there's really no reason to drag
the user through expert mode to inst
On Fri, Jun 17, 2005 at 11:33:40AM +0300, Nicos Chrysanthou wrote:
>I am trying to install Sarge using two identical 80GB drives for
>Raid 1. [...] While trying to create further partitions on the RAID
>partition I always get a message that the file system failed to
>mount. What ca
Thanks Mike. Now this makes sense.
Nicos C. Chrysanthou
LL.B (UCL) LL.M.
(Soton), MCIArb, Inner Temple Barrister,
CHR.
CHRYSANTHOU & ASSOCIATES
_
Barristers &
Advocates
Palais D'Ivoire House,
2nd Floor, 12 Them. Dervis Ave.,
PO B
Quoting Nicos Chrysanthou <[EMAIL PROTECTED]>:
Hi,
I am trying to install Sarge using two identical 80GB drives for Raid 1.
I have made one primary RAID partition per drive and with the aid of the
software RAID installer have created a RAID1 device #0 partition, so far so
good.
While trying
Alexei Chetroi wrote:
See http://alioth.debian.org/projects/rootraiddoc/ for some docs
regarding root on raid. I don't know about raidtools2, but mdadm may
operate without config file at all, if you know what're you doing. See
man mdadm, it contains all you need to know. For example, assuming you
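The config-file-free mdadm workflow mentioned here looks roughly like this (array and file paths are the usual Debian ones):

```shell
# Assemble every array mdadm can find from on-disk superblocks,
# no config file needed:
mdadm --assemble --scan

# Or capture the discovered arrays into the config file so they
# assemble automatically at boot:
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
```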
On Wed, Sep 01, 2004 at 03:44:22PM -0500, Roger wrote:
> Date: Wed, 01 Sep 2004 15:44:22 -0500
> From: Roger <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: raid 1 setup.
>
> Roger wrote:
>
> >I'm trying to get raid 1 going on my Sarge unstable bo
Roger wrote:
I'm trying to get raid 1 going on my Sarge unstable box.
When I installed raidtools2 and mdadm, a cat of /proc/mdstat gave the
following
[EMAIL PROTECTED]:/etc/network# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md2 : active raid1 ide/host0/bus0/target1/lun0/part
Alvin Oga wrote:
wacky :-) but then again .. guess that's okay for an empty /etc/raidtab
I'd disagree. With an empty raidtab nothing should be configured. I'd
like to know where this config came from... :|
The thing is I have yet to create any md devices and my
/etc/raidtab <-which raidto
On Wed, 1 Sep 2004, Roger wrote:
> I'm trying to get raid 1 going on my Sarge unstable box.
>
> When I installed raidtools2 and mdadm, a cat of /proc/mdstat gave the
> following
>
> [EMAIL PROTECTED]:/etc/network# cat /proc/mdstat
> Personalities : [raid1]
...
> 25728000 blocks [2/1] [_U]
hi "suporte"
On Mon, 10 Nov 2003, Suporte Linux Solutions wrote:
>
> Hi all,
> I have setup a raid 1 install with Debian Woody.
> I got two IDE RAID disks (hda and hdb) and put two root partitions (hda2 and
> hdb2). The first was for swap.
if you want to have raid1 mirrori
On Sun, 24 Aug 2003 20:49:58 +0200,
"Claudia Maiwald" <[EMAIL PROTECTED]> wrote in message
<[EMAIL PROTECTED]>:
> I have a HIGHPOINT controller and would like to run a RAID 1 with it.
>
> As hard disks I have 2 x 40 GB installed in the system, and I also
> have to boot from these disks.
On Sat, Feb 23, 2002 at 01:10:08PM -0800, Richard Weil wrote:
> One last question ... how can use the swap partitions
> on both /dev/hda8 and /dev/hdc8? I did _not_ put the
> swap partitions in a RAID setup. Should I simply list
> both swap partitions in /etc/fstab? Thanks.
Yep. You'll probably a
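A sketch of what those fstab lines might look like, using the device names from the thread (giving both partitions the same priority makes the kernel stripe swap across the two disks, though unlike a RAID 1 swap it won't survive a disk failure while swap is in use):

```
# /etc/fstab -- both swap partitions active, equal priority stripes them:
/dev/hda8  none  swap  sw,pri=1  0  0
/dev/hdc8  none  swap  sw,pri=1  0  0
```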
One last question ... how can use the swap partitions
on both /dev/hda8 and /dev/hdc8? I did _not_ put the
swap partitions in a RAID setup. Should I simply list
both swap partitions in /etc/fstab? Thanks.
Richard
--- Richard Weil <[EMAIL PROTECTED]> wrote:
> It took a little bit of experimenting,
It took a little bit of experimenting, but the basic
steps you laid out worked. Success!
For anyone else who might want to try this, I found I
had to do two things differently:
1. I had to use mkraid instead of raidstart to get
the RAID devices working on /dev/hdc.
2. After copying root and /b
On Fri, Feb 22, 2002 at 04:28:46PM -0800, Richard Weil wrote:
> Thanks, this is great. A couple of follow-up
> questions:
>
> 1. How do I get to single user mode without rebooting?
> (I know I should already know this.)
init 1
> 2. Do I need to do anything special to copy the
> partitions from /
hiya
to make an existing distro into a raid1 setup after the fact
is a little dangerous to its data ...
- backup your data first
numerous ways to convert /dev/hda into raid1 with hda and hdc
http://www.1U-Raid5.net/HowTo/SW-Raid-HOWTO.txt
( lots of "fun" reading...
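The usual after-the-fact conversion the HOWTO describes is the degraded-array trick, sketched here with the thread's device names and today's mdadm rather than the period raidtools (destructive; back up first, and the boot loader step is glossed over):

```shell
# 1. Build the mirror in degraded mode using only the NEW disk
#    ("missing" reserves the slot for the old one):
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdc1
mkfs.ext4 /dev/md0

# 2. Copy the running system onto the degraded array:
mount /dev/md0 /mnt
cp -ax / /mnt

# 3. After fixing the boot loader and rebooting onto /dev/md0, add the
#    old disk; the kernel then resyncs it into the mirror:
mdadm --add /dev/md0 /dev/hda1
```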
Thanks, this is great. A couple of follow-up
questions:
1. How do I get to single user mode without rebooting?
(I know I should already know this.)
2. Do I need to do anything special to copy the
partitions from /dev/hdaX to /dev/mdY? Once I'm in
single user mode can I just "cp -R /dev/hdaX
/dev/
On Fri, Feb 22, 2002 at 10:32:17AM -0800, Richard Weil wrote:
> I need some help setting up RAID 1 on a fresh Woody
> install. The software is newer than the docs,
> particularly for Lilo, so any help from those with
> experience would be most appreciated.
The Software-RAID-HOWTO got me through th
> "MK" == Matt Kopishke <[EMAIL PROTECTED]> writes:
MK> Hi, I am running potato and a 2.2.16 kernel on our file server. I
MK> have two raid 1 devices set up, each 10 gigs big. The problem is that it
MK> does not seem to cleanly unmount/update the superblock when I
MK> restart/shu