On Sat, 11 Jan 2025, Michael Stone wrote:
> > root@titan ~ mdadm --misc /dev/md4 --stop
>
> This is incorrect syntax, and a no-op (so the array did not stop). You want
> `mdadm --misc --stop /dev/md4`. The --misc is implied so you can just use
> `mdadm --stop /dev/md4`
I ran the command
root@titan ~ mdadm --stop /dev/md4
On Sat, Jan 11, 2025 at 12:11:39PM +0100, Roger Price wrote:
I am unable to erase an unwanted RAID 1 array. Command cat /proc/mdstat
reported
md4 : active raid1 sdb7[0]
20970368 blocks super 1.0 [2/1] [U_]
bitmap: 1/1 pages [4KB], 65536KB chunk
I understand that the array has to be inactive before it can be removed, so I stopped it, but
> Sent: Saturday, January 11, 2025 at 9:51 AM
> From: "Roger Price"
> To: "debian-user Mailing List"
> Subject: Re: Removing an unwanted RAID 1 array
>
> On Sat, 11 Jan 2025, Greg Wooledge wrote:
>
> > On Sat, Jan 11, 2025 at 13:10:51 +0100, Roger Price wrote:
On Sat, 11 Jan 2025, Greg Wooledge wrote:
> On Sat, Jan 11, 2025 at 13:10:51 +0100, Roger Price wrote:
> > On Sat, 11 Jan 2025, Michel Verdier wrote:
> >
> > > If I remember well you have to first set the device as faulty with --fail
> > > before --remove could be accepted.
> >
> > No luck :
>
On Sat, Jan 11, 2025 at 13:10:51 +0100, Roger Price wrote:
> On Sat, 11 Jan 2025, Michel Verdier wrote:
>
> > If I remember well you have to first set the device as faulty with --fail
> > before --remove could be accepted.
>
> No luck :
>
> root@titan ~ mdadm --fail /dev/md4 --remove /dev/sdb7
On Sat, 11 Jan 2025, Michel Verdier wrote:
> If I remember well you have to first set the device as faulty with --fail
> before --remove could be accepted.
No luck :
root@titan ~ mdadm --fail /dev/md4 --remove /dev/sdb7
mdadm: hot remove failed for /dev/sdb7: Device or resource busy
> But if
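A minimal sketch of the usual teardown sequence for an unwanted array, assuming /dev/md4 is no longer mounted anywhere and /dev/sdb7 is its only remaining member (names as in this thread). You cannot --remove the last active member; the array itself has to be stopped first:

  umount /dev/md4                     # only if it is still mounted somewhere
  mdadm --stop /dev/md4               # deactivate the array; confirm with cat /proc/mdstat
  mdadm --zero-superblock /dev/sdb7   # wipe the member's RAID metadata so it is not re-assembled

Then remove any md4 lines from /etc/mdadm/mdadm.conf and /etc/fstab and run update-initramfs -u so the initramfs stops looking for the array.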
On 2025-01-11, Roger Price wrote:
> root@titan ~ umount /dev/md4
> root@titan ~ mdadm --misc /dev/md4 --stop
> root@titan ~ mdadm --manage /dev/md4 --remove /dev/sdb7
> mdadm: hot remove failed for /dev/sdb7: Device or resource busy
If I remember well you have to first set the device as faulty with --fail before --remove could be accepted.
I am unable to erase an unwanted RAID 1 array. Command cat /proc/mdstat
reported
md4 : active raid1 sdb7[0]
20970368 blocks super 1.0 [2/1] [U_]
bitmap: 1/1 pages [4KB], 65536KB chunk
I understand that the array has to be inactive before it can be removed, so I
stopped it, but
On Sunday, July 18, 2021 09:37:53 AM David wrote:
> On Sun, 18 Jul 2021 at 21:08, wrote:
> > Interesting -- not surprising, makes sense, but something (for me, at
> > least) to keep in mind -- probably not a good idea to run on an old
> > drive that hasn't been backed up.
>
> Sorry if my language
On 7/18/21 2:29 PM, Urs Thuermann wrote:
David Christensen writes:
You should consider upgrading to Debian 10 -- more people run that and
you will get better support.
It's on my TODO list. As well as upgrading the very old hardware.
Currently, it's a Gigabyte P35-DS3L with an Intel Core2Duo
db1. Then, /dev/sda1 and
> > /dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.
>
>
> ext4? That lacks integrity checking.
>
>
> btrfs? That has integrity checking, but requires periodic balancing.
Mostly ext4 for / /var /var/spool/news /usr /usr/local and /home
On 7/18/21 2:16 AM, Reco wrote:
Hi.
On Sat, Jul 17, 2021 at 02:03:15PM -0700, David Christensen wrote:
But much more noticeable is the difference of data reads of the two
disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
from /dev/sdb compared to /dev/sda. Trying to figure out the reason
On 2021-07-18 14:37, David wrote:
On Sun, 18 Jul 2021 at 21:08, wrote:
On Saturday, July 17, 2021 09:30:56 PM David wrote:
> The 'smartctl' manpage explains how to run and abort self-tests.
> It also says that a running test can degrade the performance of the drive.
Interesting -- not surprising, makes sense, but something (for me, at least)
to keep in mind -- probably not a good idea to run on an old drive that hasn't been backed up.
On Sun, 18 Jul 2021 at 21:08, wrote:
> On Saturday, July 17, 2021 09:30:56 PM David wrote:
> > The 'smartctl' manpage explains how to run and abort self-tests.
> > It also says that a running test can degrade the performance of the drive.
> Interesting -- not surprising, makes sense, but something (for me, at least)
> to keep in mind -- probably not a good idea to run on an old drive that hasn't been backed up.
/sda1 and /dev/sdb1. Then, /dev/sda1 and
/dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.
--
# diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)
- 9 Power_On_Hours -O--CK 042 042
On Saturday, July 17, 2021 09:30:56 PM David wrote:
> The 'smartctl' manpage explains how to run and abort self-tests.
> It also says that a running test can degrade the performance of the drive.
Interesting -- not surprising, makes sense, but something (for me, at least)
to keep in mind -- probably not a good idea to run on an old drive that hasn't been backed up.
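For anyone following along, the smartctl invocations in question are roughly these (the drive name is only an example):

  smartctl -c /dev/sda            # capabilities, including the estimated self-test duration
  smartctl -t long /dev/sda       # start an extended self-test in the background
  smartctl -l selftest /dev/sda   # progress and past results in the self-test log
  smartctl -X /dev/sda            # abort a self-test that is still running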
Hi.
On Sat, Jul 17, 2021 at 02:03:15PM -0700, David Christensen wrote:
> > But much more noticeable is the difference of data reads of the two
> > disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
> > from /dev/sdb compared to /dev/sda. Trying to figure out the reason
>
.e. /dev/sda1 and /dev/sdb1. Then, /dev/sda1 and
> > /dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.
> > --
> > # diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)
> &
drive spanning the
whole drive, i.e. /dev/sda1 and /dev/sdb1. Then, /dev/sda1 and
/dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.
ext4? That lacks integrity checking.
btrfs? That has integrity checking, but requires periodic balancing.
I use ZFS. That has integrity checking. It is
Hi Urs,
Your plan to change the SATA cable seems wise - your various error
rates are higher than I have normally seen.
Also worth bearing in mind that Linux MD RAID 1 will satisfy all
read IO for a given operation from one device in the mirror. If
you have processes that do occasional big reads
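A simple way to watch how reads are split between the two halves of the mirror while the workload runs, as a rough check of the imbalance described above (iostat needs the sysstat package):

  iostat -dx sda sdb 60                # per-device throughput, averaged over 60-second intervals
  grep -E ' sd[ab] ' /proc/diskstats   # raw counters; the third number after the device name is sectors read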
rives. See:
https://arstechnica.com/gadgets/2020/06/western-digital-adds-red-plus-branding-for-non-smr-hard-drives/
So be careful to get the Pro version if you decide to try WD. I use the
WD4003FFBX (4T) drives (Raid 1) and have them at 2.8 years running 24/7 with no
problems.
If you value y
M Urs Thuermann wrote:
> On my server running Debian stretch, the storage setup is as follows:
> Two identical SATA disks with 1 partition on each drive spanning the
> whole drive, i.e. /dev/sda1 and /dev/sdb1. Then, /dev/sda1 and
> /dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.
On my server running Debian stretch, the storage setup is as follows:
Two identical SATA disks with 1 partition on each drive spanning the
whole drive, i.e. /dev/sda1 and /dev/sdb1. Then, /dev/sda1 and
/dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.
The disk I/O shows very different
On 2021-01-24 21:23, mick crane wrote:
On 2021-01-24 20:10, David Christensen wrote:
Please tell us why you must put the OS and the backup images on the
same RAID mirror of two HDD's, and why you cannot add one (or two?)
more devices for the OS.
I think I'll go with the first and last suggestion to just have 2 disks in raid1.
Thanks Andy and Linux-Fan, for the detailed reply.
e this by having OS and Swap on MDADM RAID 1
> > i.e. mirrored but without ZFS.
>
> I am still learning.
>
> 1. By "by having OS and Swap on MDADM", did you mean the /boot partition
>and swap.
When people say, "I put OS and Swap on MDADM" they typically
mick crane wrote:
> I think I'll go with the first and last suggestion to just have 2 disks
> in raid1.
> It seems that properly you'd want 2 disks in raid for the OS, 2 at least
> for the pool and maybe 1 for the cache.
> Don't have anything big enough I could put 5 disks in.
> I could probably g
Hi Pankaj,
Not wishing to put words in Linux-Fan's mouth, but my own views
are…
On Mon, Jan 25, 2021 at 11:04:09AM +0530, Pankaj Jangid wrote:
> Linux-Fan writes:
>
> > * OS data bitrot is not covered, but OS single HDD failure is.
> > I achieve this by having OS and Swap on MDADM RAID 1 i.e. mirrored but without ZFS.
On Du, 24 ian 21, 23:21:38, Linux-Fan wrote:
> mick crane writes:
>
> > On 2021-01-24 17:37, Andrei POPESCU wrote:
>
> [...]
>
> > > If you want to combine Linux RAID and ZFS on just two drives you could
> > > partition the drives (e.g. two partitions on each drive), use the first
> > > partitio
Linux-Fan writes:
> * OS data bitrot is not covered, but OS single HDD failure is.
> I achieve this by having OS and Swap on MDADM RAID 1
> i.e. mirrored but without ZFS.
I am still learning.
1. By "by having OS and Swap on MDADM", did you mean the /boot partition
and swap.
On 2021-01-24 20:10, David Christensen wrote:
On 2021-01-24 03:36, mick crane wrote:
Let's say I have one PC and 2 unpartitioned disks.
Please tell us why you must put the OS and the backup images on the
same RAID mirror of two HDD's, and why you cannot add one (or two?)
more devices for the OS.
mple stupid: Let the installer setup
RAID 1 MDADM for OS, swap and data and be done with it, avoid ZFS unless
there is some reason to need it :)
For sure MDADM lacks the bit rot protection, but it is easier to setup
especially for the OS and you can mitigate the bit rot (to some extent)
On Du, 24 ian 21, 17:50:06, Andy Smith wrote:
>
> Once it's up and running you can then go and create a second
> partition that spans the rest of each disk, and then when you are
> ready to create your zfs pool:
>
> > "zpool create tank mirror disk1 disk2"
>
> # zpool create tank mirror /dev/dis
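Spelled out with stable device names, that would look something like the following; the by-id paths are placeholders for whatever ls -l /dev/disk/by-id shows for the second partition on each disk, and ashift=12 assumes 4 KiB-sector drives:

  zpool create -o ashift=12 tank mirror \
      /dev/disk/by-id/ata-DISK1_SERIAL-part2 \
      /dev/disk/by-id/ata-DISK2_SERIAL-part2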
On 2021-01-24 03:36, mick crane wrote:
Let's say I have one PC and 2 unpartitioned disks.
Please tell us why you must put the OS and the backup images on the same
RAID mirror of two HDD's, and why you cannot add one (or two?) more
devices for the OS.
David
rs, set them to
>RAID-1, install on that.
>...
You don't say if this is or will become a secure boot system, which
would require an EFI partition. Leaving a bit of space just in case
seems a good idea.
On 2021-01-24 17:37, Andrei POPESCU wrote:
On Du, 24 ian 21, 11:36:09, mick crane wrote:
I know I'm a bit thick about these things, what I'm blocked about is
where
is the OS.
Let's say I have one PC and 2 unpartitioned disks.
Put one disk in PC and install Debian on it.
Ok
Install headers
The first one would be for
the OS, and the second one would be for ZFS.
If you are going to keep your OS separate, I don't see any reason
not to use mdadm RAID-1 for the OS even if you're going to use zfs
for your data. Yes you could just install the OS onto a single
partition of a single disk
On Du, 24 ian 21, 11:36:09, mick crane wrote:
>
> I know I'm a bit thick about these things, what I'm blocked about is where
> is the OS.
> Let's say I have one PC and 2 unpartitioned disks.
> Put one disk in PC and install Debian on it.
Ok
> Install headers and ZFS-utils.
> I put other disk in
On 2021-01-23 22:01, David Christensen wrote:
On 2021-01-23 07:01, mick crane wrote:
On 2021-01-23 12:20, Andrei POPESCU wrote:
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so
what's
On 2021-01-23 07:01, mick crane wrote:
On 2021-01-23 12:20, Andrei POPESCU wrote:
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so
what's
scattered about is on the running disks and this new/old one is just backup for them.
I actually used mdadm RAID 1 for OS+Swap and ZFS
mirrors for the actual data. This way, I can use the Debian Installer
for installation purposes and benefit from the bit rot protection for
the actually important data while maintaining basic redundancy for the
OS installation. YMMV.
Here are m
ite complicated:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
For my current system I actually used mdadm RAID 1 for OS+Swap and ZFS
mirrors for the actual data. This way, I can use the Debian Installer
for installation purposes and benefit
//openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
For my current system I actually used mdadm RAID 1 for OS+Swap and ZFS
mirrors for the actual data. This way, I can use the Debian Installer for
installation purposes and benefit from the bit rot
On 2021-01-23 12:20, Andrei POPESCU wrote:
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so
what's
scattered about is on the running disks and this new/old one is just
backup
for them
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
> hello,
> I want to tidy things up as suggested.
> Have one old PC that I'll put 2 disks in and tidy everything up so what's
> scattered about is on the running disks and this new/old one is just backup
> for them.
> Can I assume that Debian installer
On 2021-01-22 15:10, David Christensen wrote:
A key issue with storage is bit rot.
I should have said "bit rot protection".
David
ome expert mode will sort out the
raid or do I need to install to one disk and then mirror it manually
before invoking the raid thing ?
I would install a small SSD and do a fresh install of the OS onto that.
I would then install the two HDD's and set up a mirror (RAID 1). Linux
create the partitions, then create MDADM RAID 1
devices on top of them and finally let them be formatted with
ext4/filesystem of choice and be the installation target.
AFAIK "Guided" installation modes do not automatically create RAID, i.e. I
recommend using the manual partitioning mode.
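Outside the installer, the same result by hand looks roughly like this (partition names are only illustrative):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # build the mirror
  mkfs.ext4 /dev/md0                                                       # format it
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf                           # record it for assembly at boot
  update-initramfs -u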
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so
what's scattered about is on the running disks and this new/old one is
just backup for them.
Can I assume that Debian installer in some expert mode will sort out the
raid or do I nee
On 10/26/2020 7:55 AM, Bill wrote:
Hi folks,
So we're setting up a small server with a pair of 1 TB hard disks
sectioned into 5x100GB Raid 1 partition pairs for data, with 400GB+
reserved for future uses on each disk.
Oh, also, why are you leaving so much unused space on the d
This might be better handled on linux-raid@vger.kernel.org
On 10/26/2020 10:35 AM, Dan Ritter wrote:
Bill wrote:
So we're setting up a small server with a pair of 1 TB hard disks sectioned
into 5x100GB Raid 1 partition pairs for data, with 400GB+ reserved for
future uses on each
On 10/26/20 4:55 AM, Bill wrote:
> lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5].
> blkid reveals that sda[1-5] and sdb[1-5] are still listed as
> TYPE="linux_raid_member".
>
> So first of all I'd like to be able to diagnose what's going on. What
> commands should I use for
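To answer the diagnostic question, the usual first commands would be something like:

  cat /proc/mdstat                      # what the kernel has assembled right now
  mdadm --examine /dev/sda1             # read the RAID superblock on one member
  mdadm --assemble --scan --verbose     # try to assemble everything the superblocks describe
  mdadm --detail /dev/md0               # state of an array once it exists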
Hi folks,
So we're setting up a small server with a pair of 1 TB hard
disks sectioned into 5x100GB Raid 1 partition pairs for data, with
400GB+ reserved for future uses on each disk. I'm not sure what
happened, we had the five pairs of disk partitions set up properly
through the
Bill wrote:
> So we're setting up a small server with a pair of 1 TB hard disks sectioned
> into 5x100GB Raid 1 partition pairs for data, with 400GB+ reserved for
> future uses on each disk.
That's weird, but I expect you have a reason for it.
> I'm not sure wha
Hi folks,
So we're setting up a small server with a pair of 1 TB hard disks
sectioned into 5x100GB Raid 1 partition pairs for data, with 400GB+
reserved for future uses on each disk.
I'm not sure what happened, we had the five pairs of disk partitions set
up properly through the
Marc Auslander wrote:
> The installer manual is silent about installing in an existing raid
> partition. I could follow my nose but wondered if there is any advice
> you can provide.
Does this help https://wiki.debian.org/DebianInstaller/SoftwareRaidRoot
regards
I have a debian system with three raid 1 partitions, root and 2 data
partitions. It's x86 and I've decided to clean install amd64 and face
the music of re configuring everything. My plan/hope is to install a
new amd64 stretch in my root partition and then clean up the mess.
The
Peter Ludikovsky writes:
> Ad 1: Yes, the SATA controller has to support Hot-Swap. You _can_ remove
> the device nodes by running
> # echo 1 > /sys/block/<device>/device/delete
Thanks, I have now my RAID array fully working again. This is what I
have done:
1. Like you suggested above I deleted the dri
Le 19/07/2016 à 16:01, Urs Thuermann a écrit :
Shouldn't the device nodes and entries in /proc/partitions
disappear when the drive is pulled? Or does the BIOS or the SATA
controller have to support this?
2. Can I hotplug the new drive and rebuild the RAID array?
As others replied, t
Hi Urs,
On Tue, Jul 19, 2016 at 04:01:39PM +0200, Urs Thuermann wrote:
> 2. Can I hotplug the new drive and rebuild the RAID array?
It should work, if your SATA port supports hotplug. Plug the new
drive in and see if the new device node appears. If it does then
you're probably good to go.
You ca
is within the BIOS boot order, that
should work.
Regards,
/peter
Am 19.07.2016 um 16:01 schrieb Urs Thuermann:
> In my RAID 1 array /dev/md0 consisting of two SATA drives /dev/sda1
> and /dev/sdb1 the first drive /dev/sda has failed. I have called
> mdadm --fail and mdadm --remove on th
In my RAID 1 array /dev/md0 consisting of two SATA drives /dev/sda1
and /dev/sdb1 the first drive /dev/sda has failed. I have called
mdadm --fail and mdadm --remove on that drive and then pulled the
cables and removed the drive. The RAID array continues to work fine
but in degraded mode.
I have
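For completeness, the rebuild after fitting the replacement usually amounts to the following, assuming the new disk shows up as /dev/sda again and the partition table is MBR (use sgdisk for GPT):

  sfdisk -d /dev/sdb | sfdisk /dev/sda      # copy the partition layout from the surviving disk
  mdadm --manage /dev/md0 --add /dev/sda1   # add the new partition; the resync starts automatically
  cat /proc/mdstat                          # watch recovery progress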
> I don't see Debian doing anything wrong. fdisk showing a 2.3T
> partition I am assuming comes on your Arch Linux disk and is a result
> of it using the wrong block size. I'm not sure if this is due to the
> use of a USB adapter.
>
> mdadm -E /dev/sdb should fail because /dev/sdb is not a RAID device.
On 22/12/15 04:44 PM, Narunas Krasauskas wrote:
I have this HDD (WD3200BPVT) which used to be part of the RAID-1 array
which has been created with Debian (Wheezy) installer, then AES
encrypted, then split into LVM volumes.
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
I have this HDD (WD3200BPVT) which used to be part of the RAID-1 array
which has been created with Debian (Wheezy) installer, then AES encrypted,
then split into LVM volumes.
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
311462720 blocks super 1.2 [2/2] [UU]
md0 : active
Thank you! I will try this procedure this week.
Tim
On 9/18/2015 5:04 PM, linuxthefish wrote:
Tim,
From what I remember it's best to set it up when you're installing the
system, then you can install the bootloader to /boot in RAID 1.
https://blog.sleeplessbeastie.eu/2013/10/04/how-to-configure-software-raid1-during-installation-process/
Tim,
From what I remember it's best to set it up when you're installing the
system, then you can install the bootloader to /boot in RAID 1.
https://blog.sleeplessbeastie.eu/2013/10/04/how-to-configure-software-raid1-during-installation-process/
is what I followed.
Thanks,
Edmund
On 18 S
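As an aside on booting from RAID 1: on a BIOS/MBR system the usual extra step is to put GRUB on both disks so that either one can boot the machine on its own if the other dies, for example:

  grub-install /dev/sda
  grub-install /dev/sdb

or run dpkg-reconfigure grub-pc and select both disks so this persists across grub upgrades.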
I've used Debian Linux for a number of years but up until now always
with a single hard drive.
I want to build a new system that will have a pair of 1TB drives
configured as a RAID-1 mirror. In reading the mdadm Wiki the discussion
begins with installing mdadm.
My goal is to have a s
mett writes:
> Hi,
>
> I'm running Squeeze under raid 1 with mdadm.
> One of the raid failed and I replace it with space I had available on
> that same disk.
>
> Today, when rebooting I got an error because the boot flag was still on
> both partitions (sdb1 and sdb3 b
> As I understand your issue:
> - you had RAID 1 arra
On 25/10/14 11:19 PM, mett wrote:
Hi,
I'm running Squeeze under raid 1 with mdadm.
One of the RAID members failed and I replaced it with space I had available on
that same disk.
Today, when rebooting I got an error because the boot flag was still on
On 10/25/2014 08:19 PM, mett wrote:
I'm running Squeeze under raid 1 with mdadm.
One of the RAID members failed and I replaced it with space I had available on
that same disk.
I suggest that you read the SMART data, download the manufacturer's
diagnostics utility disk, and run the manufactu
Hi,
I'm running Squeeze under raid 1 with mdadm.
One of the RAID members failed and I replaced it with space I had available on
that same disk.
Today, when rebooting I got an error because the boot flag was still on
both partitions (sdb1 and sdb3 belo
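For anyone hitting the same symptom, the flag can be inspected and cleared with either of the usual tools; the partition number below is just the one from this report:

  fdisk -l /dev/sdb                # the Boot column (an asterisk) shows which partitions carry the flag
  parted /dev/sdb set 3 boot off   # clear it on sdb3; fdisk's interactive 'a' command does the same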
Hello,
Gregory Seidman wrote:
> I have two eSATA drives in a RAID 1, and smartd has started reporting
> errors on it:
>
> Device: /dev/sdb [SAT], 9 Currently unreadable (pending) sectors
> Device: /dev/sdb [SAT], 9 Offline uncorrectable sectors
>
> The firs
I have two eSATA drives in a RAID 1, and smartd has started reporting
errors on it:
Device: /dev/sdb [SAT], 9 Currently unreadable (pending) sectors
Device: /dev/sdb [SAT], 9 Offline uncorrectable sectors
The first message, on June 11, was 6 sectors. I ordered a new HD and
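A commonly suggested step while waiting for the replacement is to scrub the array, which makes md re-read every sector and gives it a chance to rewrite pending sectors from the good mirror half; md0 below is a placeholder for whichever array contains /dev/sdb:

  echo check > /sys/block/md0/md/sync_action   # start the scrub; progress appears in /proc/mdstat
  cat /sys/block/md0/md/mismatch_cnt           # mismatches counted by the last scrub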
On Wed, Dec 19, 2012 at 6:07 AM, Bob Proulx wrote:
> Note that after a power cycle even if the RAID 1 array needs to be
> sync'd between the mirrored disks that the system will still boot okay
> and will operate normally. I have no idea what other systems do but
> you can boo
Bob Proulx wrote:
> The Linux software raid also had the capability to use a block bitmap
> to speed up resync after a crash because then it tracks which blocks
> are dirty.
> See the documentation on this mdadm command to configure an internal
> bitmap to speed up a re-sync after an event such as a crash.
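Presumably the command meant here is the --grow form, e.g. (array name as used earlier in the thread):

  mdadm --grow --bitmap=internal /dev/md0   # add a write-intent bitmap to an existing array
  cat /proc/mdstat                          # a "bitmap:" line now appears under the array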
yudi v wrote:
> I am looking at using Debian software RAID mirroring and would like
> to know how it handles system crashes and disk failures.
It handles it quite well.
> My only experience with software RAID 1 is with windows 7 inbuilt
> option. Whenever the system does not shut
Hi all,
I am looking at using Debian software RAID mirroring and would like to know
how it handles system crashes and disk failures.
My only experience with software RAID 1 is with windows 7 inbuilt option.
Whenever the system does not shutdown cleanly, upon reboot the disks start
resynching and
Hi,
Originally set my Debian system up on a raid array (raid 1 made up of
2x1Tb drives) containing four logical volumes, containing the following
ext4 partitions. /boot is ext3 and separate, and swap is one partition
on each drive.
/dev/mapper/VolGroup-LogVol00 /
/dev/mapper/VolGroup
On Wed, Aug 08, 2012 at 04:25:01PM +0300, Georgi Naplatanov wrote:
> Hi.
>
> I'm going to configure Debian GNU/Linux with software RAID 1 and I
> think to put swap area on RAID 1 (/dev/mdX), but someone told me
> that if the computer have multiple swap partition (e.g. /dev/sda
On 08/08/12 01:45 PM, Georgi Naplatanov wrote:
On 08/08/2012 08:14 PM, � wrote:
On Wed, 08 Aug 2012 16:25:01 +0300, Georgi Naplatanov wrote:
I'm going to configure Debian GNU/Linux with software RAID 1 and I
think
to put swap area on RAID 1 (/dev/mdX), but someone told me that i
On 08/08/2012 08:14 PM, � wrote:
On Wed, 08 Aug 2012 16:25:01 +0300, Georgi Naplatanov wrote:
I'm going to configure Debian GNU/Linux with software RAID 1 and I think
to put swap area on RAID 1 (/dev/mdX), but someone told me that if the
computer have multiple swap partition (e.g. /dev
On 8/8/2012 8:25 AM, Georgi Naplatanov wrote:
Hi.
I'm going to configure Debian GNU/Linux with software RAID 1 and I think
to put swap area on RAID 1 (/dev/mdX), but someone told me that if the
computer have multiple swap partition (e.g. /dev/sda2, /dev/sdb2) with
equal priorities, linux k
On Wed, 08 Aug 2012 16:25:01 +0300, Georgi Naplatanov wrote:
> I'm going to configure Debian GNU/Linux with software RAID 1 and I think
> to put swap area on RAID 1 (/dev/mdX), but someone told me that if the
> computer have multiple swap partition (e.g. /dev/sda2, /dev/sdb2
Hi.
I'm going to configure Debian GNU/Linux with software RAID 1 and I think
to put swap area on RAID 1 (/dev/mdX), but someone told me that if the
computer have multiple swap partition (e.g. /dev/sda2, /dev/sdb2) with
equal priorities, linux kernel will mirror swap area between diff
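The two layouts being compared would look like this in /etc/fstab (device names illustrative). Swap on the md device keeps working if one disk fails; two plain partitions at equal priority are interleaved by the kernel for speed, but a failed disk can take down whatever had pages swapped out to it:

  /dev/md1    none   swap   sw          0  0

  /dev/sda2   none   swap   sw,pri=10   0  0
  /dev/sdb2   none   swap   sw,pri=10   0  0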
ler can be from $250 up to thousands USD, that's the
price to pay for stable and well tested hardware.
> I am just trying to get some form of redundancy.
Redundancy is fine, but don't forget about backups. They're usually -and
wrongly- underestimated and sometimes even more valu
>
> Might be better to put LUKS on top of LVM instead of vice versa? Not sure.
>
>
By having LVM over LUKS, I will only have one container to unlock and from
what I understand Debian cannot unlock several LUKS containers at start-up
unlike Fedora.
My laptop currently has LVM over LUKS and works
On Wed, Dec 21, 2011 at 17:00, yudi v wrote:
>
>> I'd go for hardware RAID as long as there is a true and real hardware
>> RAID controller behind with a battery backup et al (in brief, a *good*
>> RAID controller, not the motherboard's one which are usually nothing but
>> fakeraid and a pile of unforeseen problems).
> I'd go for hardware RAID as long as there is a true and real hardware
> RAID controller behind with a battery backup et al (in brief, a *good*
> RAID controller, not the motherboard's one which are usually nothing but
> fakeraid and a pile of unforeseen problems).
>
> Otherwise I would use softwa
yudi v wrote:
> > You would need a second compatible hardware raid controller to use
> > in order to extract the data from the drives. The hardware raid
> > controllers I have used have not allowed me to access the data
> > without a compatible raid controller.
>
>
On Wed, 21 Dec 2011 16:11:32 +1000, yudi v wrote:
> Will be installing a new system and would like to have the following
> set-up:
>
> RAID 1 > LUKS > LVM
>
> Should I use the RAID controller on the motherboard (not sure how
> reliable it will be) or use software RAID
> You would need
> a second compatible hardware raid controller to use in order to
> extract the data from the drives. The hardware raid controllers I
> have used have not allowed me to access the data without a compatible
> raid controller.
If it's in RAID 1, I was under t
yudi v wrote:
> Should I use the RAID controller on the motherboard (not sure how reliable
> it will be) or use software RAID?
The problem with the hardware raid on the motherboard is what do you
do if the motherboard fails? You will at that time have good data on
your disks but probably no way t
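That is essentially the argument for md here: either half of a software RAID 1 can be brought up on any Linux box, even with its partner missing, e.g. (the device name depends on how the disk is detected on the rescue machine):

  mdadm --assemble --run /dev/md0 /dev/sdX1   # --run starts the array despite the missing member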
Will be installing a new system and would like to have the following set-up:
RAID 1 > LUKS > LVM
Should I use the RAID controller on the motherboard (not sure how reliable
it will be) or use software RAID?
--
Kind regards,
Yudi
aI L, as this stupid window
suddenly displays a space instead of an I. The font on the window
itself etc. is fine.
2011/10/12 ML mail :
> Well what I now did is to create a dummy unused partition of 100 MB at the
> beginning of my hard disk and then create a root and a swap partition which
Well what I now did is to create a dummy unused partition of 100 MB at the
beginning of my hard disk and then create a root and a swap partition which
both are in a RAID 1 set. For that I followed these
instructions: http://www.unix.com/linux/141253-sparc-linux-raid1-silo.html
Unfortunately
I was happy to have a MEPIS Live CD around with ext4 support. I've
used it to move data about and pre-partition/pre-format the
partitions, except for the ones used for sRAID.
I don't know if MEPIS supports the architecture you're using, but
maybe Knoppix would be more suitable. Knoppix has its De
with RAID 1 at installation
First of all, I am not very experienced and use the i686 architecture,
but what I did might help:
I've put a variety of partitions on the 2 disks I use for the purpose
of dual booting. One holds the filesystems etc. for the other OS,
intended for legacy applications
First of all, I am not very experienced and use the i686 architecture,
but what I did might help:
I've put a variety of partitions on the 2 disks I use for the purpose
of dual booting. One holds the filesystems etc. for the other OS,
intended for legacy applications that I still use... may use.
I