Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-18 Thread Yassine Chaouche
ll usable enough to retrieve diagnostic info from whatever chip or special memory that would have been set aside, like black boxes in airplanes. A tool that queries the structure of the raid can tell you what drives the raid expects to see, regardless of whether they are available or not [...]

Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-16 Thread Yassine Chaouche
are can use for forensics and diagnosis. A tool that queries the structure of the raid can tell you what drives the raid expects to see, regardless of whether they are available or not[...] [...] You can look for the (non-free) ssacli tool from HPE[...] I figure that's the tool to query the s

Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-16 Thread Yassine Chaouche
behind the RAID controller, so we need special software and drivers that send low level commands directly to raid controllers. Best, -- yassine -- sysadm http://about.me/ychaouche Looking for side gigs.
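
A minimal sketch of such a low-level query with smartmontools, assuming an HP Smart Array controller exposed as /dev/sg0 (device name and drive numbers will differ per system):

  # ask the controller to forward the SMART query to physical drive N
  smartctl -a -d cciss,0 /dev/sg0
  smartctl -a -d cciss,1 /dev/sg0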

Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-15 Thread Erwan David
that, > for servers I didn't install, > I need to know their RAID setup in advance, > or at least the total number of disks, > then compare with the output of smartctl or cciss_vol_status. You can get the layout with lsblk, IIRC -- Erwan David
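
A hedged example of that check (standard lsblk columns; note that a hardware controller usually exposes only its logical volumes, not the member disks):

  lsblk -o NAME,SIZE,TYPE,MODEL,SERIAL,TRAN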

Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-13 Thread Charles Curley
On Thu, 13 Mar 2025 12:36:46 -0400 Michael Stone wrote: > I guess I don't understand how you expect smartctl to query a dead > disk. It's dead, that means it's not going to respond. Not quite. The electronics may respond even if the head-disk assembly (HDA) is broken. It probably won't give a co

Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-13 Thread Michael Stone
On Thu, Mar 13, 2025 at 11:35:08AM -0600, Charles Curley wrote: On Thu, 13 Mar 2025 12:36:46 -0400 Michael Stone wrote: I guess I don't understand how you expect smartctl to query a dead disk. It's dead, that means it's not going to respond. Not quite. The electronics may respond even if the

Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-13 Thread Michael Stone
how you expect smartctl to query a dead disk. It's dead, that means it's not going to respond. A tool that queries the structure of the raid can tell you what drives the raid expects to see, regardless of whether they are available or not. A tool that queries drives can't tell you

Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-13 Thread Yassine Chaouche
On 3/12/25 at 17:51, Greg wrote: According to my (very old) scripts you should use something like /dev/cciss/c1d0 instead of /dev/sg? It could be the old driver/kernel module interface. The one I have is /dev/sgx. I have no /dev/cciss* present. Besides, for HP Smart Array, the example found

Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-13 Thread Yassine Chaouche
On 3/12/25 at 23:11, Michael Stone wrote: Two of the drives are dead, you're not going to see anything from them. So this means I can't rely on smartctl to list physical disks, which implies that, for servers I didn't install, I need to know their RAID setup in advance, or at

Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-12 Thread Michael Stone
o see anything from them and this is the output from cciss root@messagerie-recup[10.10.10.22] ~ # cciss_vol_status -V /dev/sg0 Controller: Smart Array P420i Board ID: 0x3354103c Logical drives: 2 Running firmware: 3.22 ROM firmware: 3.22 /dev/sda: (Smart Array P420i) RAID 1(1+0) Vol

Re: Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-12 Thread Greg
ROM firmware: 3.22 /dev/sda: (Smart Array P420i) RAID 1(1+0) Volume 0 status: Using interim recovery mode. Failed drives: connector 1I box 2 bay 2 HP EG0450FBLSF 6XQ1HLAJB239FN4Z HPD7 Total of 1 failed physical

Accessing individual disk's SMART diagnostics behind a physical RAID

2025-03-12 Thread Yassine Chaouche
-recup[10.10.10.22] ~ # and this is the output from cciss root@messagerie-recup[10.10.10.22] ~ # cciss_vol_status -V /dev/sg0 Controller: Smart Array P420i Board ID: 0x3354103c Logical drives: 2 Running firmware: 3.22 ROM firmware: 3.22 /dev/sda: (Smart Array P420i) RAID 1

Re: Removing an unwanted RAID 1 array

2025-01-12 Thread Roger Price
On Sat, 11 Jan 2025, Michael Stone wrote: > > root@titan ~ mdadm --misc /dev/md4 --stop > > This is incorrect syntax, and a no-op (so the array did not stop). You want > `mdadm --misc --stop /dev/md4`. The --misc is implied so you can just use > `mdadm --stop /dev/md4` I ran the command root@t
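
For reference, a minimal sketch of the corrected invocation discussed above (device name taken from the thread; check /proc/mdstat afterwards):

  mdadm --stop /dev/md4    # --misc is implied
  cat /proc/mdstat         # md4 should no longer be listed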

Re: Removing an unwanted RAID 1 array

2025-01-11 Thread Michael Stone
On Sat, Jan 11, 2025 at 12:11:39PM +0100, Roger Price wrote: I am unable to erase an unwanted RAID 1 array. Command cat /proc/mdstat reported md4 : active raid1 sdb7[0] 20970368 blocks super 1.0 [2/1] [U_] bitmap: 1/1 pages [4KB], 65536KB chunk I understand that the array has to

Re: Removing an unwanted RAID 1 array

2025-01-11 Thread pocket
> Sent: Saturday, January 11, 2025 at 9:51 AM > From: "Roger Price" > To: "debian-user Mailing List" > Subject: Re: Removing an unwanted RAID 1 array > > On Sat, 11 Jan 2025, Greg Wooledge wrote: > > > On Sat, Jan 11, 2025 at 13:10:51 +010

Re: Removing an unwanted RAID 1 array

2025-01-11 Thread Roger Price
On Sat, 11 Jan 2025, Greg Wooledge wrote: > On Sat, Jan 11, 2025 at 13:10:51 +0100, Roger Price wrote: > > On Sat, 11 Jan 2025, Michel Verdier wrote: > > > > > If I remember well you have to first set the device as faulty with --fail > > > before --remove could be accepted. > > > > No luck : >

Re: Removing an unwanted RAID 1 array

2025-01-11 Thread Greg Wooledge
On Sat, Jan 11, 2025 at 13:10:51 +0100, Roger Price wrote: > On Sat, 11 Jan 2025, Michel Verdier wrote: > > > If I remember well you have to first set the device as faulty with --fail > > before --remove could be accepted. > > No luck : > > root@titan ~ mdadm --fail /dev/md4 --remove /dev/sdb7

Re: Removing an unwanted RAID 1 array

2025-01-11 Thread Roger Price
> But if this is the last device you can erase the partition to remove RAID > informations. I intend to erase the partition, but I hoped for something cleaner from an mdadm RAID management point of view. Roger

Re: Removing an unwanted RAID 1 array

2025-01-11 Thread Michel Verdier
he device as faulty with --fail before --remove could be accepted. But if this is the last device you can erase the partition to remove RAID informations.
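
A hedged sketch of both paths mentioned here, using the device names from this thread (adjust to your own array before running anything):

  # removing one member from a still-running array
  mdadm /dev/md4 --fail /dev/sdb7 --remove /dev/sdb7
  # for the last member: stop the array, then erase its md metadata
  mdadm --stop /dev/md4
  mdadm --zero-superblock /dev/sdb7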

Removing an unwanted RAID 1 array

2025-01-11 Thread Roger Price
I am unable to erase an unwanted RAID 1 array. Command cat /proc/mdstat reported md4 : active raid1 sdb7[0] 20970368 blocks super 1.0 [2/1] [U_] bitmap: 1/1 pages [4KB], 65536KB chunk I understand that the array has to be inactive before it can be removed, so I stopped it, but

Re: Bash expression to detect dying RAID devices

2024-12-24 Thread Charles Curley
On Tue, 24 Dec 2024 15:45:31 +0100 (CET) Roger Price wrote: > File /proc/mdstat indicates a dying RAID device with an output > section such as > > md3 : active raid1 sdg6[0] > 871885632 blocks super 1.0 [2/1] [U_] > bitmap: 4/7 pages [16KB], 65536KB chun

Re: Bash expression to detect dying RAID devices

2024-12-24 Thread Roger Price
On Tue, 24 Dec 2024, Greg Wooledge wrote: On Tue, Dec 24, 2024 at 15:45:31 +0100, Roger Price wrote: md3 : active raid1 sdg6[0] 871885632 blocks super 1.0 [2/1] [U_] bitmap: 4/7 pages [16KB], 65536KB chunk Note the [U-]. There isn't any [U-] in that output. There is [U_].

Re: Bash expression to detect dying RAID devices

2024-12-24 Thread Andy Smith
Hi, On Tue, Dec 24, 2024 at 03:45:31PM +0100, Roger Price wrote: > I would like to scan /proc/mdstat and set a flag if [U-], [-U] or [--] > occur. Others have pointed out your '-' vs '_' confusion. But are you sure you wouldn't rather just rely on the "mdadm --monitor" command that emails you whe

Re: Bash expression to detect dying RAID devices

2024-12-24 Thread Nicolas George
Roberto C. Sánchez (12024-12-24): > I think that '==' is the wrong tool. string1 == string2 string1 = string2 True if the strings are equal. = should be used with the test command for POSIX conformance. When used with the [[ command,

Re: Bash expression to detect dying RAID devices

2024-12-24 Thread Greg Wooledge
On Tue, Dec 24, 2024 at 10:37:29 -0500, Roberto C. Sánchez wrote: > I think that '==' is the wrong tool. That is testing for string > equality, whilst you are looking for a partial match. This is what I was > able to get working after hacking on it for a minute or two: > > #! /bin/bash -u > set -x

Re: Bash expression to detect dying RAID devices

2024-12-24 Thread Roberto C . Sánchez
Hi Roger, On Tue, Dec 24, 2024 at 03:45:31PM +0100, Roger Price wrote: > File /proc/mdstat indicates a dying RAID device with an output section such > as > > md3 : active raid1 sdg6[0] >871885632 blocks super 1.0 [2/1] [U_] >bitmap: 4/7 pages [16KB], 65536KB c

Re: Bash expression to detect dying RAID devices

2024-12-24 Thread Stefan Monnier
> File /proc/mdstat indicates a dying RAID device with an output section such > as > > md3 : active raid1 sdg6[0] >871885632 blocks super 1.0 [2/1] [U_] >bitmap: 4/7 pages [16KB], 65536KB chunk > > Note the [U-]. I can't see a "[U-]", only a "[U_]" Stefan

Re: Bash expression to detect dying RAID devices

2024-12-24 Thread Greg Wooledge
On Tue, Dec 24, 2024 at 15:45:31 +0100, Roger Price wrote: > File /proc/mdstat indicates a dying RAID device with an output section such > as > > md3 : active raid1 sdg6[0] >871885632 blocks super 1.0 [2/1] [U_] >bitmap: 4/7 pages [16KB], 65536KB chunk >

Re: Bash expression to detect dying RAID devices

2024-12-24 Thread Nicolas George
Roger Price (12024-12-24): > File /proc/mdstat indicates a dying RAID device with an output section such > as Maybe try to find a more script-friendly source for that information in /sys/class/block/md127/md/? Regards, -- Nicolas George
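
A hedged example of that script-friendly route (the md device name is a placeholder; the sysfs attribute reports how many member slots are currently missing):

  cat /sys/class/block/md127/md/degraded   # 0 means all members present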

Bash expression to detect dying RAID devices

2024-12-24 Thread Roger Price
File /proc/mdstat indicates a dying RAID device with an output section such as md3 : active raid1 sdg6[0] 871885632 blocks super 1.0 [2/1] [U_] bitmap: 4/7 pages [16KB], 65536KB chunk Note the [U-]. The "-" says /dev/sdh is dead. I would like to scan /proc/mdstat and
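
One minimal way to do that scan in bash, assuming the status strings look like the ones quoted in this thread (an underscore, not a dash, marks a missing member):

  if grep -qE '\[[U_]*_[U_]*\]' /proc/mdstat; then
      echo "degraded md array detected"
  fi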

Re: I/O errors during RAID check but no SMART errors

2024-10-10 Thread Franco Martelli
On 09/10/24 at 21:10, Jochen Spieker wrote: Andy Smith: Hi, On Wed, Oct 09, 2024 at 08:41:38PM +0200, Franco Martelli wrote: Do you know whether MD is clever enough to send an email to root when it fails the device? Or have I to keep an eye on /proc/mdstat? For more than a decade mdadm has s

Re: I/O errors during RAID check but no SMART errors

2024-10-09 Thread Jochen Spieker
Andy Smith: > Hi, > > On Wed, Oct 09, 2024 at 08:41:38PM +0200, Franco Martelli wrote: >> Do you know whether MD is clever enough to send an email to root when it >> fails the device? Or have I to keep an eye on /proc/mdstat? > > For more than a decade mdadm has shipped with a service that runs i

Re: I/O errors during RAID check but no SMART errors

2024-10-09 Thread Andy Smith
Hi, On Wed, Oct 09, 2024 at 08:41:38PM +0200, Franco Martelli wrote: > Do you know whether MD is clever enough to send an email to root when it > fails the device? Or have I to keep an eye on /proc/mdstat? For more than a decade mdadm has shipped with a service that runs in monitor mode to do thi
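
A hedged sketch of checking that this monitoring path is active on a Debian system (unit and file names are the usual Debian defaults and worth verifying locally):

  grep MAILADDR /etc/mdadm/mdadm.conf       # e.g. "MAILADDR root"
  systemctl status mdmonitor.service        # the shipped monitor service
  mdadm --monitor --scan --oneshot --test   # send a test mail for each array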

Re: I/O errors during RAID check but no SMART errors

2024-10-09 Thread Franco Martelli
On 08/10/24 at 20:40, Andy Smith wrote: Hi, On Tue, Oct 08, 2024 at 04:58:46PM +0200, Jochen Spieker wrote: Why is the RAID still considered healthy? At some point I would expect the disk to be kicked from the RAID. This will happen when/if MD can't compensate by reading data from

Re: I/O errors during RAID check but no SMART errors

2024-10-09 Thread Jochen Spieker
That is exactly what was confusing me here. > What I would not do at this point is subject it to more physical > stress than unavoidable. Unless you absolutely must, do not physically > unplug or remove that disk before the RAID array has resilvered onto > the new disk. It's currently

Re: I/O errors during RAID check but no SMART errors

2024-10-09 Thread Jochen Spieker
e...@gmx.us: > On 10/8/24 16:07, Jochen Spieker wrote: >>| Oct 06 14:27:11 jigsaw kernel: I/O error, dev sdb, sector 9361257600 op >>0x0:(READ) flags 0x0 phys_seg 150 prio class 3 >>| Oct 06 14:27:30 jigsaw kernel: I/O error, dev sdb, sector 9361275264 op >>0x0:(READ) flags 0x4000 phys_seg 161 pr

Re: I/O errors during RAID check but no SMART errors

2024-10-08 Thread Michael Kjörling
would definitely question a value of 0 for failed (current pending and offline uncorrectable) _and_ reallocated sectors for a disk that's reporting I/O errors, for example. _At least_ one of those should be >0 for a truthful storage device in that situation. What I would not do at this

Re: I/O errors during RAID check but no SMART errors

2024-10-08 Thread eben
On 10/8/24 16:07, Jochen Spieker wrote: | Oct 06 14:27:11 jigsaw kernel: I/O error, dev sdb, sector 9361257600 op 0x0:(READ) flags 0x0 phys_seg 150 prio class 3 | Oct 06 14:27:30 jigsaw kernel: I/O error, dev sdb, sector 9361275264 op 0x0:(READ) flags 0x4000 phys_seg 161 prio class 3 | Oct 06 1

Re: I/O errors during RAID check but no SMART errors

2024-10-08 Thread Jochen Spieker
ncern right now though. > > Here is a thing I wrote about it quite some time ago: > > > https://strugglers.net/~andy/mothballed-blog/2015/11/09/linux-software-raid-and-drive-timeouts/#how-to-check-set-drive-timeouts Thanks a lot again. >> Do you think I should do remove the dr

Re: I/O errors during RAID check but no SMART errors

2024-10-08 Thread Jochen Spieker
| Oct 06 14:37:20 jigsaw kernel: I/O error, dev sdb, sector 9400871680 op 0x0:(READ) flags 0x0 phys_seg 160 prio class 3 … and so on. On the second RAID check, the numbers are not the same, but in the same range. > If the disk is a few days away from being replaced, I would not > bother sh

Re: I/O errors during RAID check but no SMART errors

2024-10-08 Thread Andy Smith
so when there's issues. The kernel SCSI layer will try several times, so the drive's timeout is multiplied. Only if this ends up exceeding 30s will you get a read error, and the message from MD about rescheduling the sector. > The data is still readable from the other disk in the RAID, ri
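
A hedged illustration of the timeout tuning described in that link (the 7-second ERC values are a common choice for drives behind md, not a universal recommendation, and not every drive supports SCT ERC):

  smartctl -l scterc /dev/sdb          # show the drive's error-recovery timers
  smartctl -l scterc,70,70 /dev/sdb    # set read/write ERC to 7.0 s, if supported
  cat /sys/block/sdb/device/timeout    # kernel SCSI command timeout, in seconds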

Re: I/O errors during RAID check but no SMART errors

2024-10-08 Thread Dan Ritter
Jochen Spieker wrote: > I have two disks in a RAID-1: > > | $ cat /proc/mdstat > | Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] > [raid4] [raid10] > | md0 : active raid1 sdb1[2] sdc1[0] > | 5860390400 blocks super 1.2 [2/2] [UU] > |

I/O errors during RAID check but no SMART errors

2024-10-08 Thread Jochen Spieker
Hey, please forgive me for posting a question that is not Debian-specific, but maybe somebody here can explain this to me. Ten years ago I would have posted to Usenet instead. I have two disks in a RAID-1: | $ cat /proc/mdstat | Personalities : [raid1] [linear] [multipath] [raid0] [raid6

Re: Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-28 Thread Franco Martelli
Hi Marc, On 20/05/24 at 14:35, Marc SCHAEFER wrote: 3. grub BOOT FAILS IF ANY LV HAS dm-integrity, EVEN IF NOT LINKED TO / if I reboot now, grub2 complains about rimage issues, clear the screen and then I am at the grub2 prompt. Booting is only possible with Debian rescue, disabling the dm-int

Re: Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-22 Thread Marc SCHAEFER
ld record the exact address where the kernel & initrd was, regardless of abstractions layers :->) Recently, I have been playing with RAID-on-LVM (I was mostly using LVM on md before, which worked with grub), and it works too. Where grub fails, is if you have /boot on the same LVM volume g

Re: Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-22 Thread Stefan Monnier
> I found this [1], quoting: "I'd also like to share an issue I've > discovered: if /boot's partition is a LV, then there must not be a > raidintegrity LV anywhere before that LV inside the same VG. Otherwise, > update-grub will show an error (disk `lvmid/.../...' not found) and GRUB > cannot boot.

Re: Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-22 Thread Marc SCHAEFER
Hello, On Wed, May 22, 2024 at 10:13:06AM +, Andy Smith wrote: > metadata tags to some PVs prevented grub from assembling them, grub is indeed very fragile if you use dm-integrity anywhere on any of your LVs on the same VG where /boot is (or at least if in the list of LVs, the dm-integrity pr

Re: Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-22 Thread Andy Smith
Hello, On Wed, May 22, 2024 at 08:57:38AM +0200, Marc SCHAEFER wrote: > I will try this work-around and report back here. As I said, I can > live with /boot on RAID without dm-integrity, as long as the rest can be > dm-integrity+raid protected. I'm interested in how you get on.

Re: Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-22 Thread Marc SCHAEFER
Hello, On Wed, May 22, 2024 at 08:57:38AM +0200, Marc SCHAEFER wrote: > I will try this work-around and report back here. As I said, I can > live with /boot on RAID without dm-integrity, as long as the rest can be > dm-integrity+raid protected. So, enable dm-integrity on all LVs,

Re: Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-21 Thread Marc SCHAEFER
ity enabled. I will try this work-around and report back here. As I said, I can live with /boot on RAID without dm-integrity, as long as the rest can be dm-integrity+raid protected. [1] https://unix.stackexchange.com/questions/717763/lvm2-integrity-feature-breaks-lv-activation
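
A hedged sketch of that work-around with placeholder VG/LV names (vg1/boot and vg1/data are assumptions; lvconvert --raidintegrity toggles dm-integrity on an existing raid1 LV, and the LV-ordering caveat quoted elsewhere in this thread may still apply):

  lvconvert --raidintegrity n vg1/boot   # keep the LV holding /boot plain raid1
  lvconvert --raidintegrity y vg1/data   # protect the data LVs
  update-grub                            # re-check for the "disk lvmid/... not found" error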

Re: Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-21 Thread Marc SCHAEFER
egritysetup (from LUKS), but LVM RAID PVs -- I don't use LUKS encryption anyway on that system 2) the issue is not the kernel not supporting it, because when the system is up, it works (I have done tests to destroy part of the underlying devices, they get detected and fixed correctly)

Re: Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-21 Thread Franco Martelli
On 20/05/24 at 14:35, Marc SCHAEFER wrote: Any idea what could be the problem? Any way to just make grub2 ignore the rimage (sub)volumes at setup and boot time? (I could live with / aka vg1/root not using dm-integrity, as long as the data/docker/etc volumes are integrity-protected) ? Or how to

Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-20 Thread Marc SCHAEFER
Hello, 1. INITIAL SITUATION: WORKS (no dm-integrity at all) I have an up-to-date Debian bookworm system that boots correctly with kernel 6.1.0-21-amd64. It is set up like this: - /dev/nvme1n1p1 is /boot/efi - /dev/nvme0n1p2 and /dev/nvme1n1p2 are the two LVM physical volumes - a volume g

Re: Data disaster preparedness and recovery without RAID

2023-12-14 Thread Pocket
files that glow blue. ;-) My files glow green so I am safe > > >>> On 12/13/23 10:42, Pocket wrote: >>>> After removing raid, I completely redesigned my network to be more in line >>>> with the howtos and other information. >>> >>> Plea

Re: Data disaster preparedness and recovery without RAID

2023-12-14 Thread David Christensen
), restoring from the snapshot should produce a set of files that work correctly. Radioactive I see Do not eat files that glow blue. ;-) On 12/13/23 10:42, Pocket wrote: After removing raid, I completely redesigned my network to be more inline with the howtos and other information. Please

Re: Data disaster preparedness and recovery without RAID

2023-12-14 Thread Pocket
Sent from my iPad > On Dec 14, 2023, at 4:09 AM, David Christensen > wrote: > > On 12/13/23 08:51, Pocket wrote: >> I gave up using raid many years ago and I used the extra drives as backups. >> Wrote a script to rsync /home to the backup drives. > > >

Data disaster preparedness and recovery without RAID

2023-12-14 Thread David Christensen
On 12/13/23 08:51, Pocket wrote: I gave up using raid many years ago and I used the extra drives as backups. Wrote a script to rsync /home to the backup drives. While external HDD enclosures can work, my favorite is mobile racks: https://www.startech.com/en-us/hdd/drw150satbk https
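
A minimal sketch of such a backup script (mount point is an assumption; -aHAX preserves hard links, ACLs and xattrs, --delete mirrors removals):

  #!/bin/sh
  # mirror /home onto a backup drive mounted at /mnt/backup
  rsync -aHAX --delete /home/ /mnt/backup/home/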

Re: Raid Array and Changing Motherboard

2023-07-02 Thread David Christensen
On 7/2/23 13:11, Mick Ab wrote: On 19:58, Sun, 2 Jul 2023 David Christensen On 7/2/23 10:23, Mick Ab wrote: I have a software RAID 1 array of two hard drives. Each of the two disks contains the Debian operating system and user data. I am thinking of changing the motherboard because of

Re: Raid Array and Changing Motherboard

2023-07-02 Thread Alexander V. Makartsev
On 02.07.2023 22:23, Mick Ab wrote: I have a software RAID 1 array of two hard drives. Each of the two disks contains the Debian operating system and user data. I am thinking of changing the motherboard because of problems that might be connected to the current motherboard. The new

Re: Raid Array and Changing Motherboard

2023-07-02 Thread Mick Ab
On 19:58, Sun, 2 Jul 2023 David Christensen > On 7/2/23 10:23, Mick Ab wrote: > > I have a software RAID 1 array of two hard drives. Each of the two disks > > contains the Debian operating system and user data. > > > > I am thinking of changing the motherboard becaus

Re: Raid Array and Changing Motherboard

2023-07-02 Thread David Christensen
On 7/2/23 10:23, Mick Ab wrote: I have a software RAID 1 array of two hard drives. Each of the two disks contains the Debian operating system and user data. I am thinking of changing the motherboard because of problems that might be connected to the current motherboard. The new motherboard

Re: Raid Array and Changing Motherboard

2023-07-02 Thread Charles Curley
On Sun, 2 Jul 2023 18:23:31 +0100 Mick Ab wrote: > I am thinking of changing the motherboard because of problems that > might be connected to the current motherboard. The new motherboard > would be the same make and model as the current motherboard. > > Would I need to recreate t

Raid Array and Changing Motherboard

2023-07-02 Thread Mick Ab
I have a software RAID 1 array of two hard drives. Each of the two disks contains the Debian operating system and user data. I am thinking of changing the motherboard because of problems that might be connected to the current motherboard. The new motherboard would be the same make and model as
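
For what it's worth, a hedged sketch of verifying that the array assembles by UUID rather than by anything board-specific, so a same-model motherboard swap should be transparent to md:

  mdadm --detail --scan              # prints ARRAY lines with UUIDs
  grep ARRAY /etc/mdadm/mdadm.conf   # should match the UUIDs above
  mdadm --assemble --scan            # reassemble from the config if needed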

Re: More RAID weirdness: external RAID over network

2023-03-20 Thread Nicolas George
Tim Woodall (12023-03-17): > Yes. It's possible. Took me about 5 minutes to work out the steps. All > of which are already mentioned upthread. All of them, except one. > mdadm --build ${md} --level=raid1 --raid-devices=2 ${d1} missing Until now, all suggestions with mdadm starte

Re: RAID1 + iSCSI as backup (was Re: More RAID weirdness: external RAID over network)

2023-03-18 Thread Dan Ritter
ly fail a disk then store it in a safe deposit box or > > > > something as > > > > a backup, but I have not gotten around to it. > > > > > > > > It sounds to me like adding an iSCSI volume (e.g. from AWS) to the RAID > > > > as >

Re: RAID1 + iSCSI as backup (was Re: More RAID weirdness: external RAID over network)

2023-03-18 Thread David Christensen
(plus a hot spare). On top of that is LUKS, and on top of that is LVM. I keep meaning to manually fail a disk then store it in a safe deposit box or something as a backup, but I have not gotten around to it. It sounds to me like adding an iSCSI volume (e.g. from AWS) to the RAID as an additional

Re: RAID1 + iSCSI as backup (was Re: More RAID weirdness: external RAID over network)

2023-03-17 Thread Gregory Seidman
> spare). On top of that is LUKS, and on top of that is LVM. I keep meaning > > to manually fail a disk then store it in a safe deposit box or something as > > a backup, but I have not gotten around to it. > > > > It sounds to me like adding an iSCSI volume (e.g. from AWS)

Re: RAID1 + iSCSI as backup (was Re: More RAID weirdness: external RAID over network)

2023-03-17 Thread David Christensen
On 3/17/23 12:36, Gregory Seidman wrote: On Fri, Mar 17, 2023 at 06:00:46PM +0300, Reco wrote: [...] PS There's that old saying, "RAID is not a substitute for a backup". What you're trying to do sounds suspiciously similar to an old "RAID split-mirror" backup

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Tim Woodall
d umount /mnt/fred mdadm --build ${md} --level=raid1 --raid-devices=2 ${d1} missing echo "Mounting single disk raid" mount ${md} /mnt/fred ls -al /mnt/fred mdadm ${md} --add ${d2} sleep 10 echo "Done sleeping - sync had better be done!" mdadm ${md} --fail ${d2} mdadm ${md
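
Spelled out, the approach sketched in that script, hedged and with placeholder device names; it only preserves the existing data because mdadm --build writes no superblock onto the members:

  d1=/dev/existing_disk   # block device that already holds the data
  d2=/dev/remote_disk     # e.g. an iSCSI or nbd device of at least the same size
  md=/dev/md0
  mdadm --build $md --level=raid1 --raid-devices=2 $d1 missing
  mount $md /mnt/fred
  mdadm $md --add $d2     # resync copies the existing data to the second device
  # wait for the resync to finish, then split the mirror again
  mdadm $md --fail $d2
  mdadm $md --remove $d2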

Re: RAID1 + iSCSI as backup (was Re: More RAID weirdness: external RAID over network)

2023-03-17 Thread Dan Ritter
Gregory Seidman wrote: > On Fri, Mar 17, 2023 at 06:00:46PM +0300, Reco wrote: > [...] > > PS There's that old saying, "RAID is not a substitute for a backup". > > What you're trying to do sounds suspiciously similar to an old "RAID > > split-m

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Nicolas George
Nicolas George (12023-03-17): > It is not vagueness, it is genericness: /dev/something is anything and > contains anything, and I want a solution that works for anything. Just to be clear: I KNOW that what I am asking, the ability to synchronize an existing block device onto another over the netwo

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Nicolas George
Greg Wooledge (12023-03-17): > > I have a block device on the local host /dev/something with data on it. ^^^ There. I have data, therefore, any solution that assumes the data is not there can only be proposed by somebody who di

RAID1 + iSCSI as backup (was Re: More RAID weirdness: external RAID over network)

2023-03-17 Thread Gregory Seidman
On Fri, Mar 17, 2023 at 06:00:46PM +0300, Reco wrote: [...] > PS There's that old saying, "RAID is not a substitute for a backup". > What you're trying to do sounds suspiciously similar to an old "RAID > split-mirror" backup technique. Just saying. This t

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Greg Wooledge
On Fri, Mar 17, 2023 at 05:01:57PM +0100, Nicolas George wrote: > Dan Ritter (12023-03-17): > > If Reco didn't understand your question, it's because you are > > very light on details. > > No. Reco's answers contradict the very first sentence of my first > e-mail. The first sentence of your first

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Tim Woodall
On Fri, 17 Mar 2023, Nicolas George wrote: Dan Ritter (12023-03-17): If Reco didn't understand your question, it's because you are very light on details. No. Reco's answers contradict the very first sentence of my first e-mail. Is this possible? How can Reco's answers contradict that. Re

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Nicolas George
Dan Ritter (12023-03-17): > If Reco didn't understand your question, it's because you are > very light on details. No. Reco's answers contradict the very first sentence of my first e-mail. -- Nicolas George

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Dan Ritter
Nicolas George wrote: > Reco (12023-03-17): > > Well, theoretically you can use Btrfs instead. > > No, I cannot. Obviously. > > > What you're trying to do sounds suspiciously similar to an old "RAID > > split-mirror" backup technique. > >

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Nicolas George
Reco (12023-03-17): > Well, theoretically you can use Btrfs instead. No, I cannot. Obviously. > What you're trying to do sounds suspiciously similar to an old "RAID > split-mirror" backup technique. Absolutely not. If you do not understand the question, it is okay to no

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Reco
nclusion, implementing mdadm + iSCSI + ext4 would be probably the best way to achieve whatever you want to do. PS There's that old saying, "RAID is not a substitute for a backup". What you're trying to do sounds suspiciously similar to an old "RAID split-mirror" backup technique. Just saying. Reco

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Nicolas George
Reco (12023-03-17): > Yes, it will destroy the contents of the device, so backup No. If I accepted to have to rely on an extra copy of the data, I would not be trying to do something complicated like that. -- Nicolas George

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Reco
ool resilvering" (syncronization between mirror sides) concerns only actual data residing in a zpool. I.e. if you have 1Tb mirrored zpool which is filled to 200Gb you will resync 200Gb. In comparison, mdadm RAID resync will happily read 1Tb from one drive and write 1Tb to another *unless* yo

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Nicolas George
/md0 --level=mirror --force --raid-devices=1 \ > --metadata=1.0 /dev/local_dev missing > > --metadata=1.0 is highly important here, as it's one of the few mdadm > metadata formats that keeps said metadata at the end of the device. Well, I am sorry to report that you did not rea

Re: More RAID weirdness: external RAID over network

2023-03-17 Thread Reco
sor architecture restrictions, and somewhat unusual design decisions for the filesystem storage. So let's keep it on MDADM + iSCSI for now. > What I want to do: > > 1. Stop programs and umount /dev/something > > 2. mdadm --create /dev/md0 --level=mirror --force --raid-devices=1 \ >

More RAID weirdness: external RAID over network

2023-03-17 Thread Nicolas George
). What I want to do: 1. Stop programs and umount /dev/something 2. mdadm --create /dev/md0 --level=mirror --force --raid-devices=1 \ --metadata-file /data/raid_something /dev/something → Now I have /dev/md0 that is an exact image of /dev/something, with changes on it synced instantaneously. 3

Re: Whole-disk RAID and GPT/UEFI

2023-02-24 Thread David Christensen
On 2/23/23 11:05, Tim Woodall wrote: On Wed, 22 Feb 2023, Nicolas George wrote: Is there a solution to have a whole-disk RAID (software, mdadm) that is also partitioned in GPT and bootable in UEFI? I've wanted this ... I think only hardware raid where the bios thinks it's a s

Re: Whole-disk RAID and GPT/UEFI

2023-02-23 Thread Tim Woodall
On Wed, 22 Feb 2023, Nicolas George wrote: Hi. Is there a solution to have a whole-disk RAID (software, mdadm) that is also partitioned in GPT and bootable in UEFI? I've wanted this but settled for using dd to copy the start of the disk, fdisk to rewrite the GPT properly then mda

Re: Whole-disk RAID and GPT/UEFI

2023-02-22 Thread Juri Grabowski
Hello, I have seen some installations with the following setup (GPT on both disks):
  sda1 sdb1  bios_grub  md1  0.9
  sda2 sdb2  efi        md2  0.9
  sda3 sdb3  /boot      md3  0.9
  sda4 sdb4  /          md?  1.1
On such installations it's important that the grub installation is done with "grub-install --removable". I mean there were some grub bugs about
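
A hedged sketch of how such an EFI mirror is typically created so the firmware still sees a plain FAT filesystem (device names are placeholders; metadata 0.9 or 1.0 both keep the md superblock at the end of the partition):

  mdadm --create /dev/md2 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkfs.vfat -F 32 /dev/md2
  mount /dev/md2 /boot/efi
  grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable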

Re: Whole-disk RAID and GPT/UEFI

2023-02-22 Thread DdB
Am 22.02.2023 um 17:07 schrieb Nicolas George: > Unfortunately, that puts the partition table > and EFI partition outside the RAID: if you have to add/replace a disk, > you need to partition and reinstall GRUB, that makes a few more > manipulations on top of syncing the RAID. Yes, i g

Re: Whole-disk RAID and GPT/UEFI

2023-02-22 Thread Dan Ritter
Nicolas George wrote: > Hi. > > Is there a solution to have a whole-disk RAID (software, mdadm) that is > also partitioned in GPT and bootable in UEFI? Not that I know of. An EFI partition needs to be FAT32 or VFAT. What I think you could do: Partition the disks with GPT: 2 par

Re: Whole-disk RAID and GPT/UEFI

2023-02-22 Thread Nicolas George
o make an USB stick that was bootable in legacy mode, bootable in UEFI mode and usable as a regular USB stick (spoiler: it worked, until I tried it with Windows.) But it will not help for this issue. > The only issue, i have had a look at, was the problem to have a raid, > that is bootable

Re: Whole-disk RAID and GPT/UEFI

2023-02-22 Thread DdB
up (not use them at all) and unfortunately that applies to standard GPT tools as well, but the dual bootability can solve some problems. The only issue, i have had a look at, was the problem to have a raid, that is bootable no matter which one of the drives initially fails, a problem, that can

Whole-disk RAID and GPT/UEFI

2023-02-22 Thread Nicolas George
Hi. Is there a solution to have a whole-disk RAID (software, mdadm) that is also partitioned in GPT and bootable in UEFI? What I imagine: - RAID1, mirroring: if you ignore the RAID, the data is there. - The GPT metadata is somewhere not too close to the beginning of the drive nor too close

Re: Dell CMOS Setup -> System Configuration -> SATA Operation -> RAID On vs AHCI

2022-12-24 Thread David Christensen
computers. When I boot the flash drive in a Dell Precision 3630 Tower that has Windows 11 Pro installed on the internal NVMe drive, the internal PCIe NVMe drive is not visible to Linux: The work-around is to change CMOS Setup -> System Configuration -> SATA Operation from "RAID On" to "

Re: Dell CMOS Setup -> System Configuration -> SATA Operation -> RAID On vs AHCI

2022-12-24 Thread David Christensen
acity storage costs to a minimum." I believe that is marketing speak for "the computer supports Optane Memory", not "every machine comes with Optane Memory". I believe that's the pseudo-RAID you are seeing in the UEFI setup screen. Maybe you can see the physical

Re: Dell CMOS Setup -> System Configuration -> SATA Operation -> RAID On vs AHCI

2022-12-24 Thread Andrew M.A. Cater
8:41 11.2G 0 part > `-sda4_crypt 254:00 11.2G 0 crypt / > sr0 11:01 1024M 0 rom > > 2022-12-23 18:46:19 root@laalaa ~/laalaa.tracy.holgerdanske.com > # l /dev/n* > /dev/null /dev/nvram > > /dev/net: > ./ ../ tun > > > The w

Re: Dell CMOS Setup -> System Configuration -> SATA Operation -> RAID On vs AHCI

2022-12-24 Thread Jeffrey Walton
s/dfb/p/precision-3630-workstation/pd, the machine has Optane. I believe that's the pseudo-RAID you are seeing in the UEFI setup screen. Maybe you can see the physical drives using raid utilities. Jeff

Re: Dell CMOS Setup -> System Configuration -> SATA Operation -> RAID On vs AHCI

2022-12-24 Thread David Christensen
boot the flash drive in a Dell Precision 3630 Tower that has Windows 11 Pro installed on the internal NVMe drive, the internal PCIe NVMe drive is not visible to Linux: The work-around is to change CMOS Setup -> System Configuration -> SATA Operation from "RAID On" to "AHCI".

Re: Dell CMOS Setup -> System Configuration -> SATA Operation -> RAID On vs AHCI

2022-12-23 Thread Jeffrey Walton
.2G 0 part >`-sda4_crypt 254:00 11.2G 0 crypt / > sr0 11:01 1024M 0 rom > > 2022-12-23 18:46:19 root@laalaa ~/laalaa.tracy.holgerdanske.com > # l /dev/n* > /dev/null /dev/nvram > > /dev/net: > ./ ../ tun > > > The work-around is to c

Dell CMOS Setup -> System Configuration -> SATA Operation -> RAID On vs AHCI

2022-12-23 Thread David Christensen
254:00 11.2G 0 crypt / sr0 11:01 1024M 0 rom 2022-12-23 18:46:19 root@laalaa ~/laalaa.tracy.holgerdanske.com # l /dev/n* /dev/null /dev/nvram /dev/net: ./ ../ tun The work-around is to change CMOS Setup -> System Configuration -> SATA Operation from "RAID On" to

network raid (Re: deduplicating file systems: VDO with Debian?)

2022-11-11 Thread hede
On 10.11.2022 14:40, Curt wrote: (or maybe a RAID array is conceivable over a network and a distance?). Not only conceivable, but indeed practicable: Linbit DRBD
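
A minimal, hedged DRBD resource sketch for such a network mirror (hostnames, devices and addresses are placeholders; see drbd.conf(5) for the full syntax):

  resource r0 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      meta-disk internal;
      on alpha { address 10.0.0.1:7789; }
      on beta  { address 10.0.0.2:7789; }
  }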
