I've never done full disk RAID1. I've always done it with partitions.
fdisk -l /dev/sda (and /dev/sdb) looks like this:
Disklabel type: gpt
Device Start End Sectors Size Type
/dev/sda1 2048 97722367 97720320 46.6G Linux RAID
/dev/sda2 97722368 99809279 2086912 101
On Tue, 2020-05-26 at 11:07 -0700, Samuel Sieb wrote:
> On 5/26/20 4:15 AM, Patrick O'Callaghan wrote:
> > On Mon, 2020-05-25 at 23:22 -0500, Gabriel Ramirez wrote:
> > > On 5/25/20 5:23 PM, Patrick O'Callaghan wrote:
> > > > Yes, I understand that. I still think the behaviour of mdadm in this
> >
On 5/26/20 4:15 AM, Patrick O'Callaghan wrote:
On Mon, 2020-05-25 at 23:22 -0500, Gabriel Ramirez wrote:
On 5/25/20 5:23 PM, Patrick O'Callaghan wrote:
Yes, I understand that. I still think the behaviour of mdadm in this
case is counter-intuitive. When I explicitly ask for the creation of an
ar
On Tue, 2020-05-26 at 09:32 -0500, Roger Heflin wrote:
> If you want the name to stay the same then create a file in
> /etc/mdadm.conf with something like this in it:
>
> # mdadm.conf written out by anaconda
> MAILADDR root
> AUTO +imsm +1.x -all
> ARRAY /dev/md13 metadata=1.2 le
If you want the name to stay the same then create a file in
/etc/mdadm.conf with something like this in it:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md13 metadata=1.2 level=raid6 num-devices=7
name=localhost.localdomain:11 UUID=a54550f7:da200f3e:90606715:0
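One way to get an ARRAY line like the one above is to let mdadm print it from the currently assembled array and append it to the file (a sketch; the md device name is whatever is currently assembled, e.g. /dev/md127):

# mdadm --detail --scan >> /etc/mdadm.conf
# cat /etc/mdadm.conf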
On Mon, 2020-05-25 at 23:22 -0500, Gabriel Ramirez wrote:
> On 5/25/20 5:23 PM, Patrick O'Callaghan wrote:
> > Yes, I understand that. I still think the behaviour of mdadm in this
> > case is counter-intuitive. When I explicitly ask for the creation of an
> > array called /dev/md0 and the command f
On Mon, 2020-05-25 at 15:43 -0700, Samuel Sieb wrote:
> > That decision was taken by mdadm without input from me, i.e. it's the
> > default. I see there is an "--assume-clean" option which would possibly
> > have skipped that step, though the man page doesn't recommend it unless
> > you know what y
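For reference, that option goes on the create command line; a sketch only, with illustrative device names, and with the man page's caveat that skipping the initial resync is only appropriate if you are sure the member devices are already identical:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 --assume-clean /dev/sda1 /dev/sdb1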
On 5/25/20 5:23 PM, Patrick O'Callaghan wrote:
Yes, I understand that. I still think the behaviour of mdadm in this
case is counter-intuitive. When I explicitly ask for the creation of an
array called /dev/md0 and the command first of all warns me that this
will (not "may") destroy the existing p
On 5/25/20 2:22 PM, Patrick O'Callaghan wrote:
On Mon, 2020-05-25 at 11:03 -0700, Samuel Sieb wrote:
On 5/25/20 2:25 AM, Patrick O'Callaghan wrote:
On Sun, 2020-05-24 at 16:22 -0700, Samuel Sieb wrote:
On 5/24/20 3:39 PM, Patrick O'Callaghan wrote:
So although the above message says the exis
On Tue, 2020-05-26 at 05:42 +0800, Ed Greshko wrote:
> On 2020-05-26 00:24, Patrick O'Callaghan wrote:
> > I still ended up with /dev/md127p1 as before, and /dev/md0 was not
> > created.
>
> I didn't think you would. As I mentioned in another post, you didn't start
> out with a "fresh" drive.
On 2020-05-26 00:24, Patrick O'Callaghan wrote:
> I still ended up with /dev/md127p1 as before, and /dev/md0 was not
> created.
I didn't think you would. As I mentioned in another post, you didn't start out
with a "fresh" drive. It already
had info on it that mdadm had created and then just re
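If the intent is to start from a genuinely fresh drive, the leftover RAID metadata can be cleared first; a destructive sketch, only for members whose contents are disposable, with /dev/sdX as a placeholder for the actual member device:

# mdadm --stop /dev/md127          # stop the auto-assembled array first
# mdadm --zero-superblock /dev/sdX # erase the md superblock on the member
# wipefs -a /dev/sdX               # remove remaining filesystem/partition signatures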
On Mon, 2020-05-25 at 11:03 -0700, Samuel Sieb wrote:
> On 5/25/20 2:25 AM, Patrick O'Callaghan wrote:
> > On Sun, 2020-05-24 at 16:22 -0700, Samuel Sieb wrote:
> > > On 5/24/20 3:39 PM, Patrick O'Callaghan wrote:
> > >
> > > > So although the above message says the existing partition table will b
On 5/25/20 2:25 AM, Patrick O'Callaghan wrote:
On Sun, 2020-05-24 at 16:22 -0700, Samuel Sieb wrote:
On 5/24/20 3:39 PM, Patrick O'Callaghan wrote:
So although the above message says the existing partition table will be
lost, for some reason I'm still getting a partition, while you
apparently
On Mon, 2020-05-25 at 06:24 -0600, Greg Woods wrote:
> In fairness to systemd, it has never been possible to edit /etc/fstab and
> have your changes automatically applied. It has always been necessary to
> run some sort of mount command (or reboot) after modifying fstab.
Which is what I thought I
On Mon, 2020-05-25 at 07:49 -0500, Roger Heflin wrote:
> His issue was he did the manual mount on /raid (already in fstab with
> a different device) and systemd immediately unmounted it. The mount
> succeeds with no error, and the umount happens so fast you are left
> confused about what
On Mon, 2020-05-25 at 06:24 -0600, Greg Woods wrote:
> I would guess that keeping an eye on dozens of config files to see if
> any of them have changed would use a lot of system resources over
> time, but I expect there are more serious and less obvious reasons
> why this is not done.
On a gigaHer
His issue was he did the manual mount on /raid (already in fstab with
a different device) and systemd immediately unmounted it. The mount
succeeds with no error, and the umount happens so fast you are left
confused about what is going on. It did at least note it in
messages so long as you can g
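One way to see systemd's side of this is to look at the mount unit it generates for the mount point; a sketch, assuming the mount point is /raid, so the unit name is raid.mount:

# systemctl status raid.mount
# journalctl -b -u raid.mount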
I had a bug submitted on a RHEL contract 2-3 years ago about it. I
get emails each quarter saying they are still evaluating it. I am not
holding my breath.
The reload could have bad effects, since it might shuffle things around
if fstab changed in ways that really should only take effect on a reboot; the only
On Mon, May 25, 2020, 3:29 AM Patrick O'Callaghan wrote:
> I wonder why systemd doesn't notice
> that the file has changed and reload accordingly.
>
The obvious, if stupid, answer is that it is not designed to work that
way. The process is actually documented; see for example
systemd-fstab-gene
On Sun, 2020-05-24 at 18:07 -0500, Roger Heflin wrote:
> Did you originally have /dev/md0p1 in fstab and you have edited fstab
> since you booted?
>
> If so the great and amazing systemd will not be amused and will still
> have a job for the old device, you will need to run systemctl
On Sun, 2020-05-24 at 16:22 -0700, Samuel Sieb wrote:
> On 5/24/20 3:39 PM, Patrick O'Callaghan wrote:
>
> > So although the above message says the existing partition table will be
> > lost, for some reason I'm still getting a partition, while you
> > apparently didn't. I copied the --create comma
On 5/24/20 3:39 PM, Patrick O'Callaghan wrote:
So although the above message says the existing partition table will be
lost, for some reason I'm still getting a partition, while you
apparently didn't. I copied the --create command directly from the man
page. Is this not the "standard" way you men
On 2020-05-25 06:39, Patrick O'Callaghan wrote:
> On Mon, 2020-05-25 at 05:34 +0800, Ed Greshko wrote:
>> On 2020-05-25 05:20, Patrick O'Callaghan wrote:
>>> On Mon, 2020-05-25 at 03:16 +0800, Ed Greshko wrote:
>> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
>> sda
Did you originally have /dev/md0p1 in fstab and you have edited fstab
since you booted?
If so the great and amazing systemd will not be amused and will still
have a job for the old device, you will need to run systemctl
daemon-reload for it to read the fstab file as it is not smart enough
to do th
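The sequence being described, as a sketch (assuming the /raid mount point from earlier in the thread):

# systemctl daemon-reload   # re-run the generators so the edited fstab is picked up
# mount /raid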
On Mon, 2020-05-25 at 05:34 +0800, Ed Greshko wrote:
> On 2020-05-25 05:20, Patrick O'Callaghan wrote:
> > On Mon, 2020-05-25 at 03:16 +0800, Ed Greshko wrote:
> > > > > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> > > > > sda      8:0    0   50G  0 disk
> > > > > └─md
On 2020-05-25 05:20, Patrick O'Callaghan wrote:
> On Mon, 2020-05-25 at 03:16 +0800, Ed Greshko wrote:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0   50G  0 disk
└─md0    9:0    0   50G  0 raid1
sdb      8
On Sun, 2020-05-24 at 13:58 -0500, Roger Heflin wrote:
> You need to show fstab. Systemd owns raid and its entry is not working.
> It will overrule you and unmount anything you put there since it thinks it
> owns it.
I thought I'd quoted that somewhere. Anyway, here's the line:
/dev/md127p1
On Mon, 2020-05-25 at 03:16 +0800, Ed Greshko wrote:
> > > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> > > sda      8:0    0   50G  0 disk
> > > └─md0    9:0    0   50G  0 raid1
> > > sdb      8:16   0   50G  0 disk
> > > └─md0
On 2020-05-24 23:43, Patrick O'Callaghan wrote:
> On Sun, 2020-05-24 at 21:14 +0800, Ed Greshko wrote:
>> On 2020-05-24 19:37, Patrick O'Callaghan wrote:
>>> Still getting the hang of md. I had it working for several days (2
>>> disks in RAID1 config) but after a system update and reboot, it
>>> su
You need to show fstab. Systemd owns raid and its entry is not working.
It will overrule you and unmount anything you put there since it thinks it
owns it.
On Sun, May 24, 2020, 12:36 PM Patrick O'Callaghan wrote:
> On Sun, 2020-05-24 at 19:29 +0200, Alexander Dalloz wrote:
> > > > Generally yo
On Sun, 2020-05-24 at 19:29 +0200, Alexander Dalloz wrote:
> > > Generally you partition the disks: sdd1 sde1, then create a RAID of
> > > them: md127,
> > > then you format and mount md127.
>
> That's called partitioned RAID. Makes it easier if you need to replace
> an array member.
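A sketch of that workflow, using the device names from the quote (destructive if the devices hold data); the filesystem goes directly on the md device, so no md127p1 appears:

# mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
# mkfs.ext4 /dev/md127
# mount /dev/md127 /raid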
On 24.05.2020 at 19:19, Patrick O'Callaghan wrote:
On Sun, 2020-05-24 at 18:37 +0200, Roberto Ragusa wrote:
On 2020-05-24 13:37, Patrick O'Callaghan wrote:
sdd           8:48   0 931.5G  0 disk
└─md127       9:127  0 931.4G  0 raid1
  └─md127p1
On Sun, 2020-05-24 at 18:37 +0200, Roberto Ragusa wrote:
> On 2020-05-24 13:37, Patrick O'Callaghan wrote:
>
> > sdd           8:48   0 931.5G  0 disk
> > └─md127       9:127  0 931.4G  0 raid1
> >   └─md127p1 259:0    0 931.4G  0 part
>
On Sun, 2020-05-24 at 11:27 -0500, Roger Heflin wrote:
> what does the entry in /etc/fstab look like and what filesystem is it?
The filesystem is ext4:
/dev/md127p1   /raid   ext4   defaults   0 0
poc
On 2020-05-24 13:37, Patrick O'Callaghan wrote:
sdd           8:48   0 931.5G  0 disk
└─md127       9:127  0 931.4G  0 raid1
  └─md127p1 259:0    0 931.4G  0 part
sde           8:64   0 931.5G  0 disk
└─md127
what does the entry in /etc/fstab look like and what filesystem is it?
On Sun, May 24, 2020 at 10:44 AM Patrick O'Callaghan wrote:
>
> On Sun, 2020-05-24 at 21:14 +0800, Ed Greshko wrote:
> > On 2020-05-24 19:37, Patrick O'Callaghan wrote:
> > > Still getting the hang of md. I had it working for
On Sun, 2020-05-24 at 21:14 +0800, Ed Greshko wrote:
> On 2020-05-24 19:37, Patrick O'Callaghan wrote:
> > Still getting the hang of md. I had it working for several days (2
> > disks in RAID1 config) but after a system update and reboot, it
> > suddenly shows no data:
> >
> > ]# lsblk
> > NAME
On Sun, 2020-05-24 at 23:40 +1000, fed...@eyal.emu.id.au wrote:
> On 2020-05-24 21:37, Patrick O'Callaghan wrote:
>
> > Still getting the hang of md. I had it working for several days (2
> > disks in RAID1 config) but after a system update and reboot, it
>
>
> A system update or a system upgrade
On Sun, 2020-05-24 at 07:38 -0500, Roger Heflin wrote:
> cat /proc/mounts and verify it is mounted, ls -l /raid
It isn't mounted, though the mount command didn't give an error. A look
at journalctl shows:
May 24 16:38:23 Bree kernel: EXT4-fs (md127p1): mounted filesystem with ordered
data mode.
On Sun, 2020-05-24 at 21:18 +0800, Ed Greshko wrote:
> On 2020-05-24 19:37, Patrick O'Callaghan wrote:
> > Still getting the hang of md. I had it working for several days (2
> > disks in RAID1 config) but after a system update and reboot, it
> > suddenly shows no data:
>
> Oh, you did make a /etc/
On 2020-05-24 21:37, Patrick O'Callaghan wrote:
Still getting the hang of md. I had it working for several days (2
disks in RAID1 config) but after a system update and reboot, it
A system update or a system upgrade?
suddenly shows no data:
]# lsblk
NAME     MAJ:MIN RM
On 2020-05-24 19:37, Patrick O'Callaghan wrote:
> Still getting the hang of md. I had it working for several days (2
> disks in RAID1 config) but after a system update and reboot, it
> suddenly shows no data:
Oh, you did make a /etc/mdadm.conf?
--
The key to getting good answers is to ask good
On 2020-05-24 19:37, Patrick O'Callaghan wrote:
> Still getting the hang of md. I had it working for several days (2
> disks in RAID1 config) but after a system update and reboot, it
> suddenly shows no data:
>
> ]# lsblk
> NAME     MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> [..
cat /proc/mounts and verify it is mounted, ls -l /raid
Unmount it and do ls -l /raid and make sure nothing is "under" it.
An issue with the raid and/or filesystem is very unlikely to cleanly
remove all files like this; typically, to do this you either need to
have done a rm -rf against
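The check being suggested, as a sketch (assuming the /raid mount point from this thread):

# umount /raid
# ls -la /raid    # should be empty; any files here live on the root filesystem, hidden when the array is mounted
# mount /raid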
Still getting the hang of md. I had it working for several days (2
disks in RAID1 config) but after a system update and reboot, it
suddenly shows no data:
]# lsblk
NAME     MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
[...]
sdd 8:48 0 931.5G 0 disk