Thinking about this some more, I believe this is more a wishlist item than a bug, though I don't have a better bug title in mind yet, so please feel free to change it.
On Tuesday, 2008-08-19 at 19:21 +0200, martin f krafft wrote:
> Unless you waited for /dev/sdc1 to resync, removing /dev/sdd1 will
> basically destroy the array. Not much one can do about that.

I looked at the FAQ again; question 18 looks calculable. I haven't
looked at the code yet -- I don't know that much C, so I'd need a
while to get into it. But the numbers that `mdadm -Q --detail' prints
are probably available already anyway. When a device is removed the
superblock is updated, so mdadm could print a warning/error that the
array is now broken. `mdadm -Q --detail' tells me `clean, degraded'
with one active disk and three spares; it could say `broken' instead
(see the rough sketch at the end of this mail).

> Anyway, I think that's the problem: when you (re-)add a disk to an
> array, it becomes a spare until it synchronised. If you wait for the
> synchronisation between each add/remove step, you should never see
> more than 1 spare.

Ah, I had assumed that a spare is only ever used when a disk fails.
Would it be possible to add something like
`mdadm --add-this-disk-as-active-instead-of-spare'? Or is that
actually a good or bad idea?

I have now added another virtual disk, /dev/sdg, to the VM and
recreated the raid10. Then I set sdc1 faulty, removed it, and added
sdg1. One slot still shows as removed, there are two spares, and no
resync is running, so I don't think this idea is that bad.

fz-vm:~# mdadm -Q --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Tue Aug 19 19:52:16 2008
     Raid Level : raid10
     Array Size : 16771584 (15.99 GiB 17.17 GB)
  Used Dev Size : 8385792 (8.00 GiB 8.59 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Aug 19 20:22:52 2008
          State : active, degraded
 Active Devices : 3
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 2

         Layout : near=2, far=1
     Chunk Size : 64K

           UUID : b3d68384:06515443:89ccbef7:ff5abfb0 (local to host fz-vm)
         Events : 0.21

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1
       3       8       81        3      active sync   /dev/sdf1

       4       8       33        -      spare   /dev/sdc1
       5       8       97        -      spare   /dev/sdg1
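
For reference, the experiment above boils down to roughly this
sequence (a sketch from memory, not a paste of my shell history;
sdc1 must have been re-added at some point too, since it shows up as
a spare in the output above):

fz-vm:~# mdadm --create /dev/md0 --level=10 --raid-devices=4 \
               /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
fz-vm:~# mdadm /dev/md0 --fail /dev/sdc1     # mark sdc1 faulty
fz-vm:~# mdadm /dev/md0 --remove /dev/sdc1   # drop it from the array
fz-vm:~# mdadm /dev/md0 --add /dev/sdg1      # comes back as a spare only
fz-vm:~# mdadm /dev/md0 --add /dev/sdc1      # re-add; also only a spare
fz-vm:~# cat /proc/mdstat                    # no resync appears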
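
And regarding the `broken' idea: the numbers are already there in the
--detail output. Just to illustrate what I mean (a crude sketch, not a
patch -- for the near=2 layout it only catches the clear-cut case
where fewer than half of the members are active; a real check inside
mdadm would have to look at which mirror halves are missing):

fz-vm:~# mdadm -Q --detail /dev/md0 | awk '
             /Raid Devices/   { raid   = $NF }   # total member slots
             /Active Devices/ { active = $NF }   # members in sync
             END { if (active < raid / 2)
                     print "broken: only", active, "of", raid, "members active" }'

With the state from my earlier mail (one active disk out of four),
this would have warned instead of just saying `clean, degraded'.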