forcemerge 533848 534274
thanks

Hi,

martin f krafft wrote:
> also sprach Lior Chen <li...@lirtex.com> [2009.06.24.0800 +0200]:
>> I have managed to fully reproduce this. This situation arose from mistakenly
>> installing the dmraid package along with the mdadm package (or maybe it was
>> installed because of some package-dependency, I can't really tell).
>>
>> Removing the dmraid package is not enough, and removing the mdadm package and
>> reinstalling it again is required in order to update the initramfs. You were
>> right about suspecting the isw_* devices, they were generated by dmraid, and
>> that was the problem.

When you remove dmraid, an "update-initramfs -u" run is triggered. This doesn't
mean that all initramfs images will be updated, only the initramfs of the newest
kernel. To keep using an older kernel, you should update its initramfs manually
(update-initramfs -u -k version).
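
For example, assuming a standard Debian initramfs-tools setup, the initramfs
images can be regenerated by hand like this (just an illustration):

  # regenerate the initramfs for every installed kernel
  update-initramfs -u -k all

  # or only for one specific kernel, e.g. the currently running one
  update-initramfs -u -k $(uname -r)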


>> The stranger thing is that this situation happens only with new dmraid
>> versions (1.0.0.rc15*). Maybe the dmraid package should check for this
>> conflict and warn the admin, because the result is an unbootable system.

This is because the dmraid-activate script now uses the newly introduced -Z flag
to instruct the kernel to remove the partition device nodes of the array member
disks. This was necessary to avoid race conditions with udev when creating
UUID/label symbolic links for either the member disk nodes or the dmraid
device-mapper nodes.
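
As a rough sketch of what changed (the exact options passed by dmraid-activate
may differ), the activation now looks something like:

  # activate the fakeraid set and tell the kernel to drop the partition
  # device nodes of the underlying member disks (-Z / --rm_partitions)
  dmraid -ay -Z "$SET_NAME"   # $SET_NAME is a placeholder for the set name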

> dmraid should definitely check which devices are under mdadm control
> and stay away from them.

I disagree. If a user creates a fakeraid array with the controller's BIOS
utility, dmraid correctly treats the partition device nodes as part of a dmraid
array. If the user then adds them to a software (mdadm) RAID, that is simply a
misconfiguration.
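
For reference, and purely as my own illustration, an admin can check by hand
which metadata a disk actually carries:

  # list block devices that carry fakeraid (e.g. isw) metadata
  dmraid -r

  # check a disk for an mdadm superblock (/dev/sda is just a placeholder)
  mdadm --examine /dev/sda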


Cheers,
Giuseppe.
