Hi,

Michael Tokarev:
> Aha. I mis-read your initial bug report - I didn't notice
> you were starting your array manually.
> 
Had to, as the automatic version had the exact same problem.

> You said your array didn't come up during boot, but provided
> a command line which you executed which failed:
> 
>  $ mdadm -A /dev/md7 /dev/hd[bceijkl]1
>  mdadm: failed to add /dev/hdk1 to /dev/md7: Invalid argument
>  mdadm: failed to add /dev/hdl1 to /dev/md7: Invalid argument
>  mdadm: /dev/md7 assembled from 3 drives - not enough to start the array.
> 
> At this point, the situation should be something like:
> 
>  mdadm has chosen to use 5 drives out of 7 - only 3 of
>  hd[bceij]1 (first 5) and hd[kl]1.  Why it did so is impossible
>  to say - as there are no additional error messages, it means
>  mdadm tried to open all 7 devices in turn, found everything
>  is ok, but decided not to touch 2 of them.  Or, another
>  possible cause: not all of the nodes in question were
>  present in /dev, so the shell wildcard expanded to fewer than 7
>  entries.
> 
Hmm, no, /dev must have been OK -- I was able to start the array with
the same shell line (but omitting the two out-of-date disks).

>  for some reason, kernel didn't like the 2 last devices
>  (hd[kl]1) and complained.  dmesg should contain more info
>  (incl. exact reason why kernel didn't like the two) --

Their RAID superblock was out of date.
(No, I don't remember the *exact* message, though if you give me three
alternatives I can probably tell you which one it was.)
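For what it's worth, the superblock event counters can be compared directly to confirm which members are stale (device names here just mirror this thread; adjust for your setup):

```shell
# Print each member's Events counter from its RAID superblock;
# drives with a lower count than the rest are the out-of-date ones.
mdadm --examine /dev/hd[bceijkl]1 | grep -E '^/dev/|Events'

# If the stale drives should be pulled back in anyway, --force
# asks mdadm to assemble despite the mismatched event counts
# (use with care - data written since they dropped out is lost):
#   mdadm -A --force /dev/md7 /dev/hd[bceijkl]1
```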

>  and the dmesg from that time still *may* be in your logs.
> 
Unfortunately, no.

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]
