On Fri, 2010-02-19 at 14:44 +0000, Stroller wrote:
> On 19 Feb 2010, at 12:15, Iain Buchanan wrote:
> > ...
> > Can I randomly mount partitions read-only or will this screw things
> > up further?
>
> If this is unsafe I will have ketchup & mustard on my baseball cap.
er... could you translate that? How about "dead horse on my baggy
green"?

Should I be able to mount them automatically and let the SW RAID module
sort it out, or do I have to know how they're tied together beforehand?

The message from the kernel is:

Linux version 2.4.19-snap (r...@buildsys) (gcc version egcs-2.91.66
19990314/Linux (egcs-1.1.2 release)) #1 Tue Jul 13 20:24:35 PDT 2004

and later there's output from "md", which is (I assume) the Linux
software RAID module (this is a grep, so there are other messages in
between):

md: linear personality registered as nr 1
md: raid0 personality registered as nr 2
md: raid1 personality registered as nr 3
md: raid5 personality registered as nr 4
md: spare personality registered as nr 8
md: md driver 0.91.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
md: bind<hdg2,1>
md: bind<hde2,2>
md: bind<hda2,3>
md: hda2's event counter: 0000039d
md: hde2's event counter: 0000039d
md: hdg2's event counter: 0000039d
md: md100: raid array is not clean -- starting background reconstruction
md: RAID level 1 does not need chunksize! Continuing anyway.
md100: max total readahead window set to 124k
md100: 1 data-disks, max readahead per data-disk: 124k
raid1: md100, not all disks are operational -- trying to recover array
raid1: raid set md100 active with 3 out of 4 mirrors
md: updating md100 RAID superblock on device
md: hda2 [events: 0000039e]<6>(write) hda2's sb offset: 546112
md: recovery thread got woken up ...
md: looking for a shared spare drive
md100: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
md: hde2 [events: 0000039e]<6>(write) hde2's sb offset: 546112
md: hdg2 [events: 0000039e]<6>(write) hdg2's sb offset: 546112
md: bind<hdg5,1>
md: bind<hde5,2>
md: bind<hda5,3>
md: hda5's event counter: 000003a4
md: hde5's event counter: 000003a4
md: hdg5's event counter: 000003a4
md: md101: raid array is not clean -- starting background reconstruction
md: RAID level 1 does not need chunksize! Continuing anyway.
md101: max total readahead window set to 124k
md101: 1 data-disks, max readahead per data-disk: 124k
raid1: md101, not all disks are operational -- trying to recover array
raid1: raid set md101 active with 3 out of 4 mirrors
md: updating md101 RAID superblock on device
md: hda5 [events: 000003a5]<6>(write) hda5's sb offset: 273024
md: recovery thread got woken up ...
md: looking for a shared spare drive
md101: no spare disk to reconstruct array! -- continuing in degraded mode
md: looking for a shared spare drive
md100: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
md: hde5 [events: 000003a5]<6>(write) hde5's sb offset: 273024
md: hdg5 [events: 000003a5]<6>(write) hdg5's sb offset: 273024
XFS mounting filesystem md(9,100)
Ending clean XFS mount for filesystem: md(9,100)

The partitions look like:

   9   100     546112 md100
   9   101     273024 md101
  34     0   78150744 hdg
  34     1      16041 hdg1
  34     2     546210 hdg2
  34     3          1 hdg3
  34     4   76656636 hdg4
  34     5     273104 hdg5
  34     6     273104 hdg6
  33     0   78150744 hde
  33     1      16041 hde1
  33     2     546210 hde2
  33     3          1 hde3
  33     4   76656636 hde4
  33     5     273104 hde5
  33     6     273104 hde6
  22     0   78150744 hdc
  22     1      16041 hdc1
  22     2     546210 hdc2
  22     3          1 hdc3
  22     4   76656636 hdc4
  22     5     273104 hdc5
  22     6     273104 hdc6
   3     0   78150744 hda
   3     1      16041 hda1
   3     2     546210 hda2
   3     3          1 hda3
   3     4   76656636 hda4
   3     5     273104 hda5
   3     6     273104 hda6

many thanks!
-- 
Iain Buchanan <iaindb at netspace dot net dot au>

By golly, I'm beginning to think Linux really *is* the best thing since
sliced bread.
		-- Vance Petree, Virginia Power
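P.S. For the read-only question above, one conservative approach (a
sketch only, assuming mdadm is installed and the md100/md101 names from
the log; the mount point /mnt/rescue is made up, and no resync can be
triggered while the array is read-only):

```shell
# Stop any auto-started degraded array first (assumed name from the log):
mdadm --stop /dev/md100

# Reassemble it read-only from the members the kernel reported,
# so the "background reconstruction" cannot write anything:
mdadm --assemble --readonly /dev/md100 /dev/hda2 /dev/hde2 /dev/hdg2

# Mount the XFS filesystem read-only as well; norecovery skips
# log replay, which would otherwise require writing to the device:
mkdir -p /mnt/rescue
mount -t xfs -o ro,norecovery /dev/md100 /mnt/rescue
```

On a raidtools-era 2.4 box without mdadm, the same read-only effect at
the filesystem layer comes from just `mount -o ro`, though the md layer
itself may still update superblocks when it starts the array.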