On 11/09/2013 04:48 PM, Gary Dale wrote:
> On 09/11/13 04:05 PM, PaulNM wrote:
>> Hi Folks,
>>
>> I've been dealing with a frustratingly vexing issue for a while, and
>> am at a loss on where to go next.
>>
>> Basically, we have an 8x 3TB drive system that I'm trying to install
>> Wheezy on. During the install, each drive is partitioned with a 1MB
>> BIOS Boot partition, followed by a RAID partition taking up the rest
>> of the space. The 8 RAID partitions are made into a RAID6 array
>> (8 active, 0 spare), which is used for a volume group. Two logical
>> volumes (for now) are in there, for / and swap. (/ is ext4.)
>>
>> Whenever I try to boot the system, I'm stuck at the grub rescue
>> prompt after GRUB spits out "error: no such disk.". Doing ls shows
>> all eight drives (hd0) as well as their associated partitions
>> (hd3,gpt2). Unlike the multiple VMs of this setup that I've created
>> on my laptop, there are no (md) or (Logical-Volume-Name) entries.
>> Prefix and root are set to (LV-OS), and attempts to set them to
>> other values fail.
>>
>> I've also attempted an identical install, but with a 200MB RAID
>> partition as the second partition, RAID1'd with an ext2 /boot. No
>> difference. (Actually, the installer fails to use it. I've manually
>> copied the files and edited fstab. Updating/reinstalling grub
>> afterwards gets the same result.)
>>
>> Every VM I've created works with no problems, but it always fails on
>> the actual hardware. Every theory I can come up with should also
>> fail on the VMs. Is there a way to make sure the BIOS boot partition
>> is being used? Would the fact that the physical install puts the USB
>> drive as sda be an issue? I've tried editing device.map and
>> updating/reinstalling grub, but no dice.
>>
>> It seems to me that for some reason GRUB can't get to its
>> RAID-related modules. I've checked /boot/grub/grub.cfg.
>> The auto-generated file does contain all the insmod lines for raid,
>> raid6rec, mdraid1x, lvm, a bunch of part_gpt, and ext2.
>>
>> Love to hear any ideas, no matter how far-fetched.
>>
>> -PaulNM
>
> I know that Wheezy has no trouble booting from a RAID5 array
> occupying the entire disk partition (i.e. the partitions start after
> the GPT and occupy the entire rest of the disks).
Thanks Gary, that worked for me as well in earlier test VMs prior to
trying to install on the physical server. (Using RAID6.) Everything
I've read says Wheezy is perfectly capable of doing what I've set up,
others have apparently done so, and it works perfectly fine in virtual
machines.

> My md0 device is partitioned without using LVM. In fact, I've never
> really needed LVM, so I can't comment on it. The partitions I have
> are named like md0p1, md0p2, etc. This of course requires that
> device md0 has a partition table.
>
> I'm confused by your comment that the installer fails to use the
> /boot partition you created. If you tell the installer to use a
> partition as /boot, it will do that.

Yeah, it's weird. I've only tried a separate /boot twice. Once on the
physical server, where I only noticed during troubleshooting that the
/boot RAID was empty and the kernel and other files were in /boot on
the rootfs. In the test VM afterwards, the installer did give a
warning that it was unable to mount /boot. Maybe I missed it earlier?
Anyway, I shouldn't need a separate /boot; it works fine without one
in VMs and in other reports online. I only tried it for
troubleshooting.

> RAID does of course require that /etc/mdadm/mdadm.conf be populated
> with the correct RAID information and that the initramfs has the
> necessary modules. Grub also needs to be updated to include the
> correct boot information.
>
> The lack of grub md names suggests that it doesn't know about your
> RAID setup. update-initramfs -u && update-grub may fix that. Also,
> make sure that grub is actually installed on the RAID array.

Hmm, my first thought was that the initramfs only has to do with the
kernel, not GRUB. That said, maybe grub does look at the initramfs
config for hints. Certainly worth a look the next time I have my hands
on the hardware. I've installed grub to the hard drives, sdb to sdi,
not just during install, but again via chroot after booting rescue
mode from the installer.
Still curious as to why it works fine in VMs, though. I'm even using
SATA and not virtio (KVM/QEMU), as well as identically sized "disks".
Also still open to other ideas and suggestions to try.

If nothing works the next time I'm on the hardware (likely Tuesday),
we're going to just plug a USB flash drive into an internal USB port
and use that for the OS. (*Really* hoping to avoid that, though.)

-PaulNM


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact
listmas...@lists.debian.org
Archive: http://lists.debian.org/527eb693.8080...@paulscrap.com