On Thursday, September 10, 2015 at 10:00:06 AM UTC-5, Pascal Hambourg wrote:
> ray wrote:
> > I have only been able to boot the HDD instance. When I navigate to
> > the SSD instance, nothing is there.
>
> Sorry, I should have mentioned that I never used rEFInd (fortunately
> never needed it) and don't know how it works and what it looks like.
> Could you describe what it displays step by step ?
>
> >> /dev/sdf is one of the SSD used for RAID 0 and LVM, right ?
> >
> > /dev/sdf is a HDD, no md or LVM.
>
> I was confused because you wrote in a previous post :
>
> > sda, sdb 32GB + 32GB, RAID0 - md0, LVM, GParted shows 1MB reserved, 1 GB (EFI)
> > sdc, sdd 64GB + 64GB, RAID0 - md1, md127, LVM, GParted shows 1MB reserved, 1 GB (EFI)
> > sde, sdf 120GB + 120GB, RAID0 - md0, md126, LVM, GParted shows 1MB reserved, 1 GB (EFI)
> > sdg, sdh are 2 and 4 TB HD, sdg currently hosts debian8
>
> So it looks like some device names changed.
>
> >>> root@mc:/boot/efi/EFI# grub-install /dev/sdf
> >>> Installing for x86_64-efi platform.
> >>> Installation finished. No error reported.
> >>
> >> The device name is not used by grub-install with an EFI target.
> >> You could have tried to use the option --boot-loader-id I mentioned in
> >> a previous post.
> >
> > Which device name is not used by grub-install?
>
> Whatever you type as the device name in the command line, /dev/sdf here.
>
> > I did not find a way to use --boot-loader-id. I googled this exact
> > phrase and did not find anything but this posting. How do I use it?
>
> It is described in the grub-install manpage. Just type "man grub-install"
> in the command line to read it.

This is one place I fell down: my instance of grub-install did not have that option.
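In case it helps anyone else who trips over the same thing, a quick way to check what grub-install you actually have (just stock commands, nothing specific to this thread) is something like:

# grub-install --version
# grub-install --help | grep -e bootloader-id -e target
# dpkg -l | grep grub

My guess, after the fact, is that the grub-install I had at that point was still the old GRUB Legacy one, which is why the option was missing; installing grub2 (below) replaced it.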
> >>> root@mc:/boot/efi/EFI# file /boot/efi/EFI/debian/grubx64.efi
> >>> /boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application)
> >>> x86-64 (stripped to external PDB), for MS Windows
> >>> root@mc:/boot/efi/EFI# efibootmgr --verbose | grep debian
> >>> Boot0000* debian
> >>> HD(1,GPT,87471e98-b814-4aa9-b2bc-ea4669c75565,0x800,0x100000)/File(\EFI\debian\grubx64.efi)
> >>
> >> Looks as expected. You can check with blkid which partition has
> >> PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565. If you wonder about the
> >> forward / in the boot entry pathname, that's because the UEFI uses
> >> MS-style path.
> >
> > blkid shows PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565 to be /dev/sdf1.
>
> This is consistent with /dev/sdf1 being mounted on /boot/efi.
>
> >>> A baffling point: In rEFInd the path is /boot/efi/EFI/debian/grubx64.efi
> >>
> >> How is it baffling ? The EFI system partition is mounted on /boot/efi
> >> and the path relative to the partition filesystem root is
> >> /EFI/debian/grubx64.efi. The EFI firmware does not care about where you
> >> mount the EFI system partition.
> >
> > Baffling: Viewing with rEFInd, I see /boot/efi/EFI/debian/grubx64.efi
>
> What do you mean by "viewing with rEFInd" ? AFAIK, rEFInd is just a boot
> loader, and pathnames such as /boot/efi/EFI/debian/grubx64.efi are used
> only in a running system after the kernel takes over.
>
> >>> After booting up into the HDD instance, I get:
> >>
> >> Booting how ? On its own or from rEFInd ?
> >
> > This is after booting on its own.
>
> Whether you boot the HDD Debian instance from rEFInd, the GRUB EFI
> installed on HDD or any other boot loader should not make any difference
> in the mounted filesystems...
>
> >> What's mounted on /boot/efi ?
> >
> > I am not sure what it means 'what's mounted on ...'.
>
> If "mount" or "df" show a line with /dev/sdf1 and /boot/efi, it means
> that /dev/sdf1 is mounted on /boot/efi.

It took many rereadings for this to sink in.

> > #mount | grep boot returns empty
> > #mount | grep efi returns efivarfs on /sys/firmware/efi/efivars (...)
>
> Looks like nothing is mounted on /boot/efi, explaining why it looks
> empty. But we have yet to explain why nothing is mounted.
> Can you check the contents of /etc/fstab ?
>
> > root@md:/home/rayj# df -h /boot/
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/sdf2       1.4T  4.2G  1.3T   1% /
>
> Irrelevant. We are interested in /boot/efi, not /boot.
>
> > OK, a little more reading tells me /dev/sdf2 is mounted on /boot
>
> No, it is the root filesystem, mounted on /. There is no separate /boot.

It looks like I lost my previous response to this conversation in my excitement. After rereading your messages and David's, I found I needed to mount /dev/sdf1 on /boot/efi, since nothing was mounted there.

The first fault was:

# grub-install /dev/sdf --target=x86_64-efi --bootloader-id=test --recheck
Unrecognized option `--target=x86_64-efi'

grub-install --help showed there was no --bootloader-id= or --recheck either. After some checking, I ran:

# apt-get install grub2

Now grub-install had all of those options, but none of the boot directories were updated. Research showed that I also needed to run:

# update-grub

After that I was able to use PCManFM to see the new files.
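To recap the fix in command form - this is roughly what I ended up running; per Pascal's earlier point the device name is ignored for the EFI target, so I have left it off here, and the ESP has to be mounted on /boot/efi first:

# mount /dev/sdf1 /boot/efi
# grub-install --target=x86_64-efi --bootloader-id=test --recheck
# update-grub

To make the mount stick across reboots I assume the usual /etc/fstab entry is also wanted, along the lines of the following (the UUID here is only a placeholder; use whatever blkid reports for /dev/sdf1):

UUID=XXXX-XXXX  /boot/efi  vfat  umask=0077  0  1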
I rebooted. Now there are two choices, Debian and test. I booted into Debian, but I could not browse the boot directories with PCManFM; I had to sudo into them in a shell. I rebooted and chose test. It booted up, but it is not the same instance of Debian: it has a different boot (in fact two, /boot and /boot/efi), both on SSDs, not the HDD. The / and /home are on a different SSD, and the desktop was not the same. The /, /boot, and /home are on LVM. Yes, this is a test case.

While I learned a lot, the result is not what I was looking for. I was looking for a way to rename the instance I have so I could build a new instance of Debian with a specific distribution of partitions. I am not sure how to recover. I don't really care about the new partitions across the drives; I can fix that. But since I did not rename my current instance, I don't know how to do a new install with controlled placement - Debian will crash into itself again, which is where I started. Now I have two instances, but one is overlaid on top of some of the locations I was planning to use for my target system. I am reluctant to just remove the current 'debian' instance and attempt a fresh install: if/when I screw up the first (few) attempts, I won't have an intact instance from which to work.

Any suggestions?
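One specific thing I am wondering about, going back to your --boot-loader-id point: from inside the current 'debian' instance, with the ESP mounted on /boot/efi, could I just register it under its own name with something like the following, where 'keep' is only a placeholder ID I made up?

# grub-install --target=x86_64-efi --bootloader-id=keep --recheck
# efibootmgr -v

efibootmgr -v should then show an extra entry pointing at \EFI\keep\grubx64.efi, and I think the existing grub.cfg would be reused as-is, so a fresh install that claims the 'debian' name would leave this one bootable. Is that right, or is there more to it?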