Added grub-pc package version

** Description changed:

  Description:  Ubuntu 20.04.4 LTS
  Release:      20.04
  grub-pc         2.04-1ubuntu26.15
  
  Booting Ubuntu 20.04.4 from a 4TiB partition on a 4TiB GPT disk.
  I have the grub bootblocks (the core image) installed in a bios-grub
  partition, sectors 34-2047, with the bios_grub flag set.
  The Ubuntu bootable partition is a plain 4TiB ext4 partition.
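
  (For reference, the layout can be checked with parted; /dev/sda here only
  stands for whatever the actual disk is:)
    # print partition start/end in sectors, plus the bios_grub flag
    sudo parted /dev/sda unit s print
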
  Suddenly, after a routine automatic Ubuntu kernel update, the boot started
  to break with the message:
  "error: attempt to read or write outside of disk (hd0)."
  Boot-Repair didn't find or fix anything.
  fsck found nothing bad.
  
  After a painful search, I realized that part of the new kernel file had been 
allocated by the filesystem above the 2TiB limit...
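
  (One way to check where the kernel file actually sits on the disk, a rough
  sketch; the device and partition names are only examples:)
    # list the physical extents of the kernel image, in filesystem blocks
    sudo filefrag -v /boot/vmlinuz-$(uname -r)
    # partition start in 512-byte sectors, to turn those block numbers
    # into absolute disk sector addresses
    cat /sys/class/block/sda2/start
    # an extent whose block*(blocksize/512) + partition start exceeds
    # 2^32 sectors lies beyond the 2TiB reach of the BIOS drivers
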
  Some more digging in the Grub documentation suggested that, by default,
  Grub uses the BIOS drivers to load files from the target partition. This
  is tersely mentioned in the Grub 2.06 manual, in the "nativedisk" command
  paragraph.
  And the BIOS drivers are limited to 32-bit sector addresses, i.e. 2TiB
  with 512-byte sectors.
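
  (That 2TiB figure is just arithmetic, assuming 512-byte sectors:)
    # largest capacity addressable with a 32-bit sector number (bash)
    echo $(( 2**32 * 512 ))    # 2199023255552 bytes = 2 TiB
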
  The native Grub drivers don't have this limitation (an empirical finding:
  when they are used, the same kernel file loads, and using them fixes the
  problem for good).
  So when using the native Grub drivers (ahci in my case), everything works.
  Native drivers can be activated from Grub Rescue using
  nativedisk
  But a better and longer-lasting solution is to embed them into the
  bootblocks by running grub-install with a parameter such as
  --disk-module=ahci
  (could be ahci, ehci, ATA, ohci or uhci, according to the hardware).
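
  (Spelled out, the reinstall looks roughly like this; /dev/sda is only an
  example target disk, and the module depends on the disk controller:)
    sudo grub-install --target=i386-pc --disk-module=ahci /dev/sda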
  
  So I solved the problem by re-running grub-install on that disk with the
  parameter
  --disk-module=ahci
  The problem with that approach is that any later grub-install run without
  that parameter (as an Ubuntu software update or release upgrade might
  decide to do?) will drop the native driver from the Grub partition and
  break the boot again.
  
  grub-install (and/or update-grub) should never produce a potentially
  broken boot when it can avoid it:
  Couldn't it (shouldn't it) detect when one of the boot partitions in the
  boot menu crosses the 2TiB mark, give a warning, and run grub-install
  with the appropriate --disk-module=MODULE parameter?
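
  (A rough sketch of such a check, only an illustration and not actual
  grub-install code; it assumes the kernel lives on a plain partition
  mounted at /boot, or at / when /boot is not separate:)
    PART=$(findmnt -no SOURCE /boot || findmnt -no SOURCE /)
    NAME=$(lsblk -no KNAME "$PART")
    START=$(cat /sys/class/block/"$NAME"/start)  # first sector of partition
    SIZE=$(cat /sys/class/block/"$NAME"/size)    # length in 512-byte sectors
    if [ $(( START + SIZE )) -gt 4294967296 ]; then  # 2^32 sectors = 2TiB
        echo "warning: $PART extends beyond 2TiB,"
        echo "consider grub-install --disk-module=<native module>"
    fi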
  
  4TB SSD disk prices are dropping fast (below 350€ these days), so this
  problem is likely to show up more and more often.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1970236

Title:
  Grub2 bios-install defaults to BIOS disk drivers

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1970236/+subscriptions


