The diff between test6 and test2:

pick e259e3258f3f PCI: Wait for device readiness with Configuration RRS
pick b11342ebfd0e PCI: Avoid FLR for Mediatek MT7922 WiFi
pick 249761fabd5b ALSA: hda: Support for Ideapad hotkey mute LEDs
pick 50989dda6391 platform/x86:lenovo-wmi-hotkey-utilities.c: Support for mic and audio mute LEDs
pick 89e2f17177e8 UBUNTU: [Config] Enable Lenovo wmi hotkey driver
pick b9e6c21cc7f2 intel_idle: add Granite Rapids Xeon support
pick acb72c70f996 intel_idle: add Granite Rapids Xeon D support
Now I strongly suspect the commit "e259e3258f3f PCI: Wait for device readiness with Configuration RRS" is the culprit.

https://bugs.launchpad.net/bugs/2111521

Title:
  nvme no longer detected on boot after upgrade to 6.8.0-60

Status in linux package in Ubuntu:
  Triaged

Bug description:
  Short version: booting 6.8.0-59-generic or any earlier version from
  the grub menu works; 6.8.0-60-generic dumps me at the initramfs prompt
  with no disks.

  We have some servers running Ubuntu 24.04.2 LTS. They have NVMe
  solid-state disks which (in a working kernel) are detected as follows:

  [ 3.537968] nvme nvme0: pci function 10000:01:00.0
  [ 3.539285] nvme 10000:01:00.0: PCI INT A: no GSI
  [ 5.897819] nvme nvme0: 32/0/0 default/read/poll queues
  [ 5.905451] nvme nvme0: Ignoring bogus Namespace Identifiers
  [ 5.909057] nvme0n1: p1 p2 p3

  On the PCI bus they look like this:

  10000:01:00.0 Non-Volatile memory controller [0108]: Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller] [8086:0a54]

  $ ls -l /sys/class/nvme/nvme0
  lrwxrwxrwx 1 root root 0 May 22 16:56 /sys/class/nvme/nvme0 -> ../../devices/pci0000:d7/0000:d7:05.5/pci10000:00/10000:00:02.0/10000:01:00.0/nvme/nvme0

  Four identical servers updated their kernel this morning to:

  ii  linux-image-6.8.0-60-generic  6.8.0-60.63  amd64  Signed kernel image generic

  ...and rebooted. All four failed to come up and ended up at the
  (initramfs) prompt. Rebooting and selecting 6.8.0-59-generic from the
  grub menu allowed them to boot as normal.

  There is no sign that the initramfs generation went wrong (on all four
  servers), and the initramfs for -60 contains all the same nvme modules
  as the one for -59. I am at a loss to explain this, and the initramfs
  environment is a bit limited for debugging.
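For context on what the suspected commit touches: "Configuration RRS" refers to the PCIe Configuration Request Retry Status mechanism (formerly called CRS), under which a config-space read of the Vendor ID register returns the sentinel value 0x0001 while the device is still coming out of reset, so software can poll until the device actually answers. The sketch below only illustrates that general polling pattern and is not the actual kernel patch; the function and the helper names (read_config_word, msleep) are hypothetical stand-ins for whatever accessors the environment provides.

/*
 * Illustrative sketch of "wait for device readiness with Configuration
 * RRS" -- not the real pci_dev_wait() implementation.
 */
#include <stdbool.h>
#include <stdint.h>

#define PCI_VENDOR_ID_OFFSET  0x00
#define RRS_SENTINEL          0x0001  /* Vendor ID value while the device is retrying */

extern uint16_t read_config_word(unsigned int devfn, int offset);  /* hypothetical */
extern void msleep(unsigned int ms);                               /* hypothetical */

static bool wait_for_device_ready(unsigned int devfn, unsigned int timeout_ms)
{
        unsigned int waited = 0;
        unsigned int delay = 1;

        for (;;) {
                uint16_t vendor = read_config_word(devfn, PCI_VENDOR_ID_OFFSET);

                /* Anything other than the RRS sentinel means the device
                 * completed the config read and can be enumerated. */
                if (vendor != RRS_SENTINEL)
                        return true;

                if (waited >= timeout_ms)
                        return false;  /* device never became ready */

                msleep(delay);
                waited += delay;
                if (delay < 64)
                        delay *= 2;    /* back off; resets can be slow */
        }
}

If the new wait logic misreads a bridge or endpoint that does not implement RRS Software Visibility (or times out too aggressively), the device could be left undetected, which would match the "no disks at the initramfs prompt" symptom, but that is speculation until the commit is reverted and retested.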