I was able to git clone the v4.4.47 kernel onto the machine having issues. When trying to apply the patches after 0001, patch is unable to locate the file to patch for some of them. I'm now seeing this same error on another machine that I had previously upgraded successfully to the 4.4.47 kernel with the same files.
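For what it's worth, a quick way to rule out a wrong strip level before a real run is `--dry-run`. This is a self-contained sketch under assumptions: `demo_tree` and `demo.patch` are made-up names for illustration, not paths from this report.

```shell
set -e

# Hypothetical scratch tree mimicking the kernel source layout
# (illustrative names only, not from the bug report).
rm -rf demo_tree && mkdir -p demo_tree/arch/powerpc
printf 'old line\n' > demo_tree/arch/powerpc/Makefile

# A unified diff with the a/ and b/ prefixes that git format-patch emits.
cat > demo.patch <<'EOF'
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -1 +1 @@
-old line
+new line
EOF

# -p0 would look for the literal path "a/arch/powerpc/Makefile" and stop
# at a "File to patch:" prompt; -p1 strips one leading path component.
# --dry-run rehearses the apply without modifying the tree, and
# -d runs patch from inside demo_tree.
patch -d demo_tree -p1 --dry-run < demo.patch
patch -d demo_tree -p1 < demo.patch
grep 'new line' demo_tree/arch/powerpc/Makefile
```

Since these files look like `git format-patch` output, another option might be applying them from inside the cloned tree with `git am <patch>`, which also reports which file it could not find.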
root@savbu-qa-colusa3-24-2:~# patch -p1 < 0002-UBUNTU-SAUCE-add-vmlinux.strip-to-BOOT_TARGETS1-on-p.patch
can't find file to patch at input line 16
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--------------------------
|From 87f65999aab113d4ecf0d7fedfa5a4cc9c1141b5 Mon Sep 17 00:00:00 2001
|From: Andy Whitcroft <a...@canonical.com>
|Date: Fri, 9 Sep 2016 14:02:29 +0100
|Subject: [PATCH 2/6] UBUNTU: SAUCE: add vmlinux.strip to BOOT_TARGETS1 on
| powerpc
|
|Signed-off-by: Andy Whitcroft <a...@canonical.com>
|---
| arch/powerpc/Makefile | 2 +-
| 1 file changed, 1 insertion(+), 1 deletion(-)
|
|diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
|index 96efd82..96f49dd 100644
|--- a/arch/powerpc/Makefile
|+++ b/arch/powerpc/Makefile
--------------------------
File to patch: ^C
root@savbu-qa-colusa3-24-2:~#
root@savbu-qa-colusa3-24-2:~# patch -p1 < 0003-UBUNTU-SAUCE-tools-hv-lsvmbus-add-manual-page.patch
patching file tools/hv/lsvmbus.8
root@savbu-qa-colusa3-24-2:~# patch -p1 < 0004-UBUNTU-SAUCE-no-up-disable-pie-when-gcc-has-it-enabl.patch
can't find file to patch at input line 37
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--------------------------
|From 5d5e1ec9dcd36f965aee2285f678b7fcc3553c2f Mon Sep 17 00:00:00 2001
|From: Steve Beattie <steve.beat...@canonical.com>
|Date: Tue, 10 May 2016 12:44:04 +0100
|Subject: [PATCH 4/6] UBUNTU: SAUCE: (no-up) disable -pie when gcc has it
| enabled by default
|
|In Ubuntu 16.10, gcc's defaults have been set to build Position
|Independent Executables (PIE) on amd64 and ppc64le (gcc was configured
|this way for s390x in Ubuntu 16.04 LTS). This breaks the kernel build on
|amd64. The following patch disables pie for x86 builds (though not yet
|verified to work with gcc configured to build PIE by default i386 --
|we're not planning to enable it for that architecture).
|
|The intent is for this patch to go upstream after expanding it to
|additional architectures where needed, but I wanted to ensure that
|we could build 16.10 kernels first. I've successfully built kernels
|and booted them with this patch applied using the 16.10 compiler.
|
|Patch is against yakkety.git, but also applies with minor movement
|(no fuzz) against current linus.git.
|
|Signed-off-by: Steve Beattie <steve.beat...@canonical.com>
|[a...@canonical.com: shifted up so works in arch/<arch/Makefile.]
|BugLink: http://bugs.launchpad.net/bugs/1574982
|Signed-off-by: Andy Whitcroft <a...@canonical.com>
|Acked-by: Tim Gardner <tim.gard...@canonical.com>
|Acked-by: Stefan Bader <stefan.ba...@canonical.com>
|Signed-off-by: Kamal Mostafa <ka...@canonical.com>
|---
| Makefile | 6 ++++++
| 1 file changed, 6 insertions(+)
|
|diff --git a/Makefile b/Makefile
|index 7b233ac..3c6e704 100644
|--- a/Makefile
|+++ b/Makefile
--------------------------
File to patch:

--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1661131

Title:
  kernel crash when NVMe drive inserted in one slot

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Opening this on behalf of one of my colleagues at Cisco, we're seeing
  an issue on our new S-series S3260 server that's causing the kernel to
  crash. If we have an NVMe device inserted into one of two drive slots,
  we will see the kernel crash only with Ubuntu. With an NVMe drive in the
  bad slot, other OS's work fine. If we move the NVMe drive out of the
  bad slot and into the other slot, everything works fine as expected. We
  only see the kernel crash with an NVMe drive in that bad slot when
  using Ubuntu. We tested with HGST and Intel NVMe drives and were able
  to reproduce the issue with both. HGST reviewed some logs and they
  don't believe at this time that the issue is with the NVMe drives.
  We're hoping someone from Canonical can take a look to understand what
  the difference is between the working and failing slot. The data
  collection was done with the NVMe drive inserted in the working slot
  so we could access the OS. I had a connection time out when trying to
  use ubuntu-bug, so I saved the apport file and will attach it to the
  bug. I have collected the kernel log and syslog as well, but they are
  ~9GB. I found a call trace in the kernel log starting on Jan 25
  06:02:54 that floods the logs afterwards. I will include the call
  trace in a separate text file in the attachment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1661131/+subscriptions