Current suspects are out-of-date apparmor features in livecd-rootfs,
pending https://launchpad.net/ubuntu/+source/livecd-rootfs/23.10.55

The kernel, apparmor, snapd and lxd all look involved, with snapd having
fits about all of them because of:

......................................................................
Make snap "snapd" (20092) available to the system

2023-10-05T19:04:57Z INFO Requested daemon restart (snapd snap).

......................................................................
Copy snap "lxd" data

2023-10-05T19:04:56Z ERROR unlinkat /var/snap/lxd/common/var/lib/lxcfs/proc/cpuinfo: function not implemented

......................................................................
Run install hook of "lxd" snap if present

2023-10-05T19:04:55Z ERROR run hook "install": cannot read mount namespace identifier of pid 1: Permission denied


and also because of:

Oct 05 19:21:39 mantic-con-priv systemd[1]: snapd.service: Got notification message from PID 2560, but reception only permitted for main PID 2338
Oct 05 19:21:39 mantic-con-priv snapd[2338]: taskrunner.go:299: [change 7 "Setup snap \"snapd\" (20092) security profiles" task] failed: cannot reload udev rules: exit status 1
Oct 05 19:21:39 mantic-con-priv snapd[2338]: udev output:
Oct 05 19:21:39 mantic-con-priv snapd[2338]: Failed to send reload request: No such file or directory
Oct 05 19:21:39 mantic-con-priv systemd[1]: snap-snapd-20092.mount: Deactivated successfully.
Oct 05 19:21:39 mantic-con-priv systemd[1]: snap-snapd-20092.mount: Unit process 2559 (snapfuse) remains running after unit stopped.
Oct 05 19:21:39 mantic-con-priv systemd[1]: Reloading requested from client PID 2565 (unit snapd.service)...
Oct 05 19:21:39 mantic-con-priv systemd[1]: Reloading...
Oct 05 19:21:39 mantic-con-priv (sd-gens)[2568]: Read-only bind remount failed, ignoring: Permission denied
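
On the udev reload failure just above, a minimal sketch of the checks I
would run on an affected host (standard snap / journalctl / udevadm
commands; the change id 7 comes from the log above and will differ on
other systems):

```
# which snapd changes failed, and why
snap changes
snap change 7            # change id taken from the log above; adjust as needed

# surrounding snapd journal context
sudo journalctl -u snapd.service --since "2023-10-05 19:20" --no-pager

# snapd reports "cannot reload udev rules"; running the reload by hand shows the raw error
sudo udevadm control --reload
```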


and because of:

Oct 05 19:20:58 cloudimg kernel: audit: type=1400 audit(1696533658.780:276): apparmor="DENIED" operation="mount" class="mount" info="failed type match" error=-13 profile="lxd-dominant-goldfish_</var/snap/lxd/common/lxd>" name="/snap/" pid=1940 comm="(sd-gens)" flags="ro, remount, bind"

but it could be util-linux too.
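
To check whether AppArmor mount mediation (the denial quoted above) is
what a given kernel/userspace combination trips over, a rough sketch
using standard apparmor/journalctl tooling, nothing specific to this bug:

```
# does the running kernel advertise mount mediation to AppArmor userspace?
test -d /sys/kernel/security/apparmor/features/mount \
  && echo "mount mediation advertised" \
  || echo "mount mediation not advertised"

# recent AppArmor mount denials, like the audit line quoted above
sudo journalctl -k --since "-1h" --no-pager | grep 'apparmor="DENIED"' | grep 'operation="mount"'

# confirm the per-container lxd profiles are loaded
sudo aa-status | grep lxd
```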

** Also affects: apparmor (Ubuntu)
   Importance: Undecided
       Status: New

** Also affects: lxd (Ubuntu)
   Importance: Undecided
       Status: New

** Also affects: snapd (Ubuntu)
   Importance: Undecided
       Status: New

https://bugs.launchpad.net/bugs/2038567

Title:
  Mantic 6.5.0-7 kernel causes regression in LXD container usage

Status in Release Notes for Ubuntu:
  New
Status in apparmor package in Ubuntu:
  New
Status in linux package in Ubuntu:
  Incomplete
Status in lxd package in Ubuntu:
  New
Status in snapd package in Ubuntu:
  New

Bug description:
  Following the upgrade to the 6.5.0-7 kernel in mantic cloud images, we
  are seeing a regression in our cloud image tests. The test runs the
  following:

  ```
  lxd init --auto --storage-backend dir
  lxc launch ubuntu-daily:mantic mantic
  lxc info mantic
  lxc exec mantic -- cloud-init status --wait
  ```

  The `lxc exec mantic -- cloud-init status --wait` times out after 240s
  and will fail our test as a result.
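
  While the timeout is counting down, the container can be inspected from
  the host to see what it is waiting on. A quick sketch using standard
  lxc/snap tooling (none of it is specific to this bug):

  ```
  # what state does LXD think the container is in?
  lxc list
  # the container's boot console output
  lxc console mantic --show-log
  # which systemd jobs inside the container are still queued
  lxc exec mantic -- systemctl list-jobs
  # recent LXD daemon log lines from the snap
  snap logs -n=30 lxd
  ```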

  I have been able to replicate this in a local VM:

  ```
  wget http://cloud-images.ubuntu.com/mantic/20231005/mantic-server-cloudimg-amd64.img
  wget --output-document=launch-qcow2-image-qemu.sh https://gist.githubusercontent.com/philroche/14c241c086a5730481e24178b654268f/raw/7af95cd4dfc8e1d0600e6118803d2c866765714e/gistfile1.txt

  chmod +x launch-qcow2-image-qemu.sh

  ./launch-qcow2-image-qemu.sh --password passw0rd --image ./mantic-server-cloudimg-amd64.img
  cat <<EOF > "./reproducer.sh"
  #!/bin/bash -eux
  lxd init --auto --storage-backend dir
  lxc launch ubuntu-daily:mantic mantic
  lxc info mantic
  lxc exec mantic -- cloud-init status --wait
  EOF
  chmod +x ./reproducer.sh
  sshpass -p passw0rd scp -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -P 2222 ./reproducer.sh ubuntu@127.0.0.1:~/
  sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -p 2222 ubuntu@127.0.0.1 sudo apt-get update
  sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -p 2222 ubuntu@127.0.0.1 sudo apt-get upgrade --assume-yes
  sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -p 2222 ubuntu@127.0.0.1 ./reproducer.sh
  ```

  The issue is not present with the 6.5.0-5 kernel, and it occurs
  regardless of which container is launched; I tried a jammy container to
  confirm this.
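
  For an A/B comparison on the same VM it should be possible to boot back
  into 6.5.0-5. A sketch, assuming linux-image-6.5.0-5-generic is still
  installed (or installable from the archive) and that the GRUB entry
  uses the usual Ubuntu title format:

  ```
  # list installed kernels and the running one
  dpkg --list 'linux-image-*' | grep '^ii'
  uname --kernel-release
  # confirm the exact GRUB menu entry title for 6.5.0-5 (the title below is an assumption)
  grep "menuentry '" /boot/grub/grub.cfg | grep '6\.5\.0-5'
  # grub-reboot needs GRUB_DEFAULT=saved; otherwise pick the entry from the GRUB menu at boot
  sudo grub-reboot "Advanced options for Ubuntu>Ubuntu, with Linux 6.5.0-5-generic"
  sudo reboot
  ```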

  From my test VM:

  ```
  ubuntu@cloudimg:~$ uname --all
  Linux cloudimg 6.5.0-7-generic #7-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 29 09:14:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
  ubuntu@cloudimg:~$ uname --kernel-release
  6.5.0-7-generic
  ```

  This is a regression in our test that will block the 23.10 cloud image
  release next week.


