[Kernel-packages] [Bug 1849493] Re: CONFIG_ANDROID_BINDER_IPC=m is missing in the custom rolling kernels

2020-03-27 Thread Roufique Hossain

** Changed in: linux-gcp (Ubuntu Bionic)
 Assignee: (unassigned) => Roufique Hossain (roufique)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1849493

Title:
  CONFIG_ANDROID_BINDER_IPC=m is missing in the custom rolling kernels

Status in linux package in Ubuntu:
  Incomplete
Status in linux-azure package in Ubuntu:
  Incomplete
Status in linux-gcp package in Ubuntu:
  Confirmed
Status in linux source package in Bionic:
  Incomplete
Status in linux-azure source package in Bionic:
  New
Status in linux-gcp source package in Bionic:
  Fix Released
Status in linux source package in Disco:
  Incomplete
Status in linux-azure source package in Disco:
  New
Status in linux-gcp source package in Disco:
  Fix Released
Status in linux source package in Eoan:
  Incomplete
Status in linux-azure source package in Eoan:
  New
Status in linux-gcp source package in Eoan:
  New

Bug description:
  The rolling GCP kernel for bionic is missing
  CONFIG_ANDROID_BINDER_IPC=m, which has been enabled in the standard
  Ubuntu kernel since 19.04 and is available through the HWE kernels in
  Bionic.

  As we require CONFIG_ANDROID_BINDER_IPC=m in our kernels for a
  not-yet-released product, it would be great if we could import the
  relevant config changes into the GCP kernel (we haven't yet checked our
  other cloud kernels).
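
  As a quick check (a sketch, assuming a booted instance and the usual
  config location; the exact kernel version string will differ), the
  option can be verified against the installed kernel config:

    grep CONFIG_ANDROID_BINDER_IPC /boot/config-$(uname -r)
    # a kernel with binder enabled as a module prints:
    #   CONFIG_ANDROID_BINDER_IPC=m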

  All relevant changes from Christian Brauner to enable binder in the
  Ubuntu kernel are present in the GCP kernel (see
  https://git.launchpad.net/~canonical-kernel/ubuntu/+source/linux-gcp/+git/bionic/log/?h=gcp-edge&qt=grep&q=brauner).

  See https://kernel.ubuntu.com/git/ubuntu/ubuntu-bionic.git/commit/debian.master/config/config.common.ubuntu?h=hwe&id=a758aeb0bb0f52ccbee99f850709c57711753b33
  and https://kernel.ubuntu.com/git/ubuntu/ubuntu-bionic.git/commit/debian.master/config/config.common.ubuntu?h=hwe&id=4b44b695fb5ee2f405d0ad4eda2fc2cad856414c
  for the relevant config changes in the Ubuntu kernel from Seth.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1849493/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1225922] Re: Support static network configuration even on already configured devices

2020-03-27 Thread Roufique Hossain
** Also affects: linux-gcp (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: linux-gcp (Ubuntu)
   Status: New => Confirmed

** Changed in: linux-gcp (Ubuntu)
 Assignee: (unassigned) => Roufique Hossain (roufique)

** Changed in: cloud-init
 Assignee: (unassigned) => Roufique Hossain (roufique)

** Changed in: cloud-init
   Status: Confirmed => Fix Committed

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-gcp in Ubuntu.
https://bugs.launchpad.net/bugs/1225922

Title:
  Support static network configuration even on already configured
  devices

Status in cloud-init:
  Fix Committed
Status in linux-gcp package in Ubuntu:
  Confirmed

Bug description:
  Some datasources (e.g. OpenNebula) support full static network
  configuration. It is done in the local execution phase by pushing the
  new interfaces configuration to *distro.apply_network*. This new
  configuration is written to disk and activated by calling ifup on the
  particular devices. Unfortunately, it cannot be guaranteed that the full
  local phase is executed before the system does any network configuration
  of its own. The steps above work only for devices that were not present
  in the former network configuration or are not configured to start on
  boot (e.g. with eth0 configured on boot to take an address from DHCP,
  the new static configuration is not applied to this device: it is
  already up and ifup simply passes).

  It would be good to first bring the interfaces down before writing the
  new configuration.
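
  A minimal sketch of the proposed ordering, using ifupdown on a
  hypothetical eth0 (the device name and the comment about what cloud-init
  writes are illustrative, not taken from the report):

    ifdown eth0 || true   # take the already-configured device down first; ignore "not up" errors
    # ... cloud-init writes the new static configuration via distro.apply_network ...
    ifup eth0             # bring the device back up with the freshly written static config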

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1225922/+subscriptions



[Kernel-packages] [Bug 1873809] Re: Make linux-kvm bootable in LXD VMs

2020-04-20 Thread Roufique Hossain
** Changed in: cloud-images
   Status: Invalid => Confirmed

** Changed in: cloud-images
 Assignee: (unassigned) => Roufique Hossain (roufique)

** Changed in: linux-kvm (Ubuntu)
 Assignee: Colin Ian King (colin-king) => Roufique Hossain (roufique)

** Changed in: linux-kvm (Ubuntu)
   Status: New => Incomplete

** Changed in: linux-kvm (Ubuntu)
   Status: Incomplete => Confirmed

** Bug watch added: Email to roufique@rtat #
   mailto:roufi...@rtat.net

** Also affects: cloud-bl-tutorials via
   mailto:roufi...@rtat.net
   Importance: Undecided
   Status: New

** Changed in: cloud-bl-tutorials
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1873809

Title:
  Make linux-kvm bootable in LXD VMs

Status in Cloud-Bio-Linux Tutorials:
  Confirmed
Status in cloud-images:
  Confirmed
Status in linux-kvm package in Ubuntu:
  Confirmed

Bug description:
  The `disk-kvm.img` images, which are to be preferred when running under
  virtualization, currently fail completely to boot under UEFI.

  A workaround was put in place so that LXD pulls generic-based images
  instead until this is resolved; however, this comes with a much longer
  boot time (as the kernel panics, reboots and then boots) and also
  reduced functionality from cloud-init, so we'd still like this fixed in
  the near future.

  To get things behaving, it looks like we need the following config
  options to be enabled in linux-kvm (a verification sketch follows the
  list):

   - CONFIG_EFI_STUB
   - CONFIG_VSOCKETS
   - CONFIG_VIRTIO_VSOCKETS
   - CONFIG_VIRTIO_VSOCKETS_COMMON
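
  Whether a given kernel already carries these can be checked from a
  booted instance (a sketch, assuming the config lives at the usual
  /boot/config-* location):

    for opt in EFI_STUB VSOCKETS VIRTIO_VSOCKETS VIRTIO_VSOCKETS_COMMON; do
        grep "CONFIG_${opt}=" /boot/config-$(uname -r)   # prints =y or =m when enabled
    done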

  == Rationale ==
  We'd like to be able to use the linux-kvm based images for LXD; those
  will boot directly without needing the panic+reboot behavior of generic
  images and will be much lighter in general.

  We also need the LXD agent to work, which requires functional virtio
  vsock.

  == Test case ==
   - wget http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-lxd.tar.xz
   - wget http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img
   - lxc image import focal-server-cloudimg-amd64-lxd.tar.xz focal-server-cloudimg-amd64-disk-kvm.img --alias bug1873809
   - lxc launch bug1873809 v1
   - lxc console v1
   - 
   - 
   - lxc exec v1 bash
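
  Once inside the guest, functional virtio vsock can be sanity-checked (a
  sketch; the exact module names are assumptions, since the options may
  also be built in):

    ls -l /dev/vsock                       # the vsock device node should be present
    lsmod | grep -i vsock || echo "no vsock modules listed (may be built in)"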

  To validate a new kernel, you'll need to manually repack the .img file
  and install the new kernel in there.
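
  One possible way to do that repacking (an assumption, not necessarily
  the reporter's method) is with libguestfs' virt-customize, copying a
  locally built kernel package into the image and installing it:

    # linux-image-test.deb is a hypothetical file name; substitute the real build
    virt-customize -a focal-server-cloudimg-amd64-disk-kvm.img \
        --copy-in linux-image-test.deb:/root \
        --run-command 'dpkg -i /root/linux-image-test.deb && update-grub'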

  == Regression potential ==
  I don't know who else is using those kvm images right now, but these
  changes will alter the kernel binary so that it contains the EFI stub
  bits plus a signature. This could cause some (horribly broken) systems
  to no longer be able to boot that kernel. Though considering that such a
  setup is common to our other kernels, this seems unlikely.

  Also, this will introduce virtio vsock support which, again, could maybe
  confuse some horribly broken systems?

  In either case, the kernel is conveniently the only package which ships
  multiple versions concurrently, so rebooting into the previous kernel is
  always an option, mitigating some of the risks.

  
  -- Details from original report --
  User report on the LXD side: https://github.com/lxc/lxd/issues/7224

  I've reproduced this issue with:
   - wget http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img
   - qemu-system-x86_64 -bios /usr/share/ovmf/OVMF.fd -hda focal-server-cloudimg-amd64-disk-kvm.img -m 1G
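
  To watch the serial console (referenced below as serial0) directly from
  the same invocation, QEMU's serial-on-stdio option can be appended (an
  addition for convenience, not part of the original report):

    qemu-system-x86_64 -bios /usr/share/ovmf/OVMF.fd \
        -hda focal-server-cloudimg-amd64-disk-kvm.img -m 1G \
        -serial mon:stdio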

  On the graphical console, you'll see EDK2 (TianoCore) load, followed by
  basic boot messages and then a message from grub (error: can't find
  command `hwmatch`).
  Those also appear on successful boots of other images, so I don't think
  there's anything concerning there. However, it will then hang
  indefinitely and eat up all your CPU.

  Switching to the text console view (serial0), you'll see the same
  issue as that LXD report:

  BdsDxe: failed to load Boot0001 "UEFI QEMU DVD-ROM QM3 " from PciRoot(0x0)/Pci(0x1,0x1)/Ata(Secondary,Master,0x0): Not Found
  BdsDxe: loading Boot0002 "UEFI QEMU HARDDISK QM1 " from PciRoot(0x0)/Pci(0x1,0x1)/Ata(Primary,Master,0x0)
  BdsDxe: starting Boot0002 "UEFI QEMU HARDDISK QM1 " from PciRoot(0x0)/Pci(0x1,0x1)/Ata(Primary,Master,0x0)
  error: can't find command `hwmatch'.
  X64 Exception Type - 0D(#GP - General Protection)  CPU Apic ID - 
  ExceptionData - 
  RIP  - 3FF2DA12, CS  - 0038, RFLAGS - 00200202
  RAX  - AFAFAFAFAFAFAFAF, RCX - 3E80F108, RDX - AFAFAFAFAFAFAFAF
  RBX  - 0398, RSP - 3FF1C638, RBP - 3FF34360
  RSI  - 3FF343B8, RDI - 1000
  R8   - 3E80F108, R9  - 3E815B98, R10 - 0065
  R11  - 00