[Bug 2068529] [NEW] Focal: Reverse proxy POST with body length >1000 is missing body

2024-06-05 Thread Wesley Hershberger
Public bug reported:

POST requests to an apache2 server with the below configuration do not
forward the message body.

Affected versions:
apache2 2.4.41-4ubuntu3.17 in focal

Steps to reproduce:

sudo apt-get install apache2
sudo a2enmod proxy
sudo a2enmod proxy_http

Add /etc/apache2/sites-enabled/test_proxy.conf
```
Listen 9443

ServerName focal.cld.lan

ProxyRequests Off
ProxyPass "/" "http://127.0.0.1:8899/";
ProxyPassReverse "/" "http://127.0.0.1:8899/";

ErrorLog ${APACHE_LOG_DIR}/testproxy-error.log
CustomLog ${APACHE_LOG_DIR}/testproxy-access.log combined

```

sudo systemctl restart apache2
nc -k -l 8899

wget http://archive.ubuntu.com/ubuntu/dists/jammy-proposed/InRelease
curl -d "@InRelease" -H "Content-type: text/plain" -X POST http://127.0.0.1:9443/

Curl hangs for a while until the request times out.
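The missing-body symptom above can also be measured mechanically. The sketch below is illustrative only: it stands in a throwaway Python backend for `nc -k -l 8899`, then POSTs 1024- and 1025-byte bodies and records what the backend receives. Here it talks to the backend directly (so both bodies arrive); on an affected focal system you would point the requests at the proxy on port 9443 instead and compare. The port handling and payload sizes are taken from this report; everything else is an assumption.

```python
# Throwaway HTTP backend standing in for `nc -k -l 8899`, plus two POSTs
# straddling the 1024-byte boundary described in this report. We talk to
# the backend directly here; on an affected system you would POST to the
# apache2 proxy (http://127.0.0.1:9443/) and compare what the backend sees.
import http.server
import threading
import urllib.request

received = []  # bodies observed by the backend


class Backend(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        received.append(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass


server = http.server.HTTPServer(("127.0.0.1", 0), Backend)
threading.Thread(target=server.serve_forever, daemon=True).start()

for size in (1024, 1025):
    req = urllib.request.Request(
        f"http://127.0.0.1:{server.server_address[1]}/",
        data=b"A" * size,
        headers={"Content-Type": "text/plain"},
    )
    urllib.request.urlopen(req, timeout=5).read()

print([len(body) for body in received])
```

With the proxy in the middle, an affected server would show the second body missing from the backend's view.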

** Affects: apache2 (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2068529

Title:
  Focal: Reverse proxy POST with body length >1000 is missing body

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/2068529/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2068529] Re: Focal: Reverse proxy POST with body length >1000 is missing body

2024-06-07 Thread Wesley Hershberger
Thanks for the note Mauricio.

The bug is not present in Jammy.

I've edited the bug description to include a reproducer that reflects
the message length limit.

** Description changed:

  POST requests to an apache2 server with the below configuration do not
- forward the message body.
+ forward the message body if it is larger than 1024 bytes.
  
  Affected versions:
  apache2 2.4.41-4ubuntu3.17 in focal
  
  Steps to reproduce:
  
  sudo apt-get install apache2
  sudo a2enmod proxy
  sudo a2enmod proxy_http
  
  Add /etc/apache2/sites-enabled/test_proxy.conf
  ```
  Listen 9443
  
- ServerName focal.cld.lan
+ ServerName focal.cld.lan
  
- ProxyRequests Off
- ProxyPass "/" "http://127.0.0.1:8899/";
- ProxyPassReverse "/" "http://127.0.0.1:8899/";
+ ProxyRequests Off
+ ProxyPass "/" "http://127.0.0.1:8899/";
+ ProxyPassReverse "/" "http://127.0.0.1:8899/";
  
- ErrorLog ${APACHE_LOG_DIR}/testproxy-error.log
- CustomLog ${APACHE_LOG_DIR}/testproxy-access.log combined
+ ErrorLog ${APACHE_LOG_DIR}/testproxy-error.log
+ CustomLog ${APACHE_LOG_DIR}/testproxy-access.log combined
  
  ```
  
  sudo systemctl restart apache2
  nc -k -l 8899
  
  wget http://archive.ubuntu.com/ubuntu/dists/jammy-proposed/InRelease
  curl -d "@InRelease" -H "Content-type: text/plain" -X POST http://127.0.0.1:9443/
  
  Curl hangs for a while until the request times out.
+ 
+ EDIT: The first curl here succeeds, the second does not:
+ 
+ DATA=`tr -dc A-Za-z0-9 </dev/urandom | head -c 1024`
+ curl -d "$DATA" -X POST http://127.0.0.1:9443 -vvv
+ 
+ DATA=`tr -dc A-Za-z0-9 </dev/urandom | head -c 1025`
+ curl -d "$DATA" -X POST http://127.0.0.1:9443 -vvv

** Summary changed:

- Focal: Reverse proxy POST with body length >1000 is missing body
+ Focal: Reverse proxy POST with body length >1024 is missing body

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2068529

Title:
  Focal: Reverse proxy POST with body length >1024 is missing body

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/2068529/+subscriptions



[Bug 2064176] [NEW] LXD fan bridge causes blocked tasks

2024-04-29 Thread Wesley Hershberger
Public bug reported:

Hi, cross-posting this from
https://github.com/canonical/lxd/issues/12161

I've got an LXD cluster running across 3 VMs using the fan bridge. I'm
using a dev revision of LXD based on 6413a948. Creating a container
causes the trace in the attached syslog snippet; this causes the
container creation process to hang indefinitely. ssh logins, `lxc shell
cluster1`, and `ps -aux` also hang.

Apr 29 17:15:01 cluster1 kernel: [  161.250951] [ cut here 
]
Apr 29 17:15:01 cluster1 kernel: [  161.250957] Voluntary context switch within 
RCU read-side critical section!
Apr 29 17:15:01 cluster1 kernel: [  161.250990] WARNING: CPU: 2 PID: 510 at 
kernel/rcu/tree_plugin.h:320 rcu_note_context_switch+0x2a7/0x2f0
Apr 29 17:15:01 cluster1 kernel: [  161.251003] Modules linked in: nft_masq 
nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 vxlan 
ip6_udp_tunnel udp_tunnel dummy br
idge stp llc zfs(PO) spl(O) nf_tables libcrc32c nfnetlink vhost_vsock vhost 
vhost_iotlb binfmt_misc nls_iso8859_1 intel_rapl_msr intel_rapl_common 
kvm_intel kvm irqbypass crct10dif
_pclmul crc32_pclmul virtio_gpu polyval_clmulni polyval_generic 
ghash_clmulni_intel sha256_ssse3 sha1_ssse3 virtio_dma_buf aesni_intel 
vmw_vsock_virtio_transport 9pnet_virtio xhci_
pci drm_shmem_helper i2c_i801 ahci 9pnet vmw_vsock_virtio_transport_common 
xhci_pci_renesas drm_kms_helper libahci crypto_simd joydev virtio_input cryptd 
lpc_ich virtiofs i2c_smbus
 vsock psmouse input_leds mac_hid serio_raw rapl qemu_fw_cfg vmgenid nfsd 
dm_multipath auth_rpcgss scsi_dh_rdac nfs_acl lockd scsi_dh_emc scsi_dh_alua 
grace sch_fq_codel drm sunrpc
 efi_pstore virtio_rng ip_tables x_tables autofs4
Apr 29 17:15:01 cluster1 kernel: [  161.251085] CPU: 2 PID: 510 Comm: nmbd 
Tainted: P   O   6.5.0-28-generic #29~22.04.1-Ubuntu
Apr 29 17:15:01 cluster1 kernel: [  161.251089] Hardware name: QEMU Standard PC 
(Q35 + ICH9, 2009)/LXD, BIOS unknown 2/2/2022
Apr 29 17:15:01 cluster1 kernel: [  161.251091] RIP: 
0010:rcu_note_context_switch+0x2a7/0x2f0
Apr 29 17:15:01 cluster1 kernel: [  161.251095] Code: 08 f0 83 44 24 fc 00 48 
89 de 4c 89 f7 e8 d1 af ff ff e9 1e fe ff ff 48 c7 c7 d0 60 56 88 c6 05 e6 27 
40 02 01 e8 79 b2 f2 ff
<0f> 0b e9 bd fd ff ff a9 ff ff ff 7f 0f 84 75 fe ff ff 65 48 8b 3c
Apr 29 17:15:01 cluster1 kernel: [  161.251098] RSP: 0018:b9cbc11dbbc8 
EFLAGS: 00010046
Apr 29 17:15:01 cluster1 kernel: [  161.251101] RAX:  RBX: 
941ef7cb3f80 RCX: 
Apr 29 17:15:01 cluster1 kernel: [  161.251103] RDX:  RSI: 
 RDI: 
Apr 29 17:15:01 cluster1 kernel: [  161.251104] RBP: b9cbc11dbbe8 R08: 
 R09: 
Apr 29 17:15:01 cluster1 kernel: [  161.251106] R10:  R11: 
 R12: 
Apr 29 17:15:01 cluster1 kernel: [  161.25] R13: 941d893e9980 R14: 
 R15: 941d80ad7a80
Apr 29 17:15:01 cluster1 kernel: [  161.251113] FS:  7c7dcbdb8a00() 
GS:941ef7c8() knlGS:
Apr 29 17:15:01 cluster1 kernel: [  161.251115] CS:  0010 DS:  ES:  
CR0: 80050033
Apr 29 17:15:01 cluster1 kernel: [  161.251117] CR2: 5a30877ae488 CR3: 
000105888003 CR4: 00170ee0
Apr 29 17:15:01 cluster1 kernel: [  161.251122] Call Trace:
Apr 29 17:15:01 cluster1 kernel: [  161.251128]  
Apr 29 17:15:01 cluster1 kernel: [  161.251133]  ? show_regs+0x6d/0x80
Apr 29 17:15:01 cluster1 kernel: [  161.251145]  ? __warn+0x89/0x160
Apr 29 17:15:01 cluster1 kernel: [  161.251152]  ? 
rcu_note_context_switch+0x2a7/0x2f0
Apr 29 17:15:01 cluster1 kernel: [  161.251155]  ? report_bug+0x17e/0x1b0
Apr 29 17:15:01 cluster1 kernel: [  161.251172]  ? handle_bug+0x46/0x90
Apr 29 17:15:01 cluster1 kernel: [  161.251187]  ? exc_invalid_op+0x18/0x80
Apr 29 17:15:01 cluster1 kernel: [  161.251190]  ? asm_exc_invalid_op+0x1b/0x20
Apr 29 17:15:01 cluster1 kernel: [  161.251202]  ? 
rcu_note_context_switch+0x2a7/0x2f0
Apr 29 17:15:01 cluster1 kernel: [  161.251205]  ? 
rcu_note_context_switch+0x2a7/0x2f0
Apr 29 17:15:01 cluster1 kernel: [  161.251208]  __schedule+0xcc/0x750
Apr 29 17:15:01 cluster1 kernel: [  161.251218]  schedule+0x63/0x110
Apr 29 17:15:01 cluster1 kernel: [  161.251222]  
schedule_hrtimeout_range_clock+0xbc/0x130
Apr 29 17:15:01 cluster1 kernel: [  161.251238]  ? 
__pfx_hrtimer_wakeup+0x10/0x10
Apr 29 17:15:01 cluster1 kernel: [  161.251245]  
schedule_hrtimeout_range+0x13/0x30
Apr 29 17:15:01 cluster1 kernel: [  161.251248]  ep_poll+0x33f/0x390
Apr 29 17:15:01 cluster1 kernel: [  161.251254]  ? 
__pfx_ep_autoremove_wake_function+0x10/0x10
Apr 29 17:15:01 cluster1 kernel: [  161.251257]  do_epoll_wait+0xdb/0x100
Apr 29 17:15:01 cluster1 kernel: [  161.251259]  __x64_sys_epoll_wait+0x6f/0x110
Apr 29 17:15:01 cluster1 kernel: [  161.251265]  do_syscall_64+0x5b/0x90
Apr 29 17:15:01 cluster1 kernel: [  161.251270]  ? do_

[Bug 2064176] Re: LXD fan bridge causes blocked tasks

2024-04-29 Thread Wesley Hershberger
** Attachment added: "apport.linux-image-6.5.0-28-generic._9l2i4n1.apport"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2064176/+attachment/5772647/+files/apport.linux-image-6.5.0-28-generic._9l2i4n1.apport

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2064176

Title:
  LXD fan bridge causes blocked tasks

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2064176/+subscriptions



[Bug 1597017] Re: mount rules grant excessive permissions

2024-08-15 Thread Wesley Hershberger
Hi, gentle ping on this; is there an ETA for this to land in 22.04? Let
me know if I can help with testing.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1597017

Title:
  mount rules grant excessive permissions

To manage notifications about this bug go to:
https://bugs.launchpad.net/apparmor/+bug/1597017/+subscriptions



[Bug 2064717] Re: ceph-volume needs "packaging" module

2024-05-06 Thread Wesley Hershberger
This also affects ceph-volume 19.2.0~git20240301.4c76c50-0ubuntu6 in
Noble.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2064717

Title:
  ceph-volume needs "packaging" module

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2064717/+subscriptions



[Bug 2052661] Re: numba 0.58 is not compatible with python 3.12

2024-11-01 Thread Wesley Hershberger
Hi friends,

Since Numba 0.59 has been released with support for python 3.12, are
there plans to include numba in noble-updates/universe?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2052661

Title:
  numba 0.58 is not compatible with python 3.12

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/esda/+bug/2052661/+subscriptions



[Bug 2081231] [NEW] kernel 6.8.0-40: ext4 online resize on thin-provisioned storage gives 'invalid opcode'

2024-09-19 Thread Wesley Hershberger
Public bug reported:

Hi,

We're seeing failures of an ext4 resize on LVM and Ceph block devices in
the LXD CI; the following call trace happens during resize2fs of an ext4
FS on an LVM lv. I'll also upload an apport report. Let me know if
there's anything else I can provide!

---

[   54.268802] EXT4-fs (dm-8): mounted filesystem 
210714a1-4375-4524-ab2e-019d0859cf5f r/w with ordered data mode. Quota mode: 
none.
[   54.273065] EXT4-fs (dm-8): resizing filesystem from 7168 to 786432 blocks
[   54.274006] [ cut here ]
[   54.274012] kernel BUG at fs/ext4/resize.c:324!
[   54.274773] invalid opcode:  [#1] PREEMPT SMP NOPTI
[   54.275841] CPU: 10 PID: 1397 Comm: resize2fs Tainted: P   O   
6.8.0-40-generic #40~22.04.3-Ubuntu
[   54.282782] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009)/LXD, BIOS 
unknown 2/2/2022
[   54.285284] RIP: 0010:ext4_alloc_group_tables+0x532/0x540
[   54.286769] Code: c2 f7 da 44 01 e0 8d 48 ff 89 4d c8 44 31 e1 85 d1 75 17 
b9 fd ff ff ff 66 89 4d cc e9 32 fb ff ff 44 8b 45 a0 e9 a8 fe ff ff <0f> 0b 66 
66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90
[   54.291216] RSP: 0018:b691c3c53b78 EFLAGS: 00010202
[   54.292109] RAX: 0018 RBX: 9bce87f5b000 RCX: 0016
[   54.293312] RDX: fff0 RSI: 9bce8186d560 RDI: 9bce822a7800
[   54.294433] RBP: b691c3c53bd8 R08: 0010 R09: 
[   54.295551] R10:  R11:  R12: 0001
[   54.296515] R13: 9bce822a7800 R14: 9bce8186d560 R15: fffc3fe7
[   54.297393] FS:  75726aea3b80() GS:9bcf79d0() 
knlGS:
[   54.298382] CS:  0010 DS:  ES:  CR0: 80050033
[   54.299197] CR2: 75726ac5a230 CR3: 0001192b4000 CR4: 00750ef0
[   54.300157] PKRU: 5554
[   54.300520] Call Trace:
[   54.300734]  
[   54.300910]  ? show_regs+0x6d/0x80
[   54.301191]  ? die+0x37/0xa0
[   54.301674]  ? do_trap+0xd4/0xf0
[   54.302163]  ? do_error_trap+0x71/0xb0
[   54.302675]  ? ext4_alloc_group_tables+0x532/0x540
[   54.303151]  ? exc_invalid_op+0x52/0x80
[   54.303728]  ? ext4_alloc_group_tables+0x532/0x540
[   54.304445]  ? asm_exc_invalid_op+0x1b/0x20
[   54.305092]  ? ext4_alloc_group_tables+0x532/0x540
[   54.305833]  ext4_resize_fs+0x378/0x6d0
[   54.306434]  __ext4_ioctl+0x34e/0x1160
[   54.307028]  ? filename_lookup+0xe4/0x200
[   54.307625]  ? xa_load+0x87/0xf0
[   54.308168]  ext4_ioctl+0xe/0x20
[   54.308697]  __x64_sys_ioctl+0xa0/0xf0
[   54.309328]  x64_sys_call+0xa68/0x24b0
[   54.30]  do_syscall_64+0x81/0x170
[   54.310715]  ? mntput+0x24/0x50
[   54.311339]  ? path_put+0x1e/0x30
[   54.311982]  ? do_faccessat+0x1c2/0x2f0
[   54.312720]  ? syscall_exit_to_user_mode+0x89/0x260
[   54.313640]  ? do_syscall_64+0x8d/0x170
[   54.314424]  ? handle_mm_fault+0xad/0x380
[   54.315080]  ? do_user_addr_fault+0x337/0x670
[   54.315484]  ? irqentry_exit_to_user_mode+0x7e/0x260
[   54.315875]  ? irqentry_exit+0x43/0x50
[   54.316172]  ? clear_bhb_loop+0x15/0x70
[   54.316483]  ? clear_bhb_loop+0x15/0x70
[   54.317223]  ? clear_bhb_loop+0x15/0x70
[   54.317869]  entry_SYSCALL_64_after_hwframe+0x78/0x80
[   54.318699] RIP: 0033:0x75726ad1a94f
[   54.319434] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 
00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <41> 89 c0 
3d 00 f0 ff ff 77 1f 48 8b 44 24 18 64 48 2b 04 25 28 00
[   54.323404] RSP: 002b:7ffd784e7a80 EFLAGS: 0246 ORIG_RAX: 
0010
[   54.324609] RAX: ffda RBX: 0001 RCX: 75726ad1a94f
[   54.325304] RDX: 7ffd784e7b80 RSI: 40086610 RDI: 0004
[   54.325933] RBP: 5b0d59a2c990 R08:  R09: 7ffd784e79b7
[   54.326570] R10:  R11: 0246 R12: 0004
[   54.327122] R13: 5b0d59a2ca40 R14: 5b0d59a2eb00 R15: 
[   54.327672]  
[   54.327974] Modules linked in: dm_snapshot vhost_vsock vhost vhost_iotlb 
nft_masq ipmi_devintf ipmi_msghandler nft_chain_nat nf_nat nf_conntrack 
nf_defrag_ipv6 nf_defrag_ipv4 bridge stp llc nf_tables nfnetlink binfmt_misc 
nls_iso8859_1 zfs(PO) spl(O) intel_rapl_msr intel_rapl_common 
intel_uncore_frequency_common intel_pmc_core intel_vsec pmt_telemetry pmt_class 
kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul polyval_clmulni 
polyval_generic ghash_clmulni_intel sha256_ssse3 dm_thin_pool sha1_ssse3 
dm_persistent_data aesni_intel dm_bio_prison crypto_simd dm_bufio cryptd 
libcrc32c joydev rapl input_leds psmouse serio_raw ahci 
vmw_vsock_virtio_transport 9pnet_virtio lpc_ich virtio_gpu i2c_i801 
vmw_vsock_virtio_transport_common libahci xhci_pci i2c_smbus 9pnet virtio_input 
xhci_pci_renesas virtiofs virtio_dma_buf vsock mac_hid qemu_fw_cfg vmgenid 
dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua nfsd auth_rpcgss nfs_acl 
sch_fq_codel lockd grace efi_pstore sunrpc

[Bug 2095203] Re: `netplan apply` fails in LXD container with physical NIC passthrough

2025-01-22 Thread Wesley Hershberger
** Description changed:

  Hello,
  
  When using physical NIC passthrough in LXD containers [1], netplan fails
  when trying to run `udevadm`.
  
  Using these LXD devices for the container, where enp6s0 is a spare physical 
NIC:
  ```
  devices:
-   eth0:
- name: eth0
- nictype: physical
- parent: enp6s0
- type: nic
-   root:
- path: /
- pool: default
- type: disk
+   eth0:
+ name: eth0
+ nictype: physical
+ parent: enp6s0
+ type: nic
+   root:
+ path: /
+ pool: default
+ type: disk
+ ```
+ 
+ Netplan config (the default):
+ ```
+ network:
+   version: 2
+   ethernets:
+ eth0:
+   dhcp4: true
  ```
  
  This happens when netplan is run in the container:
  ```
  $ sudo netplan apply
  eth0: Failed to write 'move' to 
'/sys/devices/pci:00/:00:01.5/:06:00.0/virtio11/net/eth0/uevent': 
Permission denied
  Traceback (most recent call last):
-   File "/usr/sbin/netplan", line 23, in <module>
- netplan.main()
-   File "/usr/share/netplan/netplan_cli/cli/core.py", line 58, in main
- self.run_command()
-   File "/usr/share/netplan/netplan_cli/cli/utils.py", line 332, in run_command
- self.func()
-   File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 62, in run
- self.run_command()
-   File "/usr/share/netplan/netplan_cli/cli/utils.py", line 332, in run_command
- self.func()
-   File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 255, in 
command_apply
- subprocess.check_call(['udevadm', 'trigger', '--action=move', 
'--subsystem-match=net', '--settle'])
-   File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
- raise CalledProcessError(retcode, cmd)
+   File "/usr/sbin/netplan", line 23, in <module>
+ netplan.main()
+   File "/usr/share/netplan/netplan_cli/cli/core.py", line 58, in main
+ self.run_command()
+   File "/usr/share/netplan/netplan_cli/cli/utils.py", line 332, in run_command
+ self.func()
+   File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 62, in run
+ self.run_command()
+   File "/usr/share/netplan/netplan_cli/cli/utils.py", line 332, in run_command
+ self.func()
+   File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 255, in 
command_apply
+ subprocess.check_call(['udevadm', 'trigger', '--action=move', 
'--subsystem-match=net', '--settle'])
+   File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
+ raise CalledProcessError(retcode, cmd)
  subprocess.CalledProcessError: Command '['udevadm', 'trigger', 
'--action=move', '--subsystem-match=net', '--settle']' returned non-zero exit 
status 1.
  
  $ apt-cache policy netplan.io
  netplan.io:
-   Installed: 1.1.1-1~ubuntu24.04.1
-   Candidate: 1.1.1-1~ubuntu24.04.1
-   Version table:
-  *** 1.1.1-1~ubuntu24.04.1 500
- 500 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 Packages
- 100 /var/lib/dpkg/status
-  1.0-2ubuntu1.2 500
- 500 http://security.ubuntu.com/ubuntu noble-security/main amd64 
Packages
-  1.0-2ubuntu1 500
- 500 http://archive.ubuntu.com/ubuntu noble/main amd64 Packages
+   Installed: 1.1.1-1~ubuntu24.04.1
+   Candidate: 1.1.1-1~ubuntu24.04.1
+   Version table:
+  *** 1.1.1-1~ubuntu24.04.1 500
+ 500 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 Packages
+ 100 /var/lib/dpkg/status
+  1.0-2ubuntu1.2 500
+ 500 http://security.ubuntu.com/ubuntu noble-security/main amd64 
Packages
+  1.0-2ubuntu1 500
+ 500 http://archive.ubuntu.com/ubuntu noble/main amd64 Packages
  ```
  
  This occurs in Jammy and Noble containers.
  
  A few things here:
  
  udevadm changed its return code logic in Feb 2021 to return errors when
  it fails to trigger devices. LXD does not handle udev in containers the
  way systemd upstream recommends [2][3] (/sys is mounted rw), so udevadm
  will trigger some devices and fail on others in a LXD container.
  
  Snapd ran into this problem when the udevadm change made its way into
  Ubuntu 21.10. They have a reasonable summary of the issue & their fix
  [4]. This boils down to snapd simply ignoring errors from `udevadm
  trigger`.
  
  It should be pretty straightforward to do the same fix for netplan [5],
  but I'd like someone with a little more exposure to the codebase to
  weigh in on this.
  
  Thanks!
  
  [1] 
https://documentation.ubuntu.com/lxd/en/latest/reference/devices_nic/#nictype-physical
  [2] https://github.com/systemd/systemd/issues/14431#issuecomment-570198194
  [3] https://www.freedesktop.org/wiki/Software/systemd/ContainerInterface/
  [4] https://github.com/canonical/snapd/pull/11056#pullrequestreview-806332045
  [5] 
https://github.com/canonical/netplan/blob/main/netplan_cli/cli/commands/apply.py#L255

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2095203

Title:
  `netplan apply` fails in LXD container with physical NIC passthrough

[Bug 2095203] Re: `netplan apply` fails in LXD container with physical NIC passthrough

2025-01-24 Thread Wesley Hershberger
Hi Danilo, thanks for the note. I've updated the description with the
netplan yaml and opened https://github.com/canonical/netplan/pull/539

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2095203

Title:
  `netplan apply` fails in LXD container with physical NIC passthrough

To manage notifications about this bug go to:
https://bugs.launchpad.net/netplan/+bug/2095203/+subscriptions



[Bug 2095203] [NEW] `netplan apply` fails in LXD container with physical NIC passthrough

2025-01-17 Thread Wesley Hershberger
Public bug reported:

Hello,

When using physical NIC passthrough in LXD containers [1], netplan fails
when trying to run `udevadm`.

Using these LXD devices for the container, where enp6s0 is a spare physical NIC:
```
devices:
  eth0:
name: eth0
nictype: physical
parent: enp6s0
type: nic
  root:
path: /
pool: default
type: disk
```

This happens when netplan is run in the container:
```
$ sudo netplan apply
eth0: Failed to write 'move' to '/sys/devices/pci:00/:00:01.5/:06:00.0/virtio11/net/eth0/uevent': Permission denied
Traceback (most recent call last):
  File "/usr/sbin/netplan", line 23, in <module>
    netplan.main()
  File "/usr/share/netplan/netplan_cli/cli/core.py", line 58, in main
    self.run_command()
  File "/usr/share/netplan/netplan_cli/cli/utils.py", line 332, in run_command
    self.func()
  File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 62, in run
    self.run_command()
  File "/usr/share/netplan/netplan_cli/cli/utils.py", line 332, in run_command
    self.func()
  File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 255, in command_apply
    subprocess.check_call(['udevadm', 'trigger', '--action=move', '--subsystem-match=net', '--settle'])
  File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['udevadm', 'trigger', '--action=move', '--subsystem-match=net', '--settle']' returned non-zero exit status 1.

$ apt-cache policy netplan.io
netplan.io:
  Installed: 1.1.1-1~ubuntu24.04.1
  Candidate: 1.1.1-1~ubuntu24.04.1
  Version table:
 *** 1.1.1-1~ubuntu24.04.1 500
500 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 Packages
100 /var/lib/dpkg/status
 1.0-2ubuntu1.2 500
500 http://security.ubuntu.com/ubuntu noble-security/main amd64 Packages
 1.0-2ubuntu1 500
500 http://archive.ubuntu.com/ubuntu noble/main amd64 Packages
```

This occurs in Jammy and Noble containers.

A few things here:

udevadm changed its return code logic in Feb 2021 to return errors when
it fails to trigger devices. LXD does not handle udev in containers the
way systemd upstream recommends [2][3] (/sys is mounted rw), so udevadm
will trigger some devices and fail on others in a LXD container.

Snapd ran into this problem when the udevadm change made its way into
Ubuntu 21.10. They have a reasonable summary of the issue & their fix
[4]. This boils down to snapd simply ignoring errors from `udevadm
trigger`.

It should be pretty straightforward to do the same fix for netplan [5],
but I'd like someone with a little more exposure to the codebase to
weigh in on this.

Thanks!

[1] 
https://documentation.ubuntu.com/lxd/en/latest/reference/devices_nic/#nictype-physical
[2] https://github.com/systemd/systemd/issues/14431#issuecomment-570198194
[3] https://www.freedesktop.org/wiki/Software/systemd/ContainerInterface/
[4] https://github.com/canonical/snapd/pull/11056#pullrequestreview-806332045
[5] 
https://github.com/canonical/netplan/blob/main/netplan_cli/cli/commands/apply.py#L255
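The snapd-style fix suggested above amounts to catching the nonzero exit from `udevadm trigger` instead of letting it abort the run. The sketch below is hypothetical, not netplan's actual code; the function name, default command tuple, and logging are invented for illustration:

```python
# Hypothetical sketch of ignoring `udevadm trigger` failures, in the
# spirit of snapd's fix [4]; not netplan's actual implementation.
import logging
import subprocess
import sys


def trigger_net_move(cmd=("udevadm", "trigger", "--action=move",
                          "--subsystem-match=net", "--settle")):
    """Run the trigger; warn and continue if some devices fail."""
    try:
        subprocess.check_call(list(cmd))
        return True
    except subprocess.CalledProcessError as err:
        # In a container some uevent writes fail with EACCES; log and
        # carry on rather than aborting `netplan apply`.
        logging.warning("udevadm trigger failed, ignoring: %s", err)
        return False


# Simulate a failing trigger with a command that exits 1:
trigger_net_move([sys.executable, "-c", "raise SystemExit(1)"])
print("apply continues")
```

The design point is simply that a partial trigger is acceptable here, so the error is demoted to a warning.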

** Affects: netplan
 Importance: Undecided
 Status: New

** Affects: netplan.io (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: netplan.io (Ubuntu Jammy)
 Importance: Undecided
 Status: New

** Affects: netplan.io (Ubuntu Noble)
 Importance: Undecided
 Status: New

** Also affects: netplan.io (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: netplan.io (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: netplan.io (Ubuntu Noble)
   Importance: Undecided
   Status: New




[Bug 2089411] Re: python perf module missing in realtime kernel

2025-02-20 Thread Wesley Hershberger
All set, thanks.

```
wesley@oracular2:~$ uname -a
Linux oracular2 6.11.0-1006-realtime #6-Ubuntu SMP PREEMPT_RT Mon Feb 17 15:51:31 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
wesley@oracular2:~$ python3 -c 'import perf; [print(c) for c in perf.cpu_map()]'
0
```

** Tags removed: verification-failed-oracular-linux
** Tags added: verification-done-oracular-linux

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2089411

Title:
  python perf module missing in realtime kernel

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2089411/+subscriptions



[Bug 2089411] Re: python perf module missing in realtime kernel

2025-02-26 Thread Wesley Hershberger
Thanks, Manuel; unfortunately the generic kernel is not affected by this
bug, only the variants (aws, azure, realtime, etc.), so we can't verify
with linux-generic.

I don't have permission to access the packages for the RT kernel in the
Noble ppa [1] (linked from [2]) but this bug/fix applies to all variants
so I tested with linux-azure. 1ecc312 (in noble) is clearly applied but
I'm still seeing issues here. I'm going to keep digging into this and
will post what I find.

```
$ uname -a
Linux noble3 6.8.0-1023-azure #28-Ubuntu SMP Wed Feb 19 17:41:34 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
wesley@noble3:~$ python3 -c 'import perf; [print(c) for c in perf.cpu_map()]'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python3/dist-packages/perf/__init__.py", line 26, in <module>
    raise KernelNotFoundError()
perf.KernelNotFoundError:
WARNING: python perf module not found for kernel 6.8.0-1023-azure

  You may need to install the following package for this specific kernel:
linux-tools-6.8.0-1023-azure

  You may also want to install the following package to keep up to date:
linux-tools-azure
```

[1] https://launchpad.net/~ubuntu-advantage/+archive/ubuntu/realtime-updates/
[2] https://kernel.ubuntu.com/reports/kernel-stable-board/?cycle=s2025.01.13




[Bug 2089411] Re: python perf module missing in realtime kernel

2025-02-26 Thread Wesley Hershberger
** Tags removed: verification-done-noble-linux
** Tags added: verification-failed-noble-linux




[Bug 2089411] Re: python perf module missing in realtime kernel

2025-02-26 Thread Wesley Hershberger
Looks like the symlink described in 1ecc312721
(/usr/lib/linux-tools/6.8.0-1023-azure/lib ->
/usr/lib/linux-azure-tools-6.8.0-1023/lib) doesn't exist with package
linux-tools-6.8.0-1023-azure, likely because
the link's target doesn't exist. I checked the corresponding links in
linux-realtime-tools-6.11.0-1006 and they are all present as expected.

Can someone with more familiarity speak to why
`/usr/lib/linux-azure-tools-6.8.0-1023/lib/perf.cpython-312-x86_64-linux-gnu.so`
isn't present in linux-tools-6.8.0-1023-azure? Thanks!
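For reference, the symlink chain being debugged here can be sketched as a path computation. The rsplit-based flavour parsing below is my illustrative assumption, not the kernel packaging's actual logic; the path layout follows the 1ecc312 description above:

```python
# Sketch of the layout described above: linux-tools-<release> should ship
# /usr/lib/linux-tools/<release>/lib as a symlink into the flavour's
# tools package. Flavour parsing here is an assumption for illustration.
import os


def expected_tools_paths(release: str):
    # release as printed by `uname -r`, e.g. "6.8.0-1023-azure"
    version, flavour = release.rsplit("-", 1)
    link = f"/usr/lib/linux-tools/{release}/lib"
    target = f"/usr/lib/linux-{flavour}-tools-{version}/lib"
    return link, target


link, target = expected_tools_paths("6.8.0-1023-azure")
print(link)
print(target)
# On an affected system, checking os.path.islink(link) and
# os.path.exists(target) shows which half of the chain is missing.
```

This mirrors the check above: the link is missing for the azure flavour, apparently because its target was never shipped.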

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2089411

Title:
  python perf module missing in realtime kernel

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2089411/+subscriptions



[Bug 2089411] Re: python perf module missing in realtime kernel

2025-02-19 Thread Wesley Hershberger
** Tags removed: verification-needed-oracular-linux
** Tags added: verification-failed-oracular-linux




[Bug 2089411] Re: python perf module missing in realtime kernel

2025-02-19 Thread Wesley Hershberger
Hi, I'm still seeing this bug with 6.11.0-1006-realtime, which should
contain the patch [1] (thanks for the help, Juerg).

```
wesley@oracular2:~$ uname -a
Linux oracular2 6.11.0-1006-realtime #6-Ubuntu SMP PREEMPT_RT Mon Feb 17 15:51:31 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
wesley@oracular2:~$ python3 -c 'import perf; [print(c) for c in perf.cpu_map()]'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python3/dist-packages/perf/__init__.py", line 24, in <module>
    raise KernelNotFoundError()
perf.KernelNotFoundError: WARNING: python perf module not found for kernel 6.11.0-1006-realtime

You may need to install the following packages for this specific kernel:
  linux-tools-6.11.0-1006-realtime-generic
You may also want to install of the following package to keep up to date:
  linux-tools-generic
wesley@oracular2:~$ apt-cache policy linux-realtime
linux-realtime:
  Installed: 6.11.0-1006.6
  Candidate: 6.11.0-1006.6
  Version table:
 *** 6.11.0-1006.6 500
500 https://ppa.launchpadcontent.net/canonical-kernel-team/proposed2/ubuntu oracular/main amd64 Packages
100 /var/lib/dpkg/status
 6.11.0-1005.5 500
500 http://archive.ubuntu.com/ubuntu oracular-updates/universe amd64 Packages
500 http://security.ubuntu.com/ubuntu oracular-security/universe amd64 Packages
 6.11.0-1001.1 500
500 http://archive.ubuntu.com/ubuntu oracular/universe amd64 Packages
```

[1] https://kernel.ubuntu.com/reports/kernel-stable-board/?cycle=s2025.01.13


