Bad case:
$ ./repro.sh bad
+ '[' bad == bad ']'
+ echo 'Bad case: Using apparmor from proposed'
Bad case: Using apparmor from proposed
+ BADCASE=1
+ lxc stop --force testguest-apparmor-bad
+ lxc delete --force testguest-apparmor-bad
+ lxc launch ubuntu-daily:groovy/amd64 testguest-apparmor-bad --p
It seems it comes down to a change in /lib/apparmor/apparmor.systemd
which now refuses to load profiles when running in a container.
Example with 3.0:
$ /lib/apparmor/apparmor.systemd reload
Not starting AppArmor in container
Example with 2.x:
$ /lib/apparmor/apparmor.systemd reload
Restarting AppArmor
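For illustration, the 3.0 script bails out early when it detects a container. A minimal sketch of that kind of check (assumed logic, not the actual apparmor.systemd contents; the real script additionally special-cases containers that ship their own internal policy):

```shell
#!/bin/sh
# Sketch of a container check like the one apparmor.systemd 3.0 performs.
# NOTE: assumed logic, not the actual script.
is_container() {
    # systemd-based tooling exports $container inside nspawn/LXC guests
    [ -n "$container" ] && return 0
    # systemd-detect-virt --container exits 0 inside any container
    systemd-detect-virt --quiet --container 2>/dev/null && return 0
    return 1
}

if is_container; then
    echo "Not starting AppArmor in container"
else
    echo "Loading AppArmor profiles"
fi
```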
FYI - other testing might miss this, as "starting a guest on groovy"
works with the new versions, but it runs without AppArmor. Migrating
from focal or a pre-upgrade groovy shows the issues caused by AppArmor
not being enabled.
** Changed in: apparmor (Ubuntu)
Status: Incomplete => New
I have backed up this container and its snapshot for later and re-run
the whole automation which got me into that bad state.
That allowed me to run my automation again without removing this
container (in case we need it for debugging later). So I ran everything
again to check if it would happen again.
Ok, I have definitely a snapshot left that has "conserved" the bad
state.
$ lxc stop testkvm-groovy-from
$ lxc restore testkvm-groovy-from orig
$ lxc start testkvm-groovy-from
$ lxc exec testkvm-groovy-from -- bash
# aa-status
apparmor module is loaded.
15 profiles are loaded.
15 profiles are in enforce mode.
Hi Christian Boltz o/
I'd have such rules, but this isn't the problem here as that would matter only
much later.
If libvirtd itself isn't confined, it refuses to go on confining the guests, and
that is the problem here.
The current question really comes down to "how did I manage to have
everything bu
I knew from my former tests:
1. apparmor 3.0 = bad
2. downgrading to 2.13.3-7ubuntu6 and back up to 3.0 = good
3. aa-enforce + service restart = good
I checked the logs on the affected systems to see how they got into the bad
state:
$ grep -E 'configure (lib)?(apparmor|libvirt)' /var/log/dpkg.log
2020-0
Wild _guess_/hint that could explain the behaviour you see: Do you have
(snap?) profiles that have rules with "peer=libvirtd", and fail if
libvirtd is running unconfined (which would need "peer=unconfined" in
the other profile)?
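For illustration, the kind of rule pair meant here might look like this in the two profiles (a hypothetical sketch, not taken from any shipped profile):

```
# In some confined peer's profile: only matches while libvirtd is confined
signal (receive) peer=libvirtd,

# Needed in addition if libvirtd runs unconfined
signal (receive) peer=unconfined,
```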
Yeah and the comment above this function pointed the right way:
Good case (libvirt is enforced):
root@testkvm-groovy-to:~# aa-status
apparmor module is loaded.
31 profiles are loaded.
31 profiles are in enforce mode.
/snap/snapd/9279/usr/lib/snapd/snap-confine
/snap/snapd/9279/usr/lib/snapd/
This gets me back to a working system:
$ aa-enforce /etc/apparmor.d/usr.sbin.libvirtd
$ systemctl restart libvirtd
And this also explains why, on the system where I re-installed libvirt, things
might have worked: the re-install runs dh_apparmor, which loaded and enforced
the profile.
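The effect is roughly this (an approximation of what the dh_apparmor-generated maintainer-script snippet achieves, not its exact code):

```shell
#!/bin/sh
# Approximation of the dh_apparmor postinst effect: replace-load the
# shipped profile into the kernel on (re)install.
reload_profile() {
    # $1: profile file under /etc/apparmor.d
    if [ -x /sbin/apparmor_parser ] && [ -f "$1" ]; then
        # -r: replace an already-loaded profile, -W: update the cache
        apparmor_parser -r -W "$1" || return 1
    fi
    return 0  # silently skip where AppArmor is unavailable
}

reload_profile /etc/apparmor.d/usr.sbin.libvirtd
```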
Sorry, my system broke down in various ways, stalling debugging of this for a
few days.
Back on it ...
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1895967
Title:
3.0.0~beta1-0ubuntu1 in Groovy break
Lookup fails:
(gdb) fin
Run till exit from #0 virSecurityDriverLookup (name=name@entry=0x0,
virtDriver=virtDriver@entry=0x7fffd26ae1b2 "QEMU") at
../../../src/security/security_driver.c:50
virSecurityManagerNew (name=name@entry=0x0,
virtDriver=virtDriver@entry=0x7fffd26ae1b2 "QEMU", flags=flag
This is the failing function:
221 /* returns -1 on error or profile for libvirtd is unconfined, 0 if complain
222  * mode and 1 if enforcing. This is required because at present you cannot
223  * aa_change_profile() from a process that is unconfined.
224  */
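libvirt does that classification in C (via libapparmor); the same state can be inspected from a shell by reading the confinement label from /proc. A sketch of the equivalent check, not libvirt's code:

```shell
#!/bin/sh
# Shell analogue of the check described in the comment above:
# classify a process as unconfined / complain / enforce.
profile_mode() {
    # $1: pid to inspect
    label=$(cat "/proc/$1/attr/current" 2>/dev/null) || label=unconfined
    case "$label" in
        ""|unconfined*) echo unconfined ;;  # libvirt returns -1 here
        *"(complain)")  echo complain   ;;  # 0
        *)              echo enforce    ;;  # 1
    esac
}

profile_mode $$
```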
Need to check how these are initialized in qemuSecurityInit and qemuSecurityNew.
But that happens at daemon start and not later when probing caps.
virQEMUDriverConfigLoadSecurityEntry loads this from the config, and it includes
apparmor in both:
/etc/libvirt/qemu.conf:# security_driver = [ "selinux
for (i = 0; sec_managers[i]; i++) {
...
VIR_DEBUG("Initialized caps for security driver \"%s\" with "
Good:
- apparmor
- dac
Bad:
- none
- dac
In function virQEMUDriverCreateCapabilities.
So it isn't probing apparmor because it isn't even in the list.
That list is from "qemuSecurityGetNested
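An easier way to see which drivers ended up in that list, without gdb, is the capabilities XML, which carries one <secmodel> element per initialized security driver:

```shell
#!/bin/sh
# Check the initialized security drivers via the capabilities XML.
# Expected on a good system: apparmor and dac; on a bad one: none and dac.
check_secmodels() {
    if command -v virsh >/dev/null 2>&1; then
        virsh capabilities 2>/dev/null | grep '<model>' \
            || echo "no security model reported"
    else
        echo "virsh not installed"
    fi
}

check_secmodels
```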
Good:
(gdb) p *((virSecurityStackDataPtr)(((virQEMUDriverPtr)conn->privateData
)->securityManager->privateData))->itemsHead->securityManager
$7 = {parent = {parent = {parent_instance = {g_type_instance = {g_class =
0x7f430805ddf0}, ref_count = 1, qdata = 0x0}}, lock = {lock = {__data = {__lock
=
It seems once fixed the system is ok and I can't get into the bad state
again :/
I tried on another bad system (without changing back to the former version):
1. A restart of the service
2. Trying to force capabilities reset (remove cache) + service restart
None of these got it into the good case.