Thanks for the full dmesg.
It seems to me that:
"unable to set AppArmor profile 'libvirt-81b387d9-1dfc-4f55-8b98-0318f1f94442'"
means there is an issue loading the profile after your change.

That matches:
 audit: type=1400 audit(1519028363.683:12417): apparmor="DENIED" operation="change_profile" info="label not found" error=-2 profile="/usr/sbin/libvirtd" name="libvirt-81b387d9-1dfc-4f55-8b98-0318f1f94442" pid=12949 comm="libvirtd"

It is not getting to the actual restore; it fails when spawning the
guest due to the changes in the apparmor profile.
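
The error=-2 with info="label not found" usually just means that no profile
with that name is loaded in the kernel at the moment libvirtd tries to switch
to it. A quick way to check on your side (UUID taken from your audit line; the
path below is where libvirt on Ubuntu usually writes the generated per-guest
profile, so treat it as an assumption for your setup):

$ sudo aa-status | grep 81b387d9
$ # reloading by hand surfaces any parser error introduced by the edit:
$ sudo apparmor_parser -r /etc/apparmor.d/libvirt/libvirt-81b387d9-1dfc-4f55-8b98-0318f1f94442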

I tried to reproduce what you hit:
$ virsh save bionic-test --file /var/tmp/bionic-test.save --verbose
The guest is shut off afterwards and I have:
-rw------- 1 root root 527808329 Feb 19 12:34 /var/tmp/bionic-test.save
The restore hits the (silent) denial we discussed, so I changed the two
deny rules into comments:
   #deny /tmp/{,**} r,
   #deny /var/tmp/{,**} r,
Then I restored again, and it just worked:
$ virsh restore /var/tmp/bionic-test.save
Domain restored from /var/tmp/bionic-test.save
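
A side note on why the denial is "silent": explicit deny rules in apparmor are
quiet by default. If you wanted the denial to show up in the audit log without
dropping the isolation, marking the rules as audit should work - a sketch, not
what the package ships:

   audit deny /tmp/{,**} r,
   audit deny /var/tmp/{,**} r,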

To quote jdstrand from bug 1403648:
"We should not allow access to /tmp and /var/tmp as that breaks application 
isolation."

That said, we are in the following situation:
1. /tmp and /var/tmp are not allowed to be read (the apparmor default for app 
isolation)
2. the read denials there are silenced via explicit deny rules in 
/etc/apparmor.d/abstractions/libvirt-qemu
3. I see your point:
3.1 on save libvirt writes to that place (libvirt is allowed to do so, while 
qemu is not)
3.2 on restore qemu wants to read it and is denied.

And you wonder about the asymmetric behavior of 3.1 and 3.2.
I agree that it is somewhat unexpected, but I wonder what would be better:
1. We could also deny /tmp and /var/tmp for the libvirt daemon (which 
intentionally has a rather lenient apparmor profile). Then people would already 
be denied on save. That might be an option for a new release, but not as an 
SRU, so as not to break people who rely on that access working.
2. On a new release we already have the --bypass-cache fixes you referred to, 
which get the restore working as a workaround (see the example below) - so the 
benefit of preventing libvirt from accessing those paths isn't big either. And 
forbidding the access on "save" for libvirt would render that workaround 
useless.
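
For reference, the workaround mentioned in point 2 looks like this (both virsh
save and virsh restore accept the flag, per virsh(1)):

$ virsh save bionic-test --file /var/tmp/bionic-test.save --bypass-cache
$ virsh restore /var/tmp/bionic-test.save --bypass-cache

My understanding of why this helps (not verified against the code): with
--bypass-cache the image is streamed through libvirt's I/O helper over a pipe,
so qemu never opens the /var/tmp path itself and the path-based denial does
not trigger.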

I'm unsure how to continue. To better brainstorm with you on how to
proceed: do you have a clearly preferred solution (other than the already
included bypass-cache fixes), or is it just "not nice in general" that the
denial is not consistent between save and restore?


Separate from the discussion above:
To find out why your modified apparmor profile breaks your guest start, you 
could share it - as I mentioned, it worked for me right away. (No need to 
restart libvirtd after the change, by the way; the abstraction we changed is 
loaded when the guest starts.)
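
For sharing, the relevant files should be the following, assuming the usual
Ubuntu layout (the .files companion holds the dynamically maintained per-guest
rules):

$ cat /etc/apparmor.d/abstractions/libvirt-qemu
$ cat /etc/apparmor.d/libvirt/libvirt-81b387d9-1dfc-4f55-8b98-0318f1f94442
$ cat /etc/apparmor.d/libvirt/libvirt-81b387d9-1dfc-4f55-8b98-0318f1f94442.files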

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apparmor in Ubuntu.
https://bugs.launchpad.net/bugs/1719579

Title:
  [Ubuntu 18.04] [libvirt] virsh restore fails from state file saved in
  /var/tmp folder using virsh save

Status in The Ubuntu-power-systems project:
  Fix Released
Status in apparmor package in Ubuntu:
  Invalid
Status in libvirt package in Ubuntu:
  Fix Released

Bug description:
  == Comment: #1 - SEETEENA THOUFEEK <sthou...@in.ibm.com> - 2017-01-17 00:09:16 ==
  Bala, Please mail me the machine information.

  == Comment: #3 - SEETEENA THOUFEEK <sthou...@in.ibm.com> - 2017-01-17 02:14:06 ==
  2017-01-16 12:09:37.707+0000: 7024: info : virSecurityDACRestoreFileLabelInternal:388 : Restoring DAC user and group on '/var/tmp/bala'
  2017-01-16 12:09:37.707+0000: 7024: info : virSecurityDACSetOwnershipInternal:290 : Setting DAC user and group on '/var/tmp/bala' to '0:0'
  2017-01-16 12:09:37.707+0000: 7024: warning : qemuDomainSaveImageStartVM:6750 : failed to restore save state label on /var/tmp/bala
  2017-01-16 12:09:37.707+0000: 7024: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x3fff4ca62b00
  2017-01-16 12:09:37.707+0000: 7024: debug : qemuDomainObjEndAsyncJob:1848 : Stopping async job: start (vm=0x3fff4ca535c0 name=virt-tests-vm1-bala)
  2017-01-16 12:09:37.707+0000: 7024: info : virObjectRef:296 : OBJECT_REF: obj=0x3fff4ca62b00
  2017-01-16 12:09:37.707+0000: 7024: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x3fff4ca62b00
  2017-01-16 12:09:37.707+0000: 7024: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x3fff4ca535c0
  2017-01-16 12:09:37.707+0000: 7024: debug : virThreadJobClear:121 : Thread 7024 (virNetServerHandleJob) finished job remoteDispatchDomainRestore with ret=-1
  2017-01-16 12:09:37.707+0000: 7024: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x3fff7c002c10
  2017-01-16 12:09:37.707+0000: 7024: debug : virNetServerProgramSendError:153 : prog=536903814 ver=1 proc=54 type=1 serial=4 msg=0x100133d2590 rerr=0x3fffa59be3c0
  2017-01-16 12:09:37.707+0000: 7024: debug : virNetMessageEncodePayload:376 : Encode length as 172
  2017-01-16 12:09:37.707+0000: 7024: debug : virNetServerClientSendMessageLocked:1399 : msg=0x100133d2590 proc=54 len=172 offset=0
  2017-01-16 12:09:37.707+0000: 7024: info : virNetServerClientSendMessageLocked:1407 : RPC_SERVER_CLIENT_MSG_TX_QUEUE: client=0x100133d23c0 len=172 prog=536903814 vers=1 proc=54 type=1 status=1 serial=4
  2017-01-16 12:09:37.707+0000: 7024: debug : virNetServerClientCalculateHandleMode:157 : tls=(nil) hs=-1, rx=0x100133d0670 tx=0x100133d2590
  2017-01-16 12:09:37.707+0000: 7024: debug : virNetServerClientCalculateHandleMode:192 : mode=3
  2017-01-16 12:09:37.707+0000: 7024: info : virEventPollUpdateHandle:152 : EVENT_POLL_UPDATE_HANDLE: watch=417 events=3
  2017-01-16 12:09:37.707+0000: 7024: debug : virEventPollInterruptLocked:727 : Interrupting
  2017-01-16 12:09:37.707+0000: 7024: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x3fff7c002c10
  2017-01-16 12:09:37.707+0000: 7024: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x100133caea0
  2017-01-16 12:09:37.707+0000: 7024: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x100133d23c0
  .
  2017-01-16 12:14:28.445+0000: 7019: info : qemuMonitorJSONIOProcessLine:201 : QEMU_MONITOR_RECV_EVENT: mon=0x3fff94004d90 event={"timestamp": {"seconds": 1484568868, "microseconds": 444620}, "event": "MIGRATION", "data": {"status": "failed"}}
  2017-01-16 12:14:28.445+0000: 7019: debug : qemuMonitorJSONIOProcessEvent:147 : mon=0x3fff94004d90 obj=0x100133b5670
  2017-01-16 12:14:28.445+0000: 7019: debug : virJSONValueToString:1762 : object=0x100133a8000
  2017-01-16 12:14:28.445+0000: 7019: debug : virJSONValueToStringOne:1691 : object=0x100133a8000 type=0 gen=0x100133d1160
  2017-01-16 12:14:28.445+0000: 7019: debug : virJSONValueToStringOne:1691 : object=0x100133d2a80 type=2 gen=0x100133d1160
  2017-01-16 12:14:28.445+0000: 7019: debug : virJSONValueToString:1795 : result={"status":"failed"}
  2017-01-16 12:14:28.445+0000: 7019: debug : qemuMonitorEmitEvent:1218 : mon=0x3fff94004d90 event=MIGRATION
  2017-01-16 12:14:28.445+0000: 7019: info : virObjectRef:296 : OBJECT_REF: obj=0x3fff94004d90
  2017-01-16 12:14:28.445+0000: 7019: debug : qemuProcessHandleEvent:629 : vm=0x3fff4ca535c0
  2017-01-16 12:14:28.445+0000: 7019: info : virObjectNew:202 : OBJECT_NEW: obj=0x100133d2870 classname=virDomainQemuMonitorEvent
  2017-01-16 12:14:28.445+0000: 7019: debug : virObjectEventNew:645 : obj=0x100133d2870
  2017-01-16 12:14:28.445+0000: 7019: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x100133d2870
  2017-01-16 12:14:28.445+0000: 7019: info : virObjectUnref:261 : OBJECT_DISPOSE: obj=0x100133d2870
  2017-01-16 12:14:28.445+0000: 7019: debug : virDomainQemuMonitorEventDispose:477 : obj=0x100133d2870
  2017-01-16 12:14:28.445+0000: 7019: debug : virObjectEventDispose:121 : obj=0x100133d2870
  2017-01-16 12:14:28.445+0000: 7019: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x3fff94004d90
  2017-01-16 12:14:28.445+0000: 7019: debug : qemuMonitorJSONIOProcessEvent:172 : handle MIGRATION handler=0x3fff9d7247e0 data=0x100133a8000
  2017-01-16 12:14:28.445+0000: 7019: debug : qemuMonitorEmitMigrationStatus:1488 : mon=0x3fff94004d90, status=failed
  2017-01-16 12:14:28.445+0000: 7019: info : virObjectRef:296 : OBJECT_REF: obj=0x3fff94004d90
  2017-01-16 12:14:28.445+0000: 7019: debug : qemuProcessHandleMigrationStatus:1502 : Migration of domain 0x3fff4ca535c0 virt-tests-vm1-bala changed state to failed
  2017-01-16 12:14:28.445+0000: 7019: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x3fff94004d90
  2017-01-16 12:14:28.445+0000: 7019: debug : qemuMonitorJSONIOProcess:255 : Total used 232 bytes out of 232 available in buffer
  2017-01-16 12:14:28.445+0000: 7019: info : virEventPollUpdateHandle:152 : EVENT_POLL_UPDATE_HANDLE: watch=430 events=13
  2017-01-16 12:14:28.445+0000: 7023: error : qemuMigrationCheckJobStatus:2641 : operation failed: job: unexpectedly failed


  this is an apparmor issue and there is no libvirt bug here.

