After sleeping on it for a night, I no longer consider that a good approach.
The current lifecycle of virt-aa-helper just doesn't lend itself to creating a
local virConnectPtr context, yet OTOH without one we can't translate the pool
the way the code that spawns the guest would.
Before I implement some useless code:
I'm not sure whether upstream will like that virt-aa-helper now needs to talk
to the socket, but they can suggest otherwise if they don't.
I now have something that calls the lookup correctly.
It then fails to find the pool:
libvirt: Storage Driver error : Storage pool not found: no storage pool with
matching name 'internal'
virConnectOpenReadOnly and co would initialize the connection, but they are
environment specific.
Also, it seems the header is meant for users of libvirt, not for the local
tools.
I'll report back when I have found the proper initialization (e.g. the empty
connection via virGetConnect works, but it obviously has no storage driver
behind it, so lookups fail).
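To illustrate the failure mode, here is a minimal sketch of the situation: a connection object with no storage driver registered behind it cannot resolve any pool lookup, which is roughly what an "empty" connection gives you. All names here (Connection, PoolNotFound) are illustrative stand-ins, not libvirt's actual API.

```python
# Hypothetical sketch, NOT libvirt code: models why a bare connection
# fails storage pool lookups while an initialized one succeeds.

class PoolNotFound(Exception):
    pass

class Connection:
    def __init__(self, storage_driver=None):
        # An "empty" connection has no storage driver registered,
        # comparable to a context created without driver initialization.
        self.storage_driver = storage_driver

    def storage_pool_lookup_by_name(self, name):
        if self.storage_driver is None or name not in self.storage_driver:
            raise PoolNotFound("Storage pool not found: no storage pool "
                               f"with matching name '{name}'")
        return self.storage_driver[name]

# A fully initialized connection knows about the pools:
full = Connection(storage_driver={"internal": {"target": "/dev/zvol/internal"}})
print(full.storage_pool_lookup_by_name("internal")["target"])

# An empty connection fails the very same lookup:
empty = Connection()
try:
    empty.storage_pool_lookup_by_name("internal")
except PoolNotFound as e:
    print(e)
```

The point of the sketch: the lookup code itself is fine; what is missing is whatever registers the storage driver on the connection before virt-aa-helper uses it.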
When the guest gets parsed, all that virt-aa-helper initially has is the disk
element.
In our case this only contains:
(gdb) p *(disk->src->srcpool)
$10 = {pool = 0x100236b90 "internal", volume = 0x100236440 "foo", voltype = 0,
  pooltype = 0, actualtype = 0, mode = 0}
Nothing in virt-aa-helper yet cares about that srcpool information.
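To make concrete what virt-aa-helper would have to do with that srcpool struct, here is a sketch under stated assumptions: the pool-to-target-path mapping is a stand-in for what a real storage pool lookup would return (the `pool_targets` dict and `srcpool_to_path` helper are hypothetical, not part of virt-aa-helper).

```python
import os.path

# Hypothetical sketch: translate a <disk> srcpool reference
# (pool name + volume name) into the path the guest will open.
# pool_targets is a stand-in for a real pool lookup result.
pool_targets = {"internal": "/dev/zvol/internal"}  # assumed pool target path

def srcpool_to_path(pool, volume):
    """Join the pool's target path with the volume name."""
    return os.path.join(pool_targets[pool], volume)

print(srcpool_to_path("internal", "foo"))  # -> /dev/zvol/internal/foo
```

Note that the resulting path is still a symlink under /dev/zvol, which apparmor cannot follow, so a real implementation would additionally need to resolve it to the underlying zd device.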
Initially this can be run locally just with virt-aa-helper, as any
virt-aa-helper dev would.
This is mostly like c#3, but with the service from a local build and with a
libtool wrapper to gdb the virt-aa-helper.
Since even this is not the most straightforward thing if you have never done
it, here is a short log of how:
Back on this; currently trying to build up a case where this can be tested
from git (had some obstacles):
- In the dev system (from a local dir) not all apparmor rules apply
- In a container not all of the zfs actions are possible
- So we need a KVM driving a 2nd-level KVM for all of this.
0. get a mu
** Changed in: libvirt (Ubuntu Yakkety)
Status: Confirmed => Won't Fix
** Changed in: libvirt (Ubuntu)
Status: Confirmed => In Progress
** Changed in: libvirt (Ubuntu Zesty)
Assignee: ChristianEhrhardt (paelzer) => (unassigned)
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1677398
Title:
  Apparmor prevents using ZFS storage pools
I won't find time in the short term, so I'm dropping the server-next tag.
But I made sure it is tracked, to remind me and make me feel bad :-/
--
** Tags removed: server-next
--
** Tags added: virt-aa-helper
--
No worries, I have a good feeling of how busy you are from the bug
notifications I get. Knowing that you will look into it is already a
great deal, so thanks again.
On 2017-04-05 03:33 AM, ChristianEhrhardt wrote:
> Damn it seems I can't find the hours yet - I really beg your pardon Simon as
> I
Damn it seems I can't find the hours yet - I really beg your pardon Simon as I
like you as an active community member! But I also have PTO next week and I'm
not sure I get to it before that.
I'll assign to myself to not forget about it.
** Changed in: libvirt (Ubuntu Zesty)
Assignee: (unassigned) => ChristianEhrhardt (paelzer)
FYI - I haven't forgotten, just flooded with other load
--
Hello Christian,
On 2017-03-30 06:18 AM, ChristianEhrhardt wrote:
> So the following might serve as a temporary workaround adding "/dev/zd[0-9]*
> rw" to /etc/apparmor.d/abstractions/libvirt-qemu.
What I did was something similar, but less convenient. My goal was to keep
the per-VM isolation, so I added
These are mostly notes to remember later:
In such a setup the pool definition has the base pool path:

$ virsh pool-dumpxml internal
<pool type='zfs'>
  <name>internal</name>
  <uuid>5e83970c-dc95-41af-bd10-9d9001dc9cba</uuid>
  <capacity unit='bytes'>3170893824</capacity>
  <allocation unit='bytes'>2147573760</allocation>
  <available unit='bytes'>1023320064</available>
  <source>
    <name>internal</name>
  </source>
  <target>
    <path>/dev/zvol/internal</path>
  </target>
</pool>

The volume then holds the respective device path.
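A quick sketch of pulling the pool target path out of `virsh pool-dumpxml` output with Python's stdlib XML parser; the XML literal below is a reconstruction of the values above, assuming the standard libvirt storage pool schema.

```python
import xml.etree.ElementTree as ET

# Extract the pool target path from pool-dumpxml-style output.
# The XML is a reconstruction of the dump above (standard libvirt
# storage pool schema assumed).
POOL_XML = """
<pool type='zfs'>
  <name>internal</name>
  <uuid>5e83970c-dc95-41af-bd10-9d9001dc9cba</uuid>
  <capacity unit='bytes'>3170893824</capacity>
  <target>
    <path>/dev/zvol/internal</path>
  </target>
</pool>
"""

root = ET.fromstring(POOL_XML)
target_path = root.findtext("target/path")
print(target_path)  # -> /dev/zvol/internal
```

That target path is the base under which the volume's symlink lives; the actual device the guest opens sits behind that symlink.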
Apparmor can't, by design, follow symlinks
(https://bugs.launchpad.net/apparmor/+bug/1485055).
So, test-inserting into /etc/apparmor.d/abstractions/libvirt-qemu:
- /dev/zvol/internal/foo rw, => still fails
- /dev/zd0 rw, => works (guest sees the disk as expected)
So does any generic rule.
So the following might serve as a temporary workaround: adding "/dev/zd[0-9]*
rw" to /etc/apparmor.d/abstractions/libvirt-qemu.
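The symlink behaviour can be simulated outside of apparmor: mediation happens on the resolved path, so a rule written against the /dev/zvol/... symlink never matches the real zd device node. The sketch below stands in a temp directory for /dev (all paths are illustrative, not the real device tree), using shell-style pattern matching as a rough analogue of an apparmor path rule.

```python
import fnmatch
import os
import tempfile

# Simulate apparmor's view: it checks the RESOLVED path, so a rule on
# the symlink path never matches. A temp dir stands in for /dev.
with tempfile.TemporaryDirectory() as d:
    dev = os.path.realpath(d)                  # resolve the tempdir itself
    real = os.path.join(dev, "zd0")            # stands in for /dev/zd0
    open(real, "w").close()
    os.makedirs(os.path.join(dev, "zvol/internal"))
    link = os.path.join(dev, "zvol/internal/foo")  # the pool-visible path
    os.symlink(real, link)

    resolved = os.path.realpath(link)          # what actually gets mediated
    assert resolved == real

    # A rule on the symlink path does not match the resolved path:
    assert not fnmatch.fnmatch(resolved, os.path.join(dev, "zvol/internal/foo"))
    # A rule on the device pattern does:
    assert fnmatch.fnmatch(resolved, os.path.join(dev, "zd[0-9]*"))
    print("rules must target the resolved zd device")
```

This matches the observations above: the literal /dev/zvol/internal/foo rule fails, while /dev/zd0 (or the /dev/zd[0-9]* pattern) works.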
** Changed in: libvirt (Ubuntu Yakkety)
Status: New => Confirmed
** Changed in: libvirt (Ubuntu Xenial)
Status: New => Confirmed
--
Extending your already good testcase description:
# create a simple guest
$ sudo apt-get install uvtool-libvirt zfsutils-linux
$ uvt-simplestreams-libvirt --verbose sync --source \
    http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=xenial
$ ssh-keygen
$ uvt-kvm create --password
Hi Simon,
thanks for your report - so we did not get far enough with bug 1641618, which
only solved things for direct zvols, but not for disks from a pool.
I'm afraid there might be no part generating those rules yet in the aa-helper,
but I'll look into it and report back here once I know more details.