Hi Alex,
You are right; I installed the stable/queens version of nova-lxd via devstack using ZFS (I wasn't sure about master branch compatibility).
Meanwhile, I tried to create a new storage pool for LXD with different names for the LXD pool and the ZFS pool, but the problem was still there. However, when I created a "None" storage pool with the following line:
sudo lxc storage create None_pool zfs zfs.pool_name=None
the problem was solved, and I moved on to another error -> "Host 'ubuntu' is not mapped to any cell". (I will build OpenStack again with the nova cells service enabled, and I will write about the results. As I wrote, I used the local.conf from the github repo; nova cells isn't enabled there.)
Best,
Martin
On 19.09.2018 18:17, Alex Kavanagh wrote:
Hi Martin
Okay, some progress which is great!
But, to help you further, a few more details:
1. Which Linux OS are you using?
I'm using Ubuntu 16.04.5 LTS.
2. Is the LXD snap installed? If so, which version is it? "lxc --version" will give the answer.
When I realised that the plugin expects LXD installed by apt (I came to this conclusion via
https://github.com/openstack/nova-lxd/search?q=%2Fvar%2Flib%2F&unscoped_q=%2Fvar%2Flib%2F),
I installed LXD/LXC 3.0.1 with "sudo apt install -t xenial-backports lxd lxc".
So is it possible to use an LXD snap installation as well? When I used it, the devstack installation script didn't recognize the LXD installation, and so "stack.sh" installed LXD version 2...
3. Have you configured a storage pool for LXD? (e.g. have you done "sudo lxd init" and created a default storage pool). What kind of storage pool is it? (e.g. dir, btrfs, zfs, etc.)
I am using a zfs storage pool, which I configured with "sudo lxd init". However, changing its name didn't work properly (maybe the LXD snap version fixes that), so I created another zfs storage pool via the LXC CLI; that wasn't successful either, until its name was None.
4. It looks like you're using the master version of nova-lxd (i.e. via devstack and the plugin)? I think it's also going to use ZFS? If so, there's a bug in nova-lxd (that I will fix in the not too distant future): unless the ZFS pool name is the same string as the name of the LXD pool that uses it, nova-lxd can't find the storage pool. So, if you run the following commands:
lxc storage list
zpool list
default -> created by "lxd init" (as in the previous cases)
stack -> created by "sudo lxc storage create stack zfs zfs.pool_name=stack_pool"
None_pool -> also created via the LXC CLI, just to see whether it would work or not.
sudo lxc storage list
+-----------+-------------+--------+----------------------------------+---------+
|   NAME    | DESCRIPTION | DRIVER |              SOURCE              | USED BY |
+-----------+-------------+--------+----------------------------------+---------+
| None_pool |             | zfs    | /var/lib/lxd/disks/None_pool.img | 0       |
+-----------+-------------+--------+----------------------------------+---------+
| default   |             | zfs    | /var/lib/lxd/disks/default.img   | 2       |
+-----------+-------------+--------+----------------------------------+---------+
| stack     |             | zfs    | /var/lib/lxd/disks/stack.img     | 0       |
+-----------+-------------+--------+----------------------------------+---------+
zpool list
NAME         SIZE   ALLOC   FREE   EXPANDSZ   FRAG   CAP   DEDUP   HEALTH   ALTROOT
None        14.9G    514K  14.9G          -     0%    0%   1.00x   ONLINE   -
default     2.98G    307M  2.68G          -     9%   10%   1.00x   ONLINE   -
stack_pool  14.9G    292K  14.9G          -     0%    0%   1.00x   ONLINE   -
That should show the current configuration.
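Given the naming bug Alex describes, one workaround until the fix lands would be to make the backing ZFS pool name identical to the LXD storage pool name. A minimal sketch (the pool name "stack" is just an example; this assumes a host with lxd and zfs already installed):

```shell
# Create an LXD zfs storage pool whose backing ZFS pool has the SAME
# name, so nova-lxd's lookup (which assumes the two names match) works.
sudo lxc storage create stack zfs zfs.pool_name=stack

# Verify that the names line up.
sudo lxc storage show stack    # zfs.pool_name should read "stack"
zpool list stack               # the ZFS pool exists under the same name
```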
Hope that this helps.
Cheers
Alex.
On Wed, Sep 19, 2018 at 2:48 PM, Martin Bobák <[email protected]> wrote:
Hi Alex,
first off, thank you for your kind reply. I followed your advice. However, I still have a problem, which looks like an incorrectly set-up storage pool for lxd. I configured /etc/nova-compute.conf as you suggested, but it didn't help.
In my case those lines look like:
[DEFAULT]
compute_driver = nova_lxd.nova.virt.lxd.LXDDriver
[lxd]
allow_live_migration = True
pool = lxd
But I get the following error (relevant parts from syslog -> it looks like the lxd variables from /etc/nova-compute.conf aren't delivered to the driver, or aren't recognized by it; I tried different names for compute_driver, e.g. lxd.LXDDriver and nova-lxd.nova.virt.lxd.(driver).LXDDriver, but it didn't help):
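One way to see whether the [lxd] section is being read at all (a sketch; "devstack@n-cpu" is devstack's usual systemd unit name for nova-compute, but that is an assumption and may differ on your install) is to grep the oslo.config option dump that nova-compute writes at startup:

```shell
# nova-compute logs every effective config option at DEBUG level on
# startup (log_opt_values), so the journal shows the value of lxd.pool.
sudo journalctl -u devstack@n-cpu -b | grep 'lxd\.pool'
# "lxd.pool = None" here means the [lxd] section was never picked up.
```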
Sep 19 09:15:08 localhost nova-conductor[1956]: DEBUG oslo_service.service [None req-5cda2246-8087-4f49-b9b3-463d29fd7bf8 None None] compute_driver = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=1956) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:08 localhost nova-consoleauth[1984]: DEBUG oslo_service.service [None req-2a9399e1-996d-42f4-899b-62db8d8e5afd None None] compute_driver = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=1984) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:10 localhost nova-scheduler[2000]: DEBUG oslo_service.service [None req-3d23848b-270b-4718-91dd-3b0417d27d21 None None] compute_driver = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=2000) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:11 localhost [email protected][2030]: DEBUG nova.api.openstack.placement.wsgi [-] compute_driver = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=2294) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:12 localhost [email protected][1999]: DEBUG nova.api.openstack.wsgi_app [None req-1e20e359-2bcc-44e6-9c75-7422c5837c03 None None] compute_driver = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=2181) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:14 localhost [email protected][2030]: DEBUG nova.api.openstack.placement.wsgi [-] compute_driver = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=2292) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:15 localhost [email protected][1999]: DEBUG nova.api.openstack.wsgi_app [None req-6d5745e7-3784-4b8e-8210-e7bf74fd9655 None None] compute_driver = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=2182) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:20 localhost nova-compute[1968]: DEBUG oslo_service.service [None req-1bc67100-ffbe-4d22-be1f-42133eb60611 None None] compute_driver = lxd.LXDDriver {{(pid=1968) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:20 localhost nova-compute[1968]: DEBUG oslo_service.service [None req-1bc67100-ffbe-4d22-be1f-42133eb60611 None None] lxd.allow_live_migration = False {{(pid=1968) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2898}}
Sep 19 09:15:20 localhost nova-compute[1968]: DEBUG oslo_service.service [None req-1bc67100-ffbe-4d22-be1f-42133eb60611 None None] lxd.pool = None {{(pid=1968) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2898}}
Sep 19 09:15:20 localhost nova-compute[1968]: DEBUG oslo_service.service [None req-1bc67100-ffbe-4d22-be1f-42133eb60611 None None] lxd.root_dir = /var/lib/lxd/ {{(pid=1968) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2898}}
Sep 19 09:15:20 localhost nova-compute[1968]: DEBUG oslo_service.service [None req-1bc67100-ffbe-4d22-be1f-42133eb60611 None None] lxd.timeout = -1 {{(pid=1968) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2898}}
.........
.........
Sep 19 09:15:21 localhost nova-compute[1968]: DEBUG oslo_concurrency.processutils [None req-87a543da-84db-4fe1-a743-75ca6525ac2a None None] u'sudo nova-rootwrap /etc/nova/rootwrap.conf zpool list -o size -H None' failed. Not Retrying. {{(pid=1968) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:457}}
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager [None req-87a543da-84db-4fe1-a743-75ca6525ac2a None None] Error updating resources for node ubuntu.: ProcessExecutionError: Unexpected error while running command.
Sep 19 09:15:21 localhost nova-compute[1968]: Command: sudo nova-rootwrap /etc/nova/rootwrap.conf zpool list -o size -H None
Sep 19 09:15:21 localhost nova-compute[1968]: Exit code: 1
Sep 19 09:15:21 localhost nova-compute[1968]: Stdout: u''
Sep 19 09:15:21 localhost nova-compute[1968]: Stderr: u"cannot open 'None': no such pool\n"
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager Traceback (most recent call last):
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", line 7344, in update_available_resource_for_node
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 673, in update_available_resource
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova-lxd/nova/virt/lxd/driver.py", line 1031, in get_available_resource
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     local_disk_info = _get_zpool_info(pool_name)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova-lxd/nova/virt/lxd/driver.py", line 209, in _get_zpool_info
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     total = _get_zpool_attribute('size')
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova-lxd/nova/virt/lxd/driver.py", line 201, in _get_zpool_attribute
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     run_as_root=True)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/utils.py", line 230, in execute
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     return RootwrapProcessHelper().execute(*cmd, **kwargs)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/utils.py", line 113, in execute
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     return processutils.execute(*cmd, **kwargs)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 424, in execute
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     cmd=sanitized_cmd)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager ProcessExecutionError: Unexpected error while running command.
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager Command: sudo nova-rootwrap /etc/nova/rootwrap.conf zpool list -o size -H None
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager Exit code: 1
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager Stdout: u''
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager Stderr: u"cannot open 'None': no such pool\n"
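The failing command itself explains the symptom: the [lxd] pool option never reached the driver, so it fell back to its unset default, and Python's None was interpolated into the zpool command as the literal string "None". A tiny shell sketch of that effect (the LXD_POOL variable is purely illustrative):

```shell
# When the option is unset, the driver effectively builds the zpool
# command with the literal string "None" as the pool name -- which is
# why creating a ZFS pool actually named "None" made the error go away.
unset LXD_POOL                       # simulate the option never being set
pool="${LXD_POOL:-None}"             # unset option -> the literal "None"
cmd="zpool list -o size -H $pool"
echo "$cmd"                          # -> zpool list -o size -H None
```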
The local.conf used by devstack looks like:
[[local|localrc]]
############################################################
# Customize the following HOST_IP based on your installation
############################################################
HOST_IP=127.0.0.1
ADMIN_PASSWORD=devstack
MYSQL_PASSWORD=devstack
RABBIT_PASSWORD=devstack
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=devstack
# run the services you want to use
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,n-cpu,n-api,n-crt,n-obj,n-cond,n-sch,n-novnc,n-cauth,placement-api,placement-client
ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-meta,q-l3
ENABLED_SERVICES+=,cinder,c-sch,c-api,c-vol
ENABLED_SERVICES+=,horizon
# disabled services
disable_service n-net
# enable nova-lxd
enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd stable/queens
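If the [lxd] options are getting lost, one thing that might be worth trying (an untested sketch, relying on devstack's post-config meta-section, with $NOVA_CONF assumed to point at nova.conf) is to have devstack write them into nova.conf itself from local.conf:

```
[[post-config|$NOVA_CONF]]
[lxd]
allow_live_migration = True
pool = default
```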
Best regards,
Martin.
On 17.09.2018 15:29, Alex Kavanagh wrote:
Hi Martin
On Sun, Sep 16, 2018 at 8:46 PM, Martin Bobák <[email protected]> wrote:
Hi all,
what is the recommended way to install the nova-lxd plugin on a fresh xenial host running a pure OpenStack devstack (Queens) installation? I have tried to install the nova-lxd plugin via pip install, and by enabling it during the OpenStack devstack installation, but each attempt led to the same result: the plugin is either not recognized or the installation doesn't finish successfully. I went through the nova-lxd homepage as well as its github repo, but I wasn't able to solve the installation problem (e.g. I found out that installing the newest version of pylxd helps with the installation of the plugin; however, the plugin still isn't recognized, so additional configuration is needed...).
Do you have any thoughts about it?
I'm one of the maintainers for nova-lxd, so hopefully can get you
up and running.
In order for nova-lxd to be configured in nova, the
/etc/nova-compute.conf needs to contain the lines:
[DEFAULT]
compute_driver = nova_lxd.nova.virt.lxd.LXDDriver
This little 'fact' is hidden away in the "nova-compute-lxd"
debian package, unfortunately.
You'll also need to configure an [lxd] section in nova.conf to
control the storage pool in LXD for containers to use when
launching instances.
[lxd]
allow_live_migration = True
pool = {{ storage_pool }}
The storage pool will need to be set up separately in lxd.
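Setting that pool up can be as simple as one LXC CLI call. A sketch (the pool name "default" and the size are placeholder values, and this assumes an LXD with the zfs driver available):

```shell
# Create a loop-backed zfs storage pool for LXD; "default" and "20GB"
# are example values -- adjust to taste.
sudo lxc storage create default zfs size=20GB
sudo lxc storage list
# Then reference it from nova.conf:  [lxd] pool = default
```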
--
However, an 'easy' way to test OpenStack with nova-lxd is to use
charms. We have a number of bundles that work with Juju. For
example, we have a deployable bundle for xenial and queens at
https://github.com/openstack-charmers/openstack-bundles/tree/master/development/openstack-lxd-xenial-queens
which also has some (hopefully) useful instructions on how to get
it going.
Note the instructions say you have to use MaaS, but you should be
able to adapt them to the hardware you are using.
As an alternative, the openstack-ansible project also supports
nova-lxd, but I don't have any experience with that.
Do come back if you have any further questions; do let me know
how you get on.
Best regards
Alex.
Best,
Martin.
--
Martin Bobák, PhD.
Researcher
Institute of Informatics
Slovak Academy of Sciences
Dubravska cesta 9, SK-845 07 Bratislava, Slovakia
Room: 311, Phone: +421 (0)2 5941-1278
E-mail: [email protected]
URL: http://www.ui.sav.sk/w/odd/pdip/
LinkedIn: https://www.linkedin.com/in/martin-bobak/
_______________________________________________
lxc-users mailing list
[email protected]
http://lists.linuxcontainers.org/listinfo/lxc-users
--
Alex Kavanagh - Software Engineer
Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd