** Changed in: watcher
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/2067815
Title:
dbmanage sync fails to run
** Changed in: watcher
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/2086710
Title:
watcher's use of apscheduler is incompatible with python 3.12 and
eventlet
** Changed in: watcher
Status: Fix Released => In Progress
--
https://bugs.launchpad.net/bugs/2067815
Title:
dbmanage sync fails to run
this was fixed in 2024.1
2023.2 does not officially support SQLAlchemy 2.0, which is the cause of
this issue.
https://review.opendev.org/c/openstack/watcher/+/918500 should indeed
fix this if backported
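as a hedged illustration of the kind of incompatibility involved (this is a
generic SQLAlchemy 1.x idiom, not watcher's actual code), the 1.x
implicit-execution style is gone in SQLAlchemy 2.0 and has to be rewritten
with an explicit connection and the text() construct:

    # hypothetical illustration, not watcher's actual code: a SQLAlchemy 1.x
    # idiom that no longer exists in 2.0, and the 2.0-compatible form.
    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite:///:memory:")

    # 1.x style, removed in SQLAlchemy 2.0 (engine.execute no longer exists):
    # engine.execute("SELECT 1")

    # 2.0 style: explicit connection and a text() construct
    with engine.connect() as conn:
        print(conn.execute(text("SELECT 1")).scalar())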
** Changed in: watcher
Status: New => Fix Released
** Also affects: watcher/2023.2
with the merging of
https://github.com/eventlet/eventlet/commit/fcc5ce42d979757396e222d2823d27d57985caec
one of the 2 failures is no longer present and the SQLAlchemy issue is
now much cleaner.
https://paste.opendev.org/show/b0AwRcsfb1EkDuRyy6jk/
the blocking read in SQLAlchemy is very much still present.
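for context, a minimal sketch of why a blocking read can still stall an
eventlet-based service (this is generic eventlet behaviour, not watcher's
exact call path): monkey patching only greens pure-Python I/O, so a read
that blocks below that layer holds up every other greenthread on the hub.

    # minimal sketch, generic eventlet behaviour (not watcher's actual code)
    import eventlet
    eventlet.monkey_patch()        # greens pure-Python socket/time calls

    def worker(name):
        for _ in range(3):
            print(name, "running")
            eventlet.sleep(0.1)    # cooperative yield back to the hub
            # a read that blocks inside a C database driver does not yield,
            # so while it waits no other greenthread gets to run

    pool = eventlet.GreenPool()
    for n in ("a", "b"):
        pool.spawn(worker, n)
    pool.waitall()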
From a nova process point of view, this is a minor feature, not a bug, so we
need to track it as a specless blueprint.
in general I'm supportive of the enhancement but we need to track this properly
upstream.
** Changed in: nova
Status: In Progress => Invalid
--
setting this to medium severity.
there is an existing race in how the cache is updated.
the workaround is to periodically restart the scheduler to clear the cache.
this looks like it affects all stable releases of OpenStack.
however it's unlikely, but not impossible, that a fix for this can be backported
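as a hedged sketch of the kind of read-modify-write race being described
(the names here are illustrative, not nova's actual scheduler code), two
concurrent updates to a shared aggregate cache can interleave and lose one
of the writes unless the whole update happens under a lock:

    # illustrative only, not nova's actual code
    import threading

    aggregates_cache = {}          # aggregate id -> set of member hosts
    cache_lock = threading.Lock()

    def update_aggregate(agg_id, hosts):
        # unsafe: read, modify, write with no lock; a concurrent update can
        # land between the read and the write and be silently overwritten.
        #   current = aggregates_cache.get(agg_id, set())
        #   aggregates_cache[agg_id] = current | set(hosts)
        # safer: hold the lock across the whole read-modify-write.
        with cache_lock:
            current = aggregates_cache.get(agg_id, set())
            aggregates_cache[agg_id] = current | set(hosts)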
** Changed in: nova
Status: Confirmed => Opinion
--
https://bugs.launchpad.net/bugs/1542491
Title:
Scheduler update_aggregates race causes incorrect aggregate
information
** Changed in: nova/train
Assignee: Billy Olsen (billy-olsen) => sean mooney (sean-k-mooney)
--
https://bugs.launchpad.net/bugs/1888395
Title:
live migration of a vm using
by the way i also want to see this backported to train upstream, so any
review etc. that ye can provide to make that happen more quickly is
great :)
--
https://bugs.launchpad.net/bugs/188839
i thought that Canonical did not reuse upstream project bugs for
tracking the change in the ubuntu cloud archive?
the convention previously was to file a different bug for the cloud
archive that referenced the upstream bug, no?
using the same bug for upstream and downstream kind of makes it hard to
for what it's worth, this has been partially backported downstream in
Red Hat OSP.
we backported only the self-healing and not the online data migration,
which had a bug in it.
so https://review.opendev.org/#/c/591607/ can be safely backported
but https://review.opendev.org/#/c/614167/20 has a bug
hum, i was hoping to indicate that this affects focal in some way, but i'm
not sure how to do that.
this issue happens with the ubuntu 20.04 version of qemu 4.2.
it does not seem to happen with the centos 8 build of the same qemu, so i don't
know if there is a delta in packages or if it's just a case that i
i'm not sure that https://review.opendev.org/#/c/707474/ actually works,
or at the very least that it is a complete fix.
as noted in https://bugs.launchpad.net/nova/+bug/1882521/comments/1
we still see the same threading error.
i think we likely need to patch oslo_concurrency too or look into
another fix.
just adding some more info.
i also deploy openstack rocky on an ubuntu 18.04 host
Linux cloud-5 4.15.0-43-generic #46-Ubuntu SMP Thu Dec 6 14:45:28 UTC
2018 x86_64 x86_64 x86_64 GNU/Linux
with an ubuntu 18.04 L1 guest running
Linux numa-migration-1 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:1
ah that is good to hear.
i assume this will be fixed then before the newton release.
what is the time frame of libvirt 1.3.3 and qemu 2.6?
--
https://bugs.launchpad.net/bugs/832507
Title:
this has been around a really long time now.
is the approach suggested here a suitable solution?
https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1460197
really it is the installation tool that needs to change, but perhaps we can do
something from the nova side also.
perhaps just document how to configure
xianghui, if you look in the nova-compute log it will contain a full copy
of the libvirt xml that it tried to boot the vm with.
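for anyone wanting to pull that xml out of the log, here is a small hedged
helper sketch (the log path and the assumption that the domain xml appears
inline are mine, not something stated in this bug):

    # hedged sketch: the path and log layout are assumptions, adjust as needed
    import re

    LOG_PATH = "/var/log/nova/nova-compute.log"   # typical packaged location

    with open(LOG_PATH, errors="replace") as f:
        log = f.read()

    # print every libvirt <domain ...> ... </domain> block found in the log
    for i, xml in enumerate(re.findall(r"<domain .*?</domain>", log, re.S), 1):
        print(f"--- domain xml #{i} ---")
        print(xml)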
--
https://bugs.launchpad.net/bugs/1513367
Title:
qemu-syste