This bug was fixed in the package ceph - 18.2.4-0ubuntu1~cloud1
---
ceph (18.2.4-0ubuntu1~cloud1) jammy-bobcat; urgency=medium
.
* d/control: Add python3-{packaging,ceph-common} to (Build-)Depends
as these are undocumented/detected runtime dependencies in
ceph-volume (LP
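The changelog entry above corresponds to a debian/control change along these lines (a sketch only; the stanza layout and the `ceph-volume` package name are assumptions here, but the two added packages are the ones named in the changelog):

```
Package: ceph-volume
Depends: python3-packaging,
         python3-ceph-common,
         ${misc:Depends},
         ${python3:Depends}
```

The same two packages go into the source stanza's Build-Depends, per the "(Build-)Depends" note in the changelog.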
I can confirm that ceph-volume works correctly now:
```
root@client:~# ceph-volume -h
usage: ceph-volume [-h] [--cluster CLUSTER] [--log-level
                   {debug,info,warning,error,critical}] [--log-path LOG_PATH]

ceph-volume: Deploy Ceph OSDs using different device technologies like lvm or
physical disks.
```
This has been fixed upstream in the main branch:
https://github.com/ceph/ceph/commit/729fd8e25ff2bfbcf99790d6cd08489d1c4e2ede
prepping update for bobcat now.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/
** Changed in: cloud-archive/bobcat
Status: Confirmed => In Progress
** Changed in: cloud-archive/bobcat
Assignee: (unassigned) => James Page (james-page)
** Changed in: cloud-archive
Status: New => Fix Released
--
Note: This issue is more impactful than I initially realised. I thought it
was mainly a problem on initial deployment, but if you upgrade your
deployment to 18.2.4 and then reboot a node, the OSDs won't start,
because the ceph-volume tool is needed to activate the OSDs.
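As a quick pre-reboot sanity check for this failure mode, one could verify that the Python modules ceph-volume imports at startup can actually be found (a hypothetical sketch; the module list here is an assumption based on this bug, not taken from the ceph-volume source):

```python
import importlib.util

# Hypothetical pre-reboot check: confirm the Python modules that
# ceph-volume needs at startup are importable, so OSD activation
# will not fail after the node comes back up.
REQUIRED = ["packaging"]  # the ceph bindings would also be checked on a real node

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_modules(REQUIRED)
    if missing:
        print("missing before reboot:", ", ".join(missing))
    else:
        print("all required modules present")
```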
OK, well we've learnt now that only upgrading (rather than doing a fresh
deployment), and only running the ceph-mon tests, is not enough. Indeed,
let's work on a more concrete/full test plan. I have some strong
thoughts on that, so I will discuss with you and Utkarsh, etc.
Luciano: In the meantime, can you pri
This issue affects Reef in the Bobcat repo: a working 18.2.0 install
broke when upgraded to the 18.2.4 packages. The workaround is to
manually install python3-packaging.
--
Will this fix be made available in Bobcat? That is the only place to
consume Reef AFAIK.
--
https://bugs.launchpad.net/bugs/2064717
Title:
ceph-volume needs "packaging" and "ceph" modules
I think the process I used to test the SRU masked the issue: I had a
model up and running and then used `add-apt-repository` and `apt update`
in the Ceph units, which can cause additional packages to be installed.
The reason for doing this is that at the time
I suspect the reason this was not picked up in the SRU test is possibly
that code from the Squid charm was used in the test instead of the Reef
charm.
The Squid charm merged a "tactical fix" to manually install
python3-packaging in this change:
https://review.opendev.org/c/openstack/charm
I discovered this issue myself (for Reef, 18.2.4) today when running the
zaza integration test for charm-glance-simplestreams-sync against jammy-
bobcat.
According to the SRU, the charm-ceph-osd tests were run, and the package
version was verified. The question is: why did those tests not catch
this?
SRU link FTR
https://bugs.launchpad.net/cloud-archive/+bug/2075358
--
The recent 18.2.4 SRU introduced a regression here: python3-packaging is
now required for Reef as well:
https://github.com/ceph/ceph/commit/956305eb5caf323cfadb772a9a1f910a90aa7740
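For context, the `packaging` module typically enters code like ceph-volume through version comparisons along these lines (an illustrative sketch, not the actual ceph-volume code; the `is_upgrade` helper is hypothetical):

```python
from packaging import version

# Illustrative use of the third-party `packaging` module: comparing
# release strings as proper version objects rather than as plain
# strings, which is why the package must be installed at runtime.
def is_upgrade(installed: str, candidate: str) -> bool:
    """Return True if `candidate` is a newer release than `installed`."""
    return version.parse(candidate) > version.parse(installed)

print(is_upgrade("18.2.0", "18.2.4"))  # the upgrade path hit by this bug
```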
```
ubuntu@juju-69234d-0:~$ ceph-volume -h
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 33, in <module>
```
This bug was fixed in the package ceph -
19.2.0~git20240301.4c76c50-0ubuntu6.1
---
ceph (19.2.0~git20240301.4c76c50-0ubuntu6.1) noble; urgency=medium
[ Luciano Lo Giudice ]
* d/control: Add python3-{packaging,ceph-common} to (Build-)Depends
as these are undocumented/detected ru
```
# ceph-volume --help
usage: ceph-volume [-h] [--cluster CLUSTER] [--log-level
                   {debug,info,warning,error,critical}] [--log-path LOG_PATH]

ceph-volume: Deploy Ceph OSDs using different device technologies like lvm or
physical disks.

Log Path: /var/log/ceph
Ceph Conf: /etc/ceph/ceph.conf

Available
```
Hello. I'm testing the proposed 6.1 package on Noble.
ceph-volume, ceph-osd, ceph-mgr, ceph-mon, ceph-radosgw work well. Thank you.
--
This bug was fixed in the package ceph -
19.2.0~git20240301.4c76c50-0ubuntu7
---
ceph (19.2.0~git20240301.4c76c50-0ubuntu7) oracular; urgency=medium
[ Luciano Lo Giudice ]
* d/control: Add python3-{packaging,ceph-common} to (Build-)Depends
as these are undocumented/detected run
** Changed in: ceph (Ubuntu Oracular)
Status: Triaged => Fix Committed
--
Note that ceph in oracular FTBFS due to a broken API in the snappy
package which is pending a transition across Debian and Ubuntu.
** Summary changed:
- ceph-volume needs "packaging" module
+ ceph-volume needs "packaging" and "ceph" modules
** Description changed:
[ Impact ]
ceph-volume too