Hey @Eduardo,
I ran the following test:
- Created a Jammy LXD container
- apt install ceph-mds
- sudo pro attach
- sudo pro enable usg
- sudo apt-get update --yes && sudo apt-get install --yes usg
- sudo usg generate-tailoring cis_level1_server /root/cis-l1.xml
- sudo usg audit --tailoring-
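For reference, I assume the audit step was run against the tailoring file generated above; a typical invocation (assuming usg's --tailoring-file option and the path from the previous step) would be:
sudo usg audit --tailoring-file /root/cis-l1.xml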
@Christian, thank you for the history.
@Corey, that should be OK for me now. The workaround I can use is to
install the qemu-kvm package on all computes, so that all computes have the
same emulator binary and stay compatible for live migration.
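A rough sketch of that workaround on each compute node (package name and emulator paths assumed, as usual on Ubuntu):
sudo apt-get update && sudo apt-get install --yes qemu-kvm
# check that the emulator binary path now matches across computes
ls -l /usr/bin/kvm /usr/bin/qemu-system-x86_64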
Public bug reported:
After upgrading compute nodes from Mitaka to Newton we are not able to migrate
virtual machines.
We are not able to migrate VMs from recently onboarded compute nodes to old
compute nodes that have been upgraded to Newton.
The reason is:
Xenial-Mitaka nova-compute (2.13) depends on
Can you test using this Pacemaker configuration?
primitive p_percona ocf:heartbeat:galera \
    params wsrep_cluster_address="gcomm://controller-1,controller-2,controller-3" \
    params config="/etc/mysql/my.cnf" \
    params datadir="/var/lib/percona-xtradb-cluster" \
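And I assume p_percona is wrapped in a master/slave resource, as the galera agent expects; a minimal sketch for three nodes would be something like:
ms ms_percona p_percona \
    meta master-max=3 master-node-max=1 clone-max=3 \
    notify=true ordered=true interleave=true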
Yes, you are right, Andreas; you need active database writes when you shut
them down. The resource agent automatically detects which instance has the
last commit, starts it as master, and resumes replication to the other
instances.
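A quick way to see which instance holds the last commit (using the datadir from the configuration above) is to compare the recorded sequence numbers on each node:
# the node with the highest seqno is the one the agent should promote
cat /var/lib/percona-xtradb-cluster/grastate.dat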
Hi Andreas,
Thank you very much for the provided package; it works fine for me.
You can reproduce the bug by deploying the hacluster charm as a subordinate
service of percona-cluster. It will deploy Pacemaker and set up the VIP and
the resource agent that manages the Percona cluster.
It will use the galera reso
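In case it helps, a rough sketch of that deployment (charm names assumed to be percona-cluster and hacluster):
juju deploy hacluster
# relate hacluster as a subordinate of percona-cluster; it brings up Pacemaker/Corosync
juju add-relation percona-cluster hacluster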
Public bug reported:
The Galera resource agent is not able to bring MySQL up and promote it to
master, even if the safe_to_bootstrap flag in grastate.dat is set to 1.
* res_percona_promote_0 on 09fde2-2 'unknown error' (1): call=1373,
status=complete, exitreason='MySQL server failed to start (pid=2432)
(rc=0), please che
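Since the exit reason says MySQL failed to start, the first things I would check on the failing node (log path assumed, typical for Percona packages) are:
sudo tail -n 100 /var/log/mysql/error.log
# and the fail counts Pacemaker has recorded
sudo crm_mon -1 -f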
In my setup I have 14 containers on a control node; after shutting the node
down and starting it again, some of them started but others are still down.
I use juju 2.0.2 on Xenial 16.04.1.
root@C1N4-controller:~# lxc list
[lxc list table output truncated]
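As a workaround for the ones that stay down (the container name below is just a placeholder), they can be started by hand and marked to autostart on boot:
lxc start juju-machine-1-lxd-3
lxc config set juju-machine-1-lxd-3 boot.autostart true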
machine 1 log
** Attachment added: "machine-1.log"
https://bugs.launchpad.net/ubuntu/+source/juju/+bug/1640079/+attachment/4805494/+files/machine-1.log
I had the same issue, but exporting the certificate alone didn't resolve it.
There is a new version of the pyvmomi library, so I resolved the problem by
downgrading the library version:
sudo pip3 install pyvmomi==6.0.0.2016.4
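To confirm which version ends up installed after the downgrade, something like:
pip3 show pyvmomi | grep -i version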
** Summary changed:
- Linux container does not take same cpu affinity as kernet's hosts
+ Linux container does not take same cpu configuration as kernet's hosts
Public bug reported:
When I configured CPU affinity on Ubuntu 16.04 with kernel version
"4.4.0-36-generic" on a host containing 3 CPUs:
in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=0"
update-grub
reboot
The output of "cat /proc/self/status":
ubuntu@ubuntu:~$ cat /proc/self
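For comparing the affinity seen by the host and by the container, the fields I would look at in that output (standard procfs fields) are Cpus_allowed and Cpus_allowed_list, e.g.:
grep -E 'Cpus_allowed(_list)?:' /proc/self/status
# or the scheduler's view for the current shell
taskset -cp $$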