I would like to add that I have also seen this when using Ceph as a
backend on a Pike deployment.

I have fixed several VMs by performing the following process (note that
this can potentially wreck the VM, so be careful):

Shut down the VM
Create an RBD snapshot (for backup purposes)
Export the RBD disk as an image
Set up the image as a loop device
Run fsck on the loop device
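The steps above can be sketched roughly as follows. The pool name
"vms" and image name "vm-disk" are placeholders for illustration;
substitute your own, and double-check the partition number before
running fsck:

```shell
# Placeholder names: pool "vms", image "vm-disk" -- adjust to your setup.
rbd snap create vms/vm-disk@pre-fsck-backup   # backup snapshot first
rbd export vms/vm-disk /tmp/vm-disk.raw       # export the disk to a raw file

# -P asks the kernel to scan for partitions, creating /dev/loopNpM nodes
LOOPDEV=$(losetup --find --show -P /tmp/vm-disk.raw)

# fsck the root partition (often p1); -y answers yes to repair prompts
fsck -y "${LOOPDEV}p1"
```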

Oddly, a lot of the time this wasn't enough, so I then also had to:
Mount the loop device on a (Ceph) host
Unmount the loop device

Re-import the image back into Ceph, overwriting the existing image (or
moving the existing one aside, whatever you prefer)
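A rough sketch of the mount/unmount pass and the re-import, again with
the same placeholder pool/image names and mount point:

```shell
# Mount and immediately unmount -- this replays the filesystem journal,
# which sometimes fixes what fsck alone did not.
mkdir -p /mnt/recovery
mount "${LOOPDEV}p1" /mnt/recovery
umount /mnt/recovery
losetup -d "$LOOPDEV"

# rbd import will not overwrite an existing image, so move the broken
# one aside first, then import the repaired file under the original name.
rbd mv vms/vm-disk vms/vm-disk.broken
rbd import /tmp/vm-disk.raw vms/vm-disk
```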

This then allowed the VM to boot as normal.


I had also tried booting a recovery image in the VM and running fsck
against the RBD device directly, but to no avail. Hopefully this may
aid in the investigation or help someone out.

Thanks

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1773449

Title:
  VMs do not survive host reboot

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1773449/+subscriptions
