severity 1032706 important
thanks

Hi,
"Sperl, Mario" <ma...@sperl.eu.org> wrote on 11/03/2023 at 09:30:06+0100: > Package: lxc > Version: 1:5.0.2-1 > Severity: grave > Justification: causes data loss > > Dear Maintainer, > > *** Reporter, please consider answering these questions, where appropriate *** > > * What led up to the situation? I tried to generate a snapshot with > lxc-snapshot for a test container that is more than 1G of size. Snapshot > generation does not show any problems but the restore does only restore 1G so > the container is not able to start after restore. > * What exactly did you do (or not do) that was effective (or > ineffective)? One can create a new loop file with correct size and copy > the snapshot contents in here but this renders this command absolute useless > when using loop backend > * What was the outcome of this action? Container cannot start > * What outcome did you expect instead? A container that does start > normally after restoring. Lowering the severity for multiple reasons: 1. The data is not lost, only the changes after the latest snapshot are, as the snapshots are not deleted by a call to restore. The snapshot is actually a full-fledged lxc directory under snaps, that you can reuse almost directly. I admit not losing the changes after the latest snapshot would be better, but I feel that this sole point is not enough to keep the bug as 'grave'; 2. A snapshot should not be restored inplace, as suggested by the command's manpage. The -N option is only useful for restoration and allows one to create a new container based on the snapshot. It's actually this feature that doesn't work when the rootfs is on a loop device ; 3. This bug is tied specifically to a backend little to no user use, other filesystems seem to produce the proper result. If it comes to that, I'd rather remove the loop feature than having LXC out of bookworm. I'll still try to have a proper upstream solution offer before the release. -- PEB