I did some testing around https://salsa.debian.org/cloud-team/debian-vagrant-images/-/tree/1TBv2 (not merged into master yet) and I am still reluctant to merge the branch. I am OK with bumping the default disk size to something like 40GB, but not to 1TB.
The problem with a 1TB disk size is this: when the guest performs a lot of write/erase cycles, block deletions are not propagated to the qcow2 backing disk image. So even though the OS in the VM reports only 2GB of block usage, the disk image on the host can grow to 1TB without the user knowing it. I could reproduce this behavior by running `fio` in the guest in a loop. I find this behavior dangerous.

At that point I see three possibilities:

- you add to your pull request a change of the virtualized disk controller from virtio-blk to virtio-scsi, together with the "unmap" option in the default libvirt Vagrantfile, so that block deletions in the guest are propagated to host storage
- you are fine with a disk image size of 40GB, or let's say 80GB
- you use a shared folder for the builds. I just noticed vagrant-libvirt also has support for virtio-fs, which according to its author has native host performance. If there are security concerns, let's discuss them in detail and involve upstream if needed. virtio-fs is mature enough that it is used in production for Kata Containers in Kubernetes and for OpenShift Sandboxed Containers in Red Hat's Kubernetes offering.

What do you think?
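For the first option, the controller and discard settings could look roughly like this in the Vagrantfile. This is a sketch assuming vagrant-libvirt's `disk_bus` and `disk_driver` provider options; the box name is hypothetical and the exact option names should be checked against the plugin's documentation:

```ruby
# Sketch of a Vagrantfile fragment (assumes the vagrant-libvirt provider).
# Switching the disk bus to virtio-scsi and setting discard="unmap" lets
# TRIM/discard requests from the guest shrink the qcow2 image on the host.
Vagrant.configure("2") do |config|
  config.vm.box = "debian/bookworm64"  # hypothetical box name, for illustration

  config.vm.provider :libvirt do |libvirt|
    libvirt.disk_bus = "scsi"                # virtio-scsi instead of virtio-blk
    libvirt.disk_driver :discard => "unmap"  # propagate guest block deletions to host storage
  end
end
```

With something like this in place, running `fstrim` in the guest (or mounting the filesystem with the `discard` option) should let the qcow2 image shrink back after large deletions instead of growing monotonically.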
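For the shared-folder option, a virtio-fs mount might be declared roughly as follows. Again a sketch, assuming vagrant-libvirt's `virtiofs` synced-folder type and its shared-memory-backing requirement; the host and guest paths are hypothetical:

```ruby
# Sketch of a Vagrantfile fragment (assumes vagrant-libvirt with virtiofs support).
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # virtio-fs requires the guest memory to be backed by shared memory
    libvirt.memorybacking :access, :mode => "shared"
  end

  # Share a host build directory into the guest via virtio-fs
  # ("./build" and "/build" are hypothetical paths, for illustration)
  config.vm.synced_folder "./build", "/build", type: "virtiofs"
end
```

The build output would then land on host storage directly, so the qcow2 image never grows from build artifacts in the first place.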