According to the btrfs wiki [1], adding a second disk to an existing
btrfs filesystem is equivalent to creating a filesystem with RAID1
metadata and RAID0 data (mkfs.btrfs -m raid1 -d raid0).

So when one of the disks is wiped, the data will be corrupted but the
metadata should still be intact:
# btrfs fi show
Label: none  uuid: 3e7e0fb5-5fec-4938-bc20-0e5dfdf466ff
        Total devices 2 FS bytes used 128.00KiB
        devid    1 size 512.00MiB used 88.00MiB path /dev/loop0
        *** Some devices missing
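
A minimal sketch of the reproduction steps (requires root and
btrfs-progs; the image files, sizes, and mount point are illustrative,
not taken from the test case itself):

```shell
# Back two loop devices with sparse files (sizes are illustrative).
truncate -s 512M disk0.img disk1.img
dev0=$(losetup --find --show disk0.img)
dev1=$(losetup --find --show disk1.img)

# Create a single-device filesystem, then add a second device.
# New data chunks default to RAID0, metadata to RAID1.
mkfs.btrfs "$dev0"
mount "$dev0" /mnt
btrfs device add "$dev1" /mnt
umount /mnt

# Wipe the second device and attempt a degraded read-write mount.
wipefs -a "$dev1"
losetup -d "$dev1"
mount -o degraded,rw "$dev0" /mnt
```

With the commit mentioned below applied, the final degraded mount is
expected to succeed as long as no data chunk was lost.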

The behaviour exercised by this test case was changed in the kernel by
commit 4330e183c9537df20952d4a9ee142c536fb8ae54: we should now be able
to mount the filesystem read-write in degraded mode, unless a data
chunk was destroyed.

[1]
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices

** Changed in: ubuntu-kernel-tests
     Assignee: (unassigned) => Po-Hsu Lin (cypressyew)

** Changed in: ubuntu-kernel-tests
       Status: New => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1809870

Title:
  2365dd3ca02bbb6d3412404482e1d85752549953 in ubuntu_btrfs_kernel_fixes
  failed on B

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-kernel-tests/+bug/1809870/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
