Public bug reported:

I experience this with Ubuntu 21.04; I have not tested older releases.
I have a btrfs RAID1 array: I simulated a failed disk and then replaced
it using the `btrfs replace` feature of btrfs-progs, after mounting the
remaining disk with the degraded,noatime mount options.

For clarity: sda1 is the new disk being written to by btrfs replace;
sdb1 is the existing disk holding the data, which is mounted degraded.
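
For reference, a rough sketch of the steps I'm describing (the mount
point /mnt and the devid 1 are illustrative assumptions, not exact
values from my setup):

  # mount the surviving RAID1 member writable in degraded mode
  mount -o degraded,noatime /dev/sdb1 /mnt

  # replace the missing device by its devid (1 here is an assumption;
  # `btrfs filesystem show /mnt` reports which devid is missing)
  btrfs replace start 1 /dev/sda1 /mnt

  # check replacement progress
  btrfs replace status /mnt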

Unfortunately, as soon as the btrfs replace begins, btrfs starts slowly
allocating new, empty single-profile chunks on the disk that was mounted
degraded (sdb1), seemingly one more than the number of raid1 chunks that
were already allocated. As the replace continues, this count keeps
increasing:

Data,single: Size:107.00GiB, Used:5.25MiB (0.00%)
   /dev/sdb1     107.00GiB

... several minutes later as btrfs replace continues to run ...

Data,single: Size:177.00GiB, Used:5.25MiB (0.00%)
   /dev/sdb1     177.00GiB
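
The figures above come from `btrfs filesystem usage`; to watch the
growth while the replace runs (assuming the filesystem is mounted at
/mnt), something like this works:

  # re-print per-profile allocation every 60 seconds
  watch -n 60 btrfs filesystem usage /mnt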

I have also posted this to the linux-btrfs kernel mailing list, with
more details and command output:
https://lore.kernel.org/linux-btrfs/cafmvigfq+xotjo_578lvsvycd3sblcv_ap6a+b0u+ybapu2...@mail.gmail.com/T/#t

It has been speculated that this could be an Ubuntu-specific issue.
What is certain is that I have gone through this process three times
with this array on Ubuntu 21.04 and it happened every time, so I can
reproduce it 100% reliably. I believe this is a bug in the Ubuntu
kernel, though I suppose it could be related to btrfs-progs; either
way, the btrfs replace is what triggers the issue.
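
As an aside, once the replace completes, the stray empty single chunks
can usually be reclaimed with a filtered balance. This is the generic
btrfs cleanup technique, not a fix for the underlying bug:

  # remove data/metadata chunks that are completely empty
  btrfs balance start -dusage=0 -musage=0 /mnt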

I'm happy to provide any additional details as best I can. Thanks,
-Jonah

** Affects: linux (Ubuntu)
     Importance: Undecided
         Status: Incomplete


** Tags: btrfs

--
https://bugs.launchpad.net/bugs/1925284

Title:
  Btrfs: Disk replacement causes massive allocation of empty single
  chunks while degraded

Status in linux package in Ubuntu:
  Incomplete
