Hi Thimo,

Xiao Ni, the original author of the RAID10 block discard patchset, has recently posted a new revision of the patchset to the linux-raid mailing list for feedback.
Xiao has fixed the two bugs that caused the regression: the first was an incorrectly calculated start offset for block discard on the second and subsequent disks, and the second was an incorrect stripe size for far layouts. The new patches are:

https://www.spinics.net/lists/raid/msg67208.html
https://www.spinics.net/lists/raid/msg67212.html
https://www.spinics.net/lists/raid/msg67213.html
https://www.spinics.net/lists/raid/msg67209.html
https://www.spinics.net/lists/raid/msg67210.html
https://www.spinics.net/lists/raid/msg67211.html

At some point in the future I do want to try to SRU these patches to the Ubuntu kernel, but only once they are ready. I was wondering if you would be interested in helping to test these new patches, since you have a lot of experience with RAID10. If you have some time and a dedicated spare server, comment 13 in the bug below contains instructions for installing the test kernels I have built:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1896578/comments/13

This is entirely optional, and please don't feel obligated to test. We just want to get more eyes on the patches and some wider testing done, and to give feedback to Xiao, the author, and to Song Liu, the RAID subsystem maintainer, about the performance and safety of these patches.

I have tested the test kernels with the regression reproducer from this bug: the mismatch count is always 0, and fsck -f comes back clean on all disks. If you have some spare time and a spare server, I would really appreciate help testing these kernels.

Thanks!

Matthew

--
You received this bug notification because you are a member of Kernel Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1907262

Title: raid10: discard leads to corrupted file system

Status in linux package in Ubuntu: Fix Released
Status in linux source package in Trusty: Invalid
Status in linux source package in Xenial: Invalid
Status in linux source package in Bionic: Fix Released
Status in linux source package in Focal: Fix Released
Status in linux source package in Groovy: Fix Released

Bug description:

Seems to be closely related to https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1896578.

After updating the Ubuntu 18.04 kernel from 4.15.0-124 to 4.15.0-126, the fstrim command triggered by fstrim.timer causes a severe number of mismatches between the two RAID10 component devices. This bug affects several machines in our company with different hardware configurations (all using ECC RAM). Both NVMe and SATA SSDs are affected.

How to reproduce:

- Create a RAID10 array, LVM volume and filesystem on two SSDs:
  mdadm -C -v -l10 -n2 -N "lv-raid" -R /dev/md0 /dev/nvme0n1p2 /dev/nvme1n1p2
  pvcreate -ff -y /dev/md0
  vgcreate -f -y VolGroup /dev/md0
  lvcreate -n root -L 100G -ay -y VolGroup
  mkfs.ext4 /dev/VolGroup/root
  mount /dev/VolGroup/root /mnt

- Write some data, sync and delete it:
  dd if=/dev/zero of=/mnt/data.raw bs=4K count=1M
  sync
  rm /mnt/data.raw

- Check the RAID device:
  echo check >/sys/block/md0/md/sync_action

- After the check finishes (see /proc/mdstat), check the mismatch_cnt (should be 0):
  cat /sys/block/md0/md/mismatch_cnt

- Trigger the bug:
  fstrim /mnt

- Re-check the RAID device:
  echo check >/sys/block/md0/md/sync_action

- After the check finishes (see /proc/mdstat), check the mismatch_cnt (probably in the range of N*10000):
  cat /sys/block/md0/md/mismatch_cnt
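For repeated test runs, the steps above can be wrapped in a rough script along the following lines. This is only a sketch and not part of the original report: the device names, volume size, VG/LV names and the helper function names are assumptions taken from the steps above, and it should only be run on a disposable test server because it destroys the data on both members.

  #!/bin/bash
  # Rough automation of the reproducer above. DISK1/DISK2 are
  # assumptions; point them at the two RAID10 members under test.
  set -euo pipefail

  DISK1=/dev/nvme0n1p2
  DISK2=/dev/nvme1n1p2

  wait_for_idle() {
      # wait until no resync/check is running on md0
      while [ "$(cat /sys/block/md0/md/sync_action)" != "idle" ]; do
          sleep 5
      done
  }

  report_mismatch() {
      wait_for_idle                            # let any initial resync finish
      echo check > /sys/block/md0/md/sync_action
      wait_for_idle                            # wait for the check to complete
      echo "mismatch_cnt: $(cat /sys/block/md0/md/mismatch_cnt)"
  }

  # create the array, LVM volume and filesystem
  mdadm -C -v -l10 -n2 -N "lv-raid" -R /dev/md0 "$DISK1" "$DISK2"
  pvcreate -ff -y /dev/md0
  vgcreate -f -y VolGroup /dev/md0
  lvcreate -n root -L 100G -ay -y VolGroup
  mkfs.ext4 /dev/VolGroup/root
  mount /dev/VolGroup/root /mnt

  # write some data, sync and delete it
  dd if=/dev/zero of=/mnt/data.raw bs=4K count=1M
  sync
  rm /mnt/data.raw

  report_mismatch   # expected: 0
  fstrim /mnt       # trigger the bug
  report_mismatch   # on affected kernels this is typically in the range of N*10000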
After investigating this issue on several machines it *seems* that the first drive does the trim correctly while the second one goes wild. At least the number and severity of errors found by fsck.ext4 in a USB-stick live session suggest this.

To perform the single-drive evaluation, the RAID10 was started using one drive at a time (see the scripted sketch at the end of this report):

  mdadm --assemble /dev/md127 /dev/nvme0n1p2
  mdadm --run /dev/md127
  fsck.ext4 -n -f /dev/VolGroup/root
  vgchange -a n /dev/VolGroup
  mdadm --stop /dev/md127

  mdadm --assemble /dev/md127 /dev/nvme1n1p2
  mdadm --run /dev/md127
  fsck.ext4 -n -f /dev/VolGroup/root

When running these fscks without -n, the directory structure on the first device appears intact, while on the second device only the lost+found folder is left.

Side note: another machine using HWE kernel 5.4.0-56 (after running -53 before) seems to have a quite similar issue.

Unfortunately, the risk/regression assessment in the aforementioned bug is not complete: the workaround there only mitigates the issue during filesystem creation. This bug, on the other hand, is triggered by a weekly service (fstrim) and causes severe file system corruption.
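For reference, the per-device check described above could be scripted roughly as follows. Again a sketch, not part of the original report: it assumes the same device, VG and LV names as the reproducer, assembles the degraded array from one member at a time, and keeps fsck read-only via -n.

  #!/bin/bash
  # Sketch: read-only fsck of each RAID10 member assembled on its own,
  # to see which copy carries the corruption. Device and VG/LV names
  # are assumptions taken from the reproducer above.
  set -u

  for dev in /dev/nvme0n1p2 /dev/nvme1n1p2; do
      echo "=== checking $dev ==="
      mdadm --assemble /dev/md127 "$dev"
      mdadm --run /dev/md127                 # start the array degraded
      vgchange -a y VolGroup                 # activate the VG if udev has not already
      fsck.ext4 -n -f /dev/VolGroup/root || echo "fsck reported errors on $dev"
      vgchange -a n VolGroup
      mdadm --stop /dev/md127
  done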