= bionic verification =
ubuntu@ip-172-30-0-117:~$ sudo mdadm --create /dev/md0 --run --metadata=default --level=0 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
mdadm: array /dev/md0 started.
ubuntu@ip-172-30-0-117:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 xvde[3] xvdd[2] xvdc[1] xvdb[0]
      29323264 blocks super 1.2 512k chunks
      
unused devices: <none>
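
The corruption issue only applies when the member devices differ in size (a
multi-zone array). One way to confirm the test devices actually qualify would
be, for example:

  lsblk -b -o NAME,SIZE /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde   # differing SIZE values => multi-zone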

## kernel & mdadm upgrade

ubuntu@ip-172-30-0-117:~$ dmesg | grep raid
[    4.086107] md/raid0:md0: cannot assemble multi-zone RAID0 with default_layout setting
[    4.092253] md/raid0: please set raid0.default_layout to 1 or 2
[    4.452725] raid6: avx2x4   gen() 24200 MB/s
[    4.500724] raid6: avx2x4   xor() 15657 MB/s
[    4.548725] raid6: avx2x2   gen() 21010 MB/s
[    4.596724] raid6: avx2x2   xor() 13248 MB/s
[    4.644724] raid6: avx2x1   gen() 18005 MB/s
[    4.692726] raid6: avx2x1   xor() 12606 MB/s
[    4.740726] raid6: sse2x4   gen() 13375 MB/s
[    4.788725] raid6: sse2x4   xor()  8309 MB/s
[    4.836730] raid6: sse2x2   gen() 11042 MB/s
[    4.884725] raid6: sse2x2   xor()  7264 MB/s
[    4.932730] raid6: sse2x1   gen()  9288 MB/s
[    4.980723] raid6: sse2x1   xor()  6578 MB/s
[    4.983827] raid6: using algorithm avx2x4 gen() 24200 MB/s
[    4.987535] raid6: .... xor() 15657 MB/s, rmw enabled
[    4.991018] raid6: using avx2x2 recovery algorithm
ubuntu@ip-172-30-0-117:~$ cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive xvdd[2] xvde[3] xvdb[0] xvdc[1]
      29323264 blocks super 1.2
       
unused devices: <none>
ubuntu@ip-172-30-0-117:~$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0
ubuntu@ip-172-30-0-117:~$ sudo mdadm --assemble /dev/md0 -U layout-alternate /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
mdadm: /dev/md0 has been started with 4 drives.
ubuntu@ip-172-30-0-117:~$ cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 xvdb[0] xvde[3] xvdd[2] xvdc[1]
      29323264 blocks super 1.2 512k chunks
      
unused devices: <none>
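
With the layout set via -U layout-alternate it should now be recorded in the
array metadata; a quick way to confirm, for example:

  sudo mdadm --detail /dev/md0 | grep Layout   # expected to report the alternate layout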

## reboot
ubuntu@ip-172-30-0-117:~$ dmesg | grep raid
[    3.793154] raid6: avx2x4   gen() 24292 MB/s
[    3.841155] raid6: avx2x4   xor() 15646 MB/s
[    3.889157] raid6: avx2x2   gen() 20570 MB/s
[    3.937155] raid6: avx2x2   xor() 13351 MB/s
[    3.985155] raid6: avx2x1   gen() 18190 MB/s
[    4.033154] raid6: avx2x1   xor() 12469 MB/s
[    4.081153] raid6: sse2x4   gen() 13399 MB/s
[    4.129153] raid6: sse2x4   xor()  8358 MB/s
[    4.177151] raid6: sse2x2   gen() 10984 MB/s
[    4.225157] raid6: sse2x2   xor()  7224 MB/s
[    4.273158] raid6: sse2x1   gen()  9335 MB/s
[    4.321157] raid6: sse2x1   xor()  6578 MB/s
[    4.323567] raid6: using algorithm avx2x4 gen() 24292 MB/s
[    4.326583] raid6: .... xor() 15646 MB/s, rmw enabled
[    4.329350] raid6: using avx2x2 recovery algorithm
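
Unlike the pre-upgrade boot, the "cannot assemble multi-zone RAID0" /
"please set raid0.default_layout" messages are gone; a narrower check would
be, for example:

  dmesg | grep -i default_layout   # expect no output once the layout is recorded
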
ubuntu@ip-172-30-0-117:~$ cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 xvde[3] xvdb[0] xvdc[1] xvdd[2]
      29323264 blocks super 1.2 512k chunks
      
unused devices: <none>
ubuntu@ip-172-30-0-117:~$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0

ubuntu@ip-172-30-0-117:~$ sudo mdadm --create /dev/md0 --run --metadata=default --level=0 --raid-devices=3 /dev/xvdb /dev/xvdd /dev/xvde
mdadm: /dev/xvdb appears to be part of a raid array:
       level=raid0 devices=4 ctime=Fri Dec  6 22:20:04 2019
mdadm: /dev/xvdd appears to be part of a raid array:
       level=raid0 devices=4 ctime=Fri Dec  6 22:20:04 2019
mdadm: /dev/xvde appears to be part of a raid array:
       level=raid0 devices=4 ctime=Fri Dec  6 22:20:04 2019
mdadm: array /dev/md0 started.
ubuntu@ip-172-30-0-117:~$ sudo mdadm --detail /dev/md0 | grep Layout
            Layout : original


** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-done verification-done-bionic

https://bugs.launchpad.net/bugs/1850540

Title:
  multi-zone raid0 corruption

Status in Release Notes for Ubuntu:
  New
Status in linux package in Ubuntu:
  Confirmed
Status in mdadm package in Ubuntu:
  Fix Released
Status in linux source package in Precise:
  New
Status in mdadm source package in Precise:
  New
Status in linux source package in Trusty:
  Confirmed
Status in mdadm source package in Trusty:
  Confirmed
Status in linux source package in Xenial:
  Confirmed
Status in mdadm source package in Xenial:
  Confirmed
Status in linux source package in Bionic:
  Confirmed
Status in mdadm source package in Bionic:
  Fix Committed
Status in linux source package in Disco:
  Confirmed
Status in mdadm source package in Disco:
  Fix Committed
Status in linux source package in Eoan:
  Confirmed
Status in mdadm source package in Eoan:
  Fix Committed
Status in linux source package in Focal:
  Confirmed
Status in mdadm source package in Focal:
  Fix Released
Status in mdadm package in Debian:
  Fix Released

Bug description:
  Bug 1849682 tracks the temporarily revert of the fix for this issue,
  while this bug tracks the re-application of that fix once we have a
  full solution.

  [Impact]
  (cut & paste from https://marc.info/?l=linux-raid&m=157360088014027&w=2)
  An unintentional RAID0 layout change was introduced in the v3.14 kernel. This
  effectively means there are 2 different layouts Linux will use to write data
  to RAID0 arrays in the wild - the “pre-3.14” way and the “3.14 and later”
  way. Mixing these layouts by writing to an array while booted on these
  different kernel versions can lead to corruption.

  Note that this only impacts RAID0 arrays that include devices of
  different sizes. If your devices are all the same size, both layouts
  are equivalent, and your array is not at risk of corruption due to
  this issue.
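
  A rough way to check whether an existing array falls into this category
  (array and device names here are only illustrative):

    sudo mdadm --detail /dev/md0                # lists the member devices
    lsblk -b -o NAME,SIZE /dev/sda1 /dev/sdb1   # differing sizes => multi-zone, potentially affected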

  Unfortunately, the kernel cannot detect which layout was used for
  writes to pre-existing arrays, and therefore requires input from the
  administrator. This input can be provided via the kernel command line
  with the raid0.default_layout=<N> parameter, or by setting the
  default_layout module parameter when loading the raid0 module. With a
  new enough version of mdadm (>= 4.2, or equivalent distro backports),
  you can set the layout version when assembling a stopped array. For
  example:

  mdadm --stop /dev/md0
  mdadm --assemble -U layout-alternate /dev/md0 /dev/sda1 /dev/sda2
  See the mdadm manpage for more details. Once set in this manner, the layout
  will be recorded in the array and will not need to be explicitly specified in
  the future.
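
  For the kernel command line and module parameter routes mentioned above, a
  sketch (assuming a GRUB-based system and an array written only under 3.14 or
  later kernels, i.e. layout 2, "alternate"; the modprobe.d file name is just
  an example):

    # via the kernel command line, e.g. in /etc/default/grub, then update-grub:
    #   GRUB_CMDLINE_LINUX_DEFAULT="... raid0.default_layout=2"
    # or, when raid0 is loaded as a module, via a module option:
    echo 'options raid0 default_layout=2' | sudo tee /etc/modprobe.d/raid0-layout.conf
    sudo update-initramfs -u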

  (The mdadm part of this SRU is for the above support ^)

  [Test Case]
  = mdadm =
  Confirm that a multi-zone raid0 created w/ older mdadm is able to be started
  on a fixed kernel by setting a layout.
  1) Ex: w/ old kernel/mdadm:
    mdadm --create /dev/md0 --run --metadata=default \
          --level=0 --raid-devices=2 /dev/vdb1 /dev/vdc1
  2) Reboot onto fixed kernel & update mdadm
  3) sudo mdadm --stop /dev/md0 &&
     sudo mdadm --assemble -U layout-alternate \
       /dev/md0 /dev/vdb1 /dev/vdc1
  4) Confirm that the array autostarts on reboot (see the sketch after this list)
  5) Confirm that w/ new kernel & new mdadm, a user can create and start an
     array in a backwards-compatible fashion (i.e. w/o an explicit layout).
  6) Verify that 'mdadm --detail /dev/md0' displays the layout
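
  For step 4, a manually assembled array is not always picked up at boot; a
  common way to make it known to the initramfs (standard Ubuntu paths, shown
  only as a sketch) is:

    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u
    sudo reboot
    cat /proc/mdstat   # md0 should come back as active raid0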

  = linux =
  Similar to above, but using kernel command line options.

  [Regression Risk]
  The kernel side of things will refuse to start pre-existing multi-zone
  arrays until a layout is specified. That's intentional.

  Although I've done due-diligence to check for backwards compatibility
  issues, the mdadm side may still present some.

