I tried to recreate the issue.
Note: since the issue was reported on Power + Xenial, this test was done on
ppc64el Xenial as of today.

To do so with as much debugging as possible, I created a normal Xenial KVM
Guest via
$ uvt-kvm create --cpu 4 --password=ubuntu paelzer-testlvm-xenial release=xenial
Then I added a few more disks to be used as PVs
$ sudo qemu-img create -f qcow2 test-lvm-disk1.qcow2 8G
And added those to the Guest.
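A minimal sketch of how one of them could be attached (the image path and the
target name are assumptions, repeated per disk):
$ virsh attach-disk paelzer-testlvm-xenial $(pwd)/test-lvm-disk1.qcow2 vdb \
    --driver qemu --subdriver qcow2 --persistent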

The guest then initially looks like:
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    253:0    0    8G  0 disk 
vdb    253:16   0    8G  0 disk 
vdc    253:32   0    8G  0 disk 
vdd    253:48   0  366K  0 disk 
vde    253:64   0    8G  0 disk 
|-vde1 253:65   0    8G  0 part /
`-vde2 253:66   0    8M  0 part 

Then the usual flow is
1. fdisk each disk: create one partition and set the LVM partition type (8e)
$ sudo fdisk /dev/vda    (likewise for vdb and vdc; a non-interactive sketch
  is shown after step 3)
2. Create full-size PVs on the new partitions of all three disks
$ sudo pvcreate /dev/vd[abc]1
3. vgcreate a single VG out of all of the PVs
$ sudo vgcreate vg /dev/vda1 /dev/vdb1 /dev/vdc1
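
For reference, step 1 can be scripted instead of running fdisk interactively;
a sketch assuming empty disks and the sfdisk script syntax available on Xenial:
$ for d in /dev/vda /dev/vdb /dev/vdc; do
    echo 'type=8e' | sudo sfdisk ${d}
  done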

At this point it looks like this:
$ sudo vgdisplay vg
  --- Volume group ---
  VG Name               vg
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               23.99 GiB
  PE Size               4.00 MiB
  Total PE              6141
  Alloc PE / Size       0 / 0   
  Free  PE / Size       6141 / 23.99 GiB
  VG UUID               1QMFbn-5DAW-T9IE-Fdfd-9RZK-t8gl-5nte5r

OK, now create a normal LV as well as thin LVs out of that.
First of all, thin provisioning is not mainstream; the dependency is only a
Suggests, so install the tools
$ sudo apt-get install thin-provisioning-tools
Then create the normal LV
$ sudo lvcreate -L 5G --name lv_normal vg
And finally a thin LV
$ sudo lvcreate --size 10G --virtualsize 5G --thinpool mythinpool \
    --name lv_thin vg
Let's go harder and overprovision the thin pool
$ sudo lvcreate --virtualsize 5G --thinpool mythinpool --name lv_thin2 vg
$ sudo lvcreate --virtualsize 5G --thinpool mythinpool --name lv_thin3 vg
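
To keep an eye on the overcommitted pool, data and metadata usage can be
checked, for example with:
$ sudo lvs -a -o +data_percent,metadata_percent vg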

With that in place my LVs look like:
$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg/lv_normal
  LV Name                lv_normal
  VG Name                vg
  LV UUID                aCtNC0-gbx1-uHoB-3dC8-dfhl-NBxd-Axm879
  LV Write Access        read/write
  LV Creation host, time paelzer-testlvm-xenial, 2017-01-25 12:54:02 +0000
  LV Status              available
  # open                 0
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
   
  --- Logical volume ---
  LV Name                mythinpool
  VG Name                vg
  LV UUID                UCKy8A-ovc6-Qh9n-wrEM-c2s2-myvD-hpLhb0
  LV Write Access        read/write
  LV Creation host, time paelzer-testlvm-xenial, 2017-01-25 13:04:03 +0000
  LV Pool metadata       mythinpool_tmeta
  LV Pool data           mythinpool_tdata
  LV Status              available
  # open                 4
  LV Size                10.00 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.72%
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3
   
  --- Logical volume ---
  LV Path                /dev/vg/lv_thin
  LV Name                lv_thin
  VG Name                vg
  LV UUID                ay7Clp-78K7-8UoZ-ueZZ-WYTy-IQlF-oQkjdT
  LV Write Access        read/write
  LV Creation host, time paelzer-testlvm-xenial, 2017-01-25 13:04:03 +0000
  LV Pool name           mythinpool
  LV Status              available
  # open                 0
  LV Size                5.00 GiB
  Mapped size            0.00%
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:5
   
  --- Logical volume ---
  LV Path                /dev/vg/lv_thin2
  LV Name                lv_thin2
  VG Name                vg
  LV UUID                Zj5BfS-Cbm1-9TgI-gwk3-vdBG-wwH9-cSmC0B
  LV Write Access        read/write
  LV Creation host, time paelzer-testlvm-xenial, 2017-01-25 13:05:11 +0000
  LV Pool name           mythinpool
  LV Status              available
  # open                 0
  LV Size                5.00 GiB
  Mapped size            0.00%
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:6
   
  --- Logical volume ---
  LV Path                /dev/vg/lv_thin3
  LV Name                lv_thin3
  VG Name                vg
  LV UUID                bSPGZH-Thwx-M8qx-QDS2-gIUz-50us-hGCRGu
  LV Write Access        read/write
  LV Creation host, time paelzer-testlvm-xenial, 2017-01-25 13:05:16 +0000
  LV Pool name           mythinpool
  LV Status              available
  # open                 0
  LV Size                5.00 GiB
  Mapped size            0.00%
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:7

$ ll /dev/mapper/
total 0
drwxr-xr-x  2 root root     220 Jan 25 13:05 ./
drwxr-xr-x 16 root root    4240 Jan 25 13:05 ../
crw-------  1 root root 10, 236 Jan 25 12:20 control
lrwxrwxrwx  1 root root       7 Jan 25 12:54 vg-lv_normal -> ../dm-0
lrwxrwxrwx  1 root root       7 Jan 25 13:04 vg-lv_thin -> ../dm-5
lrwxrwxrwx  1 root root       7 Jan 25 13:05 vg-lv_thin2 -> ../dm-6
lrwxrwxrwx  1 root root       7 Jan 25 13:05 vg-lv_thin3 -> ../dm-7
lrwxrwxrwx  1 root root       7 Jan 25 13:04 vg-mythinpool -> ../dm-4
lrwxrwxrwx  1 root root       7 Jan 25 13:04 vg-mythinpool-tpool -> ../dm-3
lrwxrwxrwx  1 root root       7 Jan 25 13:04 vg-mythinpool_tdata -> ../dm-2
lrwxrwxrwx  1 root root       7 Jan 25 13:04 vg-mythinpool_tmeta -> ../dm-1

Now let's put it to use
for dev in lv_normal lv_thin lv_thin2 lv_thin3;
do
  sudo mkfs.ext4 /dev/mapper/vg-${dev}
  sudo mkdir -p /mnt/${dev}
  sudo mount /dev/mapper/vg-${dev} /mnt/${dev}
  sudo touch /mnt/${dev}/foobar
done

Things are properly mounted
$ mount | grep lv
/dev/mapper/vg-lv_normal on /mnt/lv_normal type ext4 (rw,relatime,data=ordered)
/dev/mapper/vg-lv_thin on /mnt/lv_thin type ext4 
(rw,relatime,stripe=16,data=ordered)
/dev/mapper/vg-lv_thin2 on /mnt/lv_thin2 type ext4 
(rw,relatime,stripe=16,data=ordered)
/dev/mapper/vg-lv_thin3 on /mnt/lv_thin3 type ext4 
(rw,relatime,stripe=16,data=ordered)

All files were written normally
$ ls -laF /mnt/*/foobar
-rw-r--r-- 1 root root 0 Jan 25 13:10 /mnt/lv_normal/foobar
-rw-r--r-- 1 root root 0 Jan 25 13:10 /mnt/lv_thin/foobar
-rw-r--r-- 1 root root 0 Jan 25 13:10 /mnt/lv_thin2/foobar
-rw-r--r-- 1 root root 0 Jan 25 13:10 /mnt/lv_thin3/foobar
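
The thin pool should already show a bit of data allocated just from the mkfs
runs; a quick way to check (Data% is expected to be slightly above zero now):
$ sudo lvs vg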

lsblk shows the thin LVs as it should
$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                       253:0    0    8G  0 disk 
`-vda1                    253:1    0    8G  0 part 
  |-vg-lv_normal          252:0    0    5G  0 lvm  /mnt/lv_normal
  |-vg-mythinpool_tmeta   252:1    0   12M  0 lvm  
  | `-vg-mythinpool-tpool 252:3    0   10G  0 lvm  
  |   |-vg-mythinpool     252:4    0   10G  0 lvm  
  |   |-vg-lv_thin        252:5    0    5G  0 lvm  /mnt/lv_thin
  |   |-vg-lv_thin2       252:6    0    5G  0 lvm  /mnt/lv_thin2
  |   `-vg-lv_thin3       252:7    0    5G  0 lvm  /mnt/lv_thin3
  `-vg-mythinpool_tdata   252:2    0   10G  0 lvm  
    `-vg-mythinpool-tpool 252:3    0   10G  0 lvm  
      |-vg-mythinpool     252:4    0   10G  0 lvm  
      |-vg-lv_thin        252:5    0    5G  0 lvm  /mnt/lv_thin
      |-vg-lv_thin2       252:6    0    5G  0 lvm  /mnt/lv_thin2
      `-vg-lv_thin3       252:7    0    5G  0 lvm  /mnt/lv_thin3
vdb                       253:16   0    8G  0 disk 
`-vdb1                    253:17   0    8G  0 part 
  `-vg-mythinpool_tdata   252:2    0   10G  0 lvm  
    `-vg-mythinpool-tpool 252:3    0   10G  0 lvm  
      |-vg-mythinpool     252:4    0   10G  0 lvm  
      |-vg-lv_thin        252:5    0    5G  0 lvm  /mnt/lv_thin
      |-vg-lv_thin2       252:6    0    5G  0 lvm  /mnt/lv_thin2
      `-vg-lv_thin3       252:7    0    5G  0 lvm  /mnt/lv_thin3
vdc                       253:32   0    8G  0 disk 
`-vdc1                    253:33   0    8G  0 part 
vdd                       253:48   0  366K  0 disk 
vde                       253:64   0    8G  0 disk 
|-vde1                    253:65   0    8G  0 part /
`-vde2                    253:66   0    8M  0 part

Now rebooting to test the reported issue.
But after the reboot all looks sane: lsblk, lvdisplay and /dev/mapper are all
as expected.
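
For reference, the post-reboot checks were along these lines (dmsetup being an
extra cross-check):
$ lsblk
$ sudo lvdisplay
$ ls -l /dev/mapper/
$ sudo dmsetup ls --tree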

Of course things are not mounted, but I didn't create any fstab entries, so
$ for dev in lv_normal lv_thin lv_thin2 lv_thin3; do
    sudo mount /dev/mapper/vg-${dev} /mnt/${dev}
  done

All mount just fine and the file is still there.
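
If persistent mounts were wanted, fstab entries could be appended along these
lines (just a sketch, not something done in this test):
$ for dev in lv_normal lv_thin lv_thin2 lv_thin3; do
    echo "/dev/mapper/vg-${dev} /mnt/${dev} ext4 defaults 0 2" | sudo tee -a /etc/fstab
  done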

You reported "I was able to replicate the issue. I created VG and LV and 
rebooted the system. Found that devicemapper entries for corresponding devices 
are missing after reboot. However, LVM commands like 'vgdisplay' and 
'lvdisplay' show proper info, but 'lsblk' doesn't show the device's LVM related 
info after reboot."
Could you either
1. report your exact steps on a fresh system to cause this
or
2. modify the steps I reported until the issue shows up


** Changed in: docker (Ubuntu)
       Status: New => Incomplete
