Package: lvm2
Version: 2.03.11-2.1
Severity: normal

Dear Maintainer,

We have two similar machines working as a database master & replica. They both
use the same HDD drives and Areca ARC-1261 hardware RAID controllers, with the
same LVM volume groups set up.

I've noticed that during boot, and again just before rebooting, both of these
machines produce an error for the SAME logical block, but only when LVM
snapshots exist:

```
# salt -L master.domain.com,replica.domain.com cmd.run "zgrep 'Buffer I/O error' /var/log/kern*"
replica.domain.com:
    /var/log/kern.log:May  3 10:44:59 replica kernel: [3114072.061835] Buffer I/O error on dev dm-5, logical block 861061104, async page read
    /var/log/kern.log:May  3 10:54:16 replica kernel: [    2.544139] Buffer I/O error on dev dm-5, logical block 861061104, async page read
    /var/log/kern.log:May  3 10:54:16 replica kernel: [    6.653298] Buffer I/O error on dev dm-5, logical block 861061104, async page read
    /var/log/kern.log.1:Apr  6 11:33:24 replica kernel: [784199.317321] Buffer I/O error on dev dm-5, logical block 861061104, async page read
    /var/log/kern.log.1:Apr  6 11:34:31 replica kernel: [784266.011713] Buffer I/O error on dev dm-5, logical block 861061104, async page read
    /var/log/kern.log.3.gz:Mar 28 09:39:26 replica kernel: [1551875.380127] Buffer I/O error on dev dm-5, logical block 861061104, async page read
    /var/log/kern.log.3.gz:Mar 28 09:43:27 replica kernel: [    2.565393] Buffer I/O error on dev dm-5, logical block 861061104, async page read
    /var/log/kern.log.3.gz:Mar 28 09:43:27 replica kernel: [    6.635266] Buffer I/O error on dev dm-5, logical block 861061104, async page read
master.domain.com:
    /var/log/kern.log:May  3 10:43:59 master kernel: [3070183.772497] Buffer I/O error on dev dm-6, logical block 861061104, async page read
    /var/log/kern.log.4.gz:Mar 28 10:16:18 master kernel: [905062.938666] Buffer I/O error on dev dm-6, logical block 861061104, async page read
    /var/log/kern.log.4.gz:Mar 28 21:56:28 master kernel: [    1.570582] Buffer I/O error on dev dm-6, logical block 861061104, async page read
    /var/log/kern.log.4.gz:Mar 28 21:56:28 master kernel: [    5.298972] Buffer I/O error on dev dm-6, logical block 861061104, async page read
```
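
For reference, the dm-N names above can be mapped back to LV names with
standard lvm2/dmsetup tooling, e.g.:

```
# Map dm-5 / dm-6 back to LVM volume names:
dmsetup ls                                 # mapped devices with (major, minor)
ls -l /dev/mapper                          # LV names symlinked to dm-N nodes
lvs -o lv_name,vg_name,origin,lv_dm_path   # snapshot/origin relation per LV
```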

After removing the snapshot and rebooting the machine, the error no longer
appears.
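
For completeness, the removal step was roughly the following (the LV and VG
names here are placeholders, not our real ones):

```
# LV/VG names are placeholders:
lvremove -y vg0/dbsnap            # remove the snapshot
reboot
dmesg | grep 'Buffer I/O error'   # no matches after the reboot
```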

The oldest error was found in February (as far back as I could see in
backups), probably during some big database upgrade, when snapshots are
usually used:

```
kern.log.1:Feb 17 08:04:18 replica kernel: [    1.470778] Buffer I/O error on dev dm-3, logical block 861061104, async page read
```

So I am not sure whether this started after some recent kernel upgrade, for
example.
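
One way to correlate the two, in case it helps (a sketch using the standard
Debian apt/dpkg log locations):

```
# When were kernel packages upgraded, relative to the oldest error?
zgrep -h 'linux-image' /var/log/apt/history.log* 2>/dev/null
zgrep -h 'upgrade linux-image' /var/log/dpkg.log* 2>/dev/null
```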

We did not discover any "real" issues (data corruption, instability, etc.),
only this log message, so I am not sure how critical it is.
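
In case it helps triage, the reported block can also be read back directly to
check whether it is genuinely unreadable or simply at/past the device end
(dm-5 and the 4096-byte block size below are assumptions; check with blockdev
first):

```
# "logical block" in the kernel message is in units of the device's
# buffer-cache block size, so check that and the device size first:
blockdev --getbsz /dev/dm-5   # block size, typically 4096
blockdev --getsz /dev/dm-5    # device size in 512-byte sectors
# Attempt a direct read of the reported block (assumes 4096-byte blocks):
dd if=/dev/dm-5 of=/dev/null bs=4096 skip=861061104 count=1 iflag=direct
```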


-- System Information:
Debian Release: 11.3
  APT prefers stable-updates
  APT policy: (500, 'stable-updates'), (500, 'stable-security'), (500, 'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 5.10.0-14-amd64 (SMP w/4 CPU threads)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8) (ignored: LC_ALL set to en_US.UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages lvm2 depends on:
ii  dmeventd                  2:1.02.175-2.1
ii  dmsetup                   2:1.02.175-2.1
ii  init-system-helpers       1.60
ii  libaio1                   0.3.112-9
ii  libblkid1                 2.36.1-8+deb11u1
ii  libc6                     2.31-13+deb11u3
ii  libdevmapper-event1.02.1  2:1.02.175-2.1
ii  libedit2                  3.1-20191231-2+b1
ii  libselinux1               3.1-3
ii  libsystemd0               247.3-7
ii  libudev1                  247.3-7
ii  lsb-base                  11.1.0

Versions of packages lvm2 recommends:
ii  thin-provisioning-tools  0.9.0-1

lvm2 suggests no packages.

-- no debconf information
