Package: mdadm
Version: 2.6.7.1-1
Severity: normal
I have two similar servers running Lenny on an Intel S3200 mainboard with an
integrated SATA controller set to AHCI mode. I used MD to create two RAID sets
(root and swap) that share the same physical disks, so when the automatic check
starts, md1 is delayed until md0 has finished. The problem is that while md0 is
checking (and md1 is delayed) I get a lot of errors, one every 120 seconds,
like the one pasted here:

============================================================
[36007.726476] md: data-check of RAID array md0
[36007.726476] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[36007.726476] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
[36007.726476] md: using 128k window, over a total of 289065472 blocks.
[36007.726476] md: delaying data-check of md1 until md0 has finished (they share one or more physical units)
[36167.934923] INFO: task md1_resync:5528 blocked for more than 120 seconds.
[36167.934979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[36167.935061] md1_resync    D ffff81014adb7e70     0  5528      2
[36167.935116]  ffff81014adb7db0 0000000000000046 d28e17083e92bfe9 bff4963fc8cab76e
[36167.935205]  ffff81013c0a87d0 ffff81021f326e20 ffff81013c0a8a58 0000000100000001
[36167.935294]  0000000000000282 0000000000000003 ffff81014adb7db0 ffffffff8022adc9
[36167.935355] Call Trace:
[36167.935434]  [<ffffffff8022adc9>] __wake_up+0x38/0x4f
[36167.935487]  [<ffffffffa01289ad>] :md_mod:md_do_sync+0x224/0x908
[36167.935541]  [<ffffffff8020a857>] __switch_to+0x96/0x35e
[36167.935590]  [<ffffffff8022f07f>] hrtick_set+0x88/0xf7
[36167.935639]  [<ffffffff802461a9>] autoremove_wake_function+0x0/0x2e
[36167.935695]  [<ffffffffa012b4b1>] :md_mod:md_thread+0xd7/0xed
[36167.935723]  [<ffffffffa012b3da>] :md_mod:md_thread+0x0/0xed
[36167.935723]  [<ffffffff80246083>] kthread+0x47/0x74
[36167.935723]  [<ffffffff80230196>] schedule_tail+0x27/0x5c
[36167.935723]  [<ffffffff8020cf28>] child_rip+0xa/0x12
[36167.935723]  [<ffffffff8024603c>] kthread+0x0/0x74
[36167.935723]  [<ffffffff8020cf1e>] child_rip+0x0/0x12
============================================================

I suppose this warning should be suppressed by default, since it is normal for
md1 not to be checked until md0 has finished. It would still be fine for the
warning to show up if the md data-check task hangs for some other reason.

-- Package-specific info:
--- mount output
/dev/md0 on / type ext3 (rw,noatime,user_xattr,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md2 on /ud1 type ext3 (rw,user_xattr)
/dev/sde1 on /mnt type ext3 (rw)

--- mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR redalert

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=ab444f7e:6f7106ee:ebc21b33:afebd24c
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=38e5ed2a:3c479c6f:c893c8f7:9b01a54e
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=573ceded:81bd690e:5d97480d:a52ed51d

# This file was auto-generated on Tue, 16 Dec 2008 18:16:09 +0100
# by mkconf $Id$

--- /proc/mdstat:
Personalities : [raid1]
md2 : active raid1 sdc1[0] sdd1[1]
      488383936 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      2996032 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      290037376 blocks [2/2] [UU]

unused devices: <none>

--- /proc/partitions:
major minor  #blocks  name

   8     0  293036184 sda
   8     1  290037478 sda1
   8     2    2996122 sda2
   8    16  293036184 sdb
   8    17  290037478 sdb1
   8    18    2996122 sdb2
   8    32  488386584 sdc
   8    33  488384001 sdc1
   8    48  488386584 sdd
   8    49  488384001 sdd1
   9     0  290037376 md0
   9     1    2996032 md1
   9     2  488383936 md2
   8    64  117220824 sde
   8    65  117218241 sde1

--- initrd.img-2.6.26-1-amd64:
39957 blocks
etc/mdadm
etc/mdadm/mdadm.conf
sbin/mdadm
lib/modules/2.6.26-1-amd64/kernel/drivers/md/raid456.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/md-mod.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/linear.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/multipath.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/raid10.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/raid1.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/raid0.ko
scripts/local-top/mdadm

--- /proc/modules:
raid1 24192 3 - Live 0xffffffffa0122000
md_mod 80164 4 raid1, Live 0xffffffffa010d000

--- /var/log/syslog:

---
volume detail:
/dev/sda1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : ab444f7e:6f7106ee:ebc21b33:afebd24c
  Creation Time : Tue Dec 16 18:00:15 2008
     Raid Level : raid1
  Used Dev Size : 290037376 (276.60 GiB 297.00 GB)
     Array Size : 290037376 (276.60 GiB 297.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Update Time : Mon Feb  9 16:43:46 2009
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 3b05ff1 - correct
         Events : 19

      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
--
/dev/sda2:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 38e5ed2a:3c479c6f:c893c8f7:9b01a54e
  Creation Time : Tue Dec 16 18:00:23 2008
     Raid Level : raid1
  Used Dev Size : 2996032 (2.86 GiB 3.07 GB)
     Array Size : 2996032 (2.86 GiB 3.07 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Update Time : Mon Feb  9 16:01:14 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 14f422bd - correct
         Events : 12

      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
--
/dev/sdb1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : ab444f7e:6f7106ee:ebc21b33:afebd24c
  Creation Time : Tue Dec 16 18:00:15 2008
     Raid Level : raid1
  Used Dev Size : 290037376 (276.60 GiB 297.00 GB)
     Array Size : 290037376 (276.60 GiB 297.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Update Time : Mon Feb  9 16:43:46 2009
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 3b06003 - correct
         Events : 19

      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
--
/dev/sdb2:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 38e5ed2a:3c479c6f:c893c8f7:9b01a54e
  Creation Time : Tue Dec 16 18:00:23 2008
     Raid Level : raid1
  Used Dev Size : 2996032 (2.86 GiB 3.07 GB)
     Array Size : 2996032 (2.86 GiB 3.07 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Update Time : Mon Feb  9 16:01:14 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 14f422cf - correct
         Events : 12

      Number   Major   Minor   RaidDevice State
this     1       8       18        1      active sync   /dev/sdb2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
--
/dev/sdc1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 573ceded:81bd690e:5d97480d:a52ed51d
  Creation Time : Tue Dec 16 18:00:30 2008
     Raid Level : raid1
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
     Array Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Update Time : Mon Feb  9 16:41:20 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 34e01767 - correct
         Events : 24

      Number   Major   Minor   RaidDevice State
this     0       8       33        0      active sync   /dev/sdc1

   0     0       8       33        0      active sync   /dev/sdc1
   1     1       8       49        1      active sync   /dev/sdd1
--
/dev/sdd1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 573ceded:81bd690e:5d97480d:a52ed51d
  Creation Time : Tue Dec 16 18:00:30 2008
     Raid Level : raid1
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
     Array Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Update Time : Mon Feb  9 16:41:20 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 34e01779 - correct
         Events : 24

      Number   Major   Minor   RaidDevice State
this     1       8       49        1      active sync   /dev/sdd1

   0     0       8       33        0      active sync   /dev/sdc1
   1     1       8       49        1      active sync   /dev/sdd1
--

--- /proc/cmdline
root=/dev/md0 ro

--- grub:
kernel /boot/vmlinuz-2.6.26-1-amd64 root=/dev/md0 ro
kernel /boot/vmlinuz-2.6.26-1-amd64 root=/dev/md0 ro single

-- System Information:
Debian Release: 5.0
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: amd64 (x86_64)

Kernel: Linux 2.6.26-1-amd64 (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash

Versions of packages mdadm depends on:
ii  debconf             1.5.24     Debian configuration management sy
ii  libc6               2.7-16     GNU C Library: Shared libraries
ii  lsb-base            3.2-20     Linux Standard Base 3.2 init scrip
ii  makedev             2.3.1-88   creates device files in /dev
ii  udev                0.125-7    /dev/ and hotplug management daemo

Versions of packages mdadm recommends:
ii  exim4-daemon-heavy [mail-tran 4.69-9    Exim MTA (v4) daemon with extended
ii  module-init-tools             3.4-1     tools for managing Linux kernel mo

mdadm suggests no packages.

-- debconf information:
  mdadm/autostart: true
  mdadm/mail_to: root
  mdadm/initrdstart_msg_errmd:
  mdadm/initrdstart: all
  mdadm/initrdstart_msg_errconf:
  mdadm/initrdstart_notinconf: false
  mdadm/initrdstart_msg_errexist:
  mdadm/initrdstart_msg_intro:
  mdadm/autocheck: true
  mdadm/initrdstart_msg_errblock:
  mdadm/start_daemon: true
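For anyone hitting the same noise before a fix lands: the warning quoted above
names its own off switch. A workaround sketch (run as root; the sysctl path is
taken straight from the kernel message — note that setting it to 0 silences
ALL hung-task reports, not just this benign one):

```shell
# Disable the 120-second hung-task watchdog entirely (blunt: it also
# hides genuinely hung tasks, so re-enable it after the check window).
echo 0 > /proc/sys/kernel/hung_task_timeout_secs

# The delayed data-check itself is harmless; while md0 is being checked,
# progress (and md1's "resync=DELAYED" state) can be watched here:
cat /proc/mdstat
```

On Debian the periodic check that triggers all this is, as far as I can tell,
started by mdadm's checkarray cron job (see the mdadm/autocheck debconf entry
above), so the warning only appears during that monthly window.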