reopen 603319
found 603319 1.0.0.rc16-4
tags 603319 patch
thanks

Hello,

It has come to my attention (see the forwarded messages below) that my 
previous patch was not complete. It only supported the case where a DDF1 
virtual drive (VD) had no custom human-readable name assigned to it. However, 
dmraid-activate would fail to bring up any named VDs, because dmraid uses that 
name in place of the GUID [1] to generate a device identifier.

I attach a patch against dmraid-activate in 1.0.0.rc16-4 which should fix this 
problem and refer to the device by either name or GUID, using the same logic 
as dmraid does.
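
For reference, that naming rule (per ddf1.c [1]) can be sketched as follows. 
This is an illustrative shell sketch, not the actual C code; the function name 
and arguments are my own invention:

```shell
#!/bin/sh
# Illustrative sketch of dmraid's DDF1 device naming rule: a VD with a
# non-empty human-readable name is exposed as "ddf1_<name>", while an
# unnamed VD falls back to the hex-encoded VD GUID.
ddf1_device_name() {
	vd_name="$1"      # human-readable VD name, may be empty
	vd_guid_hex="$2"  # hex-encoded VD GUID
	if [ -n "$vd_name" ]; then
		printf 'ddf1_%s\n' "$vd_name"
	else
		printf 'ddf1_%s\n' "$vd_guid_hex"
	fi
}
```

So for the named array in the report below, dmraid-activate must pass 
"ddf1_Fucked" rather than "ddf1_<guid>" to dmraid -ay.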

As previously [2], feel free to do a maintainer upload, or I will NMU two 
weeks from now.

P.S. Severity is still RC because we are dealing with regressions from Lenny 
here, and dmraid is a somewhat critical package once one chooses to use it.

[1] See 1.0.0.rc16/lib/format/ddf/ddf1.c:687
[2] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=603319#19

----------  Forwarded Message  ----------

Subject: Problems with DDF1 support in dmraid-activate
Date: Monday, 06 December 2010, 01:20:17
From: "Ian R. Justman"
To: Modestas Vainius <mo...@debian.org>


Hello, there.

I am working on a machine I have which uses DDF1 metadata for the array 
I have created on two drives.  The machine is a SuperMicro H8DI3+-F with 
an AMD SP5100 (server version of SB700) southbridge which uses the 
"ahci" driver.  It has an Adaptec "Embedded RAID" BIOS.  In this case, 
the array has two WDC 750GB drives in a RAID 1 configuration.  When I 
created the array using the BIOS configuration utility, it asked for a 
name, so I gave it one, in this case, "Fucked", and then initialized 
that array.

When I bring up whatever arrays are attached, I see the following:

r...@haruhi:~# dmraid -ay
RAID set "ddf1_Fucked" was activated
r...@haruhi:~# ls /dev/mapper
control  ddf1_Fucked

I can then run whatever partitioning program I wish on 
/dev/mapper/ddf1_Fucked and create whatever partitions I need.



However, the modified activation script you furnished for Debian (I'm 
running Ubuntu, which removed your code from the package uploaded for 
Natty) does not activate that array on my system.

Here's a dump from one of the constituent devices:

r...@haruhi:~# dmraid -i -n /dev/sdb
/dev/sdb (ddf1):
DDF1 anchor at 1465149167 with tables in little-endian format.
DDF1 Header at 0x1618660
0x000 signature:        0xDE11DE11
0x004 crc:              0xE001B35B
0x008 guid:             "De.....CDe...g..<j......" [44 65 c6 16 02 10 92 43 44 65 c6 16 c0 67 c6 16 3c 6a c6 16 ff ff ff ff]
0x020 rev:              "02.00.00" [30 32 2e 30 30 2e 30 30]
0x028 seqnum:           -1
0x02c timestamp:        0xFFFFFFFF
0x030 open:             0xFF
0x031 foreign:          0xFF
0x032 grouping:         0xFF
0x060 primary header:   1465149156
0x068 secondary header: 18446744073709551615
0x070 header type:      0x0
0x074 workspace len:    32768
0x078 workspace lba:    1465116388
0x080 max pd:           15
0x082 max vd:           4
0x084 max part:         1
0x086 vd_config len:    2
0x088 max_primary_elts: 65535
0x0c0 adapter_offset:   1
0x0c4 adapter_len:      1
0x0c8 pd_offset:        2
0x0cc pd_len:           2
0x0d0 vd_offset:        4
0x0d4 vd_len:           1
0x0d8 config_offset:    5
0x0dc config_len:       4
0x0e0 disk_data_offset: 9
0x0e4 disk_data_len:    1
0x0e8 badblock_offset:  -1
0x0ec badblock_len:     0
0x0f0 diag_offset:      -1
0x0f4 diag_len:         0
0x0f8 vendor_offset:    10
0x0fc vendor_len:       1
DDF1 Header at 0x16188d0
0x000 signature:        0xDE11DE11
0x004 crc:              0x6B8C351E
0x008 guid:             "De.....CDe...g..<j......" [44 65 c6 16 02 10 92 43 44 65 c6 16 c0 67 c6 16 3c 6a c6 16 ff ff ff ff]
0x020 rev:              "02.00.00" [30 32 2e 30 30 2e 30 30]
0x028 seqnum:           -1
0x02c timestamp:        0xFFFFFFFF
0x030 open:             0xFF
0x031 foreign:          0xFF
0x032 grouping:         0xFF
0x060 primary header:   1465149156
0x068 secondary header: 18446744073709551615
0x070 header type:      0x1
0x074 workspace len:    32768
0x078 workspace lba:    1465116388
0x080 max pd:           15
0x082 max vd:           4
0x084 max part:         1
0x086 vd_config len:    2
0x088 max_primary_elts: 65535
0x0c0 adapter_offset:   1
0x0c4 adapter_len:      1
0x0c8 pd_offset:        2
0x0cc pd_len:           2
0x0d0 vd_offset:        4
0x0d4 vd_len:           1
0x0d8 config_offset:    5
0x0dc config_len:       4
0x0e0 disk_data_offset: 9
0x0e4 disk_data_len:    1
0x0e8 badblock_offset:  -1
0x0ec badblock_len:     0
0x0f0 diag_offset:      -1
0x0f4 diag_len:         0
0x0f8 vendor_offset:    10
0x0fc vendor_len:       1
Adapter Data at 0x1618ae0
0x000 signature:        0xAD111111
0x004 crc:              0xAC230C36
0x008 guid:             "De...g..<j...l..ADPT...." [44 65 c6 16 c0 67 c6 16 3c 6a c6 16 b8 6c c6 16 41 44 50 54 ff ff ff ff]
0x020 pci vendor:       0x1002
0x022 pci device:       0x4392
0x024 pci subvendor:    0x15D9
0x026 pci subdevice:    0xD180
Disk Data at 0x1618cf0
0x000 signature:        0x33333333
0x004 crc:              0x6601DEC1
0x008 guid:             "     WD-WCAV5919l..Z...." [20 20 20 20 20 57 44 2d 57 43 41 56 35 39 31 39 6c 0d c4 5a ff ff ff ff]
0x020 reference:                0x63639FB8
0x024 forced_ref_flag:  255
0x025 forced_guid_flag: 0
Physical Drive Header at 0x1618f00
0x000 signature:        0x22222222
0x004 crc:              0xEFBA5BBB
0x008 num drives:       2
0x00a max drives:       15
Physical Drive at 0x1618f40
0x000 guid:             "     WD-WCAV5919..,Z...." [20 20 20 20 20 57 44 2d 57 43 41 56 35 39 31 39 ba ca 2c 5a ff ff ff ff]
0x018 reference #:      0x63639FB8
0x01c type:             0x2
0x01e state:            0x1
0x020 size:             1464776704
0x028 path info:        ".................." [01 00 00 00 ff ff ff ff ff ff ff ff ff ff ff ff ff ff]
Physical Drive at 0x1618f80
0x000 guid:             "     WD-WCAV5963<.,Z...." [20 20 20 20 20 57 44 2d 57 43 41 56 35 39 36 33 3c dc 2c 5a ff ff ff ff]
0x018 reference #:      0x6363AEB2
0x01c type:             0x2
0x01e state:            0x1
0x020 size:             1464776704
0x028 path info:        ".................." [02 00 00 00 ff ff ff ff ff ff ff ff ff ff ff ff ff ff]
Virtual Drive Header at 0x1619310
0x000 signature:        0xDDDDDDDD
0x004 crc:              0xE421581
0x008 num drives:       1
0x00a max drives:       4
Virtual Drive at 0x1619350
0x000 guid:             "@50Z...C        ..cc:5JE" [40 35 30 5a 02 10 92 43 20 20 20 20 20 20 20 20 ba 85 63 63 3a 35 4a 45]
0x018 vd #:             0x0
0x01c type:             0xFFFFFFFF
0x020 state:            0x0
0x021 init state:       0x2
0x030 name:             "Fucked.........." [46 75 63 6b 65 64 00 00 00 00 00 00 00 00 00 00]
Virtual Drive Config Record at 0x1619720
0x000 signature:        0xEEEEEEEE
0x004 crc:              0x78027417
0x008 guid:             "@50Z...C        ..cc:5JE" [40 35 30 5a 02 10 92 43 20 20 20 20 20 20 20 20 ba 85 63 63 3a 35 4a 45]
0x020 timestamp:        0x2
0x024 seqnum:           2
0x040 primary count:    2
0x042 stripe size:      7KiB
0x043 raid level:       1
0x044 raid qualifier:   0
0x045 secondary count:  1
0x046 secondary number: 0
0x047 secondary level:  255
0x060 spare 0:          0xFFFFFFFF
0x064 spare 1:          0xFFFFFFFF
0x068 spare 2:          0xFFFFFFFF
0x06c spare 3:          0xFFFFFFFF
0x070 spare 4:          0xFFFFFFFF
0x074 spare 5:          0xFFFFFFFF
0x078 spare 6:          0xFFFFFFFF
0x07c spare 7:          0xFFFFFFFF
0x080 cache policy:     0x0
0x088 bg task rate:     16
0x048 sector count:     1464776704
0x050 size:             1464776704
Drive map:
0: 63639FB8 @ 0
1: 6363AEB2 @ 0

If you need the one from sdc, please let me know.

If I extract just the awk script and pipe that information through that 
script, I get the following:

r...@haruhi:~/tmp/sbin# dmraid -i -n /dev/sdb | mawk -f blah.awk
4035305a021092432020202020202020ba8563633a354a45

I get the same when I use "sdc" instead of "sdb".

Then when I try to use that GUID according to how dmraid-activate wants 
to use it, I get this:

r...@haruhi:~/tmp/sbin# dmraid -ay 
ddf1_4035305a021092432020202020202020ba8563633a354a45
ERROR: ddf1: wrong # of devices in RAID set "ddf1_Fucked" [1/2] on /dev/sdc
ERROR: ddf1: wrong # of devices in RAID set "ddf1_Fucked" [1/2] on /dev/sdb
ERROR: either the required RAID set not found or more options required
no raid sets and with names: 
"ddf1_4035305a021092432020202020202020ba8563633a354a45"

It looks like this device wants me to give it its name:

r...@haruhi:~# dmraid -ay ddf1_Fucked
RAID set "ddf1_Fucked" was activated

If you need anything further or have further suggestions, please let me 
know.

Thanks.

--Ian.

-- 
Ian R. Justman
UNIX hacker.  Anime fan.  Any questions?
ianj (at) ian-justman.com

-----------------------------------------

----------  Forwarded Message  ----------

Subject: Re: Problems with DDF1 support in dmraid-activate
Date: Monday, 06 December 2010, 02:26:29
From: "Ian R. Justman"
To: Modestas Vainius <mo...@debian.org>


UPDATE:  If I recreate the array without a name, it works fine.

--Ian.
-----------------------------------------
From 9a91ef772cb72856704b89ff83c9db71196081c1 Mon Sep 17 00:00:00 2001
From: Modestas Vainius <mo...@debian.org>
Date: Mon, 6 Dec 2010 12:05:34 +0200
Subject: [PATCH] dmraid-activate: handle the case when a DDF1 virtual drive has a name.

Further improve the dmraid-activate DDF1 awk snippet to handle the case when a
virtual drive (VD) has a human-readable name. In that case, dmraid will use
that name instead of the VD GUID when generating a device for the respective
raid subset.

Since human-readable names might contain spaces, make appropriate (but
ugly-looking) tweaks to the IFS variable as needed. We can't use `while read`
since that would fork a new shell and make global variables unavailable to
activate_array().
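
A minimal sketch of why the IFS dance is needed (function and variable names
here are invented for illustration, not taken from dmraid-activate): a `for`
loop over an IFS-split word stays in the current shell, whereas piping into
`while read` runs the loop body in a pipeline subshell and loses any
variables it sets.

```shell
#!/bin/sh
# Count newline-separated names by splitting with IFS in the current
# shell, so "count" remains visible after the loop finishes.
count_names() {
	newline='
'
	count=0
	save_IFS="$IFS"
	IFS="$newline"
	for name in $1; do
		IFS="$save_IFS"      # restore IFS for the loop body
		count=$((count + 1))
		IFS="$newline"
	done
	IFS="$save_IFS"
	echo "$count"
}
# By contrast, `printf '%s\n' "$1" | while read name; do ...; done`
# would increment the counter inside a subshell, leaving it at 0
# once the pipeline ends.
```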
---
 debian/dmraid-activate |  113 +++++++++++++++++++++++++++++++++++++----------
 1 files changed, 89 insertions(+), 24 deletions(-)

diff --git a/debian/dmraid-activate b/debian/dmraid-activate
index 7e73473..f420289 100644
--- a/debian/dmraid-activate
+++ b/debian/dmraid-activate
@@ -116,13 +116,18 @@ log_error()
 	fi
 }
 
-ddf1_virtual_drive_guids()
+ddf1_virtual_drive_names()
 {
-	ddf1_awk_script=$(cat <<'EOF'
+	ddf1_awk_script="$(cat <<'EOF'
 BEGIN {
     section = ""
     disk_ref = ""
     guid_i = 0
+
+    # Hexadecimal to decimal conversion array
+    for (i = 0; i <= 9; i++) hex2dec[i] = i
+    hex2dec["a"] = 10; hex2dec["b"] = 11; hex2dec["c"] = 12
+    hex2dec["d"] = 13; hex2dec["e"] = 14; hex2dec["f"] = 15;
 }
 
 function section_begins(name)
@@ -132,6 +137,42 @@ function section_begins(name)
     drive_map = 0
 }
 
+function extract_vd_guid(line,      g)
+{
+    g = substr(line, match(line,/\[[0-9a-f ]+\]$/)+1, RLENGTH-2)
+    gsub(/ /, "", g)
+    # IF LSI, do timestamp substitution to get persistent name, see
+    # 19_ddf1_lsi_persistent_name.patch
+    if (g ~ /^4c5349/)
+        g = substr(g, 1, 32) "47114711" substr(g, 41)
+    return g
+}
+
+function extract_vd_name(line,     hex, n, max, i, d1, d2, sed)
+{
+    n = tolower(substr(line, match(line,/\[[0-9a-f ]+\]$/)+1, RLENGTH-2))
+    max = split(n, hex, / /)
+
+    if (max <= 0 || hex[1] == "00") return ""
+
+    # Convert name from hex to string (16 bytes)
+    n = ""
+    for (i = 1; i <= max; i++) {
+        d1 = hex2dec[substr(hex[i], 1, 1)]
+        d2 = hex2dec[substr(hex[i], 2, 1)]
+        if ((d1 + d2) == 0) break
+        n = n sprintf("%c", d1 * 16 + d2)
+    }
+    # Shell-escape single quotes in the name
+    gsub(/'/,"'\\''", n)
+    # Finally strip non-graph chars from the end of the string
+    # mawk does not support character classes. Use sed.
+    sed = "echo '" n "' | sed 's/[^[:graph:]]\+$//'"
+    sed | getline n
+    close(sed)
+    return n
+}
+
 {
     if (!/^0x/ && / at /) {
         # Section begins
@@ -140,31 +181,45 @@ function section_begins(name)
         disk_ref = $3
         sub(/^0x/, "", disk_ref)
     } else if (disk_ref) {
-        if (section == "Virtual Drive Config Record" && /^0x008 guid:/) {
-            vd_guid = substr($0, match($0,/\[[0-9a-f ]+\]$/)+1, RLENGTH-2)
-            gsub(/ /, "", vd_guid)
-            # IF LSI, do timestamp substitution to get persistent name, see
-            # 19_ddf1_lsi_persistent_name.patch
-            if (vd_guid ~ /^4c5349/)
-                vd_guid = substr(vd_guid, 1, 32) "47114711" substr(vd_guid, 41)
-        } else if (drive_map) {
-            # 0: 4BCBB980 @ 0
-            if ($2 == disk_ref) {
-                guids[guid_i] = vd_guid
-                guid_i++
+        # We need to parse 'Virtual Drive' sections in order to extract VD
+        # names
+        if (section == "Virtual Drive") {
+            if (/^0x000 guid:/) {
+                vd_guid = extract_vd_guid($0)
+            } else if (/^0x030 name:/) {
+                vd_name = extract_vd_name($0)
+                if (vd_name)
+                    vd_names[vd_guid] = vd_name
+            }
+        } else if (section == "Virtual Drive Config Record") {
+            if (/^0x008 guid:/) {
+                vd_guid = extract_vd_guid($0)
+            } else if (drive_map) {
+                # 0: 4BCBB980 @ 0
+                if ($2 == disk_ref) {
+                    guids[guid_i] = vd_guid
+                    guid_i++
+                }
+            } else if (vd_guid) {
+                drive_map = /^Drive map:/
             }
-        } else if (vd_guid) {
-            drive_map = /^Drive map:/
         }
     }
 }
 END {
-    # Print discovered virtual drive GUIDs which belong to this physical drive
-    for (guid in guids)
-        print guids[guid]
+    # Print discovered virtual drive names (or GUIDs) which belong to this
+    # physical drive
+    for (guid_i in guids) {
+        guid = guids[guid_i]
+        if (guid in vd_names) {
+            print vd_names[guid]
+        } else {
+            print guid
+        }
+    }
 }
 EOF
-)
+)"
 	dmraid -i -n "$1" | awk "$ddf1_awk_script"
 }
 
@@ -193,6 +248,9 @@ if [ -z "$Raid_Name" ]; then
 	exit 0
 fi
 
+newline="
+"
+
 case "$Raid_Name" in
 	isw_*)
 		# We need a special case for isw arrays, since it is possible to have several
@@ -208,13 +266,20 @@ case "$Raid_Name" in
 		;;
 	.ddf1_disks)
 		# Dummy name for the main DDF1 group. Needs special handling to
-		# find RAID subsets for this physical drive
-		Ddf1_guids=`ddf1_virtual_drive_guids "/dev/$Node_Name"`
+		# find RAID subsets (name or GUID) for this physical drive
+		Ddf1_names=`ddf1_virtual_drive_names "/dev/$Node_Name"`
 
-		for ddf1_guid in $Ddf1_guids
+		# Returned names might contain space characters. Therefore
+		# split fields at new line. Use $IFS to avoid forking a new shell
+		save_IFS="$IFS"
+		IFS="$newline"
+		for ddf1_name in $Ddf1_names
 		do
-			activate_array "ddf1_${ddf1_guid}"
+			IFS="$save_IFS"
+			activate_array "ddf1_${ddf1_name}"
+			IFS="$newline"
 		done
+		IFS="$save_IFS"
 		break
 		;;
 	*)
-- 
1.7.2.3
