Hi Walter,

please scratch that. That email was based on very brief experience
with wapbl on a small 10 GB partition. I've now performed testing on a
full 500 GB partition, all tests done using a checksumming SR RAID1.
The tests differ only in the mount option.

rsync: 12m15s (default) -> 7m40s (softdep) -> 6m35s (async) -> 6m5s (wapbl)
rm: 5m35s (default) -> 48s (softdep) -> 14s (wapbl) -> 13s (async)

so again, fantastic results. In my defense, I should add that my fear
was caused by seeing huge numbers of small writes with wapbl. After an
rsync/find/rm cycle I usually detach the SR RAID1 drive, and this
prints some statistics for me. For the wapbl-based run they look like:

RAID1C write statistics in format len (counter/collisions): 512
(917/917), 8192 (75667/293919), 65536 (10941/0), 32768 (204849/0),
4096 (593479/3174400), 12288 (33279/95801), 17408 (88/88), 64512
 (279/279), 20480 (12702/21085), 16384 (19738/43332), 28672
(7026/7636), 24576 (9129/11694), 50176 (7/7), 43520 (1/1), 22016
(2/2), 37888 (5/5), 27136 (2/2), 31744 (5/5), 61952 (5/5), 27648
(4/4), 14848 (4/4), 3072 (8/8), 29184 (6/6), 36352 (2/2), 10240 (3/3),
33280 (3/3), 32256 (4/4), 62976 (3/3), 53760 (8/8), 11776 (3/3), 43008
(5/5), 25088 (3/3), 40448 (6/6), 17920 (4/4), 41472 (3/3), 54784
(2/2), 55296 (3/3), 47104 (3/3), 18432 (12/12), 41984 (5/5), 10752
(2/2), 59904 (3/3), 33792 (1/1), 37376 (2/2), 30208 (1/1), 35328
(1/1), 19456 (6/6), 46080 (2/2), 22528 (6/6), 13312 (6/6), 49152
(5/5), 5120 (3/3), 23552 (5/5), 26112 (3/3), 51200 (7/7), 6144 (6/6),
59392 (4/4), 1024 (1/1), 9728 (2/2), 15872 (5/5), 38912 (1/1), 11264
(4/4), 18944 (29/29), 51712 (27/27), 52224 (5/5), 56320 (6/6), 9216
(4/4), 25600 (5/5), 39936 (5/5), 61440 (5/5), 35840 (3/3), 60928
(4/4), 4608 (2/2), 1536 (3/3), 47616 (3/3), 44032 (2/2), 57856 (3/3),
7680 (2/2), 3584 (2/2), 65024 (2/2), 15360 (2/2), 44544 (2/2), 5632
(4/4), 2560 (2/2), 58880 (3/3), 56832 (1/1), 49664 (2/2), 24064 (2/2),
60416 (4/4), 45056 (3/3), 40960 (3/3), 14336 (3/3), 36864 (2/2), 26624
(1/1), 42496 (4/4), 23040 (4/4), 12800 (4/4), 52736 (2/2), 55808
(2/2), 7168 (7/7), 58368 (6/6), 63488 (2/2), 34816 (1/1), 50688 (2/2),
31232 (3/3), 34304 (2/2), 2048 (1/1), 20992 (1/1), 48128 (3/3), 8704
(1/1), 53248 (3/3), 64000 (2/2), 62464 (3/3), 46592 (3/3), 6656 (2/2),
13824 (2/2), 45568 (2/2), 30720 (5/5), 39424 (2/2), 28160 (3/3), 29696
(1/1), 16896 (2/2),
RAID1C read statistics in format len (counter): 512 (62), 32768
(826676), 8192 (28), 12288 (14), 4096 (14), 65536 (18086),

Please note that where the collision count is not 0, you need to
subtract the number of I/Os from the collision number. This is a
slight issue in my statistics collection. From the log you can see
that all "unusual" operations show numbers as (X/X), which means there
was no collision and the write was done the slow way: read checksum
block, write data, write checksum block.
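To make that accounting concrete, here is a minimal sketch that parses the "len (counter/collisions)" pairs from such a statistics line and applies the correction described above. The helper names parse_stats and adjusted_collisions are my own for illustration; the sample line is taken from the wapbl dump above.

```python
import re

def parse_stats(line):
    """Parse 'len (counter/collisions)' pairs from a RAID1C stats dump."""
    stats = {}
    for length, counter, collisions in re.findall(r'(\d+)\s*\((\d+)/(\d+)\)', line):
        stats[int(length)] = (int(counter), int(collisions))
    return stats

def adjusted_collisions(counter, collisions):
    """When the collision field is non-zero, the number of I/Os is folded
    into it, so subtract the counter to get the real collision count."""
    return collisions - counter if collisions else 0

sample = "4096 (593479/3174400), 32768 (204849/0), 17408 (88/88)"
for length, (cnt, col) in sorted(parse_stats(sample).items()):
    print(length, adjusted_collisions(cnt, col))
```

An (X/X) entry therefore adjusts to 0 real collisions, while 4096 (593479/3174400) adjusts to 3174400 - 593479 = 2580921.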

FYI: the statistics from the async run look like:

RAID1C write statistics in format len (counter/collisions): 512 (2/2),
8192 (75645/293154), 65536 (1154/0), 32768 (203966/0), 4096
(576745/3047106), 12288 (32958/95767), 16384 (19722/43407), 20480
(12696/21230), 28672 (7020/7591), 24576 (9125/11989),
RAID1C read statistics in format len (counter): 512 (75), 2048 (4),
32768 (826693), 8192 (32), 12288 (16), 4096 (16), 65536 (18166),

so you can see that wapbl does all sorts of unusual data-length
operations, but does not hurt SR RAID1C with an excess number of
collisions between different I/Os. You can see such an excess number
of collisions in the case of 4096-byte write I/O: 4096 (593479/3174400).

So well, wapbl looks fantastic so far. I'm now hammering this on top
of SR RAID1C and will let you know about any issues...

Thanks!
Karel

On Sat, Nov 21, 2015 at 12:41 PM, Karel Gardas <gard...@gmail.com> wrote:
> RAID1. So here is my question: is there any possibility to convince
> the current WAPBL code to write transactions into the log in 32k
> blocks with 32k alignment? I can of course hack the code if you
> advise where to test that; I've just so far not been able to find the
> magic constant of commit size or so.
