This is also repeatable on my pool of slower spinning metal disks, srv:

top - 22:59:43 up 71 days, 22:35, 3 users, load average: 18.88, 17.98, 10.68
Tasks: 804 total, 20 running, 605 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 58.0 sy, 0.0 ni, 41.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 13191572+total, 14715308 free, 11000072+used, 7199696 buff/cache
KiB Swap: 5970940 total, 5950460 free, 20480 used. 20842640 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1523 root 20 0 0 0 0 R 98.3 0.0 512:59.56 arc_reclaim
11330 root 20 0 0 0 0 R 57.1 0.0 0:04.91 arc_prune
11388 root 20 0 0 0 0 S 57.1 0.0 0:02.48 arc_prune
1522 root 20 0 0 0 0 R 56.8 0.0 282:45.37 arc_prune
11259 root 20 0 0 0 0 S 56.8 0.0 0:08.04 arc_prune
11323 root 20 0 0 0 0 R 56.8 0.0 0:05.20 arc_prune
11395 root 20 0 0 0 0 R 56.8 0.0 0:02.23 arc_prune
11400 root 20 0 0 0 0 S 56.8 0.0 0:01.96 arc_prune
11350 root 20 0 0 0 0 R 56.1 0.0 0:03.92 arc_prune
11375 root 20 0 0 0 0 S 56.1 0.0 0:02.86 arc_prune
11373 root 20 0 0 0 0 S 55.8 0.0 0:02.91 arc_prune
11379 root 20 0 0 0 0 S 55.1 0.0 0:02.65 arc_prune
11396 root 20 0 0 0 0 R 54.8 0.0 0:02.06 arc_prune
11378 root 20 0 0 0 0 R 54.5 0.0 0:02.67 arc_prune
11399 root 20 0 0 0 0 R 54.1 0.0 0:01.92 arc_prune
11406 root 20 0 0 0 0 R 54.1 0.0 0:01.68 arc_prune
11407 root 20 0 0 0 0 R 52.5 0.0 0:01.59 arc_prune
11409 root 20 0 0 0 0 R 46.5 0.0 0:01.41 arc_prune
11410 root 20 0 0 0 0 S 45.9 0.0 0:01.39 arc_prune
11417 root 20 0 0 0 0 S 40.3 0.0 0:01.22 arc_prune
11421 root 20 0 0 0 0 R 36.3 0.0 0:01.10 arc_prune
11424 root 20 0 0 0 0 S 32.7 0.0 0:00.99 arc_prune
11428 root 20 0 0 0 0 R 28.4 0.0 0:00.86 arc_prune
11429 root 20 0 0 0 0 S 27.4 0.0 0:00.83 arc_prune
11430 root 20 0 0 0 0 S 22.8 0.0 0:00.69 arc_prune
11433 root 20 0 0 0 0 S 20.1 0.0 0:00.61 arc_prune
11434 root 20 0 0 0 0 R 17.5 0.0 0:00.53 arc_prune
11439 root 20 0 0 0 0 R 11.2 0.0 0:00.34 arc_prune
11440 root 20 0 0 0 0 R 10.2 0.0 0:00.31 arc_prune
11442 root 20 0 0 0 0 S 6.6 0.0 0:00.20 arc_prune
11445 root 20 0 0 0 0 R 3.0 0.0 0:00.09 arc_prune
8021 sarnold 20 0 58756 19168 4792 D 1.7 0.0 0:03.33 dpkg-source
11446 root 20 0 0 0 0 R 1.7 0.0 0:00.05 arc_prune
sarnold@wopr:/tmp$ sudo -s
[sudo] password for sarnold:
root@wopr:/tmp# echo 3 > /proc/sys/vm/drop_caches
top - 23:01:37 up 71 days, 22:37, 3 users, load average: 6.43, 14.26, 10.22
Tasks: 823 total, 2 running, 637 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.3 us, 4.7 sy, 0.0 ni, 93.3 id, 0.5 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 13191572+total, 11750942+free, 14105432 used, 300868 buff/cache
KiB Swap: 5970940 total, 5950460 free, 20480 used. 11703465+avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5021 sarnold 20 0 30492 10356 5612 S 10.2 0.0 0:03.16 unpack.py
29956 sarnold 20 0 50232 16696 4832 R 3.0 0.0 0:00.09 dpkg-source
8511 root 1 -19 0 0 0 S 2.3 0.0 14:36.23 z_wr_iss
8515 root 1 -19 0 0 0 S 2.3 0.0 14:37.99 z_wr_iss
8516 root 1 -19 0 0 0 S 2.3 0.0 14:36.83 z_wr_iss
8517 root 1 -19 0 0 0 S 2.3 0.0 14:36.77 z_wr_iss
8520 root 1 -19 0 0 0 S 2.3 0.0 14:36.52 z_wr_iss
8523 root 1 -19 0 0 0 S 2.3 0.0 14:37.16 z_wr_iss
8500 root 1 -19 0 0 0 S 2.0 0.0 14:37.04 z_wr_iss
8501 root 1 -19 0 0 0 S 2.0 0.0 14:36.62 z_wr_iss
8504 root 1 -19 0 0 0 S 2.0 0.0 14:36.89 z_wr_iss
8505 root 1 -19 0 0 0 S 2.0 0.0 14:36.22 z_wr_iss
8507 root 1 -19 0 0 0 S 2.0 0.0 14:37.31 z_wr_iss
8508 root 1 -19 0 0 0 S 2.0 0.0 14:36.84 z_wr_iss
8509 root 1 -19 0 0 0 S 2.0 0.0 14:36.55 z_wr_iss
8512 root 1 -19 0 0 0 S 2.0 0.0 14:37.00 z_wr_iss
8514 root 1 -19 0 0 0 S 2.0 0.0 14:36.59 z_wr_iss
8518 root 1 -19 0 0 0 S 2.0 0.0 14:36.75 z_wr_iss
8521 root 1 -19 0 0 0 S 2.0 0.0 14:36.21 z_wr_iss
8522 root 1 -19 0 0 0 S 2.0 0.0 14:36.66 z_wr_iss
2 root 20 0 0 0 0 S 1.6 0.0 48:17.74 kthreadd
1531 root 39 19 0 0 0 S 1.6 0.0 26:38.12 dbuf_evict
8502 root 1 -19 0 0 0 S 1.6 0.0 14:36.83 z_wr_iss
8506 root 1 -19 0 0 0 S 1.6 0.0 14:37.19 z_wr_iss
8510 root 1 -19 0 0 0 S 1.6 0.0 14:37.02 z_wr_iss
8513 root 1 -19 0 0 0 S 1.6 0.0 14:37.31 z_wr_iss
8519 root 1 -19 0 0 0 S 1.6 0.0 14:37.35 z_wr_iss
8489 root 0 -20 0 0 0 S 1.3 0.0 13:52.77 z_null_int
8503 root 1 -19 0 0 0 S 1.3 0.0 14:36.46 z_wr_iss
1271 root 0 -20 0 0 0 S 1.0 0.0 23:33.65 spl_dynamic_tas
2287 root 20 0 42620 4628 3188 R 1.0 0.0 0:00.44 top
8491 root 0 -20 0 0 0 S 1.0 0.0 11:57.43 z_rd_int_0
8525 root 0 -20 0 0 0 S 1.0 0.0 5:44.43 z_wr_int_0
Read ops on the srv pool jumped from roughly 40 to roughly 2k, before settling
down again at around 500-1000 read IOPS.
Thanks
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1814983
Title:
zfs poor sustained read performance from ssd pool
Status in zfs-linux package in Ubuntu:
New
Bug description:
Hello,
I'm seeing substantially slower read performance from an ssd pool than
I expected.
I have two pools on this computer; one ('fst') is four sata ssds, the
other ('srv') is nine spinning metal drives.
With a long-running ripgrep process on the fst pool, performance
started out very good and grew to astonishing (IIRC ~30k IOPS, as
measured by zpool iostat -v 1). However, after a few hours
performance dropped to 30-40 IOPS. top reports an arc_reclaim thread
and many arc_prune threads consuming most of the CPU time.
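For anyone trying to quantify the prune pressure: the relevant counters live in
/proc/spl/kstat/zfs/arcstats. As a sketch, here is a minimal parser for that
three-column kstat format (sample lines are inlined with made-up round numbers
so it is self-contained; on a real system, read the file instead — field names
are as found in ZFS on Linux 0.7):

```python
# Minimal parser for the /proc/spl/kstat/zfs/arcstats kstat format
# (columns: name, type, data). The SAMPLE values below are illustrative
# placeholders, not figures from this bug report.
SAMPLE = """\
name                            type data
size                            4    58676428800
c                               4    53096291840
arc_prune                       4    14890000
arc_meta_used                   4    50000000000
arc_meta_limit                  4    50653495296
"""

def parse_arcstats(text):
    stats = {}
    for line in text.splitlines()[1:]:   # skip the "name type data" header
        name, _type, data = line.split()
        stats[name] = int(data)
    return stats

stats = parse_arcstats(SAMPLE)
# Metadata pressing against its limit correlates with arc_prune spinning.
meta_pct = 100.0 * stats["arc_meta_used"] / stats["arc_meta_limit"]
print(f"arc_meta_used is {meta_pct:.1f}% of arc_meta_limit")
```

Watching arc_prune and arc_meta_used in a loop while the slowdown is happening
would show whether pruning is being driven by the metadata limit.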
I've included a screenshot of top, some output from zpool iostat -v 1,
and arc_summary, with "===" to indicate the start of the next
command's output:
===
top (memory in gigabytes):
top - 16:27:53 up 70 days, 16:03, 3 users, load average: 35.67, 35.81, 35.58
Tasks: 809 total, 19 running, 612 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 58.1 sy, 0.0 ni, 39.2 id, 2.6 wa, 0.0 hi, 0.0 si, 0.0 st
GiB Mem : 125.805 total, 0.620 free, 96.942 used, 28.243 buff/cache
GiB Swap: 5.694 total, 5.688 free, 0.006 used. 27.840 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1523 root 20 0 0.0m 0.0m 0.0m R 100.0 0.0 290:52.26 arc_reclaim
4484 root 20 0 0.0m 0.0m 0.0m R 56.2 0.0 1:18.79 arc_prune
6225 root 20 0 0.0m 0.0m 0.0m R 56.2 0.0 1:11.92 arc_prune
7601 root 20 0 0.0m 0.0m 0.0m S 56.2 0.0 2:50.25 arc_prune
30891 root 20 0 0.0m 0.0m 0.0m S 56.2 0.0 1:33.08 arc_prune
3057 root 20 0 0.0m 0.0m 0.0m S 55.9 0.0 9:00.95 arc_prune
3259 root 20 0 0.0m 0.0m 0.0m R 55.9 0.0 3:16.84 arc_prune
24008 root 20 0 0.0m 0.0m 0.0m S 55.9 0.0 1:55.71 arc_prune
1285 root 20 0 0.0m 0.0m 0.0m R 55.6 0.0 3:20.52 arc_prune
5345 root 20 0 0.0m 0.0m 0.0m R 55.6 0.0 1:15.99 arc_prune
30121 root 20 0 0.0m 0.0m 0.0m S 55.6 0.0 1:35.50 arc_prune
31192 root 20 0 0.0m 0.0m 0.0m S 55.6 0.0 6:17.16 arc_prune
32287 root 20 0 0.0m 0.0m 0.0m S 55.6 0.0 1:28.02 arc_prune
32625 root 20 0 0.0m 0.0m 0.0m R 55.6 0.0 1:27.34 arc_prune
22572 root 20 0 0.0m 0.0m 0.0m S 55.3 0.0 10:02.92 arc_prune
31989 root 20 0 0.0m 0.0m 0.0m R 55.3 0.0 1:28.03 arc_prune
3353 root 20 0 0.0m 0.0m 0.0m R 54.9 0.0 8:58.81 arc_prune
10252 root 20 0 0.0m 0.0m 0.0m R 54.9 0.0 2:36.37 arc_prune
1522 root 20 0 0.0m 0.0m 0.0m S 53.9 0.0 158:42.45 arc_prune
3694 root 20 0 0.0m 0.0m 0.0m R 53.9 0.0 1:20.79 arc_prune
13394 root 20 0 0.0m 0.0m 0.0m R 53.9 0.0 10:35.78 arc_prune
24592 root 20 0 0.0m 0.0m 0.0m R 53.9 0.0 1:54.19 arc_prune
25859 root 20 0 0.0m 0.0m 0.0m S 53.9 0.0 1:51.71 arc_prune
8194 root 20 0 0.0m 0.0m 0.0m S 53.6 0.0 0:54.51 arc_prune
18472 root 20 0 0.0m 0.0m 0.0m R 53.6 0.0 2:08.73 arc_prune
29525 root 20 0 0.0m 0.0m 0.0m R 53.6 0.0 1:35.81 arc_prune
32291 root 20 0 0.0m 0.0m 0.0m S 53.6 0.0 1:28.00 arc_prune
3156 root 20 0 0.0m 0.0m 0.0m R 53.3 0.0 3:17.68 arc_prune
6224 root 20 0 0.0m 0.0m 0.0m S 53.3 0.0 1:11.80 arc_prune
9788 root 20 0 0.0m 0.0m 0.0m S 53.3 0.0 0:46.00 arc_prune
10341 root 20 0 0.0m 0.0m 0.0m R 53.3 0.0 2:36.23 arc_prune
11881 root 20 0 0.0m 0.0m 0.0m S 53.0 0.0 2:31.57 arc_prune
24030 root 20 0 0.0m 0.0m 0.0m R 52.6 0.0 1:55.44 arc_prune
===
zpool iostat -v 1 output (for a while):
                                              capacity     operations    bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     68      0   650K      0
  mirror                                      588G   340G     31      0   331K      0
    sdj                                          -      -     20      0   179K      0
    sdk                                          -      -     10      0   152K      0
  mirror                                      588G   340G     36      0   319K      0
    sdl                                          -      -     17      0   132K      0
    sdm                                          -      -     18      0   187K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2     35   187K   144K
  mirror                                      443G  2.29T      0      0  63.8K      0
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0      0  63.8K      0
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0      0      0      0
  mirror                                      443G  2.29T      0     17      0  71.8K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0      5      0  23.9K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0      5      0  23.9K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0      5      0  23.9K
  mirror                                      443G  2.29T      1     17   124K  71.8K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      1      5   124K  23.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0      5      0  23.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0      5      0  23.9K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G    110      0  1.07M      0
  mirror                                      588G   340G     59      0   634K      0
    sdj                                          -      -     28      0   303K      0
    sdk                                          -      -     30      0   331K      0
  mirror                                      588G   340G     50      0   459K      0
    sdl                                          -      -     28      0   303K      0
    sdm                                          -      -     21      0   155K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2    229   183K  1.00M
  mirror                                      443G  2.29T      2     73   183K   335K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      2     24   183K   112K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0     24      0   112K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0     23      0   112K
  mirror                                      443G  2.29T      0     77      0   347K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0     25      0   116K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0     25      0   116K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0     25      0   116K
  mirror                                      443G  2.29T      0     77      0   347K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      0     25      0   116K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0     25      0   116K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0     25      0   116K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     29      0   403K      0
  mirror                                      588G   340G     12      0   171K      0
    sdj                                          -      -      7      0  79.7K      0
    sdk                                          -      -      4      0  91.7K      0
  mirror                                      588G   340G     16      0   231K      0
    sdl                                          -      -      6      0   128K      0
    sdm                                          -      -      9      0   104K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      0     66  63.8K   359K
  mirror                                      443G  2.29T      0     21      0   120K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0      6      0  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0      7      0  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0      6      0  39.9K
  mirror                                      443G  2.29T      0     21      0   120K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0      7      0  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0      6      0  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0      6      0  39.9K
  mirror                                      443G  2.29T      0     22  63.8K   120K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      0      7  63.8K  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0      6      0  39.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0      7      0  39.9K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     97      0   797K      0
  mirror                                      588G   340G     58      0   474K      0
    sdj                                          -      -     27      0   263K      0
    sdk                                          -      -     30      0   211K      0
  mirror                                      588G   340G     38      0   323K      0
    sdl                                          -      -     23      0   203K      0
    sdm                                          -      -     14      0   120K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2    176   187K   789K
  mirror                                      443G  2.29T      0     58  59.8K   263K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0     19  59.8K  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0     18      0  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0     19      0  87.7K
  mirror                                      443G  2.29T      0     59      0   263K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0     19      0  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0     19      0  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0     19      0  87.7K
  mirror                                      443G  2.29T      1     57   128K   263K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      1     18   128K  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0     19      0  87.7K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0     18      0  87.7K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     70      0   426K      0
  mirror                                      588G   340G     38      0   263K      0
    sdj                                          -      -     21      0   128K      0
    sdk                                          -      -     16      0   135K      0
  mirror                                      588G   340G     31      0   163K      0
    sdl                                          -      -     10      0  67.7K      0
    sdm                                          -      -     20      0  95.6K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      1     46   116K  2.36M
  mirror                                      443G  2.29T      0      0  59.8K      0
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0      0  59.8K      0
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0      0      0      0
  mirror                                      443G  2.29T      0     37      0  2.31M
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0     12      0   789K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0     11      0   789K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0     12      0   789K
  mirror                                      443G  2.29T      0      8  55.8K  47.8K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      0      2  55.8K  15.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0      2      0  15.9K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0      2      0  15.9K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G    108      0   614K      0
  mirror                                      588G   340G     50      0   299K      0
    sdj                                          -      -     32      0   203K      0
    sdk                                          -      -     17      0  95.6K      0
  mirror                                      588G   340G     57      0   315K      0
    sdl                                          -      -     30      0   155K      0
    sdm                                          -      -     26      0   159K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2     68   191K   311K
  mirror                                      443G  2.29T      0      8      0  47.8K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0      2      0  15.9K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0      2      0  15.9K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0      2      0  15.9K
  mirror                                      443G  2.29T      0     29      0   132K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0      9      0  43.8K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0      9      0  43.8K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0      9      0  43.8K
  mirror                                      443G  2.29T      2     29   191K   132K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      2      9   191K  43.8K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0      9      0  43.8K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0      9      0  43.8K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     66      0   379K      0
  mirror                                      588G   340G     26      0   144K      0
    sdj                                          -      -     12      0  63.8K      0
    sdk                                          -      -     13      0  79.7K      0
  mirror                                      588G   340G     39      0   235K      0
    sdl                                          -      -     19      0   120K      0
    sdm                                          -      -     19      0   116K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2    166   183K   754K
  mirror                                      443G  2.29T      0     55      0   251K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0     18      0  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0     17      0  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0     18      0  83.7K
  mirror                                      443G  2.29T      0     54      0   251K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0     17      0  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0     18      0  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0     17      0  83.7K
  mirror                                      443G  2.29T      2     55   183K   251K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      2     18   183K  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0     17      0  83.7K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0     18      0  83.7K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G    126      0   698K      0
  mirror                                      588G   340G     64      0   335K      0
    sdj                                          -      -     37      0   195K      0
    sdk                                          -      -     26      0   140K      0
  mirror                                      588G   340G     61      0   363K      0
    sdl                                          -      -     34      0   207K      0
    sdm                                          -      -     26      0   155K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      3    274   239K  1.23M
  mirror                                      443G  2.29T      1     91   120K   418K
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      1     30   120K   139K
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0     29      0   139K
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0     30      0   139K
  mirror                                      443G  2.29T      0     91      0   418K
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0     30      0   139K
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0     29      0   139K
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0     30      0   139K
  mirror                                      443G  2.29T      1     91   120K   418K
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      1     30   120K   139K
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0     30      0   139K
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0     29      0   139K
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
                                              capacity     operations    bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
fst                                          1.15T   679G     70      0   442K      0
  mirror                                      588G   340G     36      0   215K      0
    sdj                                          -      -     18      0  95.6K      0
    sdk                                          -      -     17      0   119K      0
  mirror                                      588G   340G     33      0   227K      0
    sdl                                          -      -     15      0   123K      0
    sdm                                          -      -     17      0   104K      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
srv                                          1.30T  6.86T      2      0   187K      0
  mirror                                      443G  2.29T      0      0  63.7K      0
    ata-HGST_HUS724030ALA640_PN2234P8KTWJYY      -      -      0      0  63.7K      0
    ata-HGST_HUS724030ALA640_PN2234P9G620TW      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G66E2U      -      -      0      0      0      0
  mirror                                      443G  2.29T      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G69TKU      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G69TXU      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G69U2U      -      -      0      0      0      0
  mirror                                      443G  2.29T      1      0   123K      0
    ata-HGST_HUS724030ALA640_PN2234P9G6EBUU      -      -      1      0   123K      0
    ata-HGST_HUS724030ALA640_PN2234P9G6ESAU      -      -      0      0      0      0
    ata-HGST_HUS724030ALA640_PN2234P9G6G70U      -      -      0      0      0      0
logs                                             -      -      -      -      -      -
  nvme0n1p1                                   900K  19.9G      0      0      0      0
cache                                            -      -      -      -      -      -
  nvme0n1p2                                   334G   764G      0      0      0      0
-------------------------------------------  -----  -----  -----  -----  -----  -----
===
arc_summary
------------------------------------------------------------------------
ZFS Subsystem Report Wed Feb 06 16:34:09 2019
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 142.80m
Mutex Misses: 14.89m
Evict Skips: 21.87b
ARC Size: 86.88% 54.65 GiB
Target Size: (Adaptive) 78.61% 49.45 GiB
Min Size (Hard Limit): 6.25% 3.93 GiB
Max Size (High Water): 16:1 62.90 GiB
ARC Size Breakdown:
Recently Used Cache Size: 25.05% 5.55 GiB
Frequently Used Cache Size: 74.95% 16.62 GiB
ARC Hash Breakdown:
Elements Max: 7.46m
Elements Current: 60.33% 4.50m
Collisions: 43.79m
Chain Max: 8
Chains: 504.81k
ARC Total accesses: 2.51b
Cache Hit Ratio: 95.24% 2.39b
Cache Miss Ratio: 4.76% 119.32m
Actual Hit Ratio: 92.99% 2.33b
Data Demand Efficiency: 94.16% 486.53m
Data Prefetch Efficiency: 86.47% 29.68m
CACHE HITS BY CACHE LIST:
Anonymously Used: 2.20% 52.51m
Most Recently Used: 37.35% 891.38m
Most Frequently Used: 60.29% 1.44b
Most Recently Used Ghost: 0.10% 2.32m
Most Frequently Used Ghost: 0.06% 1.45m
CACHE HITS BY DATA TYPE:
Demand Data: 19.20% 458.13m
Prefetch Data: 1.08% 25.66m
Demand Metadata: 78.22% 1.87b
Prefetch Metadata: 1.51% 36.00m
CACHE MISSES BY DATA TYPE:
Demand Data: 23.80% 28.40m
Prefetch Data: 3.37% 4.02m
Demand Metadata: 66.03% 78.79m
Prefetch Metadata: 6.80% 8.12m
L2 ARC Summary: (HEALTHY)
Low Memory Aborts: 233
Free on Write: 27.52k
R/W Clashes: 0
Bad Checksums: 0
IO Errors: 0
L2 ARC Size: (Adaptive) 364.94 GiB
Compressed: 91.59% 334.23 GiB
Header Size: 0.08% 307.98 MiB
L2 ARC Breakdown: 119.32m
Hit Ratio: 1.42% 1.69m
Miss Ratio: 98.58% 117.63m
Feeds: 6.01m
L2 ARC Writes:
Writes Sent: 100.00% 279.55k
DMU Prefetch Efficiency: 1.89b
Hit Ratio: 2.24% 42.49m
Miss Ratio: 97.76% 1.85b
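The ARC sizes reported above line up with the tunable defaults listed below
(on ZFS on Linux 0.7, zfs_arc_max = 0 means "use the default", which is half
of physical RAM, the hard minimum is 6.25% of that, and
zfs_arc_meta_limit_percent = 75 caps metadata at three quarters of the ARC).
A quick sanity check of that arithmetic:

```python
# Check the reported ARC limits against the ZoL 0.7 defaults:
# zfs_arc_max = 0          -> c_max = RAM / 2
# Min Size (Hard Limit)    -> 6.25% of c_max (i.e. c_max / 16)
# zfs_arc_meta_limit_percent = 75 -> metadata cap = 3/4 of c_max
total_ram_gib = 125.805            # "GiB Mem : 125.805 total" from top above

arc_max_gib = total_ram_gib / 2
arc_min_gib = arc_max_gib / 16
meta_limit_gib = arc_max_gib * 75 / 100

print(f"ARC max  ~ {arc_max_gib:.2f} GiB")   # report says 62.90 GiB
print(f"ARC min  ~ {arc_min_gib:.2f} GiB")   # report says 3.93 GiB
print(f"meta cap ~ {meta_limit_gib:.2f} GiB")
```

So the reported 54.65 GiB ARC, mostly metadata, is close to the ~47 GiB
metadata cap, which would explain the constant pruning.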
ZFS Tunable:
dbuf_cache_hiwater_pct 10
dbuf_cache_lowater_pct 10
dbuf_cache_max_bytes 104857600
dbuf_cache_max_shift 5
dmu_object_alloc_chunk_shift 7
ignore_hole_birth 1
l2arc_feed_again 1
l2arc_feed_min_ms 200
l2arc_feed_secs 1
l2arc_headroom 2
l2arc_headroom_boost 200
l2arc_noprefetch 1
l2arc_norw 0
l2arc_write_boost 8388608
l2arc_write_max 8388608
metaslab_aliquot 524288
metaslab_bias_enabled 1
metaslab_debug_load 0
metaslab_debug_unload 0
metaslab_fragmentation_factor_enabled 1
metaslab_lba_weighting_enabled 1
metaslab_preload_enabled 1
metaslabs_per_vdev 200
send_holes_without_birth_time 1
spa_asize_inflation 24
spa_config_path /etc/zfs/zpool.cache
spa_load_verify_data 1
spa_load_verify_maxinflight 10000
spa_load_verify_metadata 1
spa_slop_shift 5
zfetch_array_rd_sz 1048576
zfetch_max_distance 8388608
zfetch_max_streams 8
zfetch_min_sec_reap 2
zfs_abd_scatter_enabled 1
zfs_abd_scatter_max_order 10
zfs_admin_snapshot 1
zfs_arc_average_blocksize 8192
zfs_arc_dnode_limit 0
zfs_arc_dnode_limit_percent 10
zfs_arc_dnode_reduce_percent 10
zfs_arc_grow_retry 0
zfs_arc_lotsfree_percent 10
zfs_arc_max 0
zfs_arc_meta_adjust_restarts 4096
zfs_arc_meta_limit 0
zfs_arc_meta_limit_percent 75
zfs_arc_meta_min 0
zfs_arc_meta_prune 10000
zfs_arc_meta_strategy 1
zfs_arc_min 0
zfs_arc_min_prefetch_lifespan 0
zfs_arc_p_aggressive_disable 1
zfs_arc_p_dampener_disable 1
zfs_arc_p_min_shift 0
zfs_arc_pc_percent 0
zfs_arc_shrink_shift 0
zfs_arc_sys_free 0
zfs_autoimport_disable 1
zfs_compressed_arc_enabled 1
zfs_dbgmsg_enable 0
zfs_dbgmsg_maxsize 4194304
zfs_dbuf_state_index 0
zfs_deadman_checktime_ms 5000
zfs_deadman_enabled 1
zfs_deadman_synctime_ms 1000000
zfs_dedup_prefetch 0
zfs_delay_min_dirty_percent 60
zfs_delay_scale 500000
zfs_delete_blocks 20480
zfs_dirty_data_max 4294967296
zfs_dirty_data_max_max 4294967296
zfs_dirty_data_max_max_percent 25
zfs_dirty_data_max_percent 10
zfs_dirty_data_sync 67108864
zfs_dmu_offset_next_sync 0
zfs_expire_snapshot 300
zfs_flags 0
zfs_free_bpobj_enabled 1
zfs_free_leak_on_eio 0
zfs_free_max_blocks 100000
zfs_free_min_time_ms 1000
zfs_immediate_write_sz 32768
zfs_max_recordsize 1048576
zfs_mdcomp_disable 0
zfs_metaslab_fragmentation_threshold 70
zfs_metaslab_segment_weight_enabled 1
zfs_metaslab_switch_threshold 2
zfs_mg_fragmentation_threshold 85
zfs_mg_noalloc_threshold 0
zfs_multihost_fail_intervals 5
zfs_multihost_history 0
zfs_multihost_import_intervals 10
zfs_multihost_interval 1000
zfs_multilist_num_sublists 0
zfs_no_scrub_io 0
zfs_no_scrub_prefetch 0
zfs_nocacheflush 0
zfs_nopwrite_enabled 1
zfs_object_mutex_size 64
zfs_pd_bytes_max 52428800
zfs_per_txg_dirty_frees_percent 30
zfs_prefetch_disable 0
zfs_read_chunk_size 1048576
zfs_read_history 0
zfs_read_history_hits 0
zfs_recover 0
zfs_resilver_delay 2
zfs_resilver_min_time_ms 3000
zfs_scan_idle 50
zfs_scan_min_time_ms 1000
zfs_scrub_delay 4
zfs_send_corrupt_data 0
zfs_sync_pass_deferred_free 2
zfs_sync_pass_dont_compress 5
zfs_sync_pass_rewrite 2
zfs_sync_taskq_batch_pct 75
zfs_top_maxinflight 32
zfs_txg_history 0
zfs_txg_timeout 5
zfs_vdev_aggregation_limit 131072
zfs_vdev_async_read_max_active 3
zfs_vdev_async_read_min_active 1
zfs_vdev_async_write_active_max_dirty_percent 60
zfs_vdev_async_write_active_min_dirty_percent 30
zfs_vdev_async_write_max_active 10
zfs_vdev_async_write_min_active 2
zfs_vdev_cache_bshift 16
zfs_vdev_cache_max 16384
zfs_vdev_cache_size 0
zfs_vdev_max_active 1000
zfs_vdev_mirror_non_rotating_inc 0
zfs_vdev_mirror_non_rotating_seek_inc 1
zfs_vdev_mirror_rotating_inc 0
zfs_vdev_mirror_rotating_seek_inc 5
zfs_vdev_mirror_rotating_seek_offset 1048576
zfs_vdev_queue_depth_pct 1000
zfs_vdev_raidz_impl [fastest] original scalar sse2 ssse3 avx2
zfs_vdev_read_gap_limit 32768
zfs_vdev_scheduler noop
zfs_vdev_scrub_max_active 2
zfs_vdev_scrub_min_active 1
zfs_vdev_sync_read_max_active 10
zfs_vdev_sync_read_min_active 10
zfs_vdev_sync_write_max_active 10
zfs_vdev_sync_write_min_active 10
zfs_vdev_write_gap_limit 4096
zfs_zevent_cols 80
zfs_zevent_console 0
zfs_zevent_len_max 512
zfs_zil_clean_taskq_maxalloc 1048576
zfs_zil_clean_taskq_minalloc 1024
zfs_zil_clean_taskq_nthr_pct 100
zil_replay_disable 0
zil_slog_bulk 786432
zio_delay_max 30000
zio_dva_throttle_enabled 1
zio_requeue_io_start_cut_in_line 1
zio_taskq_batch_pct 75
zvol_inhibit_dev 0
zvol_major 230
zvol_max_discard_blocks 16384
zvol_prefetch_bytes 131072
zvol_request_sync 0
zvol_threads 32
zvol_volmode 1
Thanks
ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: zfsutils-linux 0.7.5-1ubuntu16.4
ProcVersionSignature: Ubuntu 4.15.0-39.42-generic 4.15.18
Uname: Linux 4.15.0-39-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.9-0ubuntu7.5
Architecture: amd64
Date: Wed Feb 6 16:26:17 2019
InstallationDate: Installed on 2016-04-04 (1038 days ago)
InstallationMedia: Ubuntu-Server 16.04 LTS "Xenial Xerus" - Beta amd64 (20160325)
ProcEnviron:
TERM=rxvt-unicode-256color
PATH=(custom, no user)
XDG_RUNTIME_DIR=<set>
LANG=en_US.UTF-8
SHELL=/bin/bash
SourcePackage: zfs-linux
UpgradeStatus: Upgraded to bionic on 2018-08-16 (174 days ago)
modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission denied: '/etc/sudoers.d/zfs']