Hi Philip,

Philip <phlo...@gmail.com> writes:

> Package: btrfsmaintenance
> Version: 0.4.2-1
> Followup-For: Bug #908467
>
> Dear Maintainer,
>
> I just noticed that, independent of the scheduling class set at the start of
> the scrubbing, the kernel worker processes always seem to have a scheduling
> class of TS rather than IDL, as you can see in the following ps output:
>
> # p|grep scrub
> 16288 root       - IDL  3.3  0.0    9  00:04 btrfs scrub start -Bd -c 3 /usr/local/share/backup
> 18331 root       0 TS   0.7  0.0    1  07:49 [kworker/u16:4-btrfs-scrub]
> 18126 root       0 TS   0.6  0.0    1  07:30 [kworker/u16:2-btrfs-scrub]
> 15827 root       0 TS   0.5  0.0    1  06:46 [kworker/u16:7-btrfs-scrub]
> 18022 root       0 TS   0.5  0.0    1  07:09 [kworker/u16:25-btrfs-scrubparity]
> 18025 root       0 TS   0.5  0.0    1  07:09 [kworker/u16:31-btrfs-scrubparity]

So raid5/6 profile.

> 26890 root       0 TS   0.5  0.0    1  08:39 [kworker/u16:9-btrfs-scrub]
> 30744 root       0 TS   0.5  0.0    1  05:03 [kworker/u16:28-btrfs-scrub]
> 18021 root       0 TS   0.4  0.0    1  07:09 [kworker/u16:23-btrfs-scrub]
> 18065 root       0 TS   0.4  0.0    1  07:17 [kworker/u16:27-btrfs-scrub]
> 27865 root       0 TS   0.4  0.0    1  08:45 [kworker/u16:12-btrfs-scrub]
> 30836 root       0 TS   0.4  0.0    1  05:09 [kworker/u16:1-btrfs-scrub]
> 31648 root       0 TS   0.4  0.0    1  06:05 [kworker/u16:24-btrfs-scrub]
> 14798 root       - IDL  0.0  0.0    1  00:00 /bin/sh /usr/share/btrfsmaintenance/btrfs-scrub.sh
> 14809 root       - IDL  0.0  0.0    1  00:00 /bin/sh /usr/share/btrfsmaintenance/btrfs-scrub.sh
>
> Do you think there's a reason for that or could we just re-ionice them?
>

I do not recommend attempting to renice them, because raid5/6 is still
fragile and bug-prone.  On that topic: if the objective is to survive
one disk failure, the raid1 profile (i.e. 2 copies spread across x
disks) is better in every way than the raid5 profile, and for the
purposes of this bug the load on the system will be lower while
scrubbing.  If you need 3 copies, have a new enough kernel, and want
something in between the raid1 and raid6 profiles in terms of
redundancy, then raid1c3 is worth a try.  I believe you'll be more
satisfied with its performance than with raid5/6, which are the
worst-performing profiles.
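
If you do decide to switch, the conversion can be done online with a
balance.  A rough sketch, assuming the filesystem from your ps output
at /usr/local/share/backup and raid1 data with raid1c3 metadata as the
targets (raid1c3 needs kernel and btrfs-progs 5.5 or newer; adjust the
profiles to taste):

  # Check which profiles are currently in use
  btrfs filesystem df /usr/local/share/backup

  # Convert data to raid1 and metadata to raid1c3 while mounted
  btrfs balance start -dconvert=raid1 -mconvert=raid1c3 /usr/local/share/backup

The balance itself generates a lot of IO, so it is best run during a
quiet period.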

From what I remember of upstream threads on this topic, it is not
possible to renice those kernel worker threads...  IIRC there is more
info at the forwarded URL, but it might have been on LKML.
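
In the meantime, the observation is easy to reproduce without your 'p'
alias; a quick sketch, using one of the kworker PIDs from your listing
as a placeholder:

  # PID, CPU scheduling class (TS/IDL), nice value and command for
  # anything scrub-related, kernel workers included
  ps -eo pid,cls,ni,comm | grep -i scrub | grep -v grep

  # Scheduling policy of an individual worker (18331 taken from the
  # listing above; substitute a current PID)
  chrt -p 18331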


Regards,
Nicholas
