But doesn't that also cap the maximum memory the system will use for ZFS
as a whole? So you'd be cutting all ZFS performance, not just limiting
what's used during a scrub?

On Thu, Sep 27, 2012 at 11:54 AM, Udo Grabowski (IMK) <[email protected]> wrote:

> On 27/09/2012 17:44, Reginald Beardsley wrote:
>
>>
>>  From observed behavior, it appears that the scrub is consuming too large
>> a share of DRAM (12 GB in this case).
>>
>
> in /etc/system: (then reboot)
> set zfs:zfs_arc_max = 0x200000000  (=8 GB)
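
For reference, 0x200000000 bytes is 8 GiB, and after the reboot the new
cap can be checked against the ARC kstats (a sketch; the arcstats module
path is the standard one on illumos/OpenIndiana):

```shell
# Sanity-check the value: 0x200000000 = 8 * 1024^3 bytes
printf '%d\n' 0x200000000        # 8589934592

# After reboot, confirm the ARC cap picked up the /etc/system setting
kstat -p zfs:0:arcstats:c_max
```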
>
> cuts it to roughly that value. You can also set the zfs primarycache and
> secondarycache properties to metadata or none to lower the cache
> impact (if the workload is not affected too badly by this). On smaller
> machines (~6-12 GB), we set "metadata" and an ARC size of less than 1 GB.
> Scrub always hits you badly on smaller machines (weak IO), so we do
> this regularly via a cronjob, only on Sundays.
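
The property settings and Sunday-only cron job described above might look
like this (a sketch only; the pool name "tank" and the scrub schedule are
placeholders, not from the original message):

```shell
# Keep only metadata in the ARC (primarycache) and L2ARC (secondarycache),
# so file data no longer competes for cache ("tank" is a placeholder pool):
zfs set primarycache=metadata tank
zfs set secondarycache=metadata tank

# crontab entry: kick off a scrub only on Sundays, at 02:00
# 0 2 * * 0  /usr/sbin/zpool scrub tank
```

Setting the properties back to "all" afterwards restores normal caching
behavior for the regular workload.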
> --
> Dr.Udo Grabowski    Inst.f.Meteorology a.Climate Research IMK-ASF-SAT
> http://www-imk.fzk.de/asf/sat/grabowski/
> http://www.imk-asf.kit.edu/english/sat.php
> KIT - Karlsruhe Institute of Technology            http://www.kit.edu
> Postfach 3640,76021 Karlsruhe,Germany  T:(+49)721 608-26026 F:-926026
>
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> [email protected]
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
>


-- 
Seconds to the drop, but it seems like hours.

http://www.openmedia.ca
https://robbiecrash.me