On 6/26/20, Ronald Klop <[email protected]> wrote:
>
> From: Bob Bishop <[email protected]>
> Date: Friday, 26 June 2020 17:18
> To: Peter Jeremy <[email protected]>
> CC: Donald Wilde <[email protected]>, freebsd-stable <[email protected]>
> Subject: Re: swap space issues
>>
>> > On 26 Jun 2020, at 11:23, Peter Jeremy <[email protected]> wrote:
>> >
>> > On 2020-Jun-25 11:30:31 -0700, Donald Wilde <[email protected]> wrote:
>> >> Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>> >>
>> >> Device          1K-blocks     Used     Avail  Capacity
>> >> /dev/ada0s1b     33554432        0  33554432        0%
>> >> /dev/ada0s1d     33554432        0  33554432        0%
>> >> Total            67108864        0  67108864        0%
>> >
>> > I strongly suggest you don't have more than one swap device on spinning
>> > rust - the VM system will stripe I/O across the available devices and
>> > that will give particularly poor results when it has to seek between
>> > the partitions.

Based on all the recommendations in this thread (thanks, guys!), I've rebuilt my i3 mule with exactly one 16G swap partition, since it has only 'spinning rust' <haha> for a disk. My loader.conf has kern.maxswzone=4200000, and ccache is fully active and working both for root (tcsh) and for ordinary users (sh).
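For the record, here's roughly what that wiring looks like in the config files. The make.conf knob and the ccache link directory are from memory of the devel/ccache port's defaults, so treat this as an illustration and check it against your own install:

    # /boot/loader.conf -- the tunable mentioned above (loader.conf values are quoted)
    kern.maxswzone="4200000"

    # /etc/make.conf -- one way to turn on ccache for ports builds
    # (newer src trees accept the same knob in /etc/src.conf for buildworld)
    WITH_CCACHE_BUILD=yes

    # root's ~/.cshrc (tcsh) -- put the ccache compiler links ahead of the real ones
    setenv PATH /usr/local/libexec/ccache:$PATH

    # a user's ~/.profile (sh)
    PATH=/usr/local/libexec/ccache:$PATH
    export PATH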
I have yet to try synth again. I'm doing buildworld/buildkernel for 12-STABLE, and the evidence so far is good. 'top -t' is actually happy, showing 16G of swap (grog?), so I'll try firing up synth after another hour or so, on the latest fetch of the ports tree. Happy coder, me! :D

--
Don Wilde
****************************************************
* What is the Internet of Things but a system      *
* of systems including humans?                     *
****************************************************
# Device        Mountpoint      FStype   Options  Dump  Pass#
/dev/ada0s1a    /               ufs      rw       1     1
/dev/ada0s1b    none            swap     sw       0     0
/dev/ada0s1d    /exp            ufs      rw       2     2
fdesc           /dev/fd         fdescfs  rw       0     0
proc            /proc           procfs   rw       0     0
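For anyone keeping score at home, the base system tools are enough to confirm that only the one device gets used after the reboot; nothing here is site-specific:

    # enable every device marked "sw" in /etc/fstab (normally done at boot)
    swapon -a

    # list the active swap devices; with the fstab above this should show a
    # single /dev/ada0s1b line instead of the two striped devices in the
    # quoted 'pstat -s' output
    swapinfo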
