On Fri, 16 Feb 2024 07:40:08 -0800, Robert Mustacchi wrote:
> Sorry for the trouble here, we'll work on improving the docs. I assume
> your system was set to have a number of default max jobs based on the
> number of CPUs that you have and that parallelism is part of the DRAM
> demands. How many CPUs were showing up just so I can make sure to take
> some notes.

many thanks for the elaborate and polite reply!

i used a dedicated vm running oi to build gate, to which i assigned 12 cores. in 
this case that means 12 threads as well from the vm's point of view.
however, due to the circumstances, i added an override to illumos.sh by 
hardcoding "maxjobs" to different values for testing. when i first ran into this 
issue, though, everything was at its defaults, so i used whatever illumos.sh 
deemed appropriate for 12 cores/threads.
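for reference, the override was just a one-liner in the env file. a sketch of 
what i mean (assuming the stock illumos.sh, which computes the default from the 
cpu count via its maxjobs helper and exports it as DMAKE_MAX_JOBS; the exact 
value 6 here is only an example):

```shell
# in illumos.sh: replace the computed default, normally something like
#   export DMAKE_MAX_JOBS="$(maxjobs)"
# with a hardcoded value to cut parallelism (and with it the tmpfs/swap
# reservation pressure during the build):
export DMAKE_MAX_JOBS=6
```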

> While the effect doesn't change, just a small note that tmpfs is going
> to all be in RAM at the end of the day, but it does require a swap
> reservation, which is where some of that disconnect is happening. So it
> won't actually end up getting sent out to a swap partition in general.

fair, otherwise it would have been even worse.
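on that note, the reservation-vs-device disconnect is visible with the stock 
tools, if anyone else trips over this: `swap -s` reports virtual swap 
accounting, which includes tmpfs reservations even though they are backed by 
RAM, while `swap -l` lists only the physical swap devices. illumos-specific 
commands, so this only means anything on the affected box:

```shell
# virtual swap accounting: tmpfs usage shows up in the reserved/allocated
# figures here, even though it never leaves RAM
swap -s
# physical swap devices: usage here can stay near zero while tmpfs is full
swap -l
```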

> We'll look at that. I think the challenge is that some folks don't want
> to send I/O to devices more than is necessary

understandable, but that might not be much of a problem anymore these days.
in fairness, i don't know how that plays out on a big box like e.g. an amd 
epyc; at triple-digit thread counts it might become a problem again.
anyway, as mentioned, it's easy to solve either way by adding a small hint to 
the docs.

> Thanks for reaching out about this and sorry for the trouble.

thanks to you and happy to help

------------------------------------------
illumos: illumos-discuss
Permalink: 
https://illumos.topicbox.com/groups/discuss/T5d96b425786a2696-Mad69a0fe9fd10bc7030e57e6
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription