David,
There are several possible answers depending on what you hope to
accomplish. What exactly is the issue that you're trying to solve? Do
you mean that you have users who need, say, 8 GB of RAM per core but you
only have 4 GB of RAM per core on the system and you want a way to
account for that?
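If that's the case, one option is to make memory a schedulable resource
alongside cores, so a job that wants 8 GB per core simply consumes two
cores' worth of a 4 GB/core node. A rough sketch, with a hypothetical
64-core, 256 GB node (names and numbers are made up):

    # slurm.conf -- schedule on both cores and memory
    SelectType=select/cons_tres
    SelectTypeParameters=CR_Core_Memory

    # 64 cores and ~256 GB of RAM -> 4 GB per core
    NodeName=node01 CPUs=64 RealMemory=256000

    # Hand out 4 GB per CPU by default; a job asking for more memory
    # per CPU than MaxMemPerCPU gets extra CPUs allocated instead
    DefMemPerCPU=4000
    MaxMemPerCPU=4000

With that, something like "srun --mem-per-cpu=8000 ..." is charged two
CPUs per task instead of oversubscribing the node's RAM.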
Also I recommend setting:
*CoreSpecCount*
Number of cores reserved for system use. These cores will not be
available for allocation to user jobs. Depending upon the
*TaskPluginParam* option of *SlurmdOffSpec*, Slurm daemons (i.e.
slurmd and slurmstepd) may either be confined to these resources (the
default) or prevented from using these resources.
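For instance, a hypothetical node definition like this would keep two
of its 64 cores out of user allocations:

    # slurm.conf -- reserve 2 cores per node for system use
    NodeName=node01 CPUs=64 CoreSpecCount=2 RealMemory=256000

User jobs on that node can then use at most 62 cores; by default the
Slurm daemons are confined to the two reserved cores, while
TaskPluginParam=SlurmdOffSpec would instead keep them off those cores.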
You can actually spoof the number of cores and RAM on a node by using
the config_overrides option (SlurmdParameters=config_overrides). I've
used that before for testing purposes. Mind you, core binding and other
features like that will not work if you start spoofing the number of
cores and RAM, so use with caution.
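For instance, something along these lines (the node name is made up,
and the physical machine might really have far more cores and memory
than declared):

    # slurm.conf -- schedule against the declared values rather than
    # what slurmd detects on the node
    SlurmdParameters=config_overrides
    NodeName=node01 CPUs=32 RealMemory=65536

Slurm will then allocate against the declared 32 cores and 64 GB no
matter what the hardware actually reports.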
-Paul Edmon-
On Thursday, January 6, 2022 at 22:39, David Henkemeyer wrote:
> All,
>
> When my team used PBS, we had several nodes that had a TON of CPUs, so many,
> in fact, that we ended up setting np to a smaller value, in order to not
> starve the system of memory.
>
> What is the best way to do this with Slurm?
All,
When my team used PBS, we had several nodes that had a TON of CPUs, so
many, in fact, that we ended up setting np to a smaller value, in order to
not starve the system of memory.
What is the best way to do this with Slurm? I tried modifying # of CPUs in
the slurm.conf file, but I noticed th…