Hi,
I came to the same conclusion and spotted similar places in the code that
could be changed to get what was required. Without a new variable it will be
tricky to implement properly, due to the way those existing variables are used
and defined. Maybe a PeakMem variable in the Slurm accounting database
Hi,
We have had similar questions from users about how best to find the peak
memory usage of a job: they may run a job and get a not-very-useful value for
sacct fields such as MaxRSS, because Slurm did not happen to poll at the
moment of maximum memory usage.
With Cgroupv1 looking
Hi,
The problem arises if the login nodes (or submission hosts) have different
ulimits (maybe the submission hosts are VMs rather than physical servers). By
default Slurm passes the ulimits from the submission host through to the job's
compute node, which can result in different settings b
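The inheritance itself is ordinary POSIX behaviour, which a minimal demo can show: resource limits set in a shell are inherited by its children, just as a Slurm job inherits the submission host's limits by default.

```shell
# Child processes inherit the parent shell's resource limits, which is why
# a job launched by Slurm carries the submission host's ulimits by default.
# Lower the soft open-files limit here and observe it in a child process:
ulimit -S -n 64
sh -c 'ulimit -S -n'   # prints 64
```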
Hi,
When we first migrated to Slurm from PBS, one of the strangest issues we hit
was that ulimit settings are inherited from the submission host, which could
explain the difference between ssh'ing into the machine (where the default
ulimit is applied) and running a job via srun.
You could u
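One way to decouple job limits from the submission host is to stop propagating them, either per job or cluster-wide; the options below are real Slurm knobs, sketched here without a specific recommendation:

```shell
# Per job: do not propagate any submission-host resource limits.
sbatch --propagate=NONE job.sh

# Cluster-wide, in slurm.conf:
#   PropagateResourceLimits=NONE
# or propagate everything except specific limits:
#   PropagateResourceLimitsExcept=MEMLOCK
```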
Hi,
I have found that SLURM_SHARDS_ON_NODE is not set as an environment variable
in the prolog or epilog. Is there a reason for this? I thought it would be
handy to know in a prolog/epilog whether shards have been requested.
Tom
--
Thomas Green, Senior Programmer
ARCCA, Redwood Building
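Since the variable does not appear in the prolog/epilog environment, a workaround sketch is to inspect the job record yourself. SLURM_JOB_ID is available in prolog/epilog, and `scontrol show job` is a real command; `has_shards` is a hypothetical helper for spotting a shard request in the job's TRES string:

```shell
# Hypothetical helper: report whether a GRES/TRES string requests shards.
has_shards() {
    case "$1" in
        *shard*) echo yes ;;
        *)       echo no  ;;
    esac
}

# In a real prolog/epilog one might feed it the one-line job record, e.g.:
#   has_shards "$(scontrol show job "$SLURM_JOB_ID" -o)"
has_shards "gres/shard:2"   # prints yes
```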