On Mon, Feb 08, 2021 at 12:36:06PM +0100, Ansgar Esztermann-Kirchner wrote:

> Of course, one could use different partitions for different nodes, and
> then submit individual jobs with CPU requests tailored to one such
> partition, but I'd prefer a more flexible approach where a given job
> could run on any large enough node.

After scouring the docs once more, I've noticed DefCpuPerGPU, which
seems to be exactly what I was looking for: jobs request a number of
GPUs but no CPUs, and Slurm assigns an appropriate number of CPUs.
The only disadvantage is that this is a partition parameter, so to
retain full flexibility, jobs will have to list all partitions
explicitly (there is no wildcard); this shouldn't be a problem for us,
since we have an automated submission tool that can take care of it.
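
For reference, a minimal sketch of how this might look; the partition
names, node ranges and CPU counts below are made-up placeholders, not
our actual configuration:

  # slurm.conf: default CPUs granted per allocated GPU, per partition
  PartitionName=small Nodes=node[01-10] DefCpuPerGPU=6
  PartitionName=large Nodes=node[11-20] DefCpuPerGPU=12

  # Submission: request GPUs only and list every eligible partition;
  # Slurm grants CPUs according to the partition the job lands in.
  sbatch --partition=small,large --gpus=2 job.sh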

I have run some simple tests to confirm the parameter behaves as
expected, but more thorough testing remains to be done.



A.

-- 
Ansgar Esztermann
Sysadmin Dep. Theoretical and Computational Biophysics
http://www.mpibpc.mpg.de/grubmueller/esztermann
