"scontrol show node" for this host displays "Gres=(null)", and any attempt to
submit a job with --gpus=1 fails with "srun: error: Unable to allocate resources:
Requested node configuration is not available".
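(For reference: "Gres=(null)" usually means slurmctld never loaded a Gres= definition for that node. A minimal sketch of the two config pieces involved — the node name and device path below are placeholders, not taken from this thread:

```
# slurm.conf — declare the gres type and advertise it on the node
# (node01 is a placeholder; keep your existing CPU/memory settings)
GresTypes=gpu
NodeName=node01 Gres=gpu:1

# gres.conf on the node — map the gres to a device file (path is an assumption)
Name=gpu File=/dev/nvidia0
```

After adding a Gres definition, slurmctld and the node's slurmd typically need a restart; "scontrol show node" should then report Gres=gpu:1 instead of (null).)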
Any idea what might be wrong?
Thanks,
~~ bnacar
--
Quirin Lohr
     :rtx5000      1010
gres mps:rtx6000   1011
gres gpu:rtx8000   1012
gres gpu:titan     1013
gres gpu:rtx_5000  1014
gres gpu:rtx_6000  1015
gres gpu:rtx_8000  1016
gres gpu:rtx_a6000 1017
--
Quirin Lohr
Systemadministration
Technische Universität München
Any chance to get the old variables back? I use them in my prolog scripts...
Thanks
Quirin
--
Quirin Lohr
Systemadministration
Technische Universität München
Fakultät für Informatik
Lehrstuhl für Bildverarbeitung und Künstliche Intelligenz
Boltzmannstrasse 3
85748 Garching
Tel. +49 89 289
//use_1gpu_10.out
StdIn=/dev/null
StdOut=//use_1gpu_10.out
Power=
GresEnforceBind=No
TresPerNode=gpu:1
--
Quirin Lohr
Systemadministration
…y than 6GB” or “at
least 1000 cores”... Is there a way to set attributes on the GRES
resources so a user can apply these sorts of constraints? (I can't find
anything on Google.)
I run Slurm clusters here on versions 16.05.4 and 17.11.7.
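(As far as I know, stock Slurm of that vintage has no per-GRES attribute mechanism, but node Features plus --constraint can encode the same information. A sketch — node names and feature tags below are hypothetical:

```
# slurm.conf — tag nodes with static features describing their GPUs
NodeName=gpu[01-04] Gres=gpu:4 Feature=gpumem_11gb
NodeName=gpu[05-08] Gres=gpu:4 Feature=gpumem_6gb
```

A user could then request e.g. "sbatch --gres=gpu:1 --constraint=gpumem_11gb job.sh". Core counts can already be requested directly with -c/--cpus-per-task, so only the GPU-memory side needs a feature tag.)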
Thanks,
Will
--
Quirin Lohr
Systemadministration
Hi all,
we have a Slurm cluster whose nodes each have 2x18 cores, 256 GB RAM and
8 GPUs. Is there a way to reserve a bare minimum of two CPUs and 8 GB RAM
for each GPU, so a high-CPU job cannot render the GPUs "unusable"?
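(One approach that would fit a partition-based setup — a sketch, not tested, with placeholder partition and node names — is to put CPU-only jobs in their own partition and cap it with the partition parameters MaxCPUsPerNode and MaxMemPerNode, so that 2 cores and 8 GB per GPU always remain free:

```
# slurm.conf sketch: 36-core / 256 GB nodes with 8 GPUs each.
# Reserve 8*2 = 16 cores and 8*8 GB = 64 GB for GPU jobs by capping the
# CPU-only partition at 20 cores and 192 GiB (196608 MB) per node.
PartitionName=cpu Nodes=node[01-04] MaxCPUsPerNode=20 MaxMemPerNode=196608
PartitionName=gpu Nodes=node[01-04]
```

GPU jobs submitted to the "gpu" partition can then always find the reserved cores and memory.)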
Thanks in advance
Quirin
--
Quirin Lohr
Systemadministration