Re: [slurm-users] [EXTERNAL] Re: Managing shared memory (/dev/shm) usage per job?

2022-04-07 Thread Mark Coatsworth
Thanks so much Greg! That looks like the solution we want, but like John I'm also unfamiliar with spank plugins. I guess that will have to change.

Mark

On Wed, Apr 6, 2022 at 7:54 AM John Hanks wrote:
> Thanks, Greg! This looks like the right way to do this. I will have to
> stop putting off le
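For anyone else who has not written one before: a SPANK plugin is a shared object that slurmd/slurmstepd loads from plugstack.conf. A minimal sketch of how such a plugin would be enabled, assuming a hypothetical private_shm.so built for this purpose and the common /etc/slurm layout (both the plugin name and paths are placeholders, not from the thread):

    # /etc/slurm/plugstack.conf on the compute nodes
    required  /usr/lib64/slurm/private_shm.so

Each line of plugstack.conf is "required" or "optional", followed by the plugin path and any plugin arguments.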

Re: [slurm-users] sinfo : Format NodeHost truncation

2022-04-07 Thread Brian Andrus
Use the formatting commands. From the manpage for sinfo:

    The format of each field is "%[[.]size]type[suffix]"

    size    Minimum field size. If no size is specified, whatever is
            needed to print the information will be used.
    .       Indicates the output should be right justified
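A minimal sketch of the workaround for the original question, assuming the long --Format syntax takes a field size after a colon and that 50 characters is wide enough for the AWS hostnames:

    $ sinfo --Node --noheader --Format="NodeHost:50"

or, using the short -o format quoted above (%N is the node name in --Node output):

    $ sinfo -N -h -o "%50N"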

[slurm-users] sinfo : Format NodeHost truncation

2022-04-07 Thread Nicholas Yue
Hi, I am spinning up an MPI/Slurm cluster on AWS. I am attempting to script node-name discovery for any given cluster. I tried `sinfo --Node --Format=NodeHost`, but sinfo truncates the hostname to the first 20 characters, and AWS creates hosts with longer names. Is there some workaround?

Re: [slurm-users] Strange memory limit behavior with --mem-per-gpu

2022-04-07 Thread Paul Raines
Basically, it appears using --mem-per-gpu instead of just --mem gives you unlimited memory for your job.

$ srun --account=sysadm -p rtx8000 -N 1 --time=1-10:00:00 --ntasks-per-node=1 \
    --cpus-per-task=1 --gpus=1 --mem-per-gpu=8G --mail-type=FAIL --pty /bin/bash
rtx-07[0]:~$ find /sys/fs/cgroup/me
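One way to see what limit the cgroup actually enforces is to read it from inside the job. This is a sketch assuming the cgroup/v1 memory controller and Slurm's usual slurm/uid_<uid>/job_<jobid> hierarchy (paths differ under cgroup v2):

    $ cat /sys/fs/cgroup/memory/slurm/uid_$(id -u)/job_${SLURM_JOB_ID}/memory.limit_in_bytes

With --mem=8G this would report roughly 8589934592; a value equal to the node's total RAM, or the cgroup v1 "unlimited" sentinel 9223372036854771712, would suggest that --mem-per-gpu is not being translated into a cgroup limit.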