Running Slurm 20.02 on CentOS 7.7 with Bright Cluster 8.2. I'm wondering
how the sbatch file below is sharing a GPU.
MPS is running on the head node:
ps -auwx | grep mps
root 108581 0.0 0.0 12780 812 ? Ssl Mar23 0:27
/cm/local/apps/cuda-driver/libs/440.33.01/bin/nvidia-cuda-mps-co
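For reference, a minimal sbatch sketch that shares a GPU through Slurm's
gres/mps plugin could look like the one below (assuming GresTypes=gpu,mps is
set in slurm.conf and gres.conf defines an mps count; the partition name and
binary are placeholders, not taken from the original script):

#!/bin/bash
#SBATCH --job-name=mps-share
#SBATCH --partition=gpu        # placeholder partition name
#SBATCH --gres=mps:50          # request roughly half of one GPU's MPS shares
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# For gres/mps jobs Slurm exports CUDA_VISIBLE_DEVICES and
# CUDA_MPS_ACTIVE_THREAD_PERCENTAGE, so the application runs unchanged.
./my_cuda_app                  # placeholder binary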
Hey Sudeep,
Which flags to sreport have you tried? Which information was missing?
Regards,
Alex
On Thu, Apr 2, 2020 at 10:29 PM Sudeep Narayan Banerjee <
snbaner...@iitgn.ac.in> wrote:
> Dear Steven: Yes, but I am unable to get the desired data. I am not sure
> which flags to use.
>
> Thanks & Regard
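For example, typical sreport invocations for per-user usage look like the
following sketch (the dates are placeholders and slurmdbd accounting must be
enabled for them to return data):

# CPU hours per account and user over a date range
sreport cluster AccountUtilizationByUser start=2020-04-01 end=2020-04-30 -t Hours

# Top ten users by usage over the same range
sreport user TopUsage start=2020-04-01 end=2020-04-30 TopCount=10 -t Hours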
Hi Marcus,
the essence of the code looks like this:
in the job_submit.lua script, it executes an external script,
os.execute("/etc/slurm/test.sh".." "..job_desc.partition)
and the external test.sh runs the following command to get the
partition summary for further processing:
sinfo -h -p $1 -s
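A sketch of what such a test.sh might look like, assuming it only parses the
one-line summary (the real script is not shown here; $1 is the partition name
passed in from job_submit.lua):

#!/bin/bash
partition="$1"

# -h drops the header, -p limits output to one partition, -s prints the
# summary line: PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
summary=$(sinfo -h -p "$partition" -s)

# Example of "further processing": pull the allocated/idle/other/total
# node counts out of the fourth column.
nodes=$(echo "$summary" | awk '{print $4}')
echo "Partition $partition nodes (A/I/O/T): $nodes"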
But,