Hello,

I have what I hope is a quick question.

I have compiled Slurm RPMs on a CentOS system with the NVIDIA drivers
installed so that I can use AutoDetect=nvml in our GPU nodes’ gres.conf.
Everything seems to be going well on the GPU nodes since I did that.
However, I was unable to install that Slurm RPM on the control/master
node, because the RPM requires libnvidia-ml.so to be present. The
control/master and the other compute nodes don’t have any NVIDIA cards
attached, so installing the drivers there just to satisfy the dependency
seemed like a bad idea. To get around this, I rebuilt the RPM on a
machine without the drivers present, and as far as I can tell everything
has been working great.
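
For context, the GPU nodes’ gres.conf just uses the documented
autodetect line, and the extra dependency is visible in the built
package’s metadata. The commands below are a sketch; the exact RPM file
name will differ on your system:

    # gres.conf on the GPU nodes
    AutoDetect=nvml

    # Show the automatically generated dependencies of the built RPM
    # (should include libnvidia-ml.so when slurmd was linked against NVML)
    rpm -qp --requires slurm-*.x86_64.rpm | grep -i nvidia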

I am now working on adding PMIx support, which I didn’t enable properly
the first time around, and I am encountering the same situation again.
I figured I would send up a flag before going further: is it typical to
have to build separate Slurm RPMs for different types of nodes, or am I
going about this the wrong way entirely?
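
For reference, this is roughly the build I’m attempting, assuming the
slurm.spec bundled in the tarball exposes PMIx as a build conditional
(file names and paths here are examples):

    # On a build host that has the pmix and pmix-devel packages installed
    rpmbuild -ta slurm-*.tar.bz2 --with pmix

    # Sanity check: confirm the resulting packages include the mpi/pmix plugin
    rpm -qlp ~/rpmbuild/RPMS/x86_64/slurm-*.rpm | grep -i pmix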

Thanks in advance!

Jason
