That is why we switched to tarball installations with version directories, as
suggested by SchedMD. No deb/rpm installations any more.
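
A minimal sketch of what that can look like (the /opt/slurm prefix and the
version number are just examples, adjust to your site):

    tar xjf slurm-20.02.5.tar.bz2
    cd slurm-20.02.5
    ./configure --prefix=/opt/slurm/20.02.5 --sysconfdir=/etc/slurm
    make -j && make install
    # every release gets its own directory, so an upgrade or rollback is
    # just repointing one symlink:
    ln -sfn /opt/slurm/20.02.5 /opt/slurm/current

Optional features are picked up at configure time when the development files
are present, and there are explicit switches (e.g. --with-nvml=... and
--with-pmix=...) for non-standard locations. Because nothing runs rpm's
automatic shared-library dependency scan, the same build tree can be shared
with the non-GPU nodes: slurmd only dlopens the nvml plugin on nodes whose
gres.conf asks for it.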

--
Bas van der Vlies
| Operations, Support & Development | SURFsara | Science Park 140 | 1098 XG Amsterdam
| T +31 (0) 20 800 1300  | bas.vandervl...@surf.nl | www.surf.nl |




> On 24 Sep 2020, at 21:31, Dana, Jason T. <jason.d...@jhuapl.edu> wrote:
> 
> Hello,
>  
> I have what I hope is a quick question.
>  
> I have compiled Slurm RPMs on a CentOS system with the NVIDIA drivers 
> installed so that I can use the AutoDetect=nvml configuration in our GPU 
> nodes’ gres.conf. All seems to be going well on the GPU nodes since I did 
> that. I was, however, unable to install the Slurm RPM on the control/master 
> node, as the RPM requires libnvidia-ml.so to be installed. The control/master 
> and the other compute nodes don’t have any NVIDIA cards attached, so 
> installing the drivers just to satisfy this requirement did not seem like the 
> best idea. I rebuilt the RPM without the drivers present to get around this, 
> and everything has been working great as far as I can tell.
>  
> I am now working on adding PMIx support, which I didn’t properly enable the 
> first time around, and am encountering the same situation again. I figured I 
> would send up a flag and see whether I am going about this the wrong way: is 
> it typical to have to compile separate Slurm RPMs for the different types of 
> nodes?
>  
> Thanks in advance! 
>  
> Jason
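
The AutoDetect setup Jason describes needs very little on the GPU nodes
themselves; a minimal sketch, with illustrative node names and GPU counts:

    # gres.conf on each GPU node: let slurmd query NVML for the devices
    AutoDetect=nvml

    # slurm.conf still declares the resource, e.g.
    GresTypes=gpu
    NodeName=gpu[01-04] Gres=gpu:4 ...

The control node never loads the nvml plugin, which is why it has no real need
for libnvidia-ml.so beyond rpm's dependency bookkeeping.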
