I can assure you it was easier for you to filter slurm from your repos than it
was for me to make them available to both epel7 and epel8.
No good deed goes unpunished, I guess.

On Saturday, January 23, 2021, 07:03:08 AM EST, Ole Holm Nielsen wrote:
> We use the EPEL yum repository on our CentOS 7 nodes. [...]
On Saturday, 23 January 2021 9:54:11 AM PST Paul Raines wrote:
> Now rtx-08, which has only 4 GPUs, seems to always get all 4 used.
> But the others seem to always only get half used (except rtx-07,
> which somehow gets 6 used, so another weird thing).
>
> Again if I submit non-GPU jobs, they end up
Yes, I meant job 38692. Sorry.
I am still having the problem. I suspect it has something to do with
the GPU configuration as this does not happen on my non-GPU node partitions.
Also, if I submit non-GPU jobs to the rtx8000 partition here, they
use up all the cores on the nodes just fine.
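For what it's worth, a quick way to see where the cores are going on one of the
affected nodes is to compare the configured versus currently allocated trackable
resources. A minimal sketch (rtx-07 is just the node name mentioned above;
substitute whichever node you are looking at):

  scontrol show node rtx-07 | grep -E 'CfgTRES|AllocTRES'
  # CfgTRES   = CPUs/memory/GPUs the node is configured with
  # AllocTRES = what Slurm has actually allocated on it right now

If AllocTRES shows far fewer CPUs than CfgTRES while jobs are still pending,
that points at the job requests or the node/partition configuration rather than
at the hardware.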
We use the EPEL yum repository on our CentOS 7 nodes. Today EPEL
surprisingly delivers Slurm 20.11.2 RPMs, and the daily yum updates
(luckily) fail with some errors:
--> Running transaction check
---> Package slurm.x86_64 0:20.02.6-1.el7 will be updated
--> Processing Dependency: slurm(x86-64)
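(If you simply want yum to stop offering these packages, one sketch, assuming
the stock repo file location on CentOS 7, is to exclude Slurm from EPEL
entirely:

  # /etc/yum.repos.d/epel.repo -- add this line to the [epel] section
  exclude=slurm*

After that, yum update will no longer try to pull Slurm from EPEL.)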
Chiming in on Michael's suggestion.
You can specify the same hostname in slurm.conf, but for the on-premise
nodes you set either the DNS record or the /etc/hosts entry to the local (= private) IP
address.
For the cloud nodes you set the DNS record or the hosts entry to the publicly reachable IP.
example /etc/hosts:
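(The original snippet was cut off here; the following is a minimal sketch with
made-up names and addresses: node001, cloud001, 10.0.0.x, and 203.0.113.x are
all hypothetical.)

  # on-premise nodes resolve to their private addresses
  10.0.0.11      node001
  10.0.0.12      node002

  # cloud nodes resolve to their publicly reachable addresses
  203.0.113.21   cloud001
  203.0.113.22   cloud002

The point is that slurm.conf keeps a single NodeName per node, and name
resolution decides which address is actually used to reach it.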