I ran some other tests and got nearly the same results. The 4
minutes in my previous post corresponds to about 50% overhead, so a
run that takes 24000 minutes directly takes about 35000 minutes via
Slurm. I will post the details later. The methodology I used is:
1- Submit a job to a specific node (compute-0-0) via
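Just to spell out the arithmetic behind that overhead figure, here is my own back-of-the-envelope check in Python (the variable names are mine, only the 24000 and 35000 minute figures come from the measurements above):

```python
# Back-of-the-envelope check of the Slurm overhead quoted above.
direct_min = 24000   # wall time of a direct run, in minutes
slurm_min = 35000    # the same work submitted via Slurm

overhead = (slurm_min - direct_min) / direct_min
print(f"overhead: {overhead:.1%}")  # overhead: 45.8%
```

So the measured overhead is about 46%, which rounds to the "about 50%" stated above.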
I think that will limit the other nodes to 20 cores too, won't it?
Currently the compute nodes have 32 cores each and I want all 32 of them. The
head node also has 32 cores, but I want to include only 20.
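For what it's worth, since MaxCPUsPerNode is a partition-level parameter in slurm.conf, one way to cap only the head node is to give it its own partition. A minimal, untested sketch (the partition name SPEEDY-HN is my own invention; node and account names follow this thread):

```
# Hypothetical slurm.conf fragment, untested sketch.
# MaxCPUsPerNode applies to every node in its partition, so the
# head node (rocks7) is placed in a separate partition to cap it
# at 20 cores while the compute nodes keep all 32.
PartitionName=SPEEDY    AllowAccounts=em1 Nodes=compute-0-[2-4]
PartitionName=SPEEDY-HN AllowAccounts=em1 Nodes=rocks7 MaxCPUsPerNode=20
```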
On Sun, Apr 22, 2018, 03:53 Chris Samuel wrote:
>
> All you need to do is add "MaxCPUsPerNode=20" to
On Sunday, 22 April 2018 4:41:43 AM AEST Mahmood Naderan wrote:
> Since our head node has 32 cores, I want to add some cores to a
> partition. If I edit the parts file like this
>
> PartitionName=SPEEDY AllowAccounts=em1 Nodes=compute-0-[2-4],rocks7
>
> then it will include all cores.
Hi,
Since our head node has 32 cores, I want to add some cores to a
partition. If I edit the parts file like this
PartitionName=SPEEDY AllowAccounts=em1 Nodes=compute-0-[2-4],rocks7
then it will include all 32 cores. I think I then have to edit slurm.conf like this:
NodeName=rocks7 NodeAddr=10.1.1.1
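For reference, a NodeName entry usually carries the CPU count as well. A minimal sketch of what that line might look like (the CPUs and State values here are assumptions based on the 32-core head node described above, not taken from the actual config):

```
# Hypothetical slurm.conf fragment, untested sketch
NodeName=rocks7 NodeAddr=10.1.1.1 CPUs=32 State=UNKNOWN
```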