I have only one server and two data-analysis pipelines: one for standard
jobs and one for high-priority jobs that can be triggered occasionally.
My first solution was to split the CPUs of the server into two partitions,
one for each pipeline.
A more complex (but, I suppose, better) solution could be
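As a sketch of that first idea (node name, partition names, and CPU counts here are hypothetical, not from the thread), a single node can be shared between two partitions while capping each partition's per-node usage with `MaxCPUsPerNode` in slurm.conf:

```
# slurm.conf fragment -- hypothetical names and counts
NodeName=server1 CPUs=100 State=UNKNOWN

# Both partitions see the same node, but each may use at most
# 50 of its CPUs; MaxCPUsPerNode caps a partition's usage per node.
PartitionName=standard Nodes=server1 MaxCPUsPerNode=50 Default=YES State=UP
PartitionName=highprio Nodes=server1 MaxCPUsPerNode=50 PriorityTier=2 State=UP
```

`PriorityTier=2` makes the scheduler consider high-priority jobs first when both partitions have pending work; on a single node this only matters once the caps are contended.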
I'm announcing a "slurmacct" script/tool as an alternative to the Slurm
accounting report tool "sreport".
It's available on Github:
https://github.com/OleHolmNielsen/Slurm_tools/tree/master/slurmacct
This tool prints some job statistics which we used to get from our old
Torque system (see th
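For comparison, this is the kind of utilization report sreport itself produces (dates below are placeholders, not from the thread):

```
# Per-account/per-user CPU usage over a date range, reported in hours:
sreport cluster AccountUtilizationByUser Start=2018-01-01 End=2018-02-01 -t Hours
```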
On 01/02/2018 12:59 PM, Nicolò Parmiggiani wrote:
My problem is that I have, for instance, 100 CPUs, and I want to create
two partitions, each with a maximum usage of 50 CPUs. In this way I can
submit jobs to both partitions independently.
I wonder what you really want to achieve. Why do you want to divi
My problem is that I have, for instance, 100 CPUs, and I want to create
two partitions, each with a maximum usage of 50 CPUs. In this way I can
submit jobs to both partitions independently.
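Submitting to the two partitions independently then just means selecting the partition at submit time (partition and script names below are hypothetical):

```
# Each pipeline targets its own partition:
sbatch --partition=standard standard_job.sh
sbatch --partition=highprio urgent_job.sh
```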
2018-01-02 11:29 GMT+01:00 Nicolò Parmiggiani:
> Hi,
>
> How can I limit the number of CPUs that a partition can use?
On 01/02/2018 11:29 AM, Nicolò Parmiggiani wrote:
How can I limit the number of CPUs that a partition can use?
For instance, when a partition reaches its maximum number of CPUs, you
can still submit new jobs, but they are put in the queue.
I would think that you can't use more CPUs than you have got! A
resou
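One way to cap a partition's total CPU usage is to attach a QOS carrying a group TRES limit (a sketch, not from the thread; it assumes Slurm accounting is enabled and the QOS name is hypothetical):

```
# Create a QOS limiting all jobs in it to 50 CPUs in aggregate:
sacctmgr add qos part50 set GrpTRES=cpu=50

# Attach it as the partition QOS in slurm.conf:
PartitionName=standard Nodes=server1 QOS=part50 State=UP
```

With `GrpTRES=cpu=50`, jobs that would push the partition past 50 CPUs stay pending in the queue until resources are freed, which matches the behavior asked about above.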
Hi,
how can I limit the number of CPUs that a partition can use?
For instance, when a partition reaches its maximum number of CPUs, you can
still submit new jobs, but they are put in the queue.
Thank you.